\section{Introduction}\label{sec:intro}
Space provides a useful vantage point for monitoring large-scale trends on the surface of the Earth~\cite{manfreda2018use,albert2017using,yeh2020using}. Accordingly, numerous Earth observation (EO) satellite missions have been launched or are being planned. Many EO satellites carry multispectral or hyperspectral sensors that measure the electromagnetic radiation emitted or reflected from the surface, which is then processed to form \emph{data cubes}. These data cubes are valuable inputs to EO applications.
However, two thirds of the surface of the Earth is under cloud cover at any given point in time~\cite{jeppesen2019cloud}. In many EO applications, clouds occlude the targets of interest and reduce the value of the data. Indeed, many weather prediction tasks require clear-sky measurements~\cite{liu2020hyperspectral}. Dealing with cloud cover is thus part-and-parcel of practical EO processing pipelines~\cite{transon2018survey, li2019deep-ieee, paoletti2019deep, mahajan2020cloud, yuan2021review}. Cloud mitigation strategies include segmenting and masking out the portion of the data that is affected by clouds~\cite{griffin2003cloud,gomez-chova2007cloud}, and restoring the cloud-affected regions~\cite{li2019cloud,meraner2020cloud,zi2021thin} as a form of data enhancement. Increasingly, deep learning forms the basis of the cloud mitigation routines~\cite{li2019deep-ieee,castelluccio2015land,sun2020satellite,yang2019cdnet}.
\begin{figure}[t]\centering
\begin{subfigure}[b]{0.47\linewidth}
\centering
\includegraphics[width=\linewidth]{./figures/intro/rgb_cloudy.pdf}
\caption{Cloudy image (in RGB).}
\end{subfigure}
\hspace{0.5em}
\begin{subfigure}[b]{0.47\linewidth}
\centering
\includegraphics[width=\linewidth]{./figures/intro/rgb_notcloudy.pdf}
\caption{Non-cloudy image (in RGB).}
\end{subfigure}
\begin{subfigure}[b]{0.47\linewidth}
\centering
\includegraphics[width=\linewidth]{./figures/intro/b128_patch.pdf}
\caption{Adversarial cube to bias the detector in the cloud-sensitive bands.}
\label{fig:falsecolor}
\end{subfigure}
\hspace{0.5em}
\begin{subfigure}[b]{0.47\linewidth}
\centering
\includegraphics[width=\linewidth]{./figures/intro/rgb_patch.pdf}
\caption{Adversarial cube blended in the environment in the RGB domain.}
\end{subfigure}
\vspace{-0.5em}
\caption{(Row 1) Cloudy and non-cloudy scenes. (Row 2) Our \emph{adversarial cube} fools the multispectral cloud detector~\cite{giuffrida2020cloudscout} to label the non-cloudy scene as cloudy with high confidence.}
\label{fig:example}
\end{figure}
As the onboard compute capabilities of satellites improve, it has become feasible to conduct cloud mitigation directly on the satellites~\cite{li2018onboard,giuffrida2020cloudscout}. A notable example is CloudScout~\cite{giuffrida2020cloudscout}, which was tailored for the PhiSat-1 mission~\cite{esa-phisat-1} of the European Space Agency (ESA). PhiSat-1 carries the HyperScout-2 imager~\cite{esposito2019in-orbit} and the Eyes of Things compute payload~\cite{deniz2017eyes}. Based on the multispectral measurements, a convolutional neural network (CNN) is executed on board to perform cloud detection, which, in the case of~\cite{giuffrida2020cloudscout}, involves making a binary decision on whether the area under a data cube is \emph{cloudy} or \emph{not cloudy}; see Fig.~\ref{fig:example} (Row 1). To save bandwidth, only \emph{non-cloudy} data cubes are downlinked, while \emph{cloudy} ones are not transmitted to ground~\cite{giuffrida2020cloudscout}.
However, deep neural networks (DNNs) in general and CNNs in particular are vulnerable to adversarial examples, \ie, carefully crafted inputs aimed at fooling the networks into making incorrect predictions~\cite{akhtar2018threat, yuan2019adversarial}. A particular class of adversarial attacks, called physical attacks, inserts adversarial patterns into the environment that, when imaged together with the targeted scene element, can bias DNN inference~\cite{athalye2018synthesizing, brown2017adversarial, eykholt2018robust, sharif2016accessorize, thys2019fooling}. In previous works, the adversarial patterns were typically colour patches optimised by an algorithm and then fabricated to conduct the attack.
It is natural to ask if DNNs for EO data are susceptible to adversarial attacks. In this paper, we answer the question in the affirmative by developing a physical adversarial attack against a multispectral cloud detector~\cite{giuffrida2020cloudscout}; see Fig.~\ref{fig:example} (Row 2). Our adversarial pattern is optimised in the multispectral domain (hence is an \emph{adversarial cube}) and can bias the cloud detector to assign a \emph{cloudy} label to a \emph{non-cloudy} scene. Under the mission specification of CloudScout~\cite{giuffrida2020cloudscout}, EO data over the area will not be transmitted to ground.
\vspace{-1em}
\paragraph{Our contributions}
Our specific contributions are:
\begin{enumerate}[leftmargin=1em,itemsep=2pt,parsep=0pt,topsep=2pt]
\item We demonstrate that adversarial cubes can be optimised and physically realised as an array of exterior paints whose multispectral reflectance biases the cloud detector.
\item We propose a novel multi-objective adversarial attack concept, where the adversarial cube is optimised to bias the cloud detector in the cloud sensitive bands, while remaining visually camouflaged in the visible bands.
\item We investigate mitigation strategies against our adversarial attack and propose a simple robustification method.
\end{enumerate}
\vspace{-1em}
\paragraph{Potential positive and negative impacts}
Research into adversarial attacks can be misused for malicious activities. On the other hand, it is vital to highlight the potential of the attacks so as to motivate the development of mitigation strategies. Our contributions above are aimed towards the latter positive impact, particularly \#3 where a defence method is proposed. We are hopeful that our work will lead to adversarially robust DNNs for cloud detection.
\section{Related work}\label{sec:related_work}
Here, we review previous works on dealing with clouds in EO data and adversarial attacks in remote sensing.
\subsection{Cloud detection in EO data}\label{sec:related_hyperspectral}
EO satellites are normally equipped with multispectral or hyperspectral sensors, the main differences between the two being the spectral and spatial resolutions~\cite{madry2017electrooptical,transon2018survey}. Each ``capture'' by a multi/hyperspectral sensor produces a data cube, which consists of two spatial dimensions with as many channels as spectral bands in the sensor.
Since 66--70\% of the surface of the Earth is cloud-covered at any given time~\cite{jeppesen2019cloud,li2018onboard}, dealing with clouds in EO data is essential. Two major goals are:
\begin{itemize}[leftmargin=1em,itemsep=2pt,parsep=0pt,topsep=2pt]
\item Cloud detection, where typically the location and extent of cloud coverage in a data cube are estimated;
\item Cloud removal~\cite{li2019cloud,meraner2020cloud,zi2021thin}, where the values in the spatial locations occluded by clouds are restored.
\end{itemize}
Since our work relates to the former category, the rest of this subsection is devoted to cloud detection.
Cloud detection assigns a \emph{cloud probability} or \emph{cloud mask} to each pixel of a data cube. The former indicates the likelihood of cloudiness at each pixel, while the latter indicates discrete levels of cloudiness at each pixel~\cite{sinergise-cloud-masks}. In the extreme case, a single binary label (\emph{cloudy} or \emph{not cloudy}) is assigned to the whole data cube~\cite{giuffrida2020cloudscout}; our work focusses on this special case of cloud detection.
Cloud detectors use either \emph{hand-crafted features} or \emph{deep features}. The latter category is of particular interest because the methods have shown state-of-the-art performance~\cite{lopezpuigdollers2021benchmarking,liu2021dcnet}. The deep features are extracted from data via a series of hierarchical layers in a DNN, where the highest-level features serve as optimal inputs (in terms of some loss function) to a classifier, enabling discrimination of subtle inter-class variations and high intra-class variations~\cite{li2019deep-ieee}. The majority of cloud detectors that use deep features are based on an extension or variation of Berkeley's fully convolutional network architecture~\cite{long2015fully, shelhamer2017fully}, which was designed for pixel-wise semantic segmentation and demands nontrivial computing resources. For example, \cite{li2019deep} is based on SegNet~\cite{badrinarayanan2017segnet}, while \cite{mohajerani2018cloud, jeppesen2019cloud, yang2019cdnet, lopezpuigdollers2021benchmarking, liu2021dcnet, zhang2021cnn} are based on U-Net~\cite{ronneberger2015u-net}, none of which are suitable for on-board implementation.
\subsection{On-board processing for cloud detection}
On-board cloud detectors can be traced back to the thresholding-based Hyperion Cloud Cover algorithm~\cite{griffin2003cloud}, which operated on 6 of the hyperspectral bands of the EO-1 satellite. Li \etal's on-board cloud detector~\cite{li2018onboard} is an integrative application of the techniques of decision tree, spectral angle map~\cite{decarvalhojr2000spectral}, adaptive Markov random field~\cite{zhang2011adaptive} and dynamic stochastic resonance~\cite{chouhan2013enhancement}, but no experimental feasibility results were reported. Arguably the first DNN-based on-board cloud detector is CloudScout~\cite{giuffrida2020cloudscout}, which operates on the HyperScout-2 imager~\cite{esposito2019in-orbit} and Eyes of Things compute payload~\cite{deniz2017eyes}. As alluded to above, the DNN assigns a single binary label to the whole input data cube; details of the DNN will be provided in Sec.~\ref{sec:training}.
\subsection{Adversarial attacks in remote sensing}
Adversarial examples can be \emph{digital} or \emph{physical}. Digital attacks apply pixel-level perturbations to legitimate test images, subject to the constraints that these perturbations look like natural occurrences, \eg, electronic noise. Classic white-box attacks such as the FGSM~\cite{goodfellow2015explaining}
have been applied to attacking CNN-based classifiers for RGB images~\cite{xu2021assessing}, multispectral images~\cite{kalin2021automating} and synthetic aperture radio images~\cite{li2021adversarial}. A key observation is the generalisability of attacks from RGB to multispectral images~\cite{ortiz2018integrated, ortiz2018on}. Generative adversarial networks have been used to generate natural-looking hyperspectral adversarial examples~\cite{burnel2021generating}.
Physical attacks, as defined in Sec.~\ref{sec:intro}, need only access to the environment imaged by the victim, whereas digital attacks need access to the victim's test images (\eg, in a memory buffer); in this sense, physical attacks have weaker operational requirements and the associated impact is more concerning. For \emph{aerial/satellite RGB imagery}, physical attacks on a classifier~\cite{czaja2018adversarial}, aircraft detectors~\cite{den2020adversarial, lu2021scale} and a car detector~\cite{du2022physical} have been investigated but only \cite{du2022physical} provided real-world physical test results. For \emph{aerial/satellite multi/hyperspectral imagery}, our work is arguably the first to consider physical adversarial attacks.
\section{Threat model}\label{sec:threat_model}
We first define the threat model that serves as a basis for our proposed adversarial attack.
\begin{description}[leftmargin=1em,itemsep=2pt,parsep=0pt,topsep=2pt]
\item[Attacker's goals] The attacker aims to generate an adversarial cube that can bias a pretrained multispectral cloud detector to label non-cloudy space-based observations of scenes on the surface as cloudy. In addition, the attacker would like to visually camouflage the cube in a specific \textbf{region of attack (ROA)}; see Fig.~\ref{fig:rgb_scenes} for examples. Finally, the cube should be physically realisable.
\begin{figure}[ht]\centering
\begin{subfigure}[b]{0.23\textwidth}
\centering
\includegraphics[width=\textwidth]{./figures/threat_model/hills-roa.pdf}
\caption{Hills.}
\label{fig:hills}
\end{subfigure}
\begin{subfigure}[b]{0.23\textwidth}
\centering
\includegraphics[width=\textwidth]{./figures/threat_model/desert-roa.pdf}
\caption{Desert.}
\label{fig:desert}
\end{subfigure}
\vspace{-0.5em}
\caption{Sample regions of attack.}
\label{fig:rgb_scenes}
\end{figure}
\item[Attacker's knowledge] The attacker has full knowledge of the targeted DNN, including architecture and parameter values, \ie, white-box attack. This is a realistic assumption due to the publication of detailed information on the model and training data~\cite{giuffrida2020cloudscout}. Moreover, from a threat mitigation viewpoint, assuming the worst case is useful.
\item[Attacker's strategy] The attacker will optimise the adversarial cube on training data sampled from the same input domain as the cloud detector; the detailed method will be presented in Sec.~\ref{sec:attacking}. The cube will then be fabricated and placed in the environment, including the ROA, although Sec.~\ref{sec:limitations} will describe limitations on real-world evaluation of the proposed attack in our study.
\end{description}
\section{Building the cloud detector}\label{sec:training}
We followed Giuffrida \etal.~\cite{giuffrida2020cloudscout} to build a multispectral cloud detector suitable for satellite deployment.
\subsection{Dataset}\label{sec:cloud_detectors}
We employed the Cloud Mask Catalogue~\cite{francis_alistair_2020_4172871}, which contains cloud masks for 513 Sentinel-2A~\cite{2021sentinel-2} data cubes collected from a variety of geographical regions, each with 13 spectral bands and 20 m ground resolution (1024$\times$1024 pixels). Following Giuffrida \etal., who also used Sentinel-2A data, we applied the Level-1C processed version of the data, \ie, top-of-atmosphere reflectance data cubes. We further spatially divided the data into 2052 data (sub)cubes of 512$\times$512 pixels each.
To train the cloud detector model, the data cubes were assigned a binary label (\textit{cloudy} vs.~\textit{not cloudy}) by thresholding the number of cloud pixels in the cloud masks. Following Giuffrida \etal., two thresholds were used: 30\%, leading to dataset version TH30, and 70\%, leading to dataset version TH70 (the rationale will be described later). Each dataset was further divided into training, validation, and testing sets. Table~\ref{tab:cm_dataset} in the supp.~material summarises the datasets.
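For concreteness, the labelling rule can be sketched in a few lines of Python (an illustrative sketch; the function name and array conventions are ours, not those of~\cite{giuffrida2020cloudscout}):
\begin{python}
import numpy as np

def label_cube(cloud_mask: np.ndarray, threshold: float) -> int:
    """Assign a binary label to a data (sub)cube from its pixel-wise cloud mask.

    cloud_mask: (H, W) binary array, 1 = cloud pixel.
    threshold:  fraction of cloud pixels above which the cube is 'cloudy'
                (0.30 for TH30, 0.70 for TH70).
    """
    return int(cloud_mask.mean() > threshold)  # 1 = 'cloudy', 0 = 'not cloudy'
\end{python}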
\subsection{Model}
We employed the CNN of Giuffrida \etal., which contains four convolutional layers in the feature extraction layers and two fully connected layers in the decision layers (see Fig.~\ref{fig:cnn_model} in the supp.~material for more details). The model takes as input 3 of the 13 bands of Sentinel-2A: band 1 (coastal aerosol), band 2 (blue), and band 8 (NIR). These bands correspond to the cloud-sensitive wavelengths; see Fig.~\ref{fig:falsecolor} for a false colour image in these bands. Using only 3 bands also leads to a smaller CNN ($\le 5$ MB) which allows it to fit on the compute payload of CloudScout~\cite{giuffrida2020cloudscout}.
Calling the detector ``multispectral'' can be inaccurate given that only 3 bands are used. However, in Sec.~\ref{sec:mitigation}, we will investigate adversarial robustness by increasing the input bands and model parameters of Giuffrida \etal.'s model.
\subsection{Training}
Following~\cite{giuffrida2020cloudscout}, a two-stage training process was applied:
\begin{enumerate}[leftmargin=1em,itemsep=2pt,parsep=0pt,topsep=2pt]
\item Train on TH30 to allow the feature extraction layers to recognise ``cloud shapes''.
\item Then, train on TH70 to fine-tune the decision layers, while freezing the weights in the feature extraction layers.
\end{enumerate}
The two-stage training also compensates for the unbalanced distribution of training samples. Other specifications (\eg, learning rate and decay schedule, loss function) also follow those of Giuffrida \etal.; see~\cite{giuffrida2020cloudscout} for details.
Our trained model has a memory footprint of 4.93 MB (1,292,546 32-bit float weights, \ie, $1{,}292{,}546 \times 4$ bytes), and testing accuracy and false positive rate of 95.07\% and 2.46\%, respectively.
\section{Attacking the cloud detector}\label{sec:attacking}
Here, we describe our approach to optimising adversarial cubes to attack multispectral cloud detectors.
\subsection{Adversarial cube design}\label{sec:material_selection}
Digitally, an adversarial cube $\mathbf{P}$ is the tensor
\begin{equation*}
\mathbf{P} =
\begin{pmatrix}
\mathbf{p}_{1,1} & \mathbf{p}_{1,2} & \cdots & \mathbf{p}_{1,N} \\
\mathbf{p}_{2,1} & \mathbf{p}_{2,2} & \cdots & \mathbf{p}_{2,N} \\
\vdots & \vdots & \ddots & \vdots \\
\mathbf{p}_{M,1} & \mathbf{p}_{M,2} & \cdots & \mathbf{p}_{M,N}
\end{pmatrix} \in [0,1]^{M \times N \times 13},
\end{equation*}
where $M$ and $N$ (in pixels) are the sizes of the spatial dimensions, and $\mathbf{p}_{i,j} \in [0,1]^{13}$ is the intensity at pixel $(i,j)$ corresponding to the 13 multispectral bands of Sentinel-2A.
Physically, $\mathbf{P}$ is to be realised as an array of exterior paint mixtures (see Fig.~\ref{fig:colour_swatches}) that exhibit the multispectral responses to generate the attack. The real-world size of each pixel of $\mathbf{P}$ depends on the ground resolution of the satellite-borne multispectral imager (more on this in Sec.~\ref{sec:limitations}).
\subsubsection{Material selection and measurement}
To determine the appropriate paint mixtures for $\mathbf{P}$, we first build a library of multispectral responses of exterior paints. Eighty exterior paint swatches (see Fig.~\ref{fig:colour_swatches_real}) were procured and scanned with a Field Spec Pro 3 spectrometer~\cite{asd2008fieldspec3} to measure their reflectance (Fig.~\ref{fig:paint_reflectance}) under uniform illumination. To account for solar illumination when viewed from orbit, the spectral power distribution of sunlight (specifically, the AM1.5 Global Solar Spectrum~\cite{astm2003specification}; Fig.~\ref{fig:solar_spectrum}) was factored into our paint measurements via element-wise multiplication to produce the apparent reflectance (Fig.~\ref{fig:paint_apparent_reflectance}). Lastly, we converted the continuous spectral range of the apparent reflectance of a colour swatch to the 13 Sentinel-2A bands by averaging over the bandwidth of each band (Fig.~\ref{fig:paint_13bands}). The overall result is the matrix
\begin{align}
\mathbf{C} = \left[ \begin{matrix} \mathbf{c}_1, \mathbf{c}_2, \dots, \mathbf{c}_{80} \end{matrix} \right] \in [0,1]^{13 \times 80}
\end{align}
called the \emph{spectral index}, where $\mathbf{c}_q \in [0,1]^{13}$ contains the reflectance of the $q$-th colour swatch over the 13 bands.
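The construction of one column of $\mathbf{C}$ can be summarised in code as follows (a sketch under our assumptions: all spectra are resampled to a common wavelength grid, the band edges are supplied externally, and any normalisation to $[0,1]$ is omitted):
\begin{python}
import numpy as np

def swatch_to_bands(wavelengths, reflectance, solar_spd, band_ranges):
    """Convert a measured paint reflectance into one column c_q of the
    spectral index C.

    wavelengths: (K,) common sampling grid in nm.
    reflectance: (K,) spectrometer measurement of one colour swatch.
    solar_spd:   (K,) AM1.5 Global Solar Spectrum on the same grid.
    band_ranges: 13 pairs (low_nm, high_nm), one per Sentinel-2A band.
    """
    apparent = reflectance * solar_spd  # element-wise: apparent reflectance
    return np.array([apparent[(wavelengths >= lo) & (wavelengths <= hi)].mean()
                     for lo, hi in band_ranges])  # average over each bandwidth
\end{python}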
\begin{figure}[ht]
\centering
\includegraphics[width=1.0\columnwidth]{./figures/methods/colour_swatches_diagram.pdf}
\vspace{-2.0em}
\caption{The adversarial cube (digital size $4 \times 5$ pixels in the example) is to be physically realised as a mixture of exterior paint colours that generate the optimised multispectral responses.}
\label{fig:colour_swatches}
\end{figure}
\begin{figure}[ht]
\centering
\includegraphics[width=1.0\columnwidth]{./figures/methods/colour_swatches.pdf}
\vspace{-1.5em}
\caption{A subset of our colour swatches (paint samples).}
\label{fig:colour_swatches_real}
\end{figure}
\begin{figure*}[ht]\centering
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\textwidth]{./figures/methods/ybr_reflectance.pdf}
\caption{Reflectance of a colour swatch.}
\label{fig:paint_reflectance}
\end{subfigure}
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\textwidth]{./figures/methods/solar_spectrum.pdf}
\caption{AM1.5 Global Solar Spectrum.}
\label{fig:solar_spectrum}
\end{subfigure}
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\textwidth]{./figures/methods/ybr_apparent_reflectance.pdf}
\caption{Apparent reflectance of (a).}
\label{fig:paint_apparent_reflectance}
\end{subfigure}
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\textwidth]{./figures/methods/ybr_13bands.pdf}
\caption{13 Sentinel-2 bands of (c).}
\label{fig:paint_13bands}
\end{subfigure}
\vspace{-0.5em}
\caption{Process of obtaining the 13 Sentinel-2 spectral bands of a colour swatch.}
\label{fig:spectrometer}
\end{figure*}
\subsubsection{Adversarial cube parametrisation}
We obtain $\mathbf{p}_{i,j}$ as a linear combination of the spectral index
\begin{align}\label{eq:convex}
\mathbf{p}_{i,j} = \mathbf{C}\cdot \sigma(\mathbf{a}_{i,j}),
\end{align}
where $\mathbf{a}_{i,j}$ is the real vector
\begin{align}
\mathbf{a}_{i,j} = \left[ \begin{matrix} a_{i,j,1} & a_{i,j,2} & \dots & a_{i,j,80} \end{matrix} \right]^T \in \mathbb{R}^{80},
\end{align}
and $\sigma$ is the softmax function
\begin{align}
\sigma(\mathbf{a}_{i,j}) = \frac{1}{\sum^{80}_{d=1} e^{a_{i,j,d}}} \left[ \begin{matrix} e^{a_{i,j,1}} & \dots & e^{a_{i,j,80}} \end{matrix} \right]^T.
\end{align}
Effectively, $\mathbf{p}_{i,j}$~\eqref{eq:convex} is a convex combination of the columns of $\mathbf{C}$.
Defining each $\mathbf{p}_{i,j}$ as a linear combination of the columns of $\mathbf{C}$ supports the physical realisation of each $\mathbf{p}_{i,j}$ through proportional mixing of the existing paints, as in colour printing~\cite{sharma2017digital}. Restricting the combination to be convex, thereby placing each $\mathbf{p}_{i,j}$ in the convex hull of the columns of $\mathbf{C}$, contributes to the sparsity of the coefficients~\cite{caratheodory-theorem}. In Sec.~\ref{sec:opimisation}, we will introduce additional constraints to further enhance physical realisability.
To enable the optimal paint mixtures to be estimated, we collect the coefficients for all $(i,j)$ into the set
\begin{align}
\mathcal{A} = \{ \mathbf{a}_{i,j} \}^{j = 1,\dots,N}_{i=1,\dots,M},
\end{align}
and parametrise the adversarial cube as
\begin{equation*}
\mathbf{P}(\mathcal{A}) =
\begin{pmatrix}
\mathbf{C}\sigma(\mathbf{a}_{1,1}) & \mathbf{C}\sigma(\mathbf{a}_{1,2}) & \cdots & \mathbf{C}\sigma(\mathbf{a}_{1,N}) \\
\mathbf{C}\sigma(\mathbf{a}_{2,1}) & \mathbf{C}\sigma(\mathbf{a}_{2,2}) & \cdots & \mathbf{C}\sigma(\mathbf{a}_{2,N}) \\
\vdots & \vdots & \ddots & \vdots \\
\mathbf{C}\sigma(\mathbf{a}_{M,1}) & \mathbf{C}\sigma(\mathbf{a}_{M,2}) & \cdots & \mathbf{C}\sigma(\mathbf{a}_{M,N})
\end{pmatrix},
\end{equation*}
where $\mathbf{p}_{i,j}(\mathcal{A})$ is pixel $(i,j)$ of $\mathbf{P}(\mathcal{A})$. Optimising a cube thus reduces to estimating $\mathcal{A}$.
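A minimal PyTorch sketch of this parametrisation is given below (class and variable names are ours; the spectral index $\mathbf{C}$ is fixed, and the logits in $\mathcal{A}$ are the only trainable parameters):
\begin{python}
import torch

class AdversarialCube(torch.nn.Module):
    """P(A): one 80-dim logit vector per pixel, mapped by a softmax onto
    convex combinations of the columns of the spectral index C."""

    def __init__(self, C: torch.Tensor, M: int, N: int):
        super().__init__()
        self.register_buffer('C', C)  # (13, 80), fixed
        self.A = torch.nn.Parameter(torch.zeros(M, N, C.shape[1]))

    def forward(self) -> torch.Tensor:
        weights = torch.softmax(self.A, dim=-1)  # sigma(a_ij), convex coefficients
        return torch.einsum('bq,mnq->mnb', self.C, weights)  # (M, N, 13)
\end{python}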
\subsection{Data collection for cube optimisation}\label{sec:data_collection}
Based on the attacker's goals (Sec.~\ref{sec:threat_model}), we collected Sentinel-2A Level-1C data products~\cite{2021copernicus} over the globe with a distribution of surface types that resembles the Hollstein dataset~\cite{hollstein2016ready-to-use}. The downloaded data cubes were preprocessed following~\cite{francis_alistair_2020_4172871}, including spatial resampling to achieve a ground resolution of 20~m and size $512 \times 512 \times 13$. Sen2Cor~\cite{main-knorn2017sen2cor} was applied to produce probabilistic cloud masks, and a threshold of 0.35 was applied on the probabilities to decide \textit{cloudy} and \textit{not cloudy} pixels. The binary cloud masks were further thresholded with 70\% cloudiness (Sec.~\ref{sec:cloud_detectors}) to yield a single binary label for each data cube. The data cubes were then evaluated with the cloud detector trained in Sec.~\ref{sec:training}. Data cubes labelled \emph{not cloudy} by the detector were separated into training and testing sets
\begin{align}
\mathcal{D} = \{ \mathbf{D}_k \}^{2000}_{k=1}, \;\;\;\; \mathcal{E} = \{ \mathbf{E}_\ell \}^{400}_{\ell=1},
\end{align}
for adversarial cube training. One data cube $\mathbf{T} \in \mathcal{D}$ is chosen as the ROA (Sec.~\ref{sec:threat_model}).
\begin{figure*}[ht]\centering
\includegraphics[width=0.95\linewidth]{./figures/methods/pipeline.pdf}
\vspace{-0.5em}
\caption{Optimisation process for generating adversarial cubes.}
\label{fig:pipeline}
\end{figure*}
\subsection{Optimising adversarial cubes}\label{sec:patch}
We adapted Brown \etal's~\cite{brown2017adversarial} method, originally developed for optimising adversarial patches (visible domain). Fig.~\ref{fig:pipeline} summarises our pipeline for adversarial cube optimisation, with details provided in the rest of this subsection.
\vspace{-1em}
\paragraph{Subcubes}
First, we introduce the subcube notation. Let $b \subseteq \{1,2,\dots,13\}$ index a subset of the Sentinel-2A bands. Using $b$ in the superscript of a data cube, \eg, $\mathbf{P}^{b}$, implies extracting the subcube of $\mathbf{P}$ with the bands indexed by $b$. Of particular interest are the following two band subsets:
\begin{itemize}[leftmargin=1em,itemsep=2pt,parsep=0pt,topsep=2pt]
\item $c = \{1, 2, 8\}$, \ie, the cloud sensitive bands used in~\cite{giuffrida2020cloudscout}.
\item $v = \{2, 3, 4\}$, \ie, the visible bands.
\end{itemize}
\subsubsection{Cube embedding and augmentations}\label{sec:augmentations}
Given the current $\mathcal{A}$, adversarial cube $\mathbf{P}(\mathcal{A})$ is embedded into a training data cube $\mathbf{D}_k$ through several geometric and spectral intensity augmentations that simulate the appearance of the adversarial cube when captured in the field by a satellite. The geometric augmentations include random rotations and positioning to simulate variations in placement of $\mathbf{P}(\mathcal{A})$ in the scene. The spectral intensity augmentations include random additive noise, scaling and corruption to simulate perturbation by ambient lighting.
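A simplified embedding step could look as follows (the exact augmentation magnitudes and rotation model are not specified here, so the values below are placeholders):
\begin{python}
import torch

def embed_cube(D: torch.Tensor, P: torch.Tensor) -> torch.Tensor:
    """Embed cube P (M, N, 13) into data cube D (512, 512, 13) with random
    rotation, random placement, and random spectral-intensity perturbations."""
    P = torch.rot90(P, k=int(torch.randint(0, 4, (1,))), dims=(0, 1))
    P = (P * (1.0 + 0.1 * torch.randn(1))           # random intensity scaling
         + 0.02 * torch.randn_like(P)).clamp(0, 1)  # additive noise
    M, N = P.shape[0], P.shape[1]
    i = int(torch.randint(0, D.shape[0] - M + 1, (1,)))
    j = int(torch.randint(0, D.shape[1] - N + 1, (1,)))
    D = D.clone()
    D[i:i + M, j:j + N, :] = P  # gradients still flow into P
    return D
\end{python}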
\subsubsection{Loss function and optimisation}\label{sec:opimisation}
Define $\mathbf{D}_k(\mathcal{A})$ as the training data cube $\mathbf{D}_k$ embedded with $\mathbf{P}(\mathcal{A})$ (with the augmentations described in Sec.~\ref{sec:augmentations}). The data cube is forward propagated through the cloud detector $f$ to estimate the \emph{confidence}
\begin{align}
\hat{y}_k = f(\mathbf{D}^c_k(\mathcal{A}))
\end{align}
of $\mathbf{D}_k(\mathcal{A})$ being in the \emph{cloudy} class. Note that the cloud detector considers only the subcube $\mathbf{D}^c_k(\mathcal{A})$ corresponding to the cloud-sensitive bands. Since we aim to bias the detector to assign high $\hat{y}_k$ to $\mathbf{D}_k(\mathcal{A})$, we construct the loss
\begin{align}\label{eq:loss}
\Psi(\mathcal{A},\mathcal{D}) = \sum_k -\log(f(\mathbf{D}^c_k(\mathcal{A}))).
\end{align}
In addition to constraining the spectral intensities in $\mathbf{P}(\mathcal{A})$ to be in the convex hull of $\mathbf{C}$, we also introduce the multispectral non-printability score (NPS)
\begin{align}\label{eq:nps_loss}
\Phi(\mathcal{A}, \mathbf{C}) = \frac{1}{M N} \sum_{i,j} \left( \min_{\mathbf{c} \in \mathbf{C}} \left\| \mathbf{p}_{i,j}(\mathcal{A}) - \mathbf{c}\right\|_2 \right).
\end{align}
Minimising $\Phi$ encourages each $\mathbf{p}_{i,j}(\mathcal{A})$ to be close to (one of) the measurements in $\mathbf{C}$, which sparsifies the coefficients $\sigma(\mathbf{a}_{i,j})$ and helps with the physical realisability of $\mathbf{P}(\mathcal{A})$. The multispectral NPS is an extension of the original NPS for optimising (visible domain) adversarial patches~\cite{sharif2016accessorize}.
To produce an adversarial cube that is ``cloaked'' in the visible domain in the ROA defined by $\mathbf{T}$, we devise the term
\begin{align}\label{eq:cloaking_loss}
\Omega(\mathcal{A}, \mathbf{T}) = \left\| \mathbf{P}^{v}(\mathcal{A}) - \mathbf{T}^v_{M \times N} \right\|_2,
\end{align}
where $\mathbf{T}^v_{M \times N}$ is a randomly cropped subcube of spatial height $M$ and width $N$ in the visible bands $\mathbf{T}^v$ of $\mathbf{T}$.
The overall loss is thus
\begin{equation}
L(\mathcal{A}) = \underbrace{\Psi(\mathcal{A},\mathcal{D})}_{\textrm{cloud sensitive}} + \alpha\cdot \underbrace{\Phi(\mathcal{A}, \mathbf{C})}_{\textrm{multispectral}} + \beta \cdot \underbrace{\Omega(\mathcal{A}, \mathbf{T})}_{\textrm{visible domain}}, \label{eq:overall_loss}
\end{equation}
where weights $\alpha, \beta \ge 0$ control the relative importance of the terms. Notice that the loss incorporates multiple objectives across different parts of the spectrum.
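Putting the three terms together, the loss can be sketched as follows (reusing the embed\_cube sketch from Sec.~\ref{sec:augmentations}; band indices are 0-based, and the detector $f$ is assumed to return the \emph{cloudy} confidence):
\begin{python}
import torch

def overall_loss(cube, D_batch, f, C, T_crop,
                 c=(0, 1, 7), v=(1, 2, 3), alpha=5.0, beta=0.05):
    """L(A) = Psi + alpha * Phi + beta * Omega, with names as in the text.

    cube:   AdversarialCube module returning P(A) of shape (M, N, 13).
    C:      (13, 80) spectral index; T_crop: (M, N, 3) visible-band ROA crop.
    """
    P = cube()
    # Psi: bias the detector towards 'cloudy' in the cloud-sensitive bands
    psi = sum(-torch.log(f(embed_cube(D, P)[..., list(c)])) for D in D_batch)
    # Phi: multispectral non-printability score
    dists = torch.cdist(P.reshape(-1, 13), C.t())  # distances to the 80 paints
    phi = dists.min(dim=1).values.mean()
    # Omega: visual cloaking against the ROA crop in the visible bands
    omega = torch.norm(P[..., list(v)] - T_crop)
    return psi + alpha * phi + beta * omega
\end{python}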
\vspace{-1em}
\paragraph{Optimisation}
Minimising $L$ with respect to $\mathcal{A}$ is achieved using the Adam~\cite{kingma2014adam} stochastic optimisation algorithm. Note that the pre-trained cloud detector $f$ is not updated.
\vspace{-1em}
\paragraph{Parameter settings}
See Sec.~\ref{sec:results}.
\subsection{Limitations on real-world testing}\label{sec:limitations}
While our adversarial cube is optimised to be physically realisable, two major constraints prevent physical testing:
\begin{itemize}[leftmargin=1em,itemsep=2pt,parsep=0pt,topsep=2pt]
\item Lack of precise knowledge of and control over the operation of a real satellite makes it difficult to perform coordinated EO data capture with the adversarial cube.
\item Cube dimensions of about 100$\times$100 pixels are required for effective attacks, which translates to 2 km$\times$2 km = 4 km$^2$ ground size (based on the ground resolution of the data; see Sec.~\ref{sec:data_collection}). This prevents full scale fabrication on an academic budget. However, the size of the cube is well within the realm of possibility, \eg, solar farms and airports can be much larger than $4$ km$^2$~\cite{ong2013land}.
\end{itemize}
We thus focus on evaluating our attack in the digital domain, with real-world testing left as future work.
\section{Measuring effectiveness of attacks}\label{sec:metrics}
Let $\mathbf{P}^\ast = \mathbf{P}(\mathcal{A}^\ast)$ be the adversarial cube optimised by our method (Sec.~\ref{sec:attacking}). Recall from Sec.~\ref{sec:data_collection} that both datasets $\mathcal{D}$ and $\mathcal{E}$ contain \emph{non-cloudy} data cubes. We measure the effectiveness of $\mathbf{P}^\ast$ on the training set $\mathcal{D}$ via two metrics:
\begin{itemize}[leftmargin=1em,itemsep=2pt,parsep=0pt,topsep=2pt]
\item Detection accuracy of the pretrained cloud detector $f$ (Sec.~\ref{sec:training}) on $\mathcal{D}$ embedded with $\mathbf{P}^\ast$, i.e.,
\begin{equation}\label{eq:accuracy}
\text{Accuracy}({\mathcal{D}}) \triangleq
\frac{1}{|\mathcal{D}|}
\sum^{|\mathcal{D}|}_{k=1} \mathbb{I}(f(\mathbf{D}^c_k(\mathcal{A}^\ast)) \le 0.5),
\end{equation}
where the lower the accuracy, the less often $f$ predicted the correct class label (\emph{non-cloudy}, based on confidence threshold $0.5$), hence the more effective the $\mathbf{P}^\ast$.
\item Average confidence of the pretrained cloud detector $f$ (Sec.~\ref{sec:training}) on $\mathcal{D}$ embedded with $\mathbf{P}^\ast$, i.e.,
\begin{equation}\label{eq:average_probability}
\text{Cloudy}({\mathcal{D}}) \triangleq
\frac{1}{|\mathcal{D}|}
\sum^{|\mathcal{D}|}_{k=1} f(\mathbf{D}^c_k(\mathcal{A}^\ast)).
\end{equation}
The higher the average confidence, the more effective the $\mathbf{P}^\ast$.
\end{itemize}
To obtain the effectiveness measures on the testing set $\mathcal{E}$, simply swap $\mathcal{D}$ in the above with $\mathcal{E}$.
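Both metrics translate directly into code (a sketch consistent with the equations above, reusing the earlier embed\_cube and cube sketches):
\begin{python}
import torch

@torch.no_grad()
def attack_metrics(dataset, cube, f, c=(0, 1, 7)):
    """Accuracy and Cloudy on a set of non-cloudy data cubes embedded
    with the optimised cube P*."""
    P = cube()
    conf = torch.stack([f(embed_cube(D, P)[..., list(c)]) for D in dataset])
    accuracy = (conf <= 0.5).float().mean()  # fraction still labelled non-cloudy
    cloudy = conf.mean()                     # average 'cloudy' confidence
    return accuracy.item(), cloudy.item()
\end{python}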
\section{Results}\label{sec:results}
We optimised adversarial cubes of size 100$\times$100 pixels on $\mathcal{D}$ (data cubes of 512$\times$512 spatial dimension) under different loss configurations and evaluated them digitally (see Sec.~\ref{sec:limitations} on obstacles to real-world testing). Then, we investigated different cube designs and mitigation strategies for our attack.
\subsection{Ablation tests}\label{sec:ablation}
Based on the data collected, we optimised adversarial cubes under different combinations of loss terms:
\begin{itemize}[leftmargin=1em,itemsep=2pt,parsep=0pt,topsep=2pt]
\item $\Psi$: Adversarial biasing in the cloud-sensitive bands~\eqref{eq:loss}.
\item $\Phi$: Multispectral NPS~\eqref{eq:nps_loss}.
\item $\Omega$-Hills: Cloaking~\eqref{eq:cloaking_loss} with $\mathbf{T}$ as Hills (Fig.~\ref{fig:hills}).
\item $\Omega$-Desert: Cloaking~\eqref{eq:cloaking_loss} with $\mathbf{T}$ as Desert (Fig.~\ref{fig:desert}).
\end{itemize}
The weights in~\eqref{eq:overall_loss} were empirically determined to be $\alpha = 5.0$ and $\beta = 0.05$.
\vspace{-1em}
\paragraph{Convex hull and NPS}
Fig.~\ref{fig:cubes_hull} shows the optimised cubes $\mathbf{P}^\ast$ and their individual spectral intensities $\mathbf{p}^\ast_{i,j}$ in the cloud-sensitive bands (false colour) and visible domain. Note that without the convex hull constraints, the intensities (green points) are scattered quite uniformly, which complicates physical realisability of the paint mixtures. The convex hull constraints predictably limit the mixtures to be in the convex hull of $\mathbf{C}$. Carath{\'e}odory's Theorem~\cite{caratheodory-theorem} ensures that each $\mathbf{p}^\ast_{i,j}$ can be obtained by mixing at most 13 exterior paints. In addition, the multispectral NPS term encourages the mixtures to cluster closely around the columns of $\mathbf{C}$ (red points), \ie, close to an existing exterior paint colour.
\vspace{-1em}
\paragraph{Visual camouflage}
Fig.~\ref{fig:cubes_loss_images} illustrates optimised cubes $\mathbf{P}^\ast$ embedded in the ROA Hills and Desert, with and without including the cloaking term~\eqref{eq:cloaking_loss} in the loss function. Evidently the cubes optimised with $\Omega$ are less perceptible.
\vspace{-1em}
\paragraph{Effectiveness of attacks}
Table~\ref{tab:result_loss} shows quantitative results on attack effectiveness (in terms of the metrics in Sec.~\ref{sec:metrics}) on the training $\mathcal{D}$ and testing $\mathcal{E}$ sets---again, recall that these datasets contain only \emph{non-cloudy} data cubes. The results show that the optimised cubes are able to strongly bias the pretrained cloud detector, by lowering the accuracy by at least $63\%$ (1.00 to 0.37) and increasing the cloud confidence by more than $1000\%$ (0.05 to 0.61). The figures also indicate the compromise an attacker would need to make between attack effectiveness, physical realisability, and visual imperceptibility of the cube.
\begin{table}[ht]
\setlength\tabcolsep{1pt}
\centering
\begin{tabular}{p{4.0cm} | p{1.0cm} p{1.0cm} | p{1.0cm} p{1.0cm}}
\rowcolor{black} & \multicolumn{2}{l |}{\textcolor{white}{\textbf{Accuracy}}} & \multicolumn{2}{l}{\textcolor{white}{\textbf{Cloudy}}} \\
\hline
\textbf{Loss functions} & \textbf{Train} & \textbf{Test} & \textbf{Train} & \textbf{Test} \\
\hline
- (no adv.~cubes) & 1.00 & 1.00 & 0.05 & 0.05 \\
$\Psi$ (no convex hull constr.) & 0.04 & 0.03 & 0.95 & 0.95 \\
$\Psi$ & 0.13 & 0.12 & 0.81 & 0.83 \\
$\Psi + \alpha\Phi$ & 0.22 & 0.19 & 0.73 & 0.75 \\
$\Psi + \beta\Omega$-Hills & 0.17 & 0.14 & 0.77 & 0.80 \\
$\Psi + \beta\Omega$-Desert & 0.23 & 0.25 & 0.72 & 0.73 \\
$\Psi + \alpha\Phi + \beta\Omega$-Hills & 0.25 & 0.28 & 0.71 & 0.70 \\
$\Psi + \alpha\Phi + \beta\Omega$-Desert & 0.37 & 0.37 & 0.61 & 0.61 \\
\end{tabular}
\vspace{-0.5em}
\caption{Effectiveness of 100$\times$100 adversarial cubes optimised under different loss configurations (Sec.~\ref{sec:ablation}). Lower accuracy = more effective attack. Higher cloud confidence = more effective attack.}
\label{tab:result_loss}
\end{table}
\begin{figure*}[ht]\centering
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[width=\textwidth]{./figures/results/hull/log_nohull.pdf}
\caption{$L = \Psi$ (without convex hull constraints).}
\end{subfigure}
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[width=\textwidth]{./figures/results/hull/log_hull.pdf}
\caption{$L = \Psi$.}
\end{subfigure}
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[width=\textwidth]{./figures/results/hull/log+nps_hull.pdf}
\caption{$L = \Psi + \alpha \cdot \Phi$.}
\end{subfigure}
\vspace{-0.5em}
\caption{Effects of convex hull constraints and multispectral NPS on optimised cube $\mathbf{P}^\ast$. The top row shows the cube and individual pixels $\mathbf{p}^\ast_{i,j}$ (green points) in the visible bands $v$, while the bottom row shows the equivalent values in the cloud sensitive bands $c$ (in false colour). In the 3-dimensional plots, the red points indicate the columns of the spectral index $\mathbf{C}$ and black lines its convex hull.}
\label{fig:cubes_hull}
\end{figure*}
\begin{figure}[ht]\centering
\begin{subfigure}[b]{0.23\textwidth}
\includegraphics[width=\textwidth]{./figures/results/loss/not_camo_hills.pdf}
\caption{$L = \Psi + \alpha \Phi$.}
\end{subfigure}
\begin{subfigure}[b]{0.23\textwidth}
\includegraphics[width=\textwidth]{./figures/results/loss/camo_hills.pdf}
\caption{$L = \Psi + \alpha \Phi + \beta \Omega$-$\textrm{Hills}$.}
\end{subfigure}
\begin{subfigure}[b]{0.23\textwidth}
\includegraphics[width=\textwidth]{./figures/results/loss/not_camo_desert.pdf}
\caption{$L = \Psi + \alpha \Phi$.}
\end{subfigure}
\begin{subfigure}[b]{0.23\textwidth}
\includegraphics[width=\textwidth]{./figures/results/loss/camo_desert.pdf}
\caption{$L = \Psi + \alpha \Phi + \beta \Omega$-$\textrm{Desert}$.}
\end{subfigure}
\vspace{-0.5em}
\caption{Optimised cubes $\mathbf{P}^\ast$ shown in the visible domain $v$ with and without the cloaking term~\eqref{eq:cloaking_loss}.}
\label{fig:cubes_loss_images}
\end{figure}
\subsection{Different cube configurations}\label{sec:multcube}
Can the physical footprint of the adversarial cube be reduced to facilitate real-world testing? To answer this question, we resized $\mathbf{P}$ to 50$\times$50 pixels and optimised several of them (4 or 6) instead. We also tested random configurations with low and high proximity amongst the cubes. The training pipeline for the multi-cube setting remains largely the same. Fig.~\ref{fig:cubes_config_images} shows (in visible domain) the optimised resized cubes embedded in a testing data cube.
Quantitative results on the effectiveness of the attacks are given in Table~\ref{tab:result_cubeconfig}. Unfortunately, the results show a significant drop in attack effectiveness when compared against the 100$\times$100 cube on all loss configurations. This suggests that the size and spatial continuity of the adversarial cube are important factors to the attack.
\begin{table}[ht]
\setlength\tabcolsep{1pt}
\centering
\begin{tabular}{p{0.7cm} | p{1.50cm} | p{1.80cm} | p{1.0cm} p{1.0cm} | p{1.0cm} p{1.0cm}}
\rowcolor{black} \multicolumn{3}{l |}{\textcolor{white}{\textbf{Cube configurations}}} & \multicolumn{2}{l |}{\textcolor{white}{\textbf{Accuracy}}} & \multicolumn{2}{l}{\textcolor{white}{\textbf{Cloudy}}} \\
\hline
\textbf{\#} & \textbf{Size} & \textbf{Proximity} & \textbf{Train} & \textbf{Test} & \textbf{Train} & \textbf{Test} \\
\hline
\multicolumn{3}{l |}{- (no adv.~cubes)} & 1.00 & 1.00 & 0.05 & 0.05 \\
4 & 50$\times$50 & Low & 0.87 & 0.87 & 0.26 & 0.27 \\ %
6 & 50$\times$50 & Low & 0.71 & 0.72 & 0.33 & 0.33 \\
4 & 50$\times$50 & High & 0.63 & 0.62 & 0.42 & 0.44 \\
6 & 50$\times$50 & High & 0.63 & 0.63 & 0.40 & 0.41 \\
\end{tabular}
\vspace{-0.5em}
\caption{Effectiveness of 50$\times$50 adversarial cubes under different cube configurations (Sec.~\ref{sec:multcube}) optimised with loss $L = \Psi + \alpha\Phi$. Lower accuracy = more effective attack. Higher cloud confidence = more effective attack. Compare with single 100$\times$100 adversarial cube results in Table~\ref{tab:result_loss}.}
\label{tab:result_cubeconfig}
\end{table}
\begin{figure}[ht]\centering
\begin{subfigure}[b]{0.23\textwidth}
\includegraphics[width=\textwidth]{./figures/results/config/four_random.pdf}
\caption{Four 50$\times$50 cubes (low prox).}
\end{subfigure}
\begin{subfigure}[b]{0.23\textwidth}
\includegraphics[width=\textwidth]{./figures/results/config/six_random.pdf}
\caption{Six 50$\times$50 cubes (low prox).}
\end{subfigure}
\begin{subfigure}[b]{0.23\textwidth}
\includegraphics[width=\textwidth]{./figures/results/config/four_fixed.pdf}
\caption{Four 50$\times$50 cubes (high prox).}
\end{subfigure}
\begin{subfigure}[b]{0.23\textwidth}
\includegraphics[width=\textwidth]{./figures/results/config/six_fixed.pdf}
\caption{Six 50$\times$50 cubes (high prox).}
\end{subfigure}
\vspace{-0.5em}
\caption{Optimised cubes $\mathbf{P}^\ast$ shown in the visible domain $v$ of different cube configurations.}
\label{fig:cubes_config_images}
\end{figure}
\subsection{Mitigation strategies}\label{sec:mitigation}
We investigated several mitigation strategies against our adversarial attack:
\begin{itemize}[leftmargin=1em,itemsep=2pt,parsep=0pt,topsep=2pt]
\item 13 bands: Increasing the number of input bands of the cloud detector from 3 to 13 (all Sentinel-2A bands);
\item $\sqrt{2}$: Doubling the model size of the cloud detector by increasing the number of filters/kernels in the convolutional layers and activations in the fully connected layers by $\sqrt{2}$;
\item $2\times$ CONV: Doubling the model size of the cloud detector by adding two additional convolutional layers.
\end{itemize}
Table~\ref{tab:result_mitigations} shows that using a ``larger'' detector (in terms of the number of input channels and layers) yielded slightly worse cloud detection accuracy. However, increasing the number of input bands significantly reduced our attack effectiveness, possibly due to the increased difficulty of biasing all 13 channels simultaneously. This argues for using greater satellite-borne compute payloads than that of~\cite{giuffrida2020cloudscout}.
\begin{table}[ht]
\setlength\tabcolsep{1pt}
\centering
\begin{tabular}{p{1.5cm} | p{2.5cm} | p{1.0cm} p{1.0cm} | p{1.0cm} p{1.0cm}}
\rowcolor{black} & & \multicolumn{2}{l |} {\textcolor{white}{\textbf{Accuracy}}} & \multicolumn{2}{l} {\textcolor{white}{\textbf{Cloudy}}} \\
\hline
\textbf{Detectors} & \textbf{Loss functions} & \textbf{Train} & \textbf{Test} & \textbf{Train} & \textbf{Test} \\
\hline
13 bands & - (no adv.~cubes) & 1.00 & 1.00 & 0.06 & 0.06 \\
& $\Psi + \alpha\Phi$ & 0.94 & 0.96 & 0.15 & 0.14 \\
\hline
$\sqrt{2}$ & - (no adv.~cubes) & 1.00 & 1.00 & 0.08 & 0.08 \\
& $\Psi + \alpha\Phi$ & 0.36 & 0.38 & 0.62 & 0.60 \\
\hline
$2\times$CONV & - (no adv.~cubes) & 1.00 & 1.00 & 0.08 & 0.08 \\
& $\Psi + \alpha\Phi$ & 0.26 & 0.25 & 0.74 & 0.73 \\
\end{tabular}
\vspace{-0.75em}
\caption{Effectiveness of 100$\times$100 adversarial cubes optimised for different cloud detector designs (Sec.~\ref{sec:mitigation}). Lower accuracy = more effective attack. Higher cloud confidence = more effective attack. Compare with single 100$\times$100 adversarial cube results in Table~\ref{tab:result_loss}.}
\label{tab:result_mitigations}
\end{table}
\section{Conclusions and limitations}\label{sec:conclusion}
We proposed a physical adversarial attack against a satellite-borne multispectral cloud detector. Our attack is based on optimising exterior paint mixtures that exhibit the required spectral signatures to bias the cloud detector. Evaluation in the digital domain illustrates the realistic threat of the attack, though the simple mitigation strategy of using all input multispectral bands seems to offer good protection.
As detailed in Sec.~\ref{sec:limitations}, our work is limited to digital evaluation due to several obstacles. Real-world testing of our attack and defence strategies will be left as future work.
\vfill
\section{Usage of existing assets and code release}
The results in this paper were partly produced from ESA remote sensing data, as accessed through the Copernicus Open Access Hub~\cite{2021copernicus}. Source code and/or data used in our paper will be released subject to securing permission.
\vfill
\section*{Acknowledgements}\label{sec:acknowledgement}
Tat-Jun Chin is SmartSat CRC Professorial Chair of Sentient Satellites.
{\small
\bibliographystyle{ieee_fullname}
\section{Preface}
\label{s_preface}
This paper primarily serves as a reference for my Ph.D. dissertation, which I am currently writing.
As a consequence, the framework is not under active development.
The presented concepts, problems, and solutions may be interesting regardless, even for other problems than Neural Architecture Search (NAS).
The framework's name, UniNAS, is a wordplay on University and Unified NAS, since the framework was intended to incorporate almost any architecture search approach.
\section{Introduction and Related Work}
\label{s_introduction}
An increasing supply and demand for automated machine learning causes the amount of published code to grow by the day. Although advantageous, the benefit of such code is often impaired by many technical nitpicks.
This section lists common code bases and some of their disadvantages.
\subsection{Available NAS frameworks}
\label{u_introduction_available}
The landscape of NAS codebases is severely fragmented, owing to the vast differences between various NAS methods and the deep-learning libraries used to implement them.
Some of the best supported or most widely known ones are:
\begin{itemize}
\setlength{\itemsep}{0pt}
\setlength{\parskip}{0pt}
\setlength{\parsep}{0pt}
\item {NASLib~\citep{naslib2020}}
\item {
Microsoft NNI \citep{ms_nni} and Archai \citep{ms_archai}
}
\item {
Huawei Noah Vega \citep{vega}
}
\item {
Google TuNAS \citep{google_tunas} and PyGlove \citep{pyglove} (closed source)
}
\end{itemize}
Counterintuitively, the overwhelming majority of publicly available NAS code is not based on any such framework or service, but on simple and typical network training code.
Such code is generally quick to implement but lacks exact comparability, scalability, and configuration power, which may be a secondary concern for many researchers.
In addition, since the official code is often released late or never, and generally only in either TensorFlow~\citep{tensorflow2015-whitepaper} or PyTorch~\citep{pytorch},
popular methods are sometimes re-implemented by some third-party repositories.
Further projects include the newly available and closed-source cloud services by, e.g., Google\footnote{\url{https://cloud.google.com/automl/}}
and Microsoft\footnote{\url{https://www.microsoft.com/en-us/research/project/automl/}}. Since they require very little user knowledge in addition to the training data, they are excellent for deep learning in industrial environments.
\subsection{Common disadvantages of code bases}
\label{u_introduction_disadvantages}
With so many frameworks available, why start another one?
The development of UniNAS started in early 2020, before most of these frameworks arrived at their current feature availability or were even made public.
In addition, the frameworks rarely provide current state-of-the-art methods even now and sometimes lack the flexibility to include them easily.
Further problems that UniNAS aims to solve are detailed below:
\paragraph{Research code is rigid}
The majority of published NAS code is very simplistic.
While that is an advantage for extracting important method-related details, the ability to reuse the available code in another context is severely impaired.
Almost all details are hard-coded, such as:
\begin{itemize}
\setlength{\itemsep}{0pt}
\setlength{\parskip}{0pt}
\setlength{\parsep}{0pt}
\item {
the used gradient optimizer and learning rate schedule
}
\item {
the architecture search space, including candidate operations and network topology
}
\item {
the data set and its augmentations
}
\item {
weight initialization and regularization techniques
}
\item {
the used hardware device(s) for training
}
\item {
most hyper-parameters
}
\end{itemize}
This inflexibility is sometimes accompanied by the redundancy of several code pieces that differ slightly for different experiments or phases in NAS methods.
Redundancy is a fine way to introduce subtle bugs or inconsistencies and also makes the code confusing to follow.
Hard-coded details are also easy to forget, which is especially crucial in research where reproducibility depends strongly on seemingly unimportant details.
Finally, if any of the hard-coded components is ever changed, such as the optimizer, the configurations of previous experiments become misleading: since the hard-coded details are not part of the documented configuration, earlier results can no longer be interpreted correctly.
\paragraph{A configuration clutter}
In contrast to such simplistic single-purpose code, frameworks usually offer a variety of optimizers, schedules, search spaces, and more to choose from.
By configuring the related hyper-parameters, an optimizer can be trivially and safely exchanged for another. Since doing so is a conscious and intended choice, it is also documented in the configuration. In contrast, the replacement of hard-coded classes was not intended when the code was initially written.
The disadvantage of this approach comes with the wealth of configurable hyper-parameters, in different ways:
Firstly, the parametrization is often cluttered.
While implementing more classes (such as optimizers or schedules) adds flexibility, the list of available hyper-parameters becomes increasingly bloated and opaque.
The wealth of parametrization is intimidating and impractical since it is often nontrivial to understand exactly which hyper-parameters are used and which are ineffective.
As an example, the widely used PyTorch Image Models framework~\citep{rw2019timm} (the example was chosen due to the popularity of the framework, it is no worse than others in this respect) implements an intimidating mix of regularization and data augmentation settings that are partially exclusive.\footnote{\url{https://github.com/rwightman/pytorch-image-models/blob/ba65dfe2c6681404f35a9409f802aba2a226b761/train.py}, checked Dec. 1st 2021; see lines 177 and below.}
Secondly, to reduce the clutter, parameters can be used by multiple mutually exclusive choices.
In the case of the aforementioned PyTorch Image Models framework, one example would be the selection of gradient-descent optimizers.
Sharing common parameters such as the learning rate and the momentum generally works well, but can be confusing since, once again, finding out which parameters affect which modules necessitates reading the code or documentation.
Thirdly, even with an intimidating wealth of configuration choices, not every option is covered. To simplify and reduce the clutter, many settings of lesser importance always use a sensible default value.
If changing such a parameter becomes necessary, either the framework configuration becomes more cluttered, or changing the hard-coded default value again renders the configurations of previous experiments misleading.
To summarize, the hyper-parametrization design of a framework is a delicate decision, aiming to be complete but not cluttered.
While both extremes appear to be mutually exclusive, they can be successfully united with the underlying configuration approach of UniNAS: argument trees.
\paragraph{}
Nonetheless, it is great if code is available at all.
Many methods are published without any code that enables verifying their training or search results, impairing their reproducibility.
Additionally, even if code is overly simplistic or accompanied by cluttered configurations, reading it is often the best way to clarify a method's exact workings and obtain detailed information about omitted hyper-parameter choices.
\section{Argument trees}
\label{u_argtrees}
The core design philosophy of UniNAS is built on so-called \textit{argument trees}.
This concept solves the problems of Section~\ref{u_introduction_disadvantages} while also providing immense configuration flexibility.
As its basis, we observe that any algorithm or code piece can be represented hierarchically.
For example, the task to train a network requires the network itself and a training loop, which may use callbacks and logging functions.
Sections~\ref{u_argtrees_modularity} and~\ref{u_argtrees_register} briefly explain two requirements: strict modularity and a global register.
As described in Section~\ref{u_argtrees_tree}, this allows each module to define which other types of modules are needed. In the previous example, a training loop may use callbacks and logging functions.
Sections~\ref{u_argtrees_config} and~\ref{u_argtrees_build} explain how a configuration file can fully detail these relationships and how the desired code class structure can be generated.
Finally, Section~\ref{u_argtrees_gui} shows how a configuration file can be easily manipulated with a graphical user interface, allowing the user to create and change complex experiments without writing a single line of code.
\subsection{Modularity}
\label{u_argtrees_modularity}
As practiced in most non-simplistic codebases, the core of the argument tree structure is strong modularity.
The framework code is fragmented into different components with clearly defined purposes, such as training loops and datasets.
Exchanging modules of the same type for one another, for example gradient-descent optimizers, is then a simple matter.
If all implemented code classes of the same type inherit from one base class (e.g., AbstractOptimizer) that guarantees specific class methods for a stable interaction, they can be treated equally. In object-oriented programming, this design is termed polymorphism.
UniNAS extends typical PyTorch~\citep{pytorch} classes with additional functionality.
An example is image classification data sets, which ordinarily do not contain information about image sizes. Adding this specification makes it possible to use fake data easily and to precompute the tensor shapes in every layer throughout the neural network.
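The following sketch illustrates the idea (a minimal example; the actual UniNAS base classes define considerably richer interfaces):
\begin{python}
from abc import ABC, abstractmethod

class AbstractOptimizer(ABC):
    """Base class guaranteeing the methods that every optimizer must
    provide, so that implementations can be exchanged for one another."""

    @abstractmethod
    def step(self) -> None:
        """Apply one update step to the managed parameters."""

    @abstractmethod
    def zero_grad(self) -> None:
        """Reset the accumulated gradients."""
\end{python}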
\begin{figure*}[ht]
\hfill
\begin{minipage}[c]{0.97\textwidth}
\begin{python}
@Register.task(search=True)
class SingleSearchTask(SingleTask):
@classmethod
def args_to_add(cls, index=None) -> [Argument]:
return [
Argument('is_test_run', default='False', type=str, is_bool=True),
Argument('seed', default=0, type=int),
Argument('save_dir', default='{path_tmp}', type=str),
]
@classmethod
def meta_args_to_add(cls) -> [MetaArgument]:
methods = Register.methods.filter_match_all(search=True)
return [
MetaArgument('cls_device', Register.devices_managers, num=1),
MetaArgument('cls_trainer', Register.trainers, num=1),
MetaArgument('cls_method', methods, num=1),
]
\end{python}
\end{minipage}
\vskip-0.3cm
\caption{
UniNAS code excerpt for a SingleSearchTask. The decorator function in Line~1 registers the class with type ''task'' and additional information.
The method in Line~5 returns all arguments for the task to be set in a config file.
The method in Line~13 defines the local tree structure by stating how many modules of which types are needed. It is also possible to specify additional requirements, as done in Line~14.
}
\label{u_fig_register}
\end{figure*}
\subsection{A global register}
\label{u_argtrees_register}
A second requirement for argument trees is a global register for all modules. Its functions are:
\begin{itemize}
\setlength{\itemsep}{0pt}
\setlength{\parskip}{0pt}
\setlength{\parsep}{0pt}
\item {
Allow any module to register itself with additional information about its purpose. The example code in Figure~\ref{u_fig_register} shows this in Line~1.
}
\item {
List all registered classes, including their type (task, model, optimizer, data set, and more) and their additional information (search, regression, and more).
}
\item {
Filter registered classes by types and matching information.
}
\item {
Given only the name of a registered module, return the class code located anywhere in the framework's files.
}
\end{itemize}
As seen in the following Sections, this functionality is indispensable to UniNAS' design.
The only difficulties in building such a register are that the code should remain readable and that every module has to register itself when the framework is used.
Both can be achieved by scanning through all code files whenever a new job starts, which takes less than five seconds.
Python executes the decorators (see Figure~\ref{u_fig_register}, Line~1) by doing so, which handle registration in an easily readable fashion.
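A stripped-down version of such a register might look as follows (illustrative only; the real implementation covers many more module types and supports filtering by the registered information):
\begin{python}
class Register:
    """Global register: decorators store classes by type, together with
    arbitrary key-value information used for filtering."""
    _by_type = {}  # e.g. {'task': {'SingleSearchTask': (cls, {'search': True})}}

    @classmethod
    def task(cls, **info):
        def decorator(klass):
            cls._by_type.setdefault('task', {})[klass.__name__] = (klass, info)
            return klass
        return decorator

    @classmethod
    def get(cls, name):
        """Return the class code given only its registered name."""
        for group in cls._by_type.values():
            if name in group:
                return group[name][0]
        raise KeyError(name)
\end{python}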
\subsection{Tree-based dependency structures}
\label{u_argtrees_tree}
\begin{figure*}
\vskip-0.7cm
\begin{minipage}[l]{0.42\linewidth}
\centering
\includegraphics[trim=0 320 2480 0, clip, width=\textwidth]{./images/uninas/args_tree_s1_col.pdf}
\vskip-0.2cm
\caption{
Part of a visualized SingleSearchTask configuration, which describes the training of a one-shot super-network with a specified search method (omitted for clarity, the complete tree is visualized in Figure~\ref{app_u_argstree_img}).
The white colored tree nodes state the type and number of requested classes, the turquoise boxes the specific classes used. For example, the \textcolor{red}{SingleSearchTask} requires exactly one type of \textcolor{orange}{hardware device} to be specified, but the \textcolor{cyan}{SimpleTrainer} accepts any number of \textcolor{green}{callbacks} or loggers.
\\
\hfill
}
\label{u_argstree_trimmed_img}
\end{minipage}
\hfill
\begin{minipage}[r]{0.5\textwidth}
\begin{small}
\begin{lstlisting}[backgroundcolor = \color{white}]
"cls_task": <@\textcolor{red}{"SingleSearchTask"}@>,
"{cls_task}.save_dir": "{path_tmp}/",
"{cls_task}.seed": 0,
"{cls_task}.is_test_run": true,
"cls_device": <@\textcolor{orange}{"CudaDevicesManager"}@>,
"{cls_device}.num_devices": 1,
"cls_trainer": <@\textcolor{cyan}{"SimpleTrainer"}@>,
"{cls_trainer}.max_epochs": 3,
"{cls_trainer}.ema_decay": 0.5,
"{cls_trainer}.ema_device": "cpu",
"cls_exp_loggers": <@\textcolor{black}{"TensorBoardExpLogger"}@>,
"{cls_exp_loggers#0}.log_graph": false,
"cls_callbacks": <@\textcolor{green}{"CheckpointCallback"}@>,
"{cls_callbacks#0}.top_n": 1,
"{cls_callbacks#0}.key": "train/loss",
"{cls_callbacks#0}.minimize_key": true,
\end{lstlisting}
\end{small}
\vskip-0.2cm
\caption{
Example content of the configuration text-file (JSON format) for the tree in Figure~\ref{u_argstree_trimmed_img}.
The first line in each text block specifies the used class(es), the other lines their detailed settings. For example, the \textcolor{cyan}{SimpleTrainer} is set to train for three epochs and track an exponential moving average of the network weights on the CPU.
}
\label{u_argstree_trimmed_text}
\end{minipage}
\end{figure*}
A SingleSearchTask requires exactly one hardware device and exactly one training loop (named trainer, to train an over-complete super-network), which in turn may use any number of callbacks and logging mechanisms.
Their relationship is visualized in Figure~\ref{u_argstree_trimmed_img}.
Argument trees are extremely flexible since they allow every hierarchical one-to-any relationship imaginable.
Multiple optional callbacks can be rearranged in their order and configured in detail.
Moreover, module definitions can be reused in other constellations, including their requirements.
The ProfilingTask does not need a training loop to measure the runtime of different network topologies on a hardware device, which reduces the size of the argument tree.
While not implemented, a MultiSearchTask could use several trainers in parallel on several devices.
The hierarchical requirements are made available using so-called MetaArguments, as seen in Line~16 of Figure~\ref{u_fig_register}.
They specify the local structure of argument trees by stating which other modules are required. To do so, it suffices to state the required module type and the number of modules. As seen in Line~14, the modules can also be filtered to allow only a specific subset.
This particular example defines the upper part of the tree visualized in Figure~\ref{u_argstree_trimmed_img}.
The names of all MetaArguments start with "cls\_" which improves readability and is reflected in the visualized arguments tree (Figure~\ref{u_argstree_trimmed_img}, white-colored boxes).
\subsection{Tree-based argument configurations}
\label{u_argtrees_config}
While it is possible to define such a dynamic structure, how can it be represented in a configuration file?
Figure~\ref{u_argstree_trimmed_text} presents an excerpt of the configuration that matches the tree in Figure~\ref{u_argstree_trimmed_img}.
As stated in Lines~6 and~9 of the configuration, CudaDevicesManager and SimpleTrainer fill the roles for the requested modules of types "device" and "trainer".
Lines~14 and~17 list one class of the types ``logger'' and ``callback'' each, but could provide any number of comma-separated names.
Together with the stated "task" type in Line~1, the mentioned lines state exactly which code classes are used and, given the knowledge about their hierarchy, define the tree structure.
Additionally, every class has some arguments (hyper-parameters) that can be modified.
SingleSearchTask defines three such arguments (Lines~7 to~9 in Figure~\ref{u_fig_register}) in the visualized example,
which are represented in the configuration (Lines~2 to~4 in Figure~\ref{u_argstree_trimmed_text}).
If the configuration is missing an argument, maybe to keep it short, its default value is used.
Another noteworthy mechanism in Line~2 is that "\{cls\_task\}.save\_dir" references whichever class is currently set as "cls\_task" (Line~1), without naming it explicitly.
Such wildcard references simplify automated changes to configuration files since, independently of the used task class, overwriting "\{cls\_task\}.save\_dir" is always an acceptable way to change the save directory.
A less general but perhaps more readable notation is "SingleSearchTask.save\_dir", which is also accepted here.
A very interesting property of such dynamic configuration files is that they contain only the hyper-parameters (arguments) of the used code classes.
Adding any additional arguments will result in an error since the configuration-parsing mechanism, described in Section~\ref{u_argtrees_build}, is then unable to piece the information together.
Even though UniNAS implements several different optimizer classes, any such configuration only contains the hyper-parameters of those used. Generated configuration files are always complete (contain all available arguments), sparse (contain only the available arguments), and never ambiguous.
A debatable design decision of the current configuration files, as seen in Figure~\ref{u_argstree_trimmed_text}, is that they do not explicitly encode any hierarchy levels. Since that information is already known from their class implementations, the flat representation was chosen primarily for readability.
It is also beneficial when arguments are manipulated, either automatically or from the terminal when starting a task.
The disadvantage is that the argument names for class types can only be used once ("cls\_device", "cls\_trainer", and more); an unambiguous assignment is otherwise not possible. For example, since the SingleSearchTask already owns "cls\_device", no other class that could be used in the same argument tree can use that particular name. While this limitation is not too significant, it can be mildly confusing at times.
Finally, how is it possible to create configuration files?
Since the dynamic tree-based approach offers a wide variety of possibilities, only a tiny subset is valid.
For example, providing two hardware devices violates the defined tree structure of a SingleSearchTask and results in a parsing failure.
If that happens, the user is provided with details of which particular arguments are missing or unexpected.
While the best way to create correct configurations is surely experience and familiarity with the code base, the same could be said about any framework.
Since UniNAS knows about all registered classes, which other (possibly specified) classes they use, and all of their arguments (including defaults, types, help string, and more), an exhaustive list can be generated automatically. However, resulting in almost 1600 lines of text, this solution is not optimal either.
The most convenient approach is presented in Section~\ref{u_argtrees_gui}: Creating and manipulating argument trees with a graphical user interface.
\begin{algorithm}
\caption{
Pseudo-code for building the argument tree, best understood with Figures~\ref{u_argstree_trimmed_img} and~\ref{u_argstree_trimmed_text}.
For a consistent terminology of code classes and tree nodes: if the $Task$ class uses a $Trainer$, then in that context, $Trainer$ is the child. Lines starting with \# are comments.
}
\label{alg_u_argtree}
\small
\begin{algorithmic}
\Require $Configuration$ \Comment{Content of the configuration file}
\Require $Register$ \Comment{All modules in the code are registered}
\State{}
\State{$\#$ recursive parsing function to build a tree}
\Function{parse}{$class,~index$}
\Comment{E.g. $(SingleSearchTask,~0)$}
\State $node = ArgumentTreeNode(class,~index)$
\State{}
\State{$\#$ first parse all arguments (hyper-parameters) of this tree node}
\ForEach{($idx, argument\_name$) \textbf{in} $class.get\_arguments()$}
\Comment{E.g. (0, $''save\_dir''$)}
\State $value = get\_used\_value(Configuration,~class,~index,~argument\_name)$
\State $node.add\_argument(argument\_name,~value)$
\EndFor
\State{}
\State{$\#$ then recursively parse all child classes, for each module type...}
\ForEach{$child\_class\_type$ \textbf{in} $class.get\_child\_types()$}
\Comment{E.g. $cls\_trainer$}
\State $class\_names = get\_used\_classes(Configuration,~child\_class\_type)$
\Assert{ The number of $class\_names$ is within the specified limits}
\State{}
\State{$\#$ for each module type, check all configured classes}
\ForEach{($idx,~class\_name$) \textbf{in} $class\_names$}
\Comment{E.g. (0, $''SimpleTrainer''$)}
\State $child\_class = Register.get(class\_name)$
\State $child\_node = $\Call{parse}{$child\_class,~idx$}
\State $node.add\_child(child\_class\_type,~idx,~child\_node)$
\EndFor
\EndFor
\Returnx{ $node$}
\EndFunction
\State{}
\State $tree = $\Call{parse}{$Main, 0$}
\Comment{Recursively parse the tree, $Main$ is the entry point}
\Ensure every argument in the configuration has been parsed
\end{algorithmic}
\end{algorithm}
\subsection{Building the argument tree and code structure}
\label{u_argtrees_build}
Arguably the most important function of a research code base is to run experiments.
In order to do so, valid configuration files must be translated into their respective code structure.
This comes with three major requirements:
\begin{itemize}
\setlength{\itemsep}{0pt}
\setlength{\parskip}{0pt}
\setlength{\parsep}{0pt}
\item{
Classes in the code that implement the desired functionality.
As seen in Section~\ref{u_argtrees_tree} and Figure~\ref{u_argstree_trimmed_img}, each class also states the types, argument names and numbers of additionally requested classes for the local tree structure.
}
\item{
A configuration that describes which code classes are used and which values their parameters take.
This is described in Section~\ref{u_argtrees_config} and visualized in Figure~\ref{u_argstree_trimmed_text}.
}
\item{
To connect the configuration content to classes in the code, it is required to reference code modules by their names. As described in Section~\ref{u_argtrees_register} this can be achieved with a global register.
}
\end{itemize}
Algorithm~\ref{alg_u_argtree} realizes the first step of this process: parsing the hierarchical code structure and its arguments from the flat configuration file.
The result is a tree of \textit{ArgumentTreeNodes}, each of which refers to exactly one class in the code, is connected to all related tree nodes, and knows all relevant hyper-parameter values.
While the nodes do not yet hold actual class instances, creating them is now straightforward.
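As a minimal sketch of that final step (with hypothetical attribute names), instances can be created in a single bottom-up recursion over the nodes:
\begin{python}
from dataclasses import dataclass, field

@dataclass
class ArgumentTreeNode:
    cls: type                                     # registered class of this node
    arguments: dict = field(default_factory=dict) # parsed hyper-parameters
    children: dict = field(default_factory=dict)  # child type -> list of nodes

def instantiate(node: ArgumentTreeNode):
    # recursively instantiate all children first, grouped by their type,
    # then create this node's class with its parsed hyper-parameters
    kids = {t: [instantiate(c) for c in nodes]
            for t, nodes in node.children.items()}
    return node.cls(**node.arguments, **kids)
\end{python}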
\begin{figure*}[h]
\vskip -0.0in
\begin{center}
\includegraphics[trim=30 180 180 165, clip, width=\linewidth]{images/uninas/gui/gui1desc.png}
\hspace{-0.5cm}
\caption{
The graphical user interface (left) that can manipulate the configurations of argument trees (visualized right).
Since many nodes are missing classes of some type ("cls\_device", ...), their parts in the GUI are highlighted in red.
The eight child nodes of DartsSearchMethod are omitted for visual clarity.
}
\label{fig_u_gui}
\end{center}
\end{figure*}
\subsection{Creating and manipulating argument trees with a GUI}
\label{u_argtrees_gui}
Manually writing a configuration file can be perplexing since one must keep track of tree specifications, argument names, available classes, and more.
The graphical user interface (GUI) visualized in Figures~\ref{fig_u_gui} and~\ref{app_u_gui} solves these problems to a large extent, by providing the following functionality:
\begin{itemize}
\setlength{\itemsep}{0pt}
\setlength{\parskip}{0pt}
\setlength{\parsep}{0pt}
\item{
Interactively add and remove nodes in the argument tree, thus also in the configuration and class structure. Highlight violations of the tree specification.
}
\item{
Setting the hyper-parameters of each node, using checkboxes (boolean), dropdown menus (choice from a selection), and text fields (other cases like strings or numbers) where appropriate.
}
\item{
Functions to save and load argument trees.
Since it makes sense to separate the configurations for the training procedure and the network design to swap between different constellations easily, loading partial trees is also supported. Additional functions enable visualizing, resetting, and running the current argument tree.
}
\item{
A search function that highlights all matches since the size of some argument trees can make finding specific arguments tedious.
}
\end{itemize}
In order to do so, the GUI manipulates \textit{ArgumentTreeNodes} (Section~\ref{u_argtrees_build}), which can be easily converted into configuration files and code.
As long as the required classes (for example, the data set) are already implemented, the GUI enables creating and changing experiments without ever touching any code or configuration files.
While not among the original intentions, this property may be especially interesting for non-programmers who want to solve their problems quickly.
Still, the current version of the GUI is a proof of concept.
It favors functionality over design; it was written with the plain Python Tkinter GUI framework and based on little prior GUI-programming experience.
Nonetheless, since the GUI (frontend) and the functions manipulating the argument tree (backend) are separated, a continued development with different frontend frameworks is entirely possible.
Perhaps the most interesting option would be a web service that runs experiments on a server, remotely configurable from any web browser.
\subsection{Using external code}
\label{u_external}
There is a variety of reasons why it makes sense to include external code into a framework.
Most importantly, the code either solves a standing problem or provides the users with additional options. Unlike newly written code, many popular libraries are also thoroughly optimized, reviewed, and empirically validated.
External code is also a perfect match for a framework based on argument trees.
As shown in Figure~\ref{u_fig_external_import}, external classes of interest can be thinly wrapped to ensure compatibility, register the module, and specify all hyper-parameters for the argument tree.
The integration is seamless so that finding out whether a module is locally written or external requires an inspection of its code.
On the other hand, if importing the AdaBelief~\citep{zhuang2020adabelief} code fails, the module will not be registered and therefore not be available in the graphical user interface.
UniNAS fails to parse configurations that require unregistered modules but informs the user which external sources can be installed to extend its functionality.
Due to this logistic simplicity, several external frameworks extend the core of UniNAS.
Some of the most important ones are:
\begin{itemize}
\setlength{\itemsep}{0pt}
\setlength{\parskip}{0pt}
\setlength{\parsep}{0pt}
\item{
pymoo~\citep{pymoo}, a library for multi-objective optimization methods.
}
\item{
Scikit-learn~\citep{sklearn}, which implements many classical machine learning algorithms such as Support Vector Machines and Random Forests.
}
\item{
PyTorch Image Models~\citep{rw2019timm}, which provides the code for several optimizers, network models, and data augmentation methods.
}
\item{
albumentations~\citep{2018arXiv180906839B}, a library for image augmentations.
}
\end{itemize}
\begin{figure*}
\hfill
\begin{minipage}[c]{0.95\textwidth}
\begin{python}
from uninas.register import Register
from uninas.training.optimizers.abstract import WrappedOptimizer
try:
from adabelief_pytorch import AdaBelief
# if the import was successful,
# register the wrapped optimizer
@Register.optimizer()
class AdaBeliefOptimizer(WrappedOptimizer):
# wrap the original
...
except ImportError as e:
# if the import failed,
# inform the user that optional libraries are not installed
Register.missing_import(e)
\end{python}
\end{minipage}
\vskip-0.3cm
\caption{
Excerpt of UniNAS wrapping the official AdaBelief optimizer code.
The complete text has just 45 lines, half of which specify the optimizer parameters for the argument trees.
}
\label{u_fig_external_import}
\end{figure*}
\section{Dynamic network designs}
\label{u_networks}
As seen in the previous Sections, the unique design of UniNAS enables powerful customization of all components. In most cases, a significant portion of the architecture search configuration belongs to the network design. The FairNAS search example in Figure~\ref{app_u_argstree_img} contains 25 configured classes, of which 11 belong to the search network.
While it would be easy to create a single configurable class for each network architecture of interest, that would ignore the advantages of argument trees.
On the other hand, there are many technical difficulties with highly dynamic network topologies. Some of them are detailed below.
\subsection{Decoupling components}
In many published research codebases, network and architecture weights jointly exist in the network class. This design decision is disadvantageous for multiple reasons.
Most importantly, changing the network or NAS method requires a lot of manual work.
The reason is that different NAS methods need different numbers of architecture parameters, use them differently, and optimize them in different ways. For example:
\begin{itemize}[noitemsep,parsep=0pt,partopsep=0pt]
\item{
DARTS~\citep{liu2018darts} requires one weight vector per architecture choice.
These weights combine all different paths (candidate operations) in a weighted sum. The weights are updated with an additional optimizer (Adam), using gradient descent.
}
\item{
MDENAS~\citep{mdenas} uses a similar vector to sample a single candidate operation for each forward pass. Global network performance feedback is used to increase or decrease the local weightings.
}
\item{
Single-Path One-Shot~\citep{guo2020single} does not use weights at all. Paths are always sampled uniformly at random. The trained super-network then serves as an accuracy-prediction model for a hyper-parameter optimization method.
}
\item{
FairNAS~\citep{FairNAS} extends Single-Path One-Shot to make sure that all candidate operations are used frequently and equally often. It thus needs to track which paths are currently available.
}
\end{itemize}
\begin{figure}[t]
\vskip -0.0in
\begin{center}
\includegraphics[trim=0 0 0 0, clip, width=\linewidth]{images/draw/search_net.pdf}
\hspace{-0.5cm}
\caption{
The network and architecture weights are decoupled.
\textbf{Top}: The structure of a fully sequential super-network. Every layer (cell) uses the same set of candidate operations and weight strategy.
\textbf{Bottom left}: One set of candidate operations that is used multiple times in the network. This particular experiment uses the NAS-Bench-201 candidate operations.
\textbf{Bottom right}: A weight strategy that manages everything related to the used NAS method, such as creating the architecture weights or which candidates are used in each forward pass.
}
\label{fig_u_decouple}
\end{center}
\end{figure}
The same is also true for the set of candidate operations, which affect the sizes of the architecture weights.
Once the definitions of the search space, the candidate operations, and the NAS method (including the architecture weights) are mixed, changing any part is tedious.
Therefore, strictly separating them is the best long-term approach.
Similar to other frameworks presented in Section~\ref{u_introduction_available},
architectures defined in UniNAS do not use an explicit set of candidate operations but allow a dynamic configuration.
This is supported by a \textit{WeightStrategy} interface, which handles all NAS-related operations such as creating and updating the architecture weights.
The interaction between the architecture definition, the candidate operations, and the weight strategy is visualized in Figure~\ref{fig_u_decouple}.
The easy exchange of any component is not the only advantage of this design.
Some NAS methods, such as DARTS, update network and architecture weights using different gradient descent optimizers. Correctly disentangling the weights is trivial if they are already organized in decoupled structures but hard otherwise.
Another advantage is that standardizing functions to create and manage architecture weights makes it easy to present relevant information to the user, such as how many architecture weights exist, their sizes, and which are shared across different network cells.
An example is presented in Figure~\ref{app_text}.
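To make the interface idea concrete, the following is a reduced sketch with hypothetical method names, not the exact UniNAS API:
\begin{python}
import random

class WeightStrategy:
    # owns all architecture weights; network cells only query it
    def create_weights(self, cell_name: str, num_candidates: int):
        raise NotImplementedError

    def choose(self, cell_name: str, num_candidates: int) -> int:
        # index of the candidate operation to use in this forward pass
        raise NotImplementedError

class UniformSamplingStrategy(WeightStrategy):
    # Single-Path One-Shot style: no architecture weights at all
    def create_weights(self, cell_name, num_candidates):
        pass

    def choose(self, cell_name, num_candidates):
        return random.randrange(num_candidates)
\end{python}
Exchanging the NAS method then amounts to configuring a different strategy class, while the network code remains untouched.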
\begin{figure}[hb!]
\begin{minipage}[c]{0.24\textwidth}
\centering
\includegraphics[height=11.5cm]{./images/draw/mobilenetv2.pdf}
\end{minipage}
\hfill
\begin{minipage}[c]{0.5\textwidth}
\small
\begin{python}
"cell_3": {
"name": "SingleLayerCell",
"kwargs": {
"name": "cell_3",
"features_mult": 1,
"features_fixed": -1
},
"submodules": {
"op": {
"name": "MobileInvConvLayer",
"kwargs": {
"kernel_size": 3,
"kernel_size_in": 1,
"kernel_size_out": 1,
"stride": 1,
"expansion": 6.0,
"padding": "same",
"dilation": 1,
"bn_affine": true,
"act_fun": "relu6",
"act_inplace": true,
"att_dict": null,
"fused": false
}
}
}
},
\end{python}
\end{minipage}
\caption{
A high-level view on the MobileNet~V2 architecture~\citep{sandler2018mobilenetv2} in the top left,
and a schematic of the inverted bottleneck block in the bottom left.
This design uses two 1$\times$1 convolutions to change the channel count \textit{n} by an expansion factor of~6, and a spatial 3$\times$3 convolution in their middle.
The text on the right-hand side represents the cell structure by referencing the modules by their names ("name") and their keyworded arguments ("kwargs").
}
\label{u_fig_conf}
\end{figure}
\subsection{Saving, loading, and finalizing networks}
\label{u_networks_save}
As mentioned before, argument trees enable a detailed configuration of every aspect of an experiment, including the network topology itself.
As visualized in Figure~\ref{app_u_argstree_img}, such network definitions can become almost arbitrarily complex.
This becomes disadvantageous once models have to be saved or loaded or when super-networks are finalized into discrete architectures.
Unlike TensorFlow~\citep{tensorflow2015-whitepaper}, the PyTorch~\citep{pytorch} library used here saves only the network weights, without execution graphs.
External projects like ONNX~\citep{onnx} can be used to export limited graph information but not to rebuild networks using the same code classes and context.
The implemented solution is inspired by the official code\footnote{\url{https://github.com/mit-han-lab/proxylessnas/tree/master/proxyless_nas}} of ProxylessNAS~\citep{proxylessnas}, where every code module defines two functions that enable exporting and importing the entire module state and context.
As typical for hierarchical structures, the state of an outer module contains the states of all modules within.
An example is visualized in Figure~\ref{u_fig_conf}, where one cell in the famous MobileNet V2 architecture is represented as readable text.
The global register can provide any class definition by name (see Section~\ref{u_argtrees_register}) so that an identical class structure can be created and parameterized accordingly.
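A reduced sketch of the two functions (with hypothetical method names; the real modules carry additional context):
\begin{python}
def config_save(module) -> dict:
    # describe this module and, recursively, everything within it
    return {
        'name': type(module).__name__,
        'kwargs': module.hyperparams(),
        'submodules': {k: config_save(m) for k, m in module.named_children()},
    }

def config_load(cfg: dict):
    # look up the class by name in the global register and rebuild it
    cls = Register.get(cfg['name'])
    module = cls(**cfg['kwargs'])
    for key, sub_cfg in cfg['submodules'].items():
        module.add_child(key, config_load(sub_cfg))
    return module
\end{python}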
The same approach that enables saving and loading arbitrary class compositions can also be used to change their structure.
More specifically, an over-complete super-network containing all possible candidate operations can export only a specific configuration subset. The network recreated from this reduced configuration is the result of the architecture search.
This is made possible since the weight strategy controls the use of all candidate operations, as visualized in Figure~\ref{fig_u_decouple}. Similarly, when their configuration is exported, the weight strategy controls which candidates should be part of the finalized network architecture.
In another use case, some modules behave differently in super-networks and finalized architectures. For example, Linear Transformers~\citep{ScarletNAS} supplement skip connections with linear 1$\times$1 convolutions in super-networks to stabilize the training with variable network depths.
When the network topology is finalized, it suffices to simply export the configuration of a skip connection instead of its own.
Another practical way of rebuilding code structures is available through the argument tree configuration, which defines every detail of an experiment (see Section~\ref{u_argtrees_config}).
Parsing the network design and loading the trained weights of a previous experiment requires no further user interaction than specifying its save directory.
This specific way of recreating experiment environments is used extensively in \textit{Single-Path One-Shot} tasks.
In the first step, a super-network is trained to completion. Afterward, when the super-network is used to make predictions for a hyper-parameter optimization method (such as Bayesian optimization or evolutionary algorithms), the entire environment of its training can be recreated. This includes the network design and the dataset, data augmentations, which parts were reserved for validation, regularization techniques, and more.
\section{Discussion and Conclusions}
\label{u_conclusions}
We presented the underlying concepts of UniNAS, a PyTorch-based framework with the ambitious goal of unifying a variety of NAS algorithms in one codebase.
Even though the use cases for this framework changed over time, mostly from DARTS-based to SPOS-based experiments, its underlying design approach made reusing old code possible at every step.
However, several technical details could be changed or improved in hindsight. Most importantly, configuration files should reflect the hierarchy levels (see Section~\ref{u_argtrees_config}) for code simplicity and to avoid concerns about using module types multiple times. The current design favors readability, which is now a minor concern thanks to the graphical user interface.
Other considered changes would improve the code readability but were not implemented due to a lack of necessity and time.
In summary, the design of UniNAS fulfills all original requirements.
Modules can be arranged and combined in almost arbitrary constellations, giving the user an extremely flexible tool to design experiments.
Furthermore, using the graphical user interface does not require writing even a single line of code.
The resulting configuration files contain only the relevant information and are not bloated by the many options the framework offers.
These features also enable an almost arbitrary network design, combined with any NAS optimization method and any set of candidate operations. Despite that, networks can still be saved, loaded, and changed in various ways.
Although not covered here, several unit tests ensure that the essential framework components keep working as intended.
Finally, what is the advantage of using argument trees over writing code with the same results?
Compared to configuration files, code is more powerful and versatile but will likely suffer from problems described in Section~\ref{u_introduction_available}.
Argument trees make any considerations about which parameters to expose unnecessary and can enforce the use of specific module types and subsets thereof.
However, their strongest advantage is the visualization and manipulation of the entire experiment design with a graphical user interface. This aligns well with Automated Machine Learning (AutoML), which is also intended to make machine learning available to a broader audience.
{\small
\bibliographystyle{iclr2022_conference}
}
To estimate a regression when the
errors have a non-identity covariance matrix, we
usually turn first to generalized least squares (GLS). Somewhat
surprisingly, GLS proves to be computationally challenging
in the very simple setting of the unbalanced crossed random
effects models that we study here.
For that problem, the cost to compute the GLS estimate on $N$
data points grows at best like $O(N^{3/2})$ under the usual algorithms.
If we additionally assume Gaussian errors, then
\cite{crelin} show that even evaluating
the likelihood one time costs at least a multiple of $N^{3/2}$.
These costs make the usual algorithms for GLS
infeasible for large data sets such as
those arising in electronic commerce.
In this paper, we present an iterative algorithm based
on a backfitting approach from \cite{buja:hast:tibs:1989}.
This algorithm is known to converge to the
GLS solution. The cost of each iteration is $O(N)$
and so we also study how the number of iterations grows
with~$N$.
The crossed random effects model we consider has
\begin{equation}\label{eq:refmodel}
Y_{ij} =x_{ij}^\mathsf{T}\beta+a_i+b_j+e_{ij},\quad 1\le i\le R,\quad
1\le j\le C
\end{equation}
for random effects $a_i$ and $b_{j}$ and an error $e_{ij}$
with a fixed effects regression parameter $\beta\in\mathbb{R}^p$ for the covariates $x_{ij}\in\mathbb{R}^p$.
We assume that
$a_i\stackrel{\mathrm{iid}}{\sim} (0,\sigma^2_A)$, $b_j\stackrel{\mathrm{iid}}{\sim}(0,\sigma^2_B)$, and $e_{ij}\stackrel{\mathrm{iid}}{\sim}(0,\sigma^2_E)$
are all independent. It is thus a mixed effects model in which the
random portion has a crossed structure.
The GLS estimate is also the maximum likelihood
estimate (MLE), when $a_i$, $b_{j}$ and $e_{ij}$ are Gaussian.
Because we assume that $p$ is fixed as $N$ grows, we often
leave $p$ out of our cost estimates, giving instead the complexity in $N$.
The GLS estimate $\hat\beta_\mathrm{GLS}$ for crossed random effects can be efficiently
estimated if all $R\times C$ values are available.
Our motivating examples involve ratings data where $R$ people
rate $C$ items and then it is usual that the data are
very unbalanced with a haphazard observational pattern
in which only $N\ll R\times C$ of the $(x_{ij},Y_{ij})$ pairs are observed.
The crossed random effects setting is significantly more difficult
than a hierarchical model with just $a_i+e_{ij}$ but no $b_{j}$
term. Then the observations for index $j$ are `nested within' those
for each level of index $i$. The result is that the covariance matrix
of all observed $Y_{ij}$ values has a block diagonal structure
allowing GLS to be computed in $O(N)$ time.
Hierarchical models are very well suited to Bayesian
computation \citep{gelm:hill:2006}.
Crossed random effects are a much greater challenge.
\cite{GO17} find that the Gibbs sampler can take $O(N^{1/2})$
iterations to converge to stationarity, with each iteration
costing $O(N)$ leading once again to $O(N^{3/2})$ cost.
For more examples where
the costs of solving equations versus sampling from a
covariance attain the same rate see
\cite{good:soka:1989} and \cite{RS97}.
As further evidence of the difficulty of this problem,
the Gibbs sampler was one of nine MCMC algorithms that
\cite{GO17} found to be unsatisfactory.
Furthermore, \cite{lme4} removed the {\tt mcmcsamp} function from the R package lme4
because it was considered unreliable even for the problem
of sampling the posterior distribution of the parameters
of previously fitted models, even for models with random effects variances
near zero.
\cite{papa:robe:zane:2020} present
an exception to the high cost of a Bayesian approach
for crossed random effects. They propose a collapsed Gibbs
sampler that can potentially mix in $O(1)$ iterations.
To prove this rate, they make an extremely stringent
assumption that every index $i=1,\dots,R$ appears in the
same number $N/R$ of observed data points and similarly
every $j=1,\dots,C$ appears in $N/C$ data points.
Such a condition is tantamount to requiring a designed
experiment for the data and it is much stronger than
what their algorithm seems to need in practice.
Under that condition their mixing rate asymptotes
to a quantity $\rho_{\mathrm{aux}}$, described in our discussion section,
that in favorable circumstances is $O(1)$.
They find empirically that their sampler has a cost that
scales well in many data sets where their balance condition
does not hold.
In this paper we study an iterative linear operation,
known as backfitting, for GLS.
Each iteration costs $O(N)$.
The speed of convergence depends on a certain
matrix norm of that iteration, which we exhibit below.
If the norm remains bounded strictly below $1$
as $N\to\infty$, then
the number of iterations to convergence is $O(1)$.
We are able to show that the matrix norm is $O(1)$
with probability tending to one, under conditions where
the number of observations per row (or per column) is random
and even the expected row or column counts may vary,
though in a narrow range.
While this is a substantial weakening of the conditions in
\cite{papa:robe:zane:2020}, it still fails to cover many
interesting cases. Like them, we find empirically that our
algorithm scales much more broadly than under the
conditions for which scaling is proved.
We suspect that the computational infeasibility of GLS leads
many users to use ordinary least squares (OLS) instead.
OLS has two severe problems.
First, it is \myemph{inefficient} with
$\var(\hat\beta_\mathrm{OLS})$ larger than $\var(\hat\beta_\mathrm{GLS})$.
This is equivalent to OLS ignoring some possibly large
fraction of the information in the data.
Perhaps more seriously, OLS is \myemph{naive}.
It produces an estimate of $\var(\hat\beta_\mathrm{OLS})$ that
can be too small by a large factor. That amounts
to overestimating the quantity of information behind $\hat\beta_\mathrm{OLS}$,
also by a potentially large factor.
The naivete of OLS can be countered by using better variance estimates.
One can bootstrap it by resampling the row and column entities as in \cite{pbs}.
There is also a version of Huber-White variance estimation
for this case in econometrics. See for instance \cite{came:gelb:mill:2011}.
While these methods counter the naivete of OLS, the inefficiency of OLS remains.
The method of moments algorithm in \cite{crelin}
gets consistent asymptotically normal estimates
of $\beta$, $\sigma^2_A$, $\sigma^2_B$ and $\sigma^2_E$.
It produces a GLS estimate $\hat\beta$ that is more
efficient than OLS but still not fully efficient
because it accounts for correlations due to only one of the
two crossed random effects. While inefficient, it is not naive
because its estimate of $\var(\hat\beta)$
properly accounts for variance due to $a_i$, $b_{j}$ and $e_{ij}$.
In this paper we get a GLS estimate $\hat\beta$
that takes account of all three variance components,
making it efficient.
We also provide an estimate of $\var(\hat\beta)$ that accounts
for all three components, so our estimate is not naive.
Our algorithm requires consistent estimates of the variance components
$\sigma^2_A$, $\sigma^2_B$ and $\sigma^2_E$ in computing $\hat\beta$ and $\widehat\var(\hat\beta)$.
We use the method of moments estimators from \cite{GO17} that can
be computed in $O(N)$ work.
By \citet[Theorem 4.2]{GO17}, these estimates of $\sigma^2_A$, $\sigma^2_B$ and
$\sigma^2_E$ are asymptotically uncorrelated and each of them has the same
asymptotic variance it would have had were the other two variance components equal to zero.
It is not known whether they are optimally estimated, much less optimal
subject to an $O(N)$ cost constraint.
The variance component estimates are known to be
asymptotically normal \citep{gao:thesis}.
The rest of this paper is organized as follows.
Section~\ref{sec:missing} introduces our notation and assumptions
for missing data.
Section~\ref{sec:backfitting} presents the backfitting algorithm
from \cite{buja:hast:tibs:1989}. That algorithm was defined for
smoothers, but we are able to cast the estimation of random effect
parameters as a special kind of smoother.
Section~\ref{sec:normconvergence} proves our result about
backfitting being convergent with a probability tending to one
as the problem size increases.
Section~\ref{sec:empiricalnorms} shows numerical measures
of the matrix norm of the backfitting operator. It remains
bounded below and away from one under more conditions than our theory shows.
We find that even one iteration of
the lmer function in the lme4 package \cite{lme4} has a cost that grows like $N^{3/2}$
in one setting and like $N^{2.1}$ in another, sparser one.
The backfitting algorithm has cost $O(N)$ in both of these cases.
Section~\ref{sec:stitch} illustrates our GLS algorithm
on some data provided to us by Stitch Fix. These are customer
ratings of items of clothing on a ten point scale.
Section~\ref{sec:discussion} has a discussion of these results.
An appendix contains some regression output for the
Stitch Fix data.
\section{Missingness}\label{sec:missing}
We adopt the notation from \cite{crelin}.
We let $Z_{ij}\in\{0,1\}$ take the value $1$
if $(x_{ij},Y_{ij})$ is observed and $0$ otherwise,
for $i=1,\dots,R$ and $j=1,\dots,C$.
In many of the contexts we consider, the missingness
is not at random and is potentially informative.
Handling such problems is outside the scope of
this paper, apart from a brief discussion in Section~\ref{sec:discussion}.
It is already a sufficient challenge to work without
informative missingness.
The matrix $Z\in\{0,1\}^{R\times C}$, with elements $Z_{ij}$
has $N_{i\sumdot} =\sum_{j=1}^CZ_{ij}$ observations
in `row $i$' and $N_{\sumdot j}=\sum_{i=1}^RZ_{ij}$ observations
in `column $j$'.
We often drop the limits of summation so that $i$
is always summed over $1,\dots,R$ and $j$ over $1,\dots,C$.
When we need additional symbols for row and column indices we
use $r$ for rows and $s$ for columns.
The total sample size is $N=\sum_i\sum_jZ_{ij}
=\sum_iN_{i\sumdot} = \sum_jN_{\sumdot j}$.
There are two co-observation matrices, $Z^\mathsf{T} Z$ and $ZZ^\mathsf{T}$.
Here $(Z^\mathsf{T} Z)_{js}=\sum_iZ_{ij}Z_{is}$ gives the number of rows in which
data from both columns $j$ and $s$ were observed,
while $(ZZ^\mathsf{T})_{ir}=\sum_jZ_{ij}Z_{rj}$ gives the number of
columns in which data from both rows $i$ and $r$ were observed.
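For concreteness, both co-observation matrices are single matrix products; a minimal NumPy sketch (illustrative only):
\begin{verbatim}
import numpy as np
Z = np.array([[1, 0, 1],
              [1, 1, 0]])   # R=2 rows, C=3 columns, N=4 observations
cols_coobs = Z.T @ Z        # (j,s): rows observing both columns j and s
rows_coobs = Z @ Z.T        # (i,r): columns observing both rows i and r
\end{verbatim}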
In our regression models, we treat $Z_{ij}$ as nonrandom. We are conditioning
on the actual pattern of observations in our data.
When we study the rate at which our backfitting algorithm converges, we
consider $Z_{ij}$ drawn at random. That is, the analyst is solving a GLS
conditionally on the pattern of observations and missingness, while
we study the convergence rates that analyst will see for data
drawn from a missingness mechanism defined in Section~\ref{sec:modelz}.
If we place all of the $Y_{ij}$ into a vector $\mathcal{Y}\in\mathbb{R}^N$ and $x_{ij}$
compatibly into a matrix $\mathcal{X}\in\mathbb{R}^{N\times p}$, then
the naive and inefficient OLS estimator is
\begin{align}\label{eq:bhatols}
\hat\beta_\mathrm{OLS} = (\mathcal{X}^\mathsf{T} \mathcal{X})^{-1}\mathcal{X}^\mathsf{T}\mathcal{Y}.
\end{align}
This can be computed in $O(Np^2)$ work. We prefer to use
the GLS estimator
\begin{align}\label{eq:bhatgls}\hat\beta_\mathrm{GLS} = (\mathcal{X}^\mathsf{T} \mathcal{V}^{-1}\mathcal{X})^{-1}\mathcal{X}^\mathsf{T}\mathcal{V}^{-1}\mathcal{Y},
\end{align}
where $\mathcal{V}\in\mathbb{R}^{N\times N}$ contains all of the $\cov(Y_{ij},Y_{rs})$ in
an ordering compatible with $\mathcal{X}$ and $\mathcal{Y}$. A naive algorithm costs $O(N^3)$
to solve for $\hat\beta_\mathrm{GLS}$.
It can actually be solved through a Cholesky decomposition of an $(R+C)\times (R+C)$ matrix
\citep{sear:case:mccu:1992}.
That has cost $O(R^3+C^3)$.
Now $N\le RC$, with equality only for completely observed data.
Therefore $\max(R,C)\ge \sqrt{N}$, and so $R^3+C^3\ge N^{3/2}$.
When the data are sparsely enough observed it is possible
that $\min(R,C)$ grows more rapidly than $N^{1/2}$.
In a numerical example in Section~\ref{sec:empiricalnorms} we have $\min(R,C)$
growing like $N^{0.70}$.
In a hierarchical model, with $a_i$ but no $b_{j}$ we would find
$\mathcal{V}$ to be block diagonal and then
$\hat\beta_\mathrm{GLS}$ could be computed in $O(N)$ work.
A reviewer reminds us that it has been known since \cite{stra:1969} that
systems of equations can be solved more quickly than cubic time.
Despite that, current software is still dominated by cubic time algorithms.
Also none of the known solutions are quadratic
and so in our setting the cost would be at least a multiple
of $(R+C)^{2+\gamma}$ for some $\gamma>0$ and hence not $O(N)$.
We can write our crossed effects model as
\begin{align}\label{eq:cemodelviaz}
\mathcal{Y} = \mathcal{X}\beta + \mathcal{Z}_A\boldsymbol{a} + \mathcal{Z}_B\boldsymbol{b} + \boldsymbol{e}
\end{align}
for matrices $\mathcal{Z}_A\in\{0,1\}^{N\times R}$ and $\mathcal{Z}_B\in\{0,1\}^{N\times C}$.
The $i$'th column of $\mathcal{Z}_A$ has ones for all of the $N$ observations that
come from row $i$ and zeroes elsewhere. The definition of $\mathcal{Z}_B$ is analogous.
The observation matrix can be written $Z = \mathcal{Z}_A^\mathsf{T}\mathcal{Z}_B$.
The vector $\boldsymbol{e}$ has all $N$ values of $e_{ij}$ in compatible order.
Vectors $\boldsymbol{a}$ and $\boldsymbol{b}$ contain the row and column random effects
$a_i$ and $b_{j}$.
In this notation
\begin{equation}
\label{eq:Vee}
\mathcal{V} = \mathcal{Z}_A\mathcal{Z}_A^\mathsf{T}\sigma^2_A + \mathcal{Z}_B\mathcal{Z}_B^\mathsf{T}\sigma^2_B + I_N\sigma^2_E,
\end{equation}
where $I_N$ is the $N \times N$ identity matrix.
Our main computational problem is to get
a value for $\mathcal{U}=\mathcal{V}^{-1}\mathcal{X}\in\mathbb{R}^{N\times p}$.
To do that we iterate towards a solution $\boldsymbol{u}\in\mathbb{R}^N$ of $\mathcal{V} \boldsymbol{u}=\boldsymbol{x}$,
where $\boldsymbol{x}\in\mathbb{R}^N$ is one of the $p$ columns of $\mathcal{X}$.
After that, finding
\begin{equation}
\label{eq:betahat}
\hat\beta_\mathrm{GLS} = (\mathcal{X}^\mathsf{T} \mathcal{U})^{-1}(\mathcal{Y}^\mathsf{T}\mathcal{U})^\mathsf{T}
\end{equation}
is not expensive, because $\mathcal{X}^\mathsf{T}\mathcal{U}\in\mathbb{R}^{p\times p}$ and we suppose that $p$ is not large.
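Indeed, the assembly step in \eqref{eq:betahat} is a small dense solve; a minimal NumPy sketch (illustrative only, with placeholder data standing in for the solver output $\mathcal{U}$):
\begin{verbatim}
import numpy as np
# placeholders: in practice U = V^{-1} X comes from the iterative solver
N, p = 1000, 3
X = np.random.randn(N, p); Y = np.random.randn(N); U = X.copy()
# beta_hat = (X^T U)^{-1} (U^T Y); costs O(N p^2) plus O(p^3)
beta_hat = np.linalg.solve(X.T @ U, U.T @ Y)
\end{verbatim}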
If the data ordering in $\mathcal{Y}$ and elsewhere sorts by index $i$, breaking ties by index $j$,
then $\mathcal{Z}_A\mathcal{Z}_A^\mathsf{T}\in\{0,1\}^{N\times N}$ is
a block matrix with $R$ blocks of ones
of size $N_{i\sumdot}\timesN_{i\sumdot}$ along the diagonal and zeroes elsewhere.
The matrix $\mathcal{Z}_B\mathcal{Z}_B^\mathsf{T}$ will not be block diagonal in that ordering.
Instead $P\mathcal{Z}_B\mathcal{Z}_B^\mathsf{T} P^\mathsf{T}$ will be block diagonal with
$N_{\sumdot j}\timesN_{\sumdot j}$ blocks of ones on the diagonal,
for a suitable $N\times N$ permutation matrix $P$.
\section{Backfitting algorithms}\label{sec:backfitting}
Our first goal is to develop computationally efficient ways to
solve the GLS problem \eqref{eq:betahat} for the linear mixed model~\eqref{eq:cemodelviaz}.
We use the backfitting algorithm that
\cite{hast:tibs:1990} and \cite{buja:hast:tibs:1989}
use to fit additive models.
We write $\mathcal{V}$ in (\ref{eq:Vee}) as $\sigma^2_E\left(\mathcal{Z}_A\mathcal{Z}_A^\mathsf{T}/\lambda_A+\mathcal{Z}_B\mathcal{Z}_B^\mathsf{T}/\lambda_B
+I_N\right)$ with $\lambda_A=\sigma^2_E/\sigma^2_A$ and
$\lambda_B=\sigma^2_E/\sigma^2_B$,
and define $\mathcal{W}=\sigma^2_E\mathcal{V}^{-1}$.
Then the GLS estimate of $\beta$ is
\begin{align}
\hat\beta_{\mathrm{GLS}}&=\arg\min_\beta (\mathcal{Y}-\mathcal{X}\beta)^\mathsf{T}\mathcal{W}(\mathcal{Y}-\mathcal{X}\beta)
= (\mathcal{X}^\mathsf{T}\mathcal{W}\mathcal{X})^{-1}\mathcal{X}^\mathsf{T}\mathcal{W}\mathcal{Y}\label{eq:betahatw}
\end{align}
and $\cov(\hat\beta_{\mathrm{GLS}})=\sigma^2_E (\mathcal{X}^\mathsf{T}\mathcal{W}\mathcal{X})^{-1}$.
It is well known (e.g., \cite{robinson91:_that_blup}) that we can obtain
$\hat\beta_{\mathrm{GLS}}$ by solving the
following penalized least-squares problem
\begin{align}\label{eq:minboth}
\min_{\beta,\boldsymbol{a},\boldsymbol{b}}\Vert \mathcal{Y}-\mathcal{X}\beta-\mathcal{Z}_A\boldsymbol{a}-\mathcal{Z}_B\boldsymbol{b}\Vert^2
+\lambda_A\Vert\boldsymbol{a}\Vert^2 +\lambda_B\Vert\boldsymbol{b}\Vert^2.
\end{align}
Then $\hat\beta=\hat\beta_{\mathrm{GLS}}$ and $\hat \boldsymbol{a}$ and $\hat \boldsymbol{b}$ are the
best linear unbiased prediction (BLUP) estimates
of the random effects.
This derivation works for any number of factors, but it is
instructive to carry it through initially for one.
\subsection{One factor}\label{sec:one-factor}
For a single factor,
we simply drop the $\mathcal{Z}_B\boldsymbol{b}$ term from \eqref{eq:cemodelviaz} to get
\begin{equation*}
\mathcal{Y} = \mathcal{X}\beta + \mathcal{Z}_A\boldsymbol{a} +\boldsymbol{e}.
\end{equation*}
Then
$\mathcal{V}=\cov(\mathcal{Z}_A\boldsymbol{a}+\boldsymbol{e})= \sigma^2_A\mathcal{Z}_A\mathcal{Z}_A^\mathsf{T} +\sigma^2_E I_N$, and $\mathcal{W}=\sigma^2_E\mathcal{V}^{-1}$ as before.
The penalized least squares problem is to solve
\begin{align}\label{eq:equivmina}
\min_{\beta,\boldsymbol{a}} \Vert \mathcal{Y} - \mathcal{X}\beta -\mathcal{Z}_A\boldsymbol{a}\Vert^2 + \lambda_A \Vert\boldsymbol{a}\Vert^2.
\end{align}
We show the details as we need them for a later derivation.
The normal equations from~\eqref{eq:equivmina} yield
\begin{align}
\boldsymbol{0} & = \mathcal{X}^\mathsf{T}(\mathcal{Y}-\mathcal{X}\hat\beta-\mathcal{Z}_A\hat\boldsymbol{a}),\quad\text{and}\label{eq:normbeta}\\
\boldsymbol{0} & = \mathcal{Z}_A^\mathsf{T}(\mathcal{Y}-\mathcal{X}\hat\beta-\mathcal{Z}_A\hat\boldsymbol{a})
-\lambda_A\hat\boldsymbol{a}.\label{eq:normbsa}
\end{align}
Solving~\eqref{eq:normbsa} for $\hat\boldsymbol{a}$ and multiplying the solution by $\mathcal{Z}_A$ yields
$$
\mathcal{Z}_A\hat\boldsymbol{a} = \mathcal{Z}_A(\mathcal{Z}_A^\mathsf{T} \mathcal{Z}_A + \lambda_AI_R)^{-1}\mathcal{Z}_A^\mathsf{T}(\mathcal{Y}-\mathcal{X}\hat\beta)
\equiv \mathcal{S}_A(\mathcal{Y}-\mathcal{X}\hat\beta),
$$
for an $N\times N$ ridge regression ``smoother matrix'' $\mathcal{S}_A$.
As we explain below, this smoother matrix implements shrunken within-group means.
Then substituting $\mathcal{Z}_A\hat\boldsymbol{a}$ into equation~\eqref{eq:normbeta}
yields
\begin{equation}
\label{eq:onefactor}
\hat\beta = (\mathcal{X}^\mathsf{T}(I_N-\mathcal{S}_A)\mathcal{X})^{-1}\mathcal{X}^\mathsf{T}(I_N-\mathcal{S}_A)\mathcal{Y}.
\end{equation}
Using the Sherman-Morrison-Woodbury (SMW) identity, one can show that $\mathcal{W}=I_N-\mathcal{S}_A$ and
hence $\hat\beta$ above equals $\hat\beta_\mathrm{GLS}$
from~\eqref{eq:betahatw}. This is not in itself a new discovery; see
for example \cite{robinson91:_that_blup} or \cite{hast:tibs:1990}
(Section 5.3.3).
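For completeness, the SMW step is one line: since $\mathcal{V}/\sigma^2_E = I_N + \mathcal{Z}_A\mathcal{Z}_A^\mathsf{T}/\lambda_A$,
$$
\mathcal{W} = \Bigl(I_N + \frac1{\lambda_A}\mathcal{Z}_A\mathcal{Z}_A^\mathsf{T}\Bigr)^{-1}
= I_N - \mathcal{Z}_A\bigl(\mathcal{Z}_A^\mathsf{T}\mathcal{Z}_A + \lambda_A I_R\bigr)^{-1}\mathcal{Z}_A^\mathsf{T}
= I_N - \mathcal{S}_A.
$$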
To compute the solution in (\ref{eq:onefactor}), we need to compute
$\mathcal{S}_A \mathcal{Y}$ and $\mathcal{S}_A\mathcal{X}$. The heart of the computation in
$\mathcal{S}_A \mathcal{Y}$
is $(\mathcal{Z}_A^\mathsf{T} \mathcal{Z}_A + \lambda_AI_R)^{-1}\mathcal{Z}_A^\mathsf{T}\mathcal{Y}$.
But $\mathcal{Z}_A^\mathsf{T}
\mathcal{Z}_A=\mathrm{diag}(N_{1\sumdot},N_{2\sumdot},\ldots,N_{R\sumdot})$ and we
see that all we are doing is computing an $R$-vector of shrunken means of the elements
of $\mathcal{Y}$ at each level of the factor $A$; the $i$th element is $\sum_jZ_{ij} Y_{ij}/(N_{i\sumdot}+\lambda_A)$.
This involves a single pass through the $N$ elements of $Y$,
accumulating the sums into $R$ registers, followed by an elementwise
scaling of the $R$ components. Then pre-multiplication by $\mathcal{Z}_A$ simply puts these
$R$ shrunken means back into an
$N$-vector in the appropriate positions. The total cost is $O(N)$.
Likewise $\mathcal{S}_A\mathcal{X}$ does the
same separately for each of the columns of $\mathcal{X}$.
Hence the entire computational cost for \eqref{eq:onefactor} is $O(Np^2)$, the same order as regression on $\mathcal{X}$.
What is also clear is that the indicator matrix
$\mathcal{Z}_A$ is not actually needed here; instead all we need to carry out
these computations is the
factor vector $f_A$ that records the level of factor $A$ for each
of the $N$ observations. In the R language \citep{R:lang:2015} the following pair of operations does
the computation:
\begin{verbatim}
hat_a = tapply(y,fA,sum)/(table(fA)+lambdaA)
hat_y = hat_a[fA]
\end{verbatim}
where {\tt fA} is a categorical variable (factor) $f_A$ of length $N$ containing the row indices $i$ in an order compatible with $Y\in\mathbb{R}^N$ (represented as {\tt y})
and {\tt lambdaA} is $\lambda_A=\sigma^2_E/\sigma^2_A$.
\subsection{Two factors}\label{sec:two-factors}
With two factors we face the problem of incompatible block diagonal
matrices discussed in Section~\ref{sec:missing}.
Define $\mathcal{Z}_G=(\mathcal{Z}_A\!:\!\mathcal{Z}_B)$ ($R+C$ columns),
$\mathcal{D}_\lambda=\mathrm{diag}(\lambda_AI_R,\lambda_BI_C)$,
and $\boldsymbol{g}^\mathsf{T}=(\boldsymbol{a}^\mathsf{T},\boldsymbol{b}^\mathsf{T})$.
Then solving \eqref{eq:minboth} is equivalent to
\begin{align}\label{eq:ming}
\min_{\beta,\boldsymbol{g}}\Vert \mathcal{Y}-\mathcal{X}\beta-\mathcal{Z}_G\boldsymbol{g}\Vert^2
+\boldsymbol{g}^\mathsf{T}\mathcal{D}_\lambda\boldsymbol{g}.
\end{align}
A derivation similar to that used in the one-factor case gives
\begin{equation}
\label{eq:gfactor}
\hat\beta =
H_\mathrm{GLS}\mathcal{Y}\quad\text{for}\quad
H_\mathrm{GLS} = (\mathcal{X}^\mathsf{T}(I_N-\mathcal{S}_G)\mathcal{X})^{-1}\mathcal{X}^\mathsf{T}(I_N-\mathcal{S}_G),
\end{equation}
where the hat matrix $H_\mathrm{GLS}$ is written in terms of
a smoother matrix
\begin{equation}
\label{eq:defcsg}
\mathcal{S}_G=\mathcal{Z}_G(\mathcal{Z}_G^\mathsf{T} \mathcal{Z}_G + \mathcal{D}_\lambda)^{-1}\mathcal{Z}_G^\mathsf{T}.
\end{equation}
We can again use SMW to show that $I_N-\mathcal{S}_G=\mathcal{W}$ and hence the
solution $\hat\beta=\hat\beta_{\mathrm{GLS}}$ in \eqref{eq:betahatw}.
But in applying $\mathcal{S}_G$ we do not enjoy the computational
simplifications that occurred in the one factor case, because
\begin{equation*}
\mathcal{Z}_G^\mathsf{T}\mathcal{Z}_G=
\left(
\begin{array}{cc}
\mathcal{Z}_A^\mathsf{T}\mathcal{Z}_A&\mathcal{Z}_A^\mathsf{T}\mathcal{Z}_B\\[0.25ex]
\mathcal{Z}_B^\mathsf{T}\mathcal{Z}_A&\mathcal{Z}_B^\mathsf{T}\mathcal{Z}_B
\end{array}
\right)
=\begin{pmatrix} \mathrm{diag}(N_{i\sumdot}) & Z\\
Z^\mathsf{T} & \mathrm{diag}(N_{\sumdot j})
\end{pmatrix},
\end{equation*}
where $Z\in\{0,1\}^{R\times C}$ is the observation matrix
which has no special structure.
Therefore we need to invert an $(R+C)\times (R+C)$ matrix to apply
$\mathcal{S}_G$ and hence to solve
\eqref{eq:gfactor}, at a cost of at least $O(N^{3/2})$ (see Section~\ref{sec:missing}).
Rather than group $\mathcal{Z}_A$ and $\mathcal{Z}_B$, we keep them separate, and
develop an algorithm to apply the operator $\mathcal{S}_G$ efficiently.
Consider a generic response vector $\mathcal{R}$ (such as $\mathcal{Y}$ or a column of $\mathcal{X}$) and the optimization problem
\begin{align}\label{eq:minab}
\min_{\boldsymbol{a},\boldsymbol{b}}\Vert \mathcal{R}-\mathcal{Z}_A\boldsymbol{a}-\mathcal{Z}_B\boldsymbol{b}\Vert^2
+\lambda_A\|\boldsymbol{a}\|^2+\lambda_B\|\boldsymbol{b}\|^2.
\end{align}
Using $\mathcal{S}_G$ defined at~\eqref{eq:defcsg}
in terms of the indicator variables $\mathcal{Z}_G\in\{0,1\}^{N\times (R+C)}$
it is clear that the fitted values are given by
$\widehat\mathcal{R}=\mathcal{S}_G\mathcal{R}$.
Solving (\ref{eq:minab}) would result in two blocks of estimating
equations similar to equations \eqref{eq:normbeta} and \eqref{eq:normbsa}.
These can be written
\begin{align}\label{eq:backfit}
\begin{split}
\mathcal{Z}_A\hat\boldsymbol{a} & = \mathcal{S}_A(\mathcal{R}-\mathcal{Z}_B\hat\boldsymbol{b}),\quad\text{and}\\
\mathcal{Z}_B\hat\boldsymbol{b} & = \mathcal{S}_B(\mathcal{R}-\mathcal{Z}_A\hat\boldsymbol{a}),
\end{split}
\end{align}
where
$\mathcal{S}_A=\mathcal{Z}_A(\mathcal{Z}_A^\mathsf{T}\mathcal{Z}_A + \lambda_AI_R)^{-1}\mathcal{Z}_A^\mathsf{T}$ is again
the ridge regression smoothing matrix for row effects and similarly
$\mathcal{S}_B=\mathcal{Z}_B(\mathcal{Z}_B^\mathsf{T}\mathcal{Z}_B + \lambda_BI_C)^{-1}\mathcal{Z}_B^\mathsf{T}$ the
smoothing matrix for column effects.
We solve these equations iteratively by block coordinate descent,
also known as backfitting.
The iterations converge to the solution
of~\eqref{eq:minab} \citep{buja:hast:tibs:1989, hast:tibs:1990}.
It is evident that $\mathcal{S}_A,\mathcal{S}_B\in\mathbb{R}^{N\times N}$
are both symmetric matrices. It follows that the limiting smoother
$\mathcal{S}_G$ formed by combining them is also symmetric. See \citet[page 120]{hast:tibs:1990}.
We will need this result later for an important computational shortcut.
Here the simplifications we enjoyed in the one-factor case once again
apply. Each step applies its operator to a vector
(the terms in parentheses on the right hand side in
(\ref{eq:backfit})). For both $\mathcal{S}_A$ and $\mathcal{S}_B$ these are
simply the shrunken-mean operations described for the one-factor case,
separately for factor $A$ and $B$ each time. As before, we do not need to
actually construct $\mathcal{Z}_B$, but simply use a factor $f_B$
that records the level of factor $B$ for each of the $N$ observations.
The above description holds for a generic response $\mathcal{R}$; we apply that algorithm (in
parallel) to $\mathcal{Y}$ and each column of $\mathcal{X}$ to obtain
the quantities $\mathcal{S}_G\mathcal{X}$ and $\mathcal{S}_G\mathcal{Y}$
that we need to compute $H_{\mathrm{GLS}}\mathcal{Y}$ in \eqref{eq:gfactor}.
Now solving (\ref{eq:gfactor}) is $O(Np^2)$ plus a negligible $O(p^3)$ cost.
These computations deliver $\hat\beta_{\mathrm{GLS}}$; if the BLUP
estimates $\hat\boldsymbol{a}$ and $\hat{\boldsymbol{b}}$ are also required, the same algorithm
can be applied to the response $\mathcal{Y}-\mathcal{X}\hat\beta_{\mathrm{GLS}}$, retaining the $\boldsymbol{a}$ and
$\boldsymbol{b}$ at the final iteration.
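For concreteness, here is a minimal NumPy sketch of these iterations (illustrative only; the function name, defaults, and stopping rule are ours). It uses the factor index vectors $f_A$ and $f_B$ directly, so each half-step in (\ref{eq:backfit}) is a pass of shrunken means costing $O(N)$:
\begin{verbatim}
import numpy as np

def backfit(r, fA, fB, lamA, lamB, n_iter=100, tol=1e-10):
    # r: length-N response; fA, fB: integer level indices per observation
    R, C = fA.max() + 1, fB.max() + 1
    nA = np.bincount(fA, minlength=R)      # row counts N_i.
    nB = np.bincount(fB, minlength=C)      # column counts N_.j
    a, b = np.zeros(R), np.zeros(C)
    for _ in range(n_iter):
        a_old = a.copy()
        # shrunken means of partial residuals; each half-step is O(N)
        a = np.bincount(fA, weights=r - b[fB], minlength=R) / (nA + lamA)
        b = np.bincount(fB, weights=r - a[fA], minlength=C) / (nB + lamB)
        if np.max(np.abs(a - a_old)) < tol:
            break
    return a, b
\end{verbatim}
Applying the routine to $\mathcal{Y}$ and to each column of $\mathcal{X}$, the fitted values $\mathcal{S}_G\mathcal{R}=\mathcal{Z}_A\hat\boldsymbol{a}+\mathcal{Z}_B\hat\boldsymbol{b}$ are recovered as {\tt a[fA] + b[fB]}.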
We can also write
\begin{equation}\label{eq:covbhat}
\cov(\hat\beta_{\mathrm{GLS}})=\sigma^2_E(\mathcal{X}^\mathsf{T}(I_N-\mathcal{S}_G)\mathcal{X})^{-1}.
\end{equation}
It is also clear that we can trivially extend this approach to
accommodate any number of factors.
\subsection{Centered operators}
\label{sec:centered-operators}
The matrices $\mathcal{Z}_A$ and $\mathcal{Z}_B$ both have row sums all ones, since
they are factor indicator matrices (``one-hot encoders''). This
creates a nontrivial intersection between their column spaces, and
that of $\mathcal{X}$ since we always include an intercept, that can
cause backfitting to converge more slowly. In this section we show
how to counter this intersection of column spaces
to speed convergence.
We work with this two-factor model
\begin{align}\label{eq:equivmina1}
\min_{\beta,\boldsymbol{a},\boldsymbol{b}} \Vert \mathcal{Y} - \mathcal{X}\beta -\mathcal{Z}_A\boldsymbol{a}-\mathcal{Z}_B\boldsymbol{b}\Vert^2 + \lambda_A \Vert\boldsymbol{a}\Vert^2+\lambda_B\Vert\boldsymbol{b}\Vert^2.
\end{align}
\begin{lemma}
If $\mathcal{X}$ in model~\eqref{eq:equivmina1}
includes a column of ones (intercept), and $\lambda_A>0$
and $\lambda_B>0$, then the solutions for $\boldsymbol{a}$ and $\boldsymbol{b}$ satisfy
$\sum_{i=1}^R a_i=0$ and $\sum_{j=1}^C b_j=0$.
\end{lemma}
\begin{proof}
It suffices to show this for one factor and with $\mathcal{X}=\mathbf{1}$. The
objective is now
\begin{align}\label{eq:equivsimp}
\min_{\beta,\boldsymbol{a}} \Vert \mathcal{Y} - \mathbf{1}\beta -\mathcal{Z}_A\boldsymbol{a}\Vert^2 + \lambda_A \Vert\boldsymbol{a}\Vert^2.
\end{align}
Notice that for any candidate solution $(\beta,\{a_i\}_1^R)$, the alternative
solution $(\beta+c,\{a_i-c\}_1^R)$ leaves the loss part of
\eqref{eq:equivsimp} unchanged, since the row sums of $\mathcal{Z}_A$ are all
one. Hence if $\lambda_A>0$, we would always improve $\boldsymbol{a}$ by picking
$c$ to minimize the
penalty term
$\sum_{i=1}^R(a_i-c)^2$, or $c=(1/R)\sum_{i=1}^Ra_i$.
\end{proof}
It is natural then to solve for $\boldsymbol{a}$ and $\boldsymbol{b}$ with these
constraints enforced, instead of waiting for them
to simply emerge in the process of iteration.
\begin{theorem}\label{thm:smartcenter}
Consider the generic optimization problem
\begin{align}\label{eq:equivsimp2}
\min_{\boldsymbol{a}} \Vert \mathcal{R} -\mathcal{Z}_A\boldsymbol{a}\Vert^2 + \lambda_A
\Vert\boldsymbol{a}\Vert^2\quad \mbox{subject to } \sum_{i=1}^Ra_i=0.
\end{align}
Define the partial sum vector $\mathcal{R}^+ = \mathcal{Z}_A^\mathsf{T}\mathcal{R}$
with components $\mathcal{R}^+_{i} = \sum_jZ_{ij}\mathcal{R}_{ij}$,
and let
$$w_i=\frac{(N_{i\sumdot}+\lambda_A)^{-1}}{\sum_{r}(N_{r\sumdot}+\lambda_A)^{-1}}.$$
Then the solution $\hat \boldsymbol{a}$ is given by
\begin{align}\label{eq:ahatsoln}
\hat
a_i=\frac{\mathcal{R}^+_{i}-\sum_{r}w_r\mathcal{R}^+_{r}}{N_{i\sumdot}+\lambda_A},
\quad i=1,\ldots,R.
\end{align}
Moreover, the fit is given by
$$\mathcal{Z}_A\hat\boldsymbol{a}=\tilde\mathcal{S}_A\mathcal{R},$$ where $\tilde \mathcal{S}_A$ is a
symmetric operator.
\end{theorem}
The computations are a simple modification of the non-centered case.
\begin{proof}
Let $M$ be an $R\times R$ orthogonal matrix with first column
$\mathbf{1}/\sqrt{R}$. Then $\mathcal{Z}_A\boldsymbol{a}=\mathcal{Z}_AMM^\mathsf{T}\boldsymbol{a}=\tilde
\mathcal{G}\tilde\boldsymbol{\gamma}$ for $\tilde\mathcal{G}=\mathcal{Z}_AM$ and
$\tilde\boldsymbol{\gamma}=M^\mathsf{T}\boldsymbol{a}$.
Reparametrizing in this way leads to
the equivalent problem
\begin{align}\label{eq:equivsimp2tilde}
\min_{\tilde\boldsymbol{\gamma}} \Vert \mathcal{R} -\tilde\mathcal{G}\tilde\boldsymbol{\gamma}\Vert^2 + \lambda_A
\Vert\tilde\boldsymbol{\gamma}\Vert^2,\quad \mbox{subject to } \tilde\gamma_1=0.
\end{align}
To solve (\ref{eq:equivsimp2tilde}), we simply drop the first column of
$\tilde \mathcal{G}$. Let $\mathcal{G}=\mathcal{Z}_AQ$ where $Q$ is the matrix $M$ omitting
the first column, and $\boldsymbol{\gamma}$ the corresponding subvector of
$\tilde\boldsymbol{\gamma}$ having $R-1$ components. We now solve
\begin{align}\label{eq:equivsimp3}
\min_{\boldsymbol{\gamma}} \Vert \mathcal{R} -\mathcal{G}\boldsymbol{\gamma}\Vert^2 + \lambda_A
\Vert\boldsymbol{\gamma}\Vert^2
\end{align}
with no constraints, and the solution is $\hat\boldsymbol{\gamma}=(\mathcal{G}^\mathsf{T}\mathcal{G}+\lambda_A I_{R-1})^{-1}\mathcal{G}^\mathsf{T}\mathcal{R}$.
The fit is given by $\mathcal{G}\hat\boldsymbol{\gamma}=\mathcal{G}(\mathcal{G}^\mathsf{T}\mathcal{G}+\lambda_A
I_{R-1})^{-1}\mathcal{G}^\mathsf{T}\mathcal{R}=\tilde \mathcal{S}_A\mathcal{R}$, and $\tilde \mathcal{S}_A$ is
clearly a symmetric operator.
To obtain the simplified expression for $\hat\boldsymbol{a}$, we write
\begin{align}
\mathcal{G}\hat\boldsymbol{\gamma}&=\mathcal{Z}_AQ(Q^\mathsf{T}\mathcal{Z}_A^\mathsf{T}\mathcal{Z}_A Q+\lambda_A
I_{R-1})^{-1}Q^\mathsf{T}
\mathcal{Z}_A^\mathsf{T}\mathcal{R}\nonumber\\
&=\mathcal{Z}_AQ(Q^\mathsf{T} D Q+\lambda_A
I_{R-1})^{-1}Q^\mathsf{T}
\mathcal{R}^+\label{eq:tosimplify}\\
&=\mathcal{Z}_A\hat\boldsymbol{a},\nonumber
\end{align}
with $D=\mathrm{diag}(N_{i\sumdot})$.
We write $H=Q(Q^\mathsf{T} D Q+\lambda_A I_{R-1})^{-1}Q^\mathsf{T}$
and $\tilde
Q=(D+\lambda_A I_R)^{\frac12}Q$, and let
\begin{align}
\tilde H&= (D+\lambda_A I_R)^{\frac12} H (D+\lambda_A
I_R)^{\frac12}
= \tilde Q(\tilde Q^\mathsf{T}\tilde Q)^{-1}\tilde
Q^\mathsf{T}.\label{eq:Qproj}
\end{align}
Now (\ref{eq:Qproj}) is a projection matrix in $\mathbb{R}^R$ onto an
$(R-1)$-dimensional subspace. Let $\tilde q = (D+\lambda_A
I_R)^{-\frac12}\mathbf{1}.$ Then $\tilde q^\mathsf{T} \tilde Q={\boldsymbol{0}}$, and so
$$\tilde H=I_R-\frac{\tilde q\tilde q^\mathsf{T}}{\Vert \tilde
q\Vert^2}.$$
Unraveling this expression we get
$$ H=(D+\lambda_AI_R)^{-1}
-(D+\lambda_AI_R)^{-1}\frac{\mathbf{1}\bone^\mathsf{T}}{\mathbf{1}^\mathsf{T}(D+\lambda_AI_R)^{-1}\mathbf{1}}(D+\lambda_AI_R)^{-1}.$$
With $\hat\boldsymbol{a}=H\mathcal{R}^+$ in (\ref{eq:tosimplify}), this gives the
expressions for each $\hat a_i$ in~\eqref{eq:ahatsoln}.
Finally, $\tilde \mathcal{S}_A = \mathcal{Z}_A H\mathcal{Z}_A^\mathsf{T}$ is symmetric.
\end{proof}
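A sketch of the resulting computation, coded directly from \eqref{eq:ahatsoln}; the argument names are ours, with \texttt{Rplus} holding the totals $\mathcal{R}^+_i$ and \texttt{Ni} the counts $N_{i\sumdot}$.
\begin{verbatim}
# Centered shrunken-mean update of the theorem above.
center_update <- function(Rplus, Ni, lambda) {
  d <- Ni + lambda
  w <- (1 / d) / sum(1 / d)      # weights w_i
  (Rplus - sum(w * Rplus)) / d   # hat a_i; these sum to zero
}
\end{verbatim}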
\subsection{Covariance matrix for $\hat\beta_{\mathrm{GLS}}$ with centered operators}
\label{sec:covar-matr-hatb}
In Section~\ref{sec:two-factors} we saw in (\ref{eq:covbhat}) that we
get a simple expression for
$\cov(\hat\beta_{\mathrm{GLS}})$. This simplicity relies on the fact that
$I_N-\mathcal{S}_G=\mathcal{W}=\sigma^2_E\mathcal{V}^{-1}$, and the usual cancelation occurs when we
use the sandwich formula to compute this covariance.
When we backfit with our centered smoothers we get a modified residual
operator
$I_N-\widetilde \mathcal{S}_G$ such that the analog of (\ref{eq:gfactor})
still gives us the required coefficient estimate:
\begin{equation}
\label{eq:gfactorc}
\hat\beta_{\mathrm{GLS}} = (\mathcal{X}^\mathsf{T}(I_N-\widetilde\mathcal{S}_G)\mathcal{X})^{-1}\mathcal{X}^\mathsf{T}(I_N-\widetilde\mathcal{S}_G)\mathcal{Y}.
\end{equation}
However, $I_N-\widetilde\mathcal{S}_G\neq \sigma^2_E\mathcal{V}^{-1}$, and so now we need to
resort to the sandwich formula
$ \cov(\hat\beta_{\mathrm{GLS}})=H_\mathrm{GLS} \mathcal{V} H_\mathrm{GLS}^\mathsf{T}$
with $H_\mathrm{GLS}=(\mathcal{X}^\mathsf{T}(I_N-\widetilde\mathcal{S}_G)\mathcal{X})^{-1}\mathcal{X}^\mathsf{T}(I_N-\widetilde\mathcal{S}_G)$ from \eqref{eq:gfactorc}.
Expanding this we find that
$\cov(\hat\beta_{\mathrm{GLS}})$ equals
\begin{align*}
(\mathcal{X}^\mathsf{T}(I_N-\widetilde\mathcal{S}_G)\mathcal{X})^{-1}\mathcal{X}^\mathsf{T}(I_N-\widetilde\mathcal{S}_G)
\cdot \mathcal{V}\cdot (I_N-\widetilde\mathcal{S}_G)\mathcal{X}(\mathcal{X}^\mathsf{T}(I_N-\widetilde\mathcal{S}_G)\mathcal{X})^{-1}.
\end{align*}
While this expression might appear daunting, the computations are simple.
Note first that while $\hat\beta_{\mathrm{GLS}}$ can be computed via
$\tilde\mathcal{S}_G\mathcal{X}$ and $\tilde\mathcal{S}_G\mathcal{Y}$, this expression for $\cov(\hat\beta_{\mathrm{GLS}})$
also involves $\mathcal{X}^\mathsf{T} \tilde\mathcal{S}_G$. When we use the centered operator
from Theorem~\ref{thm:smartcenter} we get a symmetric matrix $\tilde \mathcal{S}_G$.
Let $\widetilde \mathcal{X}=(I_N-\widetilde\mathcal{S}_G)\mathcal{X}$, the residual
matrix after backfitting each column of $\mathcal{X}$ using these centered operators. Then because
$\widetilde\mathcal{S}_G$ is symmetric, we have
\begin{align}
\hat\beta_{\mathrm{GLS}}&=(\mathcal{X}^\mathsf{T}\widetilde\mathcal{X})^{-1}\widetilde\mathcal{X}^\mathsf{T}\mathcal{Y},\quad\text{and} \notag\\
\cov(\hat\beta_{\mathrm{GLS}})&=(\mathcal{X}^\mathsf{T}\widetilde\mathcal{X})^{-1}\widetilde\mathcal{X}^\mathsf{T}\cdot\mathcal{V}\cdot\widetilde\mathcal{X}(\mathcal{X}^\mathsf{T}\widetilde\mathcal{X})^{-1}.\label{eq:covbhatgls}
\end{align}
Since $\mathcal{V}=\sigma^2_E\left(\mathcal{Z}_A\mathcal{Z}_A^\mathsf{T}/\lambda_A+\mathcal{Z}_B\mathcal{Z}_B^\mathsf{T}/\lambda_B
+I_N\right)$ (two low-rank matrices plus the identity), we can
compute $\mathcal{V}\cdot \widetilde\mathcal{X}$ very efficiently, and hence also the
covariance matrix in~\eqref{eq:covbhatgls}.
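As an illustration, here is a minimal R sketch of this computation, using sparse indicator matrices \texttt{ZA} of size $N\times R$ and \texttt{ZB} of size $N\times C$ (e.g.\ from the \texttt{Matrix} package); the function and variable names are ours.
\begin{verbatim}
# Sandwich covariance; V %*% Xtil takes O(Np) work because
# V = sigmaE2*(ZA ZA'/lambdaA + ZB ZB'/lambdaB + I_N) is never
# formed explicitly.
cov_gls <- function(X, Xtil, ZA, ZB, lambdaA, lambdaB, sigmaE2) {
  VXtil <- sigmaE2 * (ZA %*% crossprod(ZA, Xtil) / lambdaA +
                      ZB %*% crossprod(ZB, Xtil) / lambdaB + Xtil)
  Ainv <- solve(crossprod(X, Xtil))     # (X' Xtil)^{-1}, p x p
  Ainv %*% crossprod(Xtil, VXtil) %*% Ainv
}
\end{verbatim}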
The entire algorithm is summarized in Section~\ref{sec:wholeshebang}.
\section{Convergence of the matrix norm}\label{sec:normconvergence}
In this section we prove a bound on the norm of the matrix
that implements backfitting for our random effects $\boldsymbol{a}$ and $\boldsymbol{b}$
and show how this controls the number of iterations required.
In our algorithm, backfitting is applied to $\mathcal{Y}$ as well as to each non-intercept column of $\mathcal{X}$
so we do not need to consider the updates for $\mathcal{X}\hat\beta$.
It is useful to take account of intercept adjustments in backfitting,
via the centerings described in Section~\ref{sec:backfitting},
because the column space of $\mathcal{Z}_A$
intersects the column space of $\mathcal{Z}_B$: both
contain the intercept column of ones.
In backfitting we alternate between adjusting $\boldsymbol{a}$ given $\boldsymbol{b}$ and
$\boldsymbol{b}$ given $\boldsymbol{a}$. At any iteration, the new $\boldsymbol{a}$ is an affine function of
the previous $\boldsymbol{b}$
and then the new $\boldsymbol{b}$ is an affine function of the new $\boldsymbol{a}$.
This makes the new $\boldsymbol{b}$ an affine function of the previous $\boldsymbol{b}$.
We will study that affine function to find conditions where
the updates converge. If the $\boldsymbol{b}$ updates converge, then so must the $\boldsymbol{a}$
updates.
Because the updates are affine they can be written in the form
$$
\boldsymbol{b} \gets M\boldsymbol{b} + \eta
$$
for $M\in\mathbb{R}^{C\times C}$ and $\eta\in\mathbb{R}^C$.
We iterate this update and
it is convenient to start with $\boldsymbol{b} = \boldsymbol{0}$.
We already know from \cite{buja:hast:tibs:1989} that this backfitting
will converge. However, we want more. We want to avoid
having the number of iterations required grow with $N$.
We can write the solution $\boldsymbol{b}$ as
$$
\boldsymbol{b} = \eta +\sum_{k=1}^\infty M^k\eta,
$$
and in computations we truncate this sum after $K$ steps
producing an error $\sum_{k>K}M^k\eta$. We want
$\sup_{\eta\ne0}\Vert \sum_{k>K}M^k\eta\Vert/\Vert\eta\Vert<\epsilon$
to hold with probability tending to one as the sample size
increases for any $\epsilon$, given sufficiently large $K$.
For this it suffices to have the spectral
radius $\lambda_{\max}(M)<1-\delta$ hold with probability
tending to one for some $\delta>0$.
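In code, the truncated iteration is straightforward. The sketch below takes $M$ and $\eta$ as given; in our actual algorithm the update is only ever applied implicitly through the smoothers.
\begin{verbatim}
# Iterate b <- M b + eta starting from b = 0, i.e. accumulate
# eta + M eta + ... + M^K eta, stopping once the change is small.
iterate_b <- function(M, eta, K = 100, tol = 1e-8) {
  b <- numeric(length(eta))
  for (k in seq_len(K)) {
    bnew <- as.vector(M %*% b) + eta
    if (sum((bnew - b)^2) <= tol^2 * sum(eta^2)) return(bnew)
    b <- bnew
  }
  b
}
\end{verbatim}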
Now for any $1\le p\le\infty$ we have
$$
\lambda_{\max}(M)\le \Vert M\Vert_{p}
\equiv \sup_{\boldsymbol{x}\in \mathbb{R}^C\setminus\{\boldsymbol{0}\}}
\frac{\Vert M\boldsymbol{x}\Vert_p}{\Vert \boldsymbol{x}\Vert_p}.
$$
The explicit formula
$$
\Vert M\Vert_{1}
\equiv \sup_{\boldsymbol{x}\in \mathbb{R}^C\setminus\{\boldsymbol{0}\}}
\frac{\Vert M\boldsymbol{x}\Vert_1}{\Vert \boldsymbol{x}\Vert_1}
= \max_{1\le s\le C}\sum_{j=1}^C | M_{js}|
$$
makes the $L_1$ matrix norm very tractable theoretically
and so that is the one we study. We look at this and some
other measures numerically in Section~\ref{sec:empiricalnorms}.
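For a dense matrix, each of these measures is a one-liner in base R; at the sparse sizes of Section~\ref{sec:stitch} one would instead use sparse linear algebra, but the definitions are the same.
\begin{verbatim}
norm1   <- function(M) max(colSums(abs(M)))          # L1 operator norm
norminf <- function(M) max(rowSums(abs(M)))          # L-infinity norm
norm2   <- function(M) svd(M, nu = 0, nv = 0)$d[1]   # L2 (spectral) norm
specrad <- function(M) max(Mod(eigen(M, only.values = TRUE)$values))
\end{verbatim}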
\subsection{Updates}
Recall that $Z\in\{0,1\}^{R\times C}$ describes the pattern of observations.
In a model with no intercept, centering the responses and
then taking shrunken means as in \eqref{eq:backfit} would yield
these updates
\begin{align*}
a_i &\gets \frac{\sum_s Z_{is}(Y_{is}-b_s)}{N_{i\sumdot}+\lambda_A}\quad\text{and}\quad
b_j \gets \frac{\sum_i Z_{ij}(Y_{ij}-a_i)}{N_{\sumdot j}+\lambda_B}.
\end{align*}
The update from the old $\boldsymbol{b}$ to the new $\boldsymbol{a}$ and
then to the new $\boldsymbol{b}$
takes the form $\boldsymbol{b}\gets M\boldsymbol{b}+\eta$ for
$M=M^{(0)}$ where
$$
M^{(0)}_{js} =
\frac1{N_{\sumdot j}+\lambda_B}\sum_i \frac{Z_{is}Z_{ij}}{N_{i\sumdot}+\lambda_A}.$$
This update $M^{(0)}$ alternates shrinkage estimates for $\boldsymbol{a}$
and $\boldsymbol{b}$ but does no centering.
We don't exhibit $\eta$ because it does not affect the
convergence speed.
In the presence of an intercept, we know that $\sum_ia_i=0$
should hold at the solution and we can impose this simply
and very directly by centering the $a_i$, taking
\begin{align*}
a_i &\gets \frac{\sum_s Z_{is}(Y_{is}-b_s)}{N_{i\sumdot}+\lambda_A}
-\frac1R\sum_{r=1}^R\frac{\sum_s Z_{rs}(Y_{rs}-b_s)}{N_{r\sumdot}+\lambda_A},
\quad\text{and}\\
b_j &\gets \frac{\sum_i Z_{ij}(Y_{ij}-a_i)}{N_{\sumdot j}+\lambda_B}.
\end{align*}
The intercept estimate will then be $\hat\beta_0=(1/C)\sum_jb_j$ which
we can subtract from $b_j$ upon convergence.
This iteration has the update matrix $M^{(1)}$ with
\begin{align}\label{eq:monejs}
M^{(1)}_{js}
&=\frac1{N_{\sumdot j}+\lambda_B}\sum_r
\frac{Z_{rs}(Z_{rj}-N_{\sumdot j}/R)}{N_{r\sumdot}+\lambda_A}
\end{align}
after replacing a sum over $i$ by an equivalent one over $r$.
In practice, we prefer to use the weighted centering from
Section~\ref{sec:centered-operators} to center the $a_i$
because it provides a symmetric smoother $\tilde\mathcal{S}_G$
that supports computation of $\widehat\cov(\hat\beta_{\mathrm{GLS}})$.
While it is more complicated to analyze, it is easily computable
and it satisfies the optimality condition in Theorem~\ref{thm:smartcenter}.
The algorithm is for a generic response $\mathcal{R}\in\mathbb{R}^N$ such as $\mathcal{Y}$
or a column of $\mathcal{X}$.
Let us illustrate it for the case $\mathcal{R}=\mathcal{Y}$.
We begin with the vector of $N$ values $Y_{ij}-b_{j}$
and so $Y^+_i = \sum_sZ_{is}(Y_{is}-b_s).$
Then
$w_i = (N_{i\sumdot}+\lambda_A)^{-1}/\sum_r(N_{r\sumdot}+\lambda_A)^{-1}$
and the updated $a_r$ is
\begin{align*}
\frac{Y^+_r-\sum_iw_i Y^+_i}{N_{r\sumdot}+\lambda_A}
&=
\frac{\sum_sZ_{rs}(Y_{rs}-b_s)-\sum_iw_i
\sum_sZ_{is}(Y_{is}-b_s)}{N_{r\sumdot}+\lambda_A}.
\end{align*}
Using shrunken averages of $Y_{ij}-a_i$, the new $b_{j}$ are
\begin{align*}
b_{j} &=\frac1{N_{\sumdot j}+\lambda_B}\sum_rZ_{rj}
\biggl(Y_{rj}-
\frac{\sum_sZ_{rs}(Y_{rs}-b_s)-\sum_iw_i
\sum_sZ_{is}(Y_{is}-b_s)}{N_{r\sumdot}+\lambda_A}
\biggr).
\end{align*}
Now $\boldsymbol{b} \gets M\boldsymbol{b}+\eta$ for $M=M^{(2)}$, where
\begin{align}\label{eq:mtwojs}
M^{(2)}_{js}
&=\frac1{N_{\sumdot j}+\lambda_B}\sum_r
\frac{Z_{rj}}{N_{r\sumdot}+\lambda_A}
\biggl(Z_{rs} - \frac{\sum_{i}\frac{Z_{is}}{N_{i\sumdot}+\lambda_{A}}}{\sum_i{\frac{1}{N_{i\sumdot}+\lambda_{A}}}}\biggr).
\end{align}
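For modest $C$ the matrix $M^{(2)}$ in \eqref{eq:mtwojs} can be formed explicitly. Here is a dense R sketch taking the $R\times C$ indicator matrix \texttt{Z}; the function name is ours.
\begin{verbatim}
make_M2 <- function(Z, lambdaA, lambdaB) {
  dA <- rowSums(Z) + lambdaA           # N_i. + lambda_A
  dB <- colSums(Z) + lambdaB           # N_.j + lambda_B
  W  <- Z / dA                         # row i of Z scaled by 1/dA[i]
  zbar <- colSums(W) / sum(1 / dA)     # weighted column means of Z
  (crossprod(W, Z) - outer(colSums(W), zbar)) / dB  # row j scaled by 1/dB[j]
}
\end{verbatim}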
Our preferred algorithm applies the optimal update
from Theorem~\ref{thm:smartcenter}
to both $\boldsymbol{a}$ and $\boldsymbol{b}$ updates. With that choice we do
not need to decide beforehand which random effects to center
and which to leave uncentered to contain the intercept.
We call the corresponding matrix $M^{(3)}$.
Our theory below analyzes $\Vert M^{(1)}\Vert_1$
and $\Vert M^{(2)}\Vert_1$,
which have simpler expressions than
$\Vert M^{(3)}\Vert_1$.
Update $M^{(0)}$ uses symmetric smoothers
for $A$ and $B$. Both are shrunken
averages. The naive centering update $M^{(1)}$ uses
a non-symmetric smoother
$\mathcal{Z}_A(I_R-\mathbf{1}_R\mathbf{1}_R^\mathsf{T}/R)(\mathcal{Z}_A^\mathsf{T}\mathcal{Z}_A+\lambda_AI_R)^{-1}\mathcal{Z}_A^\mathsf{T}$
on the $a_i$ with a symmetric smoother on $b_{j}$
and hence it does not generally produce a symmetric
smoother needed for efficient computation
of $\widehat\cov(\hat\beta_{\mathrm{GLS}})$.
The update $M^{(2)}$ uses two symmetric
smoothers, one optimal and one a simple shrunken mean.
The update $M^{(3)}$ takes the optimal
smoother for both $A$ and $B$.
Thus both $M^{(2)}$ and $M^{(3)}$
support efficient computation of $\widehat\cov(\hat\beta_{\mathrm{GLS}})$.
A subtle point is that these symmetric smoothers are
matrices in $\mathbb{R}^{N\times N}$ while the matrices $M^{(k)}\in\mathbb{R}^{C\times C}$
are not symmetric.
\subsection{Model for $Z_{ij}$}\label{sec:modelz}
We will state conditions on $Z_{ij}$ under which
both $\Vert M^{(1)}\Vert_1$ and $\Vert M^{(2)}\Vert_1$
are bounded
below $1$ with probability tending to one, as the problem size grows.
We need the following exponential inequalities.
\begin{lemma}\label{lem:hoeff}
If $X\sim\mathrm{Bin}(n,p)$, then for any $t\ge0$,
\begin{align*}
\Pr( X\ge np+t ) &\le \exp( -2t^2/n ),\quad\text{and}\\
\Pr( X\le np-t ) &\le \exp( -2t^2/n ).
\end{align*}
\end{lemma}
\begin{proof}
This follows from Hoeffding's theorem.
\end{proof}
\begin{lemma}\label{lem:binounionbound}
Let $X_i\sim\mathrm{Bin}(n,p)$ for $i=1,\dots,m$, not necessarily independent.
Then for any $t\ge0$,
\begin{align*}
\Pr\Bigl( \max_{1\le i\le m} X_{i} \ge np+t \Bigr) &\le m\exp( -2t^2/n ) ,\quad\text{and}\\
\Pr\Bigl( \min_{1\le i\le m} X_{i} \le np-t \Bigr) &\le m\exp( -2t^2/n ).
\end{align*}
\end{lemma}
\begin{proof}
This is from the union bound applied
to Lemma~\ref{lem:hoeff}.
\end{proof}
Here is our sampling model.
We index the size of our problem by $S\to\infty$.
The sample size $N$ will satisfy $\mathbb{E}(N)\ge S$.
The number of rows and columns in the data set are
$$R = S^\rho\quad\text{and}\quad C=S^\kappa$$
respectively, for positive numbers $\rho$ and $\kappa$.
Because our application domain has $N\ll RC$, we
assume that $\rho+\kappa>1$.
We ignore that $R$ and $C$ above are not necessarily integers.
In our model, $Z_{ij}\sim\mathrm{Bern}(p_{ij})$ independently with
\begin{align}\label{eq:defab}
\frac{S}{RC} \le p_{ij} \le \Upsilon\frac{S}{RC}
\quad\text{for}\quad 1\le\Upsilon<\infty.
\end{align}
That is $1\le p_{ij} S^{\rho+\kappa-1}\le\Upsilon$.
Letting $p_{ij}$ depend on $i$ and $j$
allows the probability model to capture
stylistic preferences affecting the missingness
pattern in the ratings data.
\subsection{Bounds for row and column size}
Letting $X \preccurlyeq Y$ mean that $X$ is stochastically smaller than $Y$, we know that
\begin{align*}
\mathrm{Bin}(R, S^{1-\rho-\kappa}) &\preccurlyeq N_{\sumdot j} \preccurlyeq \mathrm{Bin}( R, \Upsilon S^{1-\rho-\kappa}),\quad\text{and}\\
\mathrm{Bin}(C,S^{1-\rho-\kappa}) &\preccurlyeq N_{i\sumdot} \preccurlyeq \mathrm{Bin}( C, \Upsilon S^{1-\rho-\kappa}).
\end{align*}
By Lemma \ref{lem:hoeff}, if $t\ge0$, then
\begin{align*}
\Pr( N_{i\sumdot} \ge S^{1-\rho}(\Upsilon+t))
&\le \Pr\bigl( \mathrm{Bin}(C,\Upsilon S^{1-\rho-\kappa}) \ge S^{1-\rho}(\Upsilon+t)\bigr)\\
&\le \exp(-2(S^{1-\rho}t)^2/C)\\
&= \exp(-2S^{2-\kappa-2\rho}t^2).
\end{align*}
Therefore if $2\rho+\kappa<2$, we find
using Lemma~\ref{lem:binounionbound} that
\begin{align*}
&\Pr\bigl( \max_iN_{i\sumdot} \ge S^{1-\rho}(\Upsilon+\epsilon)\bigr)
\le S^\rho\exp(-2S^{2-\kappa-2\rho}\epsilon^2)\to0
\end{align*}
for any $\epsilon>0$.
Combining this with an analogous lower bound,
\begin{align}\label{eq:boundnid}
\lim_{S\to\infty}\Pr\bigl( (1-\epsilon) S^{1-\rho}\le \min_i N_{i\sumdot} \le \max_i N_{i\sumdot} \le (\Upsilon+\epsilon) S^{1-\rho}\bigr)=1.
\end{align}
Likewise, if $\rho+2\kappa<2$, then for any $\epsilon>0$,
\begin{align}\label{eq:boundndj}
\lim_{S\to\infty}\Pr\bigl( (1-\epsilon)S^{1-\kappa}\le \min_j N_{\sumdot j} \le \max_j N_{\sumdot j} \le (\Upsilon+\epsilon) S^{1-\kappa}\bigr)=1.
\end{align}
\subsection{Interval arithmetic}
We will replace $N_{i\sumdot}$ and other quantities
by intervals that contain them with probability tending to one, and then use interval arithmetic in order to streamline
some of the steps in our proofs.
For instance,
$$N_{i\sumdot}\in [(1-\epsilon)S^{1-\rho},(\Upsilon+\epsilon)S^{1-\rho}]
= [1-\epsilon,\Upsilon+\epsilon]\times S^{1-\rho}
= [1-\epsilon,\Upsilon+\epsilon]\times \frac{S}{R}$$
holds simultaneously for all $1\le i\le R$ with probability
tending to one as $S\to\infty$.
In interval arithmetic,
$$[A,B]+[a,b]=[a+A,b+B]\quad\text{and}\quad [A,B]-[a,b]=[A-b,B-a].$$
If $0<a\le b<\infty$ and $0<A\le B<\infty$, then
$$[A,B]\times[a,b] = [Aa,Bb]\quad\text{and}\quad [A,B]/[a,b] = [A/b,B/a].$$
Similarly, if $a<0<b$ and $X\in[a,b]$, then
$|X|\in[0,\max(|a|,|b|)]$.
Our arithmetic operations on intervals yield
new intervals guaranteed to contain the results
obtained using any members of the original intervals.
We do not necessarily use the smallest such interval.
\subsection{Co-observation}
Recall that the co-observation matrices are $Z^\mathsf{T} Z\in\mathbb{Z}_{\ge0}^{C\times C}$
and $ZZ^\mathsf{T}\in\mathbb{Z}_{\ge0}^{R\times R}$, whose entries count the indices observed in common.
If $s\ne j$, then
$$
\mathrm{Bin}\Bigl( R,\frac{S^2}{R^2C^2}\Bigr)
\preccurlyeq (Z^\tran Z)_{sj}\preccurlyeq
\mathrm{Bin}\Bigl( R,\frac{\Upsilon^2S^2}{R^2C^2}\Bigr).
$$
That is
$\mathrm{Bin}(S^\rho, S^{2-2\rho-2\kappa})
\preccurlyeq
(Z^\tran Z)_{sj}
\preccurlyeq
\mathrm{Bin}(S^\rho, \Upsilon^2S^{2-2\rho-2\kappa}).
$
For $t\ge0$,
\begin{align*}
\Pr\Bigl( \max_s\max_{j\ne s}(Z^\tran Z)_{sj}\ge (\Upsilon^2+t)S^{2-\rho-2\kappa}\Bigr)
&\le \frac{C^2}2\exp( -(tS^{2-\rho-2\kappa})^2/R)\\
&= \frac{C^2}2\exp( -t^2 S^{4-3\rho-4\kappa}).
\end{align*}
If $3\rho+4\kappa<4$
then
\begin{align*}
&\Pr\Bigl( \max_s\max_{j\ne s} \,(Z^\tran Z)_{sj} \ge (\Upsilon^2+\epsilon)S^{2-\rho-2\kappa}\Bigr)\to0,
\quad\text{and}\\
&\Pr\Bigl( \min_s\min_{j\ne s} \,(Z^\tran Z)_{sj} \le (1-\epsilon)S^{2-\rho-2\kappa}\Bigr)\to0,
\end{align*}
for any $\epsilon>0$.
\subsection{Asymptotic bounds for $\Vert M\Vert_1$}
Here we prove upper bounds for $\Vert M^{(k)}\Vert_1$ for $k=1,2$
of equations~\eqref{eq:monejs} and~\eqref{eq:mtwojs}, respectively.
The bounds depend on $\Upsilon$ and there are values of $\Upsilon>1$
for which these norms are bounded strictly below one,
with probability tending to one.
\begin{theorem}\label{thm:m1norm1}
Let $Z_{ij}$ follow the model from Section~\ref{sec:modelz}
with $\rho,\kappa\in(0,1)$, that satisfy $\rho+\kappa>1$,
$2\rho+\kappa<2$ and $3\rho+4\kappa<4$.
Then for any $\epsilon>0$,
\begin{align}\label{eq:claim1}
&
\Pr\bigl( \Vert M^{(1)} \Vert_1\le
\Upsilon^2-\Upsilon^{-2}+\epsilon
\bigr)\to1
,\quad\text{and}\\
&\Pr\bigl( \Vert M^{(2)}\Vert_1\le
\Upsilon^2-\Upsilon^{-2}+\epsilon\bigr)\to1 \label{eq:claim2}
\end{align}
as $S\to\infty$.
\end{theorem}
\begin{figure}[t!]
\centering
\includegraphics[width=.8\hsize]{figdomain2}
\caption{
\label{fig:domainofinterest}
The large shaded triangle is the domain of interest $\mathcal{D}$ for
Theorem~\ref{thm:m1norm1}.
The smaller shaded triangle shows a region where the analogous update
to $\boldsymbol{a}$ would have acceptable norm. The points marked are the ones we look at numerically,
including $(0.88,0.57)$ which corresponds to the Stitch Fix data in
Section~\ref{sec:stitch}.
}
\end{figure}
\begin{proof}
Without loss of generality we assume that $\epsilon<1$.
We begin with~\eqref{eq:claim2}.
Let $M=M^{(2)}$.
When $j\ne s$,
\begin{align*}
M_{js}&=\frac1{N_{\sumdot j}+\lambda_B}\sum_r
\frac{Z_{rj}}{N_{r\sumdot}+\lambda_A}
(Z_{rs} -\bar Z_{\text{\tiny$\bullet$} s}),\quad\text{for}\\
\bar Z_{\text{\tiny$\bullet$} s}&=
\sum_i
\frac{Z_{is}}{N_{i\sumdot}+\lambda_A}
\Bigm/
{\sum_{i}\frac{1}{N_{i\sumdot}+\lambda_{A}}}.
\end{align*}
Although $|Z_{rs}-\bar Z_{\text{\tiny$\bullet$} s}|\le1$, replacing
$Z_{rs}-\bar Z_{\text{\tiny$\bullet$} s}$ by one does not prove to be
sharp enough for our purposes.
Every $N_{r\sumdot}+\lambda_A\in S^{1-\rho} [1-\epsilon, \Upsilon+\epsilon]$
with probability tending to one and so
\begin{align*}
\frac{\bar Z_{\text{\tiny$\bullet$} s}}{N_{\sumdot j}+\lambda_B}\sum_r
\frac{Z_{rj}}{N_{r\sumdot}+\lambda_A}
&\in
\frac{\bar Z_{\text{\tiny$\bullet$} s}}{N_{\sumdot j}+\lambda_B}\sum_r
\frac{Z_{rj}}{[1-\epsilon,\Upsilon+\epsilon]S^{1-\rho}}\\
&\subseteq [1-\epsilon,\Upsilon+\epsilon]^{-1}\bar Z_{\text{\tiny$\bullet$} s} S^{\rho-1}.
\end{align*}
Similarly
\begin{align*}
\bar Z_{\text{\tiny$\bullet$} s} &\in
\frac{\sum_iZ_{is}[1-\epsilon,\Upsilon+\epsilon]^{-1}}
{R[1-\epsilon,\Upsilon+\epsilon]^{-1}}
\subseteq\frac{N_{\sumdot s}}{R}[1-\epsilon,\Upsilon+\epsilon][1-\epsilon,\Upsilon+\epsilon]^{-1}\\
&\subseteq S^{1-\rho-\kappa}
[1-\epsilon,\Upsilon+\epsilon]^2[1-\epsilon,\Upsilon+\epsilon]^{-1}
\end{align*}
and so
\begin{align}\label{eq:zrsbarpart}
\frac{\bar Z_{\text{\tiny$\bullet$} s}}{N_{\sumdot j}+\lambda_B}\sum_r
\frac{Z_{rj}}{N_{r\sumdot}+\lambda_A}
\in S^{-\kappa}
\frac{[1-\epsilon,\Upsilon+\epsilon]^2}{[1-\epsilon,\Upsilon+\epsilon]^2}
\subseteq \frac1C
\Bigl[
\Bigl(\frac{1-\epsilon}{\Upsilon+\epsilon}\Bigr)^2
, \Bigl(\frac{\Upsilon+\epsilon}{1-\epsilon}\Bigr)^2
\Bigr].
\end{align}
Next using bounds on the co-observation counts,
\begin{align}\label{eq:zrspart}
\frac1{N_{\sumdot j}+\lambda_B}\sum_r\frac{Z_{rj}Z_{rs}}{N_{r\sumdot}+\lambda_A}
\in \frac{S^{\rho+\kappa-2}(Z^\tran Z)_{sj}}{[1-\epsilon,\Upsilon+\epsilon]^2}
\subseteq
\frac1C
\frac{[1-\epsilon,\Upsilon^2+\epsilon]}{[1-\epsilon,\Upsilon+\epsilon]^2}.
\end{align}
Combining~\eqref{eq:zrsbarpart} and~\eqref{eq:zrspart}
\begin{align*}
M_{js} \in &
\frac1C
\Bigl[
\frac{1-\epsilon}{(\Upsilon+\epsilon)^2}-
\Bigl(\frac{\Upsilon+\epsilon}{1-\epsilon}\Bigr)^2
,
\frac{\Upsilon^2+\epsilon}{(1-\epsilon)^2}
-\Bigl(\frac{1-\epsilon}{\Upsilon+\epsilon}\Bigr)^2
\Bigr].
\end{align*}
For any $\epsilon'>0$ we can choose $\epsilon$ small enough that
$$M_{js} \in C^{-1}[\Upsilon^{-2}-\Upsilon^2-\epsilon',
\Upsilon^2-\Upsilon^{-2}+{\epsilon'}]
$$
and then $|M_{js}|\le (\Upsilon^2-\Upsilon^{-2}+\epsilon')/C$.
Next, arguments like the preceding
give $|M_{jj}|\le (1-\epsilon')^{-2}(\Upsilon+\epsilon')S^{\rho-1}\to0$.
Then with probability tending to one,
$$
\sum_j|M_{js}|
\le\Upsilon^2-\Upsilon^{-2}
+2\epsilon'.
$$
This bound holds for all $s\in\{1,2,\dots,C\}$, establishing~\eqref{eq:claim2}.
The proof of~\eqref{eq:claim1} is similar.
The quantity $\bar Z_{\text{\tiny$\bullet$} s}$
is replaced by $(1/R)\sum_iZ_{is}/(N_{i\sumdot}+\lambda_A)$.
\end{proof}
It is interesting to find the largest $\Upsilon$ with
$\Upsilon^2-\Upsilon^{-2}\le1$.
It is
$((1+5^{1/2})/2)^{1/2}\doteq 1.27$.
\section{Convergence and computation}\label{sec:empiricalnorms}
In this section we make some computations on synthetic data
following the probability model from Section~\ref{sec:normconvergence}.
First we study the norms of our update matrix $M^{(2)}$,
which affect the number of iterations to convergence.
In addition to $\Vert\cdot\Vert_1$ covered in Theorem~\ref{thm:m1norm1}
we also consider $\Vert\cdot\Vert_2$, $\Vert\cdot\Vert_\infty$ and $\lambda_{\max}(\cdot)$.
Then we compare the cost to compute $\hat\beta_\mathrm{GLS}$ by
our backfitting method with that of lmer \citep{lme4}.
The problem size is indexed by $S$.
Indices $i$ go from $1$ to $R=\lceil S^\rho\rceil$
and indices $j$ go from $1$ to $C=\lceil S^\kappa\rceil$.
Reasonable parameter values have $\rho,\kappa\in(0,1)$
with $\rho+\kappa>1$.
Theorem~\ref{thm:m1norm1} applies when
$2\rho+\kappa<2$ and $3\rho+4\kappa<4$.
Figure~\ref{fig:domainofinterest} depicts this
triangular domain of interest $\mathcal{D}$.
There is another triangle $\mathcal{D}'$ where a corresponding update
for $\boldsymbol{a}$ would satisfy the conditions of Theorem~\ref{thm:m1norm1}.
Then $\mathcal{D}\cup\mathcal{D}'$ is a non-convex polygon of five sides.
Figure~\ref{fig:domainofinterest}
also shows $\mathcal{D}'\setminus\mathcal{D}$ as a second triangular region.
For points $(\rho,\kappa)$ near the line $\rho+\kappa=1$, the matrix $Z$
will be mostly ones unless $S$ is very large. For points $(\rho,\kappa)$
near the upper corner $(1,1)$, the matrix $Z$ will be extremely sparse
with each $N_{i\sumdot}$ and $N_{\sumdot j}$ having nearly a
Poisson distribution with mean between $1$ and $\Upsilon$.
The fraction of potential values that have been observed
is $O(S^{1-\rho-\kappa})$.
Given $p_{ij}$, we generate our observation matrix via $Z_{ij} \stackrel{\mathrm{ind}}{\sim}\mathrm{Bern}(p_{ij})$.
These probabilities are first generated via
$p_{ij}= U_{ij}S^{1-\rho-\kappa}$ where
$U_{ij}\stackrel{\mathrm{iid}}{\sim}\mathbb{U}[1,\Upsilon]$ and $\Upsilon$ is
the largest value for which $\Upsilon^2-\Upsilon^{-2}\le1$.
For small $S$ and $\rho+\kappa$ near $1$ we can get
some values $p_{ij}>1$ and in that case we take $p_{ij}=1$.
The following $(\rho,\kappa)$ combinations are of interest.
First, $(4/5,2/5)$ is the closest vertex of the domain of interest to the point $(1,1)$.
Second, $(2/5,4/5)$ is outside the domain of interest for the $\boldsymbol{b}$
but within the domain for the analogous $\boldsymbol{a}$ update.
Third, among points with $\rho=\kappa$, the value $(4/7,4/7)$
is the farthest one from the origin that is in the domain of interest.
We also look at some points on the $45$ degree line that are outside
the domain of interest because the sufficient conditions in
Theorem~\ref{thm:m1norm1}
might not be necessary.
In our matrix norm computations we took $\lambda_A=\lambda_B=0$.
This completely removes shrinkage and will make it harder for the algorithm to converge
than would be the case for the positive $\lambda_A$ and $\lambda_B$ that hold
in real data. The values of $\lambda_A$ and $\lambda_B$
appear in expressions $N_{i\sumdot}+\lambda_A$ and $N_{\sumdot j}+\lambda_B$ where their
contribution is asymptotically negligible, so conservatively setting them to zero
will nonetheless be realistic for large data sets.
\begin{figure}
\centering
\includegraphics[width=.8\hsize]{norm_n_log_xy_with_lines_revised}
\caption{\label{fig:1normvsn}
Norm
$\Vert M^{(2)}\Vert_1$ of centered update matrix
versus problem size $S$ for different $(\rho, \kappa)$.
}
\end{figure}
\noindent
We sample from the model multiple times at various values of $S$
and plot $\Vert M^{(2)}\Vert_1$ versus $S$ on a logarithmic scale.
Figure~\ref{fig:1normvsn} shows the results.
We observe that $\Vert M^{(2)}\Vert_1$ is below $1$ and decreasing
with $S$ for all the examples $(\rho,\kappa)\in\mathcal{D}$.
This holds also for $(\rho,\kappa)=(0.60,0.60)\not\in\mathcal{D}$.
We chose that point because it is on the convex hull of $\mathcal{D}\cup\mathcal{D}'$.
The point $(\rho,\kappa)=(0.40,0.80)$ is also not in $\mathcal{D}$.
Figure~\ref{fig:1normvsn} shows large values of $\Vert M^{(2)}\Vert_1$ for this
case. Those values increase with $S$, but remain below $1$ in the range considered.
This is a case where the update from $\boldsymbol{a}$ to $\boldsymbol{a}$ would have norm well below $1$
and decreasing with $S$, so backfitting would converge.
We do not know whether $\Vert M^{(2)}\Vert_1>1$ will occur for larger $S$.
The point $(\rho,\kappa)=(0.70,0.70)$ is not in the domain $\mathcal{D}$
covered by Theorem~\ref{thm:m1norm1},
and we see that $\Vert M^{(2)}\Vert_1>1$ and generally increases with $S$,
as shown in Figure~\ref{fig:7070norms}.
This does not mean that backfitting must fail to converge.
Here we find that $\Vert M^{(2)}\Vert_2<1$ and generally decreases as $S$
increases. This is a strong indication that
the number of backfitting iterations required
will not grow with $S$ for this $(\rho,\kappa)$ combination.
We cannot tell whether $\Vert M^{(2)}\Vert_2$ will decrease to zero
but that is what appears to happen.
We consistently find in our computations
that $\lambda_{\max}(M^{(2)})\le \Vert M^{(2)}\Vert_2\le\Vert M^{(2)}\Vert_1$.
The first of these inequalities must necessarily hold.
For a symmetric matrix $M$ we know that $\lambda_{\max}(M)=\Vert M\Vert_2$
which is then necessarily no larger than $\Vert M\Vert_1$.
Our update matrices are nearly symmetric but not perfectly so.
We believe that explains why their $L_2$ norms are
close to their spectral radius and also smaller than their $L_1$ norms.
While the $L_2$ norms are empirically more favorable than the $L_1$
norms, they are not amenable to our theoretical treatment.
\begin{figure}
\centering
\begin{subfigure}{.48\textwidth}
\centering
\includegraphics[scale=.4]{norm_vs_S_with_lines_70_L1_written_norm_logxy}
\end{subfigure}
\begin{subfigure}{.48\textwidth}
\centering
\includegraphics[scale=.4]{norm_vs_S_with_lines_70_L2_written_norm_logxy_main_correct}
\end{subfigure}
\caption{\label{fig:7070norms}
The left panel shows $\Vert M^{(2)}\Vert_1$ versus $S$.
The right panel shows $\Vert M^{(2)}\Vert_2$ versus $S$
with a logarithmic vertical scale.
Both have $(\rho,\kappa)=(0.7,0.7)$.
}
\end{figure}
We believe that backfitting will have a spectral radius well below $1$
for more cases than we can as yet prove.
In addition to the previous figures showing matrix norms
as $S$ increases for certain special values of $(\rho,\kappa)$ we
have computed contour maps of those norms over
$(\rho,\kappa)\in[0,1]^2$ for $S=10{,}000$.
See Figure~\ref{fig:contours}.
To compare the computation times for algorithms we
generated $Z_{ij}$ as above and also took
$x_{ij}\stackrel{\mathrm{iid}}{\sim}\mathcal{N}(0,I_7)$ plus an intercept, making $p=8$
fixed effect parameters.
Although backfitting can run with $\lambda_A=\lambda_B=0$,
lmer cannot do so for numerical reasons. So we took $\sigma^2_A=\sigma^2_B=1$
and $\sigma^2_E=1$ corresponding to $\lambda_A=\lambda_B=1$.
The cost per iteration does not depend on $Y_{ij}$ and hence not
on $\beta$ either. We used $\beta=0$.
Figure~\ref{fig:comptimes} shows computation times
for one single iteration when $(\rho,\kappa)=(0.52,0.52)$ and when $(\rho,\kappa)=(0.70,0.70)$.
The time to do one iteration in lmer grows roughly like $N^{3/2}$
in the first case. For the second case, it appears to grow at
the even faster rate of $N^{2.1}$.
Solving a system of $S^\kappa\times S^\kappa$ equations would cost
$S^{3\kappa} = S^{2.1} = O(N^{2.1})$, which explains the observed rate.
This analysis would predict $O(N^{1.56})$ for $\rho=\kappa=0.52$
but that is only minimally different from $O(N^{3/2})$.
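The quoted rates are slopes from log-log fits of per-iteration time against $N$, as in this sketch (variable names are ours):
\begin{verbatim}
# times and Ns hold per-iteration times and sample sizes.
coef(lm(log(times) ~ log(Ns)))[2]   # estimated cost exponent
\end{verbatim}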
These experiments were carried out in R on a computer
with the macOS operating system, 16 GB of memory and an Intel i7 processor. Each backfitting iteration entails solving \eqref{eq:backfit} along with the fixed effects.
The cost per iteration for backfitting closely follows the $O(N)$
rate predicted by the theory.
OLS takes only one iteration and it is also of
$O(N)$ cost. In both of these cases $\Vert M^{(2)}\Vert_2$ is bounded away
from one so the number of backfitting iterations does not grow with $S$.
For $\rho=\kappa=0.52$,
backfitting took $4$ iterations to converge for the smaller values of $S$
and $3$ iterations for the larger ones.
For $\rho=\kappa=0.70$,
backfitting took $6$ iterations for smaller $S$ and $4$ or $5$ iterations
for larger $S$.
In each case our convergence criterion was a relative
change of $10^{-8}$
as described in Section~\ref{sec:wholeshebang}.
Further backfitting to compute BLUPs $\hat\boldsymbol{a}$ and $\hat\boldsymbol{b}$
given $\hat\beta_{\mathrm{GLS}}$
took at most $5$ iterations for $\rho=\kappa=0.52$
and at most $10$ iterations for $\rho=\kappa=0.7$.
In the second example, lme4 did not reach convergence in
our time window so we ran it for just $4$ iterations to measure its cost per iteration.
\begin{figure}[!t]
\centering
\begin{subfigure}{.48\textwidth}
\centering
\includegraphics[scale=.28]{one_norm_reshaped.png}
\end{subfigure}
\begin{subfigure}{.48\textwidth}
\centering
\includegraphics[scale=.28]{infinity_norm_reshaped.png}
\end{subfigure}
\centering
\begin{subfigure}{.48\textwidth}
\centering
\includegraphics[height = 5.2cm, width = 5.5cm]{two_norm_reshaped.png}
\end{subfigure}
\begin{subfigure}{.48\textwidth}
\centering
\includegraphics[height = 5.2cm, width = 5.44cm]{spectral_radius_reshaped.png}
\end{subfigure}
\caption{\label{fig:contours}
Numerically computed matrix norms
for $M^{(2)}$ using $S=10{,}000$.
The color code varies with the subfigures.
}
\end{figure}
\begin{figure}
\centering
\begin{subfigure}{.48\textwidth}
\centering
\includegraphics[width=1\linewidth]{time_per_iter_vs_n_last_point_1_point_2716_reference_slope_at_end_52_52_review.pdf}
\caption{$(\rho, \kappa) = (0.52,0.52)$}
\end{subfigure}
\begin{subfigure}{.48\textwidth}
\centering
\includegraphics[width=1\linewidth]{backfitting_lmer_time_total}
\caption{$(\rho, \kappa) = (0.70,0.70)$}
\end{subfigure}
\caption{\label{fig:comptimes}
Time for one iteration versus the number of observations $N$, at two points $(\rho,\kappa)$.
The cost for lmer is roughly $O(N^{3/2})$ in the top panel
and $O(N^{2.1})$ in the bottom panel. The costs for OLS and backfitting
are $O(N)$.
}
\end{figure}
\section{Example: ratings from Stitch Fix}\label{sec:stitch}
We illustrate backfitting for GLS on some data from Stitch Fix.
Stitch Fix sells clothing. They mail their customers a sample of items.
The customers may keep and purchase any of those items that they
want, while returning the others. It is valuable to predict
the extent to which a customer will like an item, not just whether they will purchase it.
Stitch Fix has provided us with some of their client ratings
data. It was anonymized, void of personally identifying
information, and as a sample it does not reflect their
total numbers of clients or items at the time they
provided it. It is also from 2015. While
it does not describe their current business, it is a valuable
data set for illustrative purposes.
The sample sizes for this data are as follows.
We received $N=5{,}000{,}000$ ratings
by $R=762{,}752$ customers on $C=6{,}318$ items.
These values of $R$ and $C$ correspond to the point $(0.88,0.57)$ in Figure~\ref{fig:domainofinterest}.
Thus $C/N\doteq 0.00126$ and $R/N\doteq 0.153$.
The data are not dominated by a single row or column because
$\max_iN_{i\sumdot}/N\doteq 9\times 10^{-6}$ and $\max_jN_{\sumdot j}/N\doteq 0.0143$.
The data are sparse because $N/(RC)\doteq 0.001$.
\subsection{An illustrative linear model}
The response $Y_{ij}$ is a rating on a ten point scale of
the satisfaction of customer $i$ with item $j$.
The data come with features about the clients and
items. In a business setting one would fit and compare
possibly dozens of different regression models to understand the data.
Our purpose here is to study large scale GLS and compare
it to ordinary least squares (OLS) and so we use just one model, not necessarily
one that we would have settled on.
For that purpose we use the same model that was
used in \cite{crelin}. It is not chosen to make OLS look as bad as
possible. Instead it is potentially the first model one might look at in
a data analysis.
For client $i$ and item $j$,
\begin{align}
Y_{ij}& = \beta_0+\beta_1\mathrm{match}_{ij}+\beta_2\mathbb{I}\{\mathrm{client\ edgy}\}_i+\beta_3\mathbb{I}\{\mathrm{item\ edgy}\}_j \notag \\
&\phe + \beta_4\mathbb{I}\{\mathrm{client\ edgy}\}_i*\mathbb{I}\{\mathrm{item\ edgy}\}_j+\beta_5\mathbb{I}\{\mathrm{client\ boho}\}_i \notag \\
&\phe + \beta_6\mathbb{I}\{\mathrm{item\ boho}\}_j+\beta_7\mathbb{I}\{\mathrm{client\ boho}\}_i*\mathbb{I}\{\mathrm{item\ boho}\}_j \notag \\
&\phe + \beta_8\mathrm{material}_{ij}+a_i+b_j+e_{ij}. \notag
\end{align}
Here $\mathrm{material}_{ij}$ is a categorical variable that is implemented via indicator variables for each type of material other than the baseline. Following \cite{crelin}, we chose `Polyester', the most common material, as the baseline.
Some customers and some items were given the adjective `edgy' in the data set. Another adjective was `boho', short for `Bohemian'.
The variable match$_{ij}\in[0,1]$ is an estimate of the probability that the customer keeps the item, made before the item was sent.
The match score is a prediction from a baseline model and is not representative of all algorithms used at Stitch Fix.
All told, the model has $p=30$ parameters.
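In R, a design of this form can be built with \texttt{model.matrix}; the data frame and column names below are illustrative rather than the actual Stitch Fix schema.
\begin{verbatim}
ratings$material <- relevel(factor(ratings$material), ref = "Polyester")
X <- model.matrix(~ match + client_edgy * item_edgy +
                    client_boho * item_boho + material,
                  data = ratings)   # p = 30 columns for these data
\end{verbatim}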
\subsection{Estimating the variance parameters}\label{sec:estim-vari-param}
We use the method of moments method from \cite{crelin}
to estimate $\theta^\mathsf{T}=(\sigma^2_A, \sigma^2_B, \sigma^2_E)$ in $O(N)$ computation.
That is in turn based on the method that
\cite{GO17} use in the intercept only model where
$Y_{ij} = \mu+a_i+b_{j}+e_{ij}$.
For that model they set
\begin{align*}
U_{A} &= \sum_{i} \sum_{j} Z_{ij}
\Bigl( Y_{ij}-\frac{1}{N_{i\sumdot}}\sum_{j^{\prime}}Z_{ij'}
Y_{ij^{\prime}}\Bigr)^{2}, \\
U_{B} &= \sum_{j}\sum_{i} Z_{ij}
\Bigl(Y_{ij}-\frac{1}{N_{\sumdot j}}\sum_{i^{\prime}}Z_{i'j}
Y_{i^{\prime}j}\Bigr)^{2}, \quad\text{and}\\
U_{E} &= N\sum_{i j} Z_{i j} \Bigl(Y_{i j}-\frac{1}{N}\sum_{i^{\prime} j^{\prime}}Z_{i'j'} Y_{i^{\prime} j^{\prime}}\Bigr)^{2}.
\end{align*}
These are, respectively, sums of within row sums of squares,
sums of within column sums of squares
and a scaled overall sum of squares.
Straightforward calculations
show that
\begin{align*}
\mathbb{E}(U_{A})&=\bigl(\sigma^2_B+\sigma^2_E\bigr)(N-R), \\
\mathbb{E}(U_{B})&=\bigl(\sigma^2_A+\sigma^2_E \bigr)(N-C), \quad\text{and}\\
\mathbb{E}(U_{E})&=\sigma^2_A\Bigl(N^{2}-\sum_{i} N_{i\sumdot}^{2}\Bigr)+\sigma^2_B\Bigl(N^{2}-\sum_{j} N_{\sumdot j}^{2}\Bigr)+\sigma^2_E(N^{2}-N).
\end{align*}
By matching moments, we can estimate $\theta$ by solving the $3 \times 3$ linear system
$$\begin{pmatrix}
0& N-R & N-R \\[.25ex]
N-C & 0 & N-C \\[.25ex]
N^{2}-\sum_i N_{i\sumdot}^{2} & N^{2}-\sum_j N_{\sumdot j}^{2} & N^{2}-N
\end{pmatrix}
\begin{pmatrix}
\sigma^2_A \\[.25ex] \sigma^2_B \\[.25ex] \sigma^2_E\end{pmatrix}
=\begin{pmatrix}
U_{A}\\[.25ex] U_{B} \\[.25ex] U_{E}\end{pmatrix}
$$
for $\theta$.
Following \cite{GO17} we note that
$\eta_{ij} =Y_{ij}-x_{ij}^\mathsf{T}\beta = a_i+b_{j}+e_{ij}$
has the same parameter $\theta$ as the $Y_{ij}$ do.
We then take $\hat\beta_{\mathrm{OLS}}$,
which \cite{GO17} show is consistent for $\beta$,
and define $\hat\eta_{ij} =Y_{ij}-x_{ij}^\mathsf{T}\hat\beta_\mathrm{OLS}$.
We then estimate $\theta$ by the above method
after replacing $Y_{ij}$ by $\hat\eta_{ij}$.
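A sketch of this $O(N)$ computation in R, taking long-format indices \texttt{i}, \texttt{j} and residuals \texttt{eta}; the names are ours.
\begin{verbatim}
mom_theta <- function(i, j, eta) {
  N  <- length(eta)
  Ni <- tabulate(i); Nj <- tabulate(j)   # N_i. and N_.j
  R  <- length(Ni);  C <- length(Nj)
  UA <- sum((eta - ave(eta, i))^2)       # within-row sum of squares
  UB <- sum((eta - ave(eta, j))^2)       # within-column sum of squares
  UE <- N * sum((eta - mean(eta))^2)
  A  <- rbind(c(0,               N - R,           N - R),
              c(N - C,           0,               N - C),
              c(N^2 - sum(Ni^2), N^2 - sum(Nj^2), N^2 - N))
  setNames(solve(A, c(UA, UB, UE)), c("sigmaA2", "sigmaB2", "sigmaE2"))
}
\end{verbatim}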
For the Stitch Fix data we obtained
$\hat{\sigma}_{A}^{2} = 1.14$ (customers),
$\hat{\sigma}^{2}_{B} = 0.11$ (items)
and $\hat{\sigma}^{2}_{E} = 4.47$.
\subsection{Computing $\hat\beta_\mathrm{GLS}$}\label{sec:wholeshebang}
The estimated coefficients $\hat\beta_\mathrm{GLS}$ and their standard errors are presented in a table in the appendix.
Open-source R code at
\url{https://github.com/G28Sw/backfit_code}
does these computations.
Here is a concise description of the algorithm we used:
\begin{compactenum}[\quad 1)]
\item Compute $\hat\beta_\mathrm{OLS}$ via \eqref{eq:bhatols}.
\item Get residuals $\hat\eta_{ij} =Y_{ij} -x_{ij}^\mathsf{T}\hat\beta_{\mathrm{OLS}}$.
\item Compute $\hat\sigma^2_A$, $\hat\sigma^2_B$ and $\hat\sigma^2_E$ by the method of moments on $\hat\eta_{ij}$.
\item Compute $\widetilde\mathcal{X}=(I_N-\widetilde\mathcal{S}_G)\mathcal{X}$ using doubly centered backfitting $M^{(3)}$.
\item Compute $\hat\beta_{\mathrm{GLS}}$ by~\eqref{eq:covbhatgls}.
\item If we want BLUPs $\hat\boldsymbol{a}$ and $\hat\boldsymbol{b}$ backfit
$\mathcal{Y} -\mathcal{X}\hat\beta_{\mathrm{GLS}}$ to get them.
\item Compute $\widehat\cov(\hat\beta_{\mathrm{GLS}})$ by plugging
$\hat\sigma^2_A$, $\hat\sigma^2_B$ and $\hat\sigma^2_E$ into $\mathcal{V}$ at~\eqref{eq:covbhatgls}.
\end{compactenum}
\smallskip
Stage $k$ of backfitting provides $(\tilde\mathcal{S}_G\mathcal{X})^{(k)}$.
We iterate until
$$
\frac{\Vert (\tilde\mathcal{S}_G\mathcal{X})^{(k+1)}-(\tilde\mathcal{S}_G\mathcal{X})^{(k)}\Vert^2_F}{\Vert (\tilde\mathcal{S}_G\mathcal{X})^{(k)}\Vert^2_F}
< \epsilon
$$
where $\Vert \cdot \Vert_F$ is the Frobenius norm
(root mean square of all elements).
Our numerical results use $\epsilon =10^{-8}$.
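In code the stopping rule is a single line; \texttt{SXnew} and \texttt{SXold} below denote successive values of $\tilde\mathcal{S}_G\mathcal{X}$.
\begin{verbatim}
done <- sum((SXnew - SXold)^2) / sum(SXold^2) < 1e-8
\end{verbatim}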
When we want $\widehat\cov(\hat\beta_{\mathrm{GLS}})$ then we need
to use a backfitting strategy with a symmetric smoother
$\tilde\mathcal{S}_G$. This holds for $M^{(0)}$, $M^{(2)}$ and $M^{(3)}$
but not $M^{(1)}$.
After computing $\hat\beta_{\mathrm{GLS}}$ one can return to step 2,
form new residuals
$\hat\eta_{ij} =Y_{ij} -x_{ij}^\mathsf{T}\hat\beta_{\mathrm{GLS}}$
and continue through steps 3--7.
We have seen small differences from doing this.
\subsection{Quantifying inefficiency and naivete of OLS}
In the introduction we mentioned two serious problems with the use of OLS on crossed
random effects data. The first is that OLS is naive about correlations in the
data and this can lead it to severely underestimate the variance of $\hat\beta$.
The second is that OLS is inefficient compared to GLS by the Gauss-Markov theorem.
Let $\hat\beta_\mathrm{OLS}$ and $\hat\beta_\mathrm{GLS}$ be the OLS and GLS
estimates of $\beta$, respectively. We can compute their
corresponding variance estimates
$\widehat\cov_\mathrm{OLS}(\hat\beta_\mathrm{OLS})$ and $\widehat\cov_\mathrm{GLS}(\hat\beta_\mathrm{GLS})$.
We can also find
$\widehat\cov_\mathrm{GLS}(\hat\beta_\mathrm{OLS})$, the variance under our GLS model of the
linear combination of $Y_{ij}$ values that OLS uses.
This section explores them graphically.
We can quantify the naivete of OLS
via the ratios
$\widehat\cov_{\mathrm{GLS}}(\hat\beta_{\mathrm{OLS},j})/\widehat\cov_{\mathrm{OLS}}(\hat\beta_{\mathrm{OLS},j})$
for $j=1,\dots,p$.
Figure~\ref{fig:OLSisnaive} plots these values. They range from $1.75$
to $345.28$ and can be interpreted as factors by which OLS naively overestimates
its sample size.
The largest and second largest ratios are for material indicators
corresponding to `Modal' and `Tencel', respectively. These appear
to be two names for the same product with Tencel being a trademarked name
for Modal fibers (made from wood).
We can also identify the linear combination of $\hat\beta_\mathrm{OLS}$
for which $\mathrm{OLS}$ is most naive. We maximize
the ratio
$x^\mathsf{T}\widehat\cov_{\mathrm{GLS}}(\hat\beta_{\mathrm{OLS}})x/x^\mathsf{T}\widehat\cov_{\mathrm{OLS}}(\hat\beta_{\mathrm{OLS}})x$
over $x\ne0$.
The resulting maximal ratio is the largest eigenvalue of
$$\widehat\cov_{\mathrm{OLS}}(\hat\beta_{\mathrm{OLS}}) ^{-1}
\widehat\cov_{\mathrm{GLS}}(\hat\beta_{\mathrm{OLS}})$$
and it is about $361$ for the Stitch Fix data.
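That maximal ratio is a generalized eigenvalue. With the two estimated $p\times p$ matrices in hand (here called \texttt{covO} and \texttt{covG}), it is, e.g.,
\begin{verbatim}
max(Re(eigen(solve(covO, covG), only.values = TRUE)$values))
\end{verbatim}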
\begin{figure}
\centering
\includegraphics[width=.9\hsize]{figOLSisnaive_katelyn_interaction_polyester_reference}
\caption{\label{fig:OLSisnaive}
OLS naivete
$\widehat\cov_{\mathrm{GLS}}(\hat\beta_{\mathrm{OLS},j})/\widehat\cov_{\mathrm{OLS}}(\hat\beta_{\mathrm{OLS},j})$
for coefficients $\beta_j$ in the Stitch Fix data.
}
\end{figure}
We can quantify the inefficiency of OLS
via the ratio
$\widehat\cov_{\mathrm{GLS}}(\hat\beta_{\mathrm{OLS},j})/\widehat\cov_{\mathrm{GLS}}(\hat\beta_{\mathrm{GLS},j})$
for $j=1,\dots,p$.
Figure~\ref{fig:OLSisinefficient} plots these values. They range from just over $1$
to $50.6$ and can be interpreted as factors by which using
OLS reduces the effective sample size. There is a clear outlier: the coefficient of the match
variable is very inefficiently estimated by OLS. The second largest inefficiency
factor is for the intercept term.
The most inefficient linear combination of $\hat\beta$ reaches a
variance ratio of $52.6$, only slightly more inefficient than the match coefficient alone.
\begin{figure}
\centering
\includegraphics[width=.9\hsize]{figOLSisinefficient_katelyn_interaction_polyester_reference}
\caption{\label{fig:OLSisinefficient}
OLS inefficiency
$\widehat\cov_{\mathrm{GLS}}(\hat\beta_{\mathrm{OLS},j})/\widehat\cov_{\mathrm{GLS}}(\hat\beta_{\mathrm{GLS},j})$
for coefficients $\beta_j$ in the Stitch Fix data.
}
\end{figure}
The variables for which OLS is more naive tend to also be the variables for
which it is most inefficient. Figure~\ref{fig:naivevsinefficient} plots these
quantities against each other for the $30$ coefficients in our model.
\begin{figure}[t]
\centering
\includegraphics[width=.8\hsize]{fignaivevsinefficient_katelyn_interaction_polyester_reference}
\caption{\label{fig:naivevsinefficient}
Inefficiency vs naivete for OLS coefficients in the Stitch Fix data.
}
\end{figure}
\subsection{Convergence speed of backfitting}
The Stitch Fix data have row and column sample sizes
that are much more uneven than our sampling model for $Z$ allows.
Accordingly we cannot rely on Theorem~\ref{thm:m1norm1} to show that
backfitting must converge rapidly for it.
The sufficient conditions in that theorem may not, however, be necessary,
and we can compute
our norms and the spectral radius of
the update matrices for the Stitch Fix data using sparse matrix computations.
Here $Z\in\{0,1\}^{762{,}752\times 6{,}318}$,
so $M^{(k)}\in\mathbb{R}^{6318\times 6318}$ for $k \in \lbrace0,1,2,3\rbrace$.
The results are
$$
\begin{pmatrix}
\Vert M^{(0)}\Vert_1 \ & \ \Vert M^{(0)}\Vert_2 \ & \ |\lambda_{\max}(M^{(0)})|\\[.25ex]
\Vert M^{(1)}\Vert_1 \ & \ \Vert M^{(1)}\Vert_2 \ & \ |\lambda_{\max}(M^{(1)})|\\[.25ex]
\Vert M^{(2)}\Vert_1 \ & \ \Vert M^{(2)}\Vert_2 \ & \ |\lambda_{\max}(M^{(2)})|\\[.25ex]
\Vert M^{(3)}\Vert_1 \ & \ \Vert M^{(3)}\Vert_2 \ & \ |\lambda_{\max}(M^{(3)})|
\end{pmatrix}
=\begin{pmatrix}
31.9525 \ & \ 1.4051 \ & \ 0.64027 \\[.75ex]
11.2191 \ & \ 0.4512 \ & \ 0.33386\\[.75ex]
\phz8.9178 \ & \ 0.4541 \ & \ 0.33407\\[.75ex]
\phz9.2143\ & \ 0.4546 & \ 0.33377\\
\end{pmatrix}.
$$
All the updates have spectral radius comfortably below one.
The centered updates have $L_2$ norm below one
but the uncentered update does not.
Their $L_2$ norms are somewhat larger than their spectral
radii because those matrices are not quite symmetric.
The two largest eigenvalue moduli for $M^{(0)}$ are $0.6403$ and $0.3337$
and the centered updates have spectral radii close to the second
largest eigenvalue of $M^{(0)}$.
This is consistent with an intuitive explanation that the space spanned
by a column of $N$ ones that is common to the column spaces
of $\mathcal{Z}_A$ and $\mathcal{Z}_B$ is the biggest impediment to $M^{(0)}$ and that
all three centering strategies essentially remove it.
The best spectral radius is for $M^{(3)}$, which employs two principled
centerings, although in this data set it made little difference.
Our backfitting algorithm took $8$ iterations when applied to $\mathcal{X}$
and $12$ more to compute the BLUPs.
We used a convergence threshold of $10^{-8}.$
\section{Discussion}\label{sec:discussion}
We have shown that the cost of our backfitting algorithm
is $O(N)$ under strict conditions that are nonetheless
much more general than having $N_{i\sumdot} = N/R$
for all $i=1,\dots,R$ and $N_{\sumdot j} = N/C$ for all $j=1,\dots,C$
as in \cite{papa:robe:zane:2020}.
As in their setting, the backfitting algorithm scales empirically to
much more general problems than those for which
rapid convergence can be proved.
Our contour map of the spectral radius of the update
matrix $M$ shows that it is well below $1$
over many more $(\rho,\kappa)$ pairs than our
theorem covers. The difficulty in extending our
approach to those settings is that the spectral radius
is a much more complicated function of the observation
matrix $Z$ than the $L_1$ norm is.
Theorem 4 of \cite{papa:robe:zane:2020}
has the rate of convergence for their collapsed Gibbs
sampler for balanced data.
It involves an auxiliary convergence rate $\rho_{\mathrm{aux}}$
defined as follows.
Consider the Gibbs sampler on $(i,j)$ pairs where
given $i$ a random $j$ is chosen with probability $Z_{ij}/N_{i\sumdot}$
and given $j$ a random $i$ is chosen with probability
$Z_{ij}/N_{\sumdot j}$. That Markov chain has invariant distribution $Z_{ij}/N$
on $(i,j)$ pairs and $\rho_{\mathrm{aux}}$ is the rate at which the chain converges.
In our notation
$$
\rho_{\mathrm{PRZ}} = \frac{N\sigma^2_A}{N\sigma^2_A+R\sigma^2_E}\times\frac{N\sigma^2_B}{N\sigma^2_B+C\sigma^2_E}\times\rho_{\mathrm{aux}}.
$$
In sparse data $\rho_{\mathrm{PRZ}}\approx\rho_{\mathrm{aux}}$ and under our asymptotic
setting $|\rho_{\mathrm{aux}}-\rho_{\mathrm{PRZ}}|\to0$.
\cite{papa:robe:zane:2020} remark that $\rho_{\mathrm{aux}}$ tends to decrease
as the amount of data increases. When it does, then their algorithm
takes $O(1)$ iterations and costs $O(N)$.
They explain that $\rho_{\mathrm{aux}}$ should decrease as the data set
grows because the auxiliary process then gets greater connectivity.
That connectivity increases for bounded $R$ and $C$ with increasing $N$,
and since their notation allows multiple observations
per $(i,j)$ pair, it seems that they have this sort of infill
asymptotic regime in mind.
For sparse data from electronic commerce we think that
an asymptotic regime like the one we study, where $R$, $C$ and $N$
all grow, is a better description.
It would be interesting to see how $\rho_{\mathrm{aux}}$ develops under such a model.
In Section 5.3 \cite{papa:robe:zane:2020}
state that the convergence rate of the collapsed Gibbs sampler
is $O(1)$ regardless of the asymptotic regime. That section is about
a more stringent `balanced cells' condition where every $(i,j)$ combination
is observed the same number of times, so it does not describe
the `balanced levels' setting where $N_{i\sumdot}=N/R$ and $N_{\sumdot j}=N/C$.
Indeed they provide a counterexample in which there are two
disjoint communities of users and two disjoint sets of items
and each user in the first community has rated every item
in the first item set (and no others) while each user in the
second community has rated every item in the second item
set (and no others). That configuration leads to an unbounded mixing time
for collapsed Gibbs. It is also one where backfitting takes
an increasing number of iterations as the sample size grows.
There are interesting parallels between methods to sample a high
dimensional Gaussian distribution with covariance matrix $\Sigma$
and iterative solvers for the system $\Sigma \boldsymbol{x} = \boldsymbol{b}$.
See \cite{good:soka:1989} and \cite{RS97}
for more on how the convergence rates
for these two problems coincide.
We found that backfitting with one or both updates centered
worked much better than uncentered backfitting.
\cite{papa:robe:zane:2020} used a collapsed sampler
that analytically integrated out the global mean of their model in each update
of a block of random effects.
Our approach treats $\sigma^2_A$, $\sigma^2_B$ and $\sigma^2_E$ as nuisance parameters.
We plug in a consistent method of moments based estimator of them
in order to focus on the backfitting iterations.
In Bayesian computations, maximum a posteriori estimators of
variance components under non-informative priors can be
problematic for hierarchical models \cite{gelm:2006},
and so perhaps maximum likelihood estimation of these
variance components would also have been challenging.
Whether one prefers a GLS estimate or a Bayesian one
depends on context and goals. We believe that there is a strong
computational advantage to GLS for large data sets.
The cost of one backfitting iteration is comparable to the cost to generate
one more sample in the MCMC. We may well find that only a dozen
or so iterations are required for convergence of the GLS. A Bayesian
analysis requires a much larger number of draws from the posterior
distribution than that.
For instance, \cite{gelm:shir:2011} recommend an effective sample size of about $100$
posterior draws, with autocorrelations requiring a larger actual sample size.
\cite{vats:fleg:jone:2019} advocate even greater effective sample sizes.
It is usually reasonable to assume that there is a selection
bias underlying which data points are observed.
Accounting for any such selection bias must necessarily
involve using information or assumptions from outside the data set at
hand. We expect that any approach to take proper account of
informative missingness must also make use of solutions to
GLS perhaps after reweighting the observations.
Before one develops any such methods, it is necessary
to first be able to solve GLS without regard to missingness.
Many of the problems in electronic commerce involve categorical outcomes,
especially binary ones, such as whether an item was purchased or not.
Generalized linear mixed models are then appropriate ways to handle
crossed random effects, and we expect that the progress made here
will be useful for those problems.
\section*{Acknowledgements}
This work was supported by the U.S.\ National Science Foundation under grant IIS-1837931.
We are grateful to Brad Klingenberg and Stitch Fix for sharing some test data with us.
We thank the reviewers for remarks that have helped us improve the paper.
\bibliographystyle{imsart-nameyear}
\section{Introduction}
It is well known that in certain disordered media wave propagation can be completely halted due to back-scattering from randomly distributed impurities.
This phenomenon, known as Anderson localization~\cite{Anderson:LocAnderson:PR58}, has been reported for different kinds of waves, such as light waves in diffusive media~\cite{Wiersma:LightLoc:N97,Maret:AndersonTransLight:PRL06} or in disordered photonic crystals~\cite{Segev:LocAnderson2DLight:N07,Lahini:AndersonLocNonlinPhotonicLattices:PRL08}, ultrasound~\cite{vanTiggelen:AndersonSound:NP08}, microwaves~\cite{Chabanov:StatisticalSignaturesPhotonLoc:N00} and atomic matter waves~\cite{Billy:AndersonBEC1D:N08,Roati:AubryAndreBEC1D:N08}.
Its occurrence is ruled by the spatial dimension of the system and by the symmetries of the model, which determine its universality class~\cite{Altland:PRB1997}.
When both spin-rotational and time-reversal symmetries are preserved, notably in the absence of magnetic fields and spin-orbit couplings, all wave-functions are exponentially localized in one and two dimensions. In three and higher dimensions the system possesses both localized and extended states, separated in energy by a critical point, dubbed the mobility edge, where the system
undergoes a metal-insulator transition~\cite{Evers:AndersonTransitions:RMP08}.
Anderson transitions have recently been detected using noninteracting atomic quantum gases~\cite{Kondov:ThreeDimensionalAnderson:S11,Jendrzejewski:AndersonLoc3D:NP12,Semeghini:2014} exposed to three-dimensional (3D) speckle potentials. Theoretical predictions for the mobility edge of atoms have also been reported~\cite{Yedjour:2010,Piraud:PRA2014,Delande:MobEdgeSpeckle:PRL2014,Pilati:LevelStats:2015,Pasek:3DAndersonSpeckle:PRA2015,Pilati:3DAndersonSpeckle:2015,Pasek:PRL2017,Orso:SpinOrbit:PRL2017} and compared with the experimental data.
Interactions can nevertheless significantly perturb the single-particle picture of Anderson localization. Puzzling metal-insulator transitions~\cite{Kravchenko:PRB1994}, discovered in high-mobility 2D electron systems in silicon, were later interpreted theoretically
in terms of a two-parameter scaling theory of localization, which combines disorder and strong electron-electron interactions~\cite{Punnoose:Science2005,Knyazev:PRL2008}.
In more recent years a growing interest has emerged
around the concept of many-body localization~\cite{GornyiPRL2005,Altshuler:MetalInsulator:ANP06} (MBL), namely the generalization of Anderson localization to disordered interacting quantum systems at finite particle density (for recent reviews see Refs.~\cite{Nandkishore2015,ALET2018498,Abanin:RMP2019}).
In analogy with the single-particle problem, MBL phases are separated from (ergodic) thermal phases by critical points situated at finite energy density, known as many-body mobility edges.
While MBL has been largely explored in one dimensional systems with short range interactions,
both experimentally~\cite{Schreiber:Science2015,Rispoli:Nature2019} and
theoretically~\cite{PhysRevB.75.155111,PhysRevB.91.081103,Michal:PRL2014,Andraschko:PRL2014,Mondaini:PRA2015,Reichl:PRA2016,Prelovsek:PRB2016,Zakrzewski:PRB2018,Hopjan:PRA2020,krause2019nucleation,yao2020manybody}, its very existence
in systems with higher dimensions remains unclear.
In particular it has been suggested~\cite{DeRoeck:PRB2016,DeRoeck:PRB2017} that the MBL is inherently unstable against thermalization in large enough samples. This prediction contrasts with subsequent experimental~\cite{Choi1547} and numerical~\cite{WahlNatPhys2019,geiler2019manybody,De_Tomasi_2019,Thomson:PRB2018} studies of 2D systems of moderate sizes, showing evidence of a many-body mobility edge.
It must be emphasized that thorough numerical investigations, including a finite-size scaling analysis, are computationally challenging beyond one dimension~\cite{theveniaut2019manybody}.
In the light of the above difficulties, it is interesting to focus on the localization properties of few interacting particles in large (ideally infinite) disordered lattices.
Although these systems may represent overly simplified examples of MBL states,
they can show similar effects, including interaction-induced delocalization transitions with genuine mobility edges\cite{Stellin:PRB2019,stellin2020twobody}.
In a seminal paper~\cite{Shepelyansky:AndLocTIP1D:PRL94}, Shepelyansky showed that two particles moving in a one-dimensional lattice and coupled by contact interactions can travel over a distance much larger than the single-particle localization length, before being localized by the disorder. This intriguing effect was confirmed by several numerical studies~\cite{Weinmann:PRL1995,vonOppen:AndLocTIPDeloc:PRL96,Frahm1999,Roemer:PhysicaE2001,Krimer:JETP2011,Dias:PhysicaA2014,Lee:PRA2014,Krimer:InterConnDisord2PStates:PRB15,Frahm:EigStructAL1DTIP16,Thongjaomayum:PRB2019,thongjaomayum2020multifractality}, trying to identify the explicit dependence of the pair localization length on the interaction strength. Quantum walk dynamics of two interacting particles moving in a disordered one-dimensional lattice has also been explored, revealing subtle correlation effects~\cite{Lahini:PRL2010,Chattaraj:PRA2016,Dariusz:PRA2017,Toikka:PRB2020,Malishava:PRB2020}.
Interacting few-body systems with more than two particles have also been studied numerically in one dimension, confirming the stability of the localized phase. In particular Ref.~\cite{Mujal:PRA2019} investigated a model of up to three bosonic atoms with mutual contact interactions and subject to a spatially correlated disorder generated by laser speckles, while Ref.~\cite{Schmidtke:PRB2017} addressed
the localization in the few-particle regime of the XXZ spin-chain with a random magnetic field.
The localization of two interacting particles has been much less explored in dimensions higher than one. Based on analytical arguments, it was suggested~\cite{Imry:CohPropTIP:EPL95, Borgonovi:NonLinearity1995} that all two-particle states are localized by the disorder in two dimensions, whereas in three dimensions a delocalization transition for the pair could occur even if all single-particle states are localized.
Nevertheless subsequent numerical investigations~\cite{Ortugno:AndLocTIPDeloc:EPL99,Cuevas:PRL1999,Roemer1999} in two dimensions reported evidence of an Anderson transition for the pair, providing explicit results for the corresponding position of the mobility edge and the value of the critical exponent.
Using large-scale numerics, we recently investigated~\cite{Stellin:PRB2019,stellin2020twobody}
Anderson transitions for a system of two interacting particles (either bosons or fermions with opposite spins), obeying the 3D Anderson-Hubbard model. We showed that the phase diagram in the energy-interaction-disorder space contains multiple metallic and insulating regions, separated by two-body mobility edges. In particular we observed metallic pair states for relatively strong disorder, where all single-particle states are localized, which can be thought of as a proxy for interaction-induced many-body delocalization. Importantly, our numerical data for the metal-insulator transition were found to be consistent with the (orthogonal) universality class of the noninteracting model. This feature is not unique to our model, since single-particle excitations in a disordered many-body electronic system also undergo a metal-insulator transition belonging to the noninteracting universality class~\cite{Burmistrov:PRB2014}.
In this work we revisit the Shepelyansky problem in two dimensions and shed light on the controversy. We find that no mobility edge exists for a single pair in an infinite lattice, although interactions can dramatically enhance the pair localization length. In particular we show that previous claims~\cite{Ortugno:AndLocTIPDeloc:EPL99,Cuevas:PRL1999,Roemer1999} of 2D interaction-driven Anderson transitions
were plagued by strong finite-size effects.
The paper is organized as follows. In Sec.~\ref{sec:theory} we revisit the theoretical approach based on the exact mapping
of the two-body Schr\"odinger equation onto an effective single-particle problem for the center-of-mass motion.
The effective model allows one to recover the entire energy spectrum of orbitally symmetric pair states and is therefore equivalent to the exact diagonalization of the full Hamiltonian in the same subspace; an explicit proof for a toy
Hamiltonian is given in Sec.~\ref{sec:equivalence}.
In Sec.~\ref{sec:absence} we present the
finite-size scaling analysis used to discard the existence of the 2D Anderson transition for the pair, while in Sec.~\ref{sec:loclength}
we discuss the dependence of the two-body localization length on the interaction strength. The generality of the obtained results
is discussed in Sec.~\ref{general} while in Sec.~\ref{sec:conclusions} we provide
a summary and an outlook.
\section{Effective single-particle model for the pair}
\label{sec:theory}
The Hamiltonian of the two-body system can be written as $\hat H=\hat H_0 + \hat U$, whose noninteracting part $\hat H_0$ can be decomposed as $\hat H^\textrm{sp} \otimes \hat{\mathds{1}} +\hat{\mathds{1}} \otimes \hat H^\textrm{sp}$. Here $\hat{\mathds{1}}$ refers to the one-particle identity operator, while $\hat H^\textrm{sp}$ denotes the single-particle Anderson Hamiltonian:
\begin{equation}
\label{Anderson3D}
\hat H^\textrm{sp}= -J \sum_{\langle \mathbf n, \mathbf m\rangle} |\mathbf m \rangle \langle \mathbf n| + \sum_{\mathbf n}V_\mathbf n |\mathbf n\rangle \langle \mathbf n|,
\end{equation}
where $J$ is the tunneling amplitude between nearest neighbor sites $\mathbf{m}$ and $\mathbf{n}$, whereas $V_{\mathbf{n}}$ represents the value of the random potential at site $\mathbf{n}$.
In the following we consider a random potential which is spatially uncorrelated $\langle V_\mathbf n V_{\mathbf n^\prime} \rangle= \langle V_\mathbf n^2\rangle \delta_{\mathbf n \mathbf n^\prime}$ and obeys a uniform on-site distribution, as in Anderson's original work~\cite{Anderson:LocAnderson:PR58}:
\begin{equation}\label{randombox}
P(V)=\frac{1}{W}\Theta(W/2-|V|),
\end{equation}
where $\Theta(x)$ is the Heaviside (unit-step) function and $W$ is the disorder strength. The two particles are coupled together by contact (Hubbard) interactions described by
\begin{equation}\label{intro1}
\hat U=U\sum_{\mathbf m}|{\mathbf m},{\mathbf m}\rangle \langle {\mathbf m},{\mathbf m}|,
\end{equation}
where $U$ represents the corresponding strength. We start by writing the two-particle Schr{\"o}dinger equation as $(E -\hat H_0)|\psi\rangle=\hat U|\psi\rangle$, where $E$ is the total energy of the pair.
If $\hat U|\psi\rangle =0$, then $E$ must belong to the energy spectrum of the
noninteracting Hamiltonian $\hat H_0$. This occurs for instance if the two particles are fermions in the spin-triplet state, as in this
case the orbital part of the wave-function is antisymmetric and therefore
$\langle {\mathbf m},{\mathbf m}|\psi\rangle=0$.
Interactions are instead relevant for orbitally symmetric wave-functions, describing either bosons or fermions with opposite spins in the singlet state.
In this case from Eq.~(\ref{intro1}) we find that the wave-function obeys the following self-consistent equation
\begin{equation}
\label{formalism2}
|\psi\rangle=\sum_{\mathbf m} U \hat G(E) |{\mathbf m},{\mathbf m}\rangle \langle {\mathbf m},{\mathbf m}|\psi\rangle,
\end{equation}
where $\hat G(E)=(E \hat I -\hat H_0)^{-1}$ is the non-interacting two-particle Green's function. Eq.~(\ref{formalism2}) shows that
for contact interactions the wave-function of the pair can be completely determined once its diagonal amplitudes
$f_{\mathbf m}=\langle {\mathbf m},{\mathbf m}|\psi\rangle$ are known.
By projecting Eq.(\ref{formalism2}) over the state
$|{\mathbf n},{\mathbf n}\rangle$, we see that these terms obey a closed equation~\cite{Stellin:PRB2019,Dufour:PRL2012,Orso:PRL2005}:
\begin{equation}
\label{integral}
\sum_{\mathbf m} K_{\mathbf n \mathbf m} f_{\mathbf m} = \frac{1}{U}f_{\mathbf n},
\end{equation}
where $K_{\mathbf n \mathbf m} =\langle {\mathbf n},{\mathbf n }|\hat G(E) |{\mathbf m},{\mathbf m}\rangle$. Eq.(\ref{integral}) is then interpreted as an effective single-particle problem with Hamiltonian matrix $K$ and pseudoenergy $\lambda=1/U$, corresponding to the inverse of the interaction strength.
In the following we will address the localization properties of this effective
model for the pair.
In this respect, we notice that the matrix elements of $K$ are unknown and must be calculated explicitly in terms of the eigenbasis of the single-particle model, $\hat H^\textrm{sp} | \phi_r\rangle=\varepsilon_r | \phi_r\rangle$, as
\begin{equation}\label{KE0}
K_{\mathbf n \mathbf m} = \sum_{r,s=1}^N \frac{\phi_{\mathbf n r} \phi_{\mathbf m r}^* \phi_{\mathbf n s} \phi_{\mathbf m s}^*}{E-\varepsilon_r-\varepsilon_s},
\end{equation}
where $N$ is the total number of lattice sites in the grid and $\phi_{\mathbf n r} =\langle \mathbf n | \phi_r\rangle$ are the amplitudes of the one-particle wave-functions.
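For concreteness, the construction of the matrix $K$ from the eigenpairs of the single-particle Hamiltonian can be sketched in a few lines of Python/NumPy. The snippet below is a direct transcription of Eq.~(\ref{KE0}) and scales as $N^4$; it is meant for illustration only (the function and variable names are our own), and the factorized form of Eq.~(\ref{KE0bis}) introduced below is far more efficient for large grids.
\begin{verbatim}
import numpy as np

def pair_kernel(E, eps, phi):
    # Eq. (KE0): K[n, m] = sum_{r,s} phi[n,r] phi[m,r]* phi[n,s] phi[m,s]*
    #                                 / (E - eps[r] - eps[s]).
    # Here eps, phi = np.linalg.eigh(Hsp), so phi[:, r] is the r-th
    # eigenvector; E must avoid the poles at eps[r] + eps[s].
    w = 1.0 / (E - eps[:, None] - eps[None, :])    # w[r, s]
    B = phi[:, None, :] * phi.conj()[None, :, :]   # B[n, m, r]
    return np.einsum('nmr,rs,nms->nm', B, w, B)
\end{verbatim}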
\section{Equivalence with exact diagonalization of the full model}
\label{sec:equivalence}
The effective single-particle model of the pair, Eq.~(\ref{integral}), allows one to
reconstruct the entire energy spectrum of orbitally symmetric states for a given interaction strength $U$.
At first sight this is not obvious because the matrix $K$ is $N\times N$, and therefore possesses $N$ eigenvalues, while the dimension of the Hilbert space of orbitally symmetric states is $N(N+1)/2$, which is much larger.
The key point is that one needs to compute the matrix $K$ and the associated eigenvalues $\lambda_{r}=\lambda_{r}(E)$, with $r=1,2 ...N$, for different values of the energy $E$. The energy levels for fixed $U$
are then obtained by solving the equations $\lambda_{r}(E)=1/U$ via
standard root-finding algorithms.
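A minimal sketch of this root-finding step is given below (Python/SciPy, reusing the \texttt{pair\_kernel} function of the previous snippet). It assumes that the relevant eigenvalue branch varies smoothly between grid points; closely spaced roots, branch crossings, or grid energies hitting the poles of $K(E)$ would require a more careful treatment.
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

def pair_energies(U, eps, phi, E_grid):
    # Energies solving lambda_r(E) = 1/U: at each grid energy take the
    # eigenvalue of K(E) closest to 1/U and refine the sign changes of
    # the difference with Brent's method.
    def d(E):
        lam = np.linalg.eigvalsh(pair_kernel(E, eps, phi))  # K symmetric
        return lam[np.argmin(np.abs(lam - 1.0 / U))] - 1.0 / U
    vals = [d(E) for E in E_grid]
    return [brentq(d, a, b)
            for a, b, fa, fb in zip(E_grid[:-1], E_grid[1:],
                                    vals[:-1], vals[1:])
            if fa * fb < 0]
\end{verbatim}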
Let us illustrate the above point for a toy model with $N=2$ lattice sites in the absence of disorder.
In this case the Hilbert space of symmetric states is spanned by the three vectors $|1,1\rangle$,
$(|1,2\rangle +|2,1\rangle)/\sqrt{2}$ and $|2,2\rangle$ (in the ordering used below).
The corresponding energy levels of the pair can be found from the exact diagonalization of the $3\times 3$ matrix of the projected Hamiltonian:
\begin{equation}
H_{ed}=
\begin{pmatrix}
U & -\sqrt{2} & 0 \\
-\sqrt{2} & 0 & -\sqrt{2} \\
0 & -\sqrt{2} & U
\end{pmatrix}.
\end{equation}
An explicit calculation yields $E=U$ and $E=(U\pm \sqrt{U^2+16})/2$.
Let us now show that we recover exactly the same results using our effective model.
The single-particle Hamiltonian (setting $J=1$) is represented by the matrix
\begin{equation}\label{example}
H^{sp}=\begin{pmatrix}
0 & -1\\
-1 & 0
\end{pmatrix},
\end{equation}
whose eigenvalues are given by $\varepsilon_1=-1$ and $\varepsilon_2=1$. The associated eigenvectors are $| \phi_1\rangle =(|1\rangle +|2\rangle)/\sqrt{2}$ and
$| \phi_2 \rangle =(|1\rangle -|2\rangle)/\sqrt{2}$.
From Eq.(\ref{KE0}) we immediately find
\begin{equation}\label{example2}
K=\begin{pmatrix}
A & B\\
B & A
\end{pmatrix},
\end{equation}
where $A=(E/(E^2-4)+1/E)/2$ and $B=(E/(E^2-4)-1/E)/2$. The corresponding eigenvalues of $K$ are given by $\lambda_1(E)=A-B=1/E$ and $\lambda_2(E)=A+B=E/(E^2-4)$. The condition $\lambda_1=1/U$ yields $E=U$, while
$\lambda_2=1/U$ admits two solutions, $E=(U\pm \sqrt{U^2+16})/2$, allowing to recover the exact-diagonalization energy spectrum.
In Fig.\ref{fig:example} we plot the energy dependence of the two eigenvalues of $K$ for our toy model. Intersecting the curves
with the horizontal line $\lambda=1/U$ (dashed red line) yields visually the three sought energy levels for the orbitally symmetric states.
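This equivalence is also easily checked numerically; a short Python/NumPy verification for $U=1$ could read:
\begin{verbatim}
import numpy as np

U = 1.0
# exact diagonalization in the ordered basis
# {|1,1>, (|1,2>+|2,1>)/sqrt(2), |2,2>}
H_ed = np.array([[U,            -np.sqrt(2.0),  0.0],
                 [-np.sqrt(2.0), 0.0,          -np.sqrt(2.0)],
                 [0.0,          -np.sqrt(2.0),  U]])
E_exact = np.sort(np.linalg.eigvalsh(H_ed))

# effective model: lambda_1(E) = 1/E = 1/U and lambda_2(E) = E/(E^2-4) = 1/U
E_eff = np.sort([U,
                 0.5 * (U - np.sqrt(U**2 + 16.0)),
                 0.5 * (U + np.sqrt(U**2 + 16.0))])

print(E_exact)  # [-1.56155281  1.          2.56155281]
print(E_eff)    # identical, as guaranteed by Eq. (integral)
\end{verbatim}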
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{exampleK.eps}
\caption{Eigenvalues of the matrix $K$ of the effective model of the pair, Eq.~(\ref{integral}), for a toy model of $N=2$ coupled sites with no disorder, plotted as a function of the energy $E$ of the pair (blue data curves).
For a given interaction strength $U$, the entire spectrum of $N(N+1)/2$ energy levels of orbitally symmetric states of the pair can be obtained by intersecting the data curves with the horizontal line, $\lambda=1/U$, here shown for $U=1$ (dashed red line). The corresponding three energy levels are $E=-1.56155$, $E=1$ and $E=2.56155$. }
\label{fig:example}
\end{figure}
We stress that extracting the full energy spectrum of the pair based on the effective model, for a fixed value of the interaction strength $U$, is computationally demanding as $N$ becomes large.
The effective model is instead very efficient, as compared to the exact diagonalization, when we look at the properties of the pair as a function of the interaction strength $U$, for a fixed value of the total energy $E$. This is exactly the situation that we will be interested in below.
\section{Absence of 2D delocalization transitions for the pair}
\label{sec:absence}
Numerical evidence of an Anderson transition for two particles obeying the Anderson-Hubbard model in two dimensions
was first reported~\cite{Ortugno:AndLocTIPDeloc:EPL99} on the basis of transmission-amplitude calculations~\cite{McKinnonKramer:TransferMatrix:ZPB83} performed on
rectangular strips of length $L=62$ and variable width up to $M=10$. For a pair with zero total energy and for interaction strength $U=1$, the delocalization transition was found to occur for $W=9.3\pm 0.5$.
The result was also confirmed~\cite{Cuevas:PRL1999} from the analysis of the energy-level statistics, although with slightly different numbers.
The existence of a 2D mobility edge for the pair was also reported in Ref.~\cite{Roemer1999}, where a decimation method was employed to compute the critical disorder strength as a function of the interaction strength $U$, based on lattices of similar sizes.
For $U=1.59$, a pair with zero total energy was shown to undergo an Anderson transition at $W=9\pm 0.13$.
Below we put the existence of the 2D delocalization transition of the pair to the test, following the procedure developed in Ref.~\cite{Stellin:PRB2019}. In order to compare with the previous numerical predictions, we set $E=0$ and $W=9$.
We consider a rectangular strip of dimensions $L, M$, with $L\gg M$, containing $N=ML$ lattice sites. In order to minimize finite-size effects, the boundary conditions on the single-particle Hamiltonian $H^{sp}$ are chosen periodic in the orthogonal direction ($y$) and open along the transmission axis ($x$).
We rewrite the
rhs of Eq.~(\ref{KE0}) as
\begin{equation}\label{KE0bis}
K_{\mathbf n \mathbf m} =\sum_{r=1}^{N}\phi_{\mathbf n r} \phi_{\mathbf m r}^* \langle \mathbf{n}|G^{\mathrm{sp}}(E-\varepsilon_{r})|\mathbf{m}\rangle,
\end{equation}
where $G^{\mathrm{sp}}(\varepsilon)=(\varepsilon I - H^{\mathrm{sp}})^{-1}$ is the Green's function (i.e.\ the resolvent) of the single-particle Anderson Hamiltonian (\ref{Anderson3D}), $I$ being the identity matrix.
Due to the open boundary conditions along the longitudinal direction, the Anderson
Hamiltonian possesses a block tridiagonal structure, each block corresponding
to a transverse section of the grid. This structure can be exploited to efficiently compute the
Green's function $G^{\mathrm{sp}}(\varepsilon)$ in Eq.~(\ref{KE0bis}) via matrix inversion.
In this way the
total number of elementary operations needed to compute the matrix $K$ scales as $M^{4}L^{3}$, instead of $M^{4}L^{4}$, as naively expected from the rhs of Eq.~(\ref{KE0}).
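A sketch of this computation is given below (Python/SciPy). For brevity it relies on a generic sparse LU factorization rather than a dedicated block-tridiagonal recursion, but it exploits the same sparsity pattern; the helper names and the column-by-column site ordering are our own choices, and the dense solve is only practical for moderate grid sizes.
\begin{verbatim}
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import splu

def strip_hamiltonian(M, L, V, J=1.0):
    # Anderson Hamiltonian on an M x L strip: periodic b.c. along y,
    # open b.c. along x; V is the flat (M*L,) array of on-site energies.
    # With sites ordered column by column, H is block tridiagonal.
    hop_y = sp.diags([-J, -J], [1, -1], shape=(M, M)).tolil()
    hop_y[0, M - 1] = hop_y[M - 1, 0] = -J
    hop_x = sp.diags([-J, -J], [1, -1], shape=(L, L))
    H = sp.kron(sp.eye(L), hop_y) + sp.kron(hop_x, sp.eye(M)) + sp.diags(V)
    return H.tocsc()

def greens_function(H, energy):
    # All matrix elements of (energy*I - H)^{-1} from a single sparse LU
    # factorization, which reuses the block-tridiagonal structure of H.
    N = H.shape[0]
    lu = splu(sp.csc_matrix(energy * sp.eye(N) - H))
    return lu.solve(np.eye(N))
\end{verbatim}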
Once the matrix $K$ of the effective model has been computed, we use it to evaluate the logarithm of the transmission amplitude between two transverse sections of the strip as a function of their relative distance $n_x$:
\begin{equation}\label{logT}
F(n_x)=\ln \sum_{ m_y,n_y} |\langle 1,m_y| G^{\textrm p}(\lambda )| n_x,n_y \rangle |^2.
\end{equation}
In Eq.~(\ref{logT}) $G^{\textrm p}(\lambda)=(\lambda I -K)^{-1}$ is the Green's function
associated to $K$ with $\lambda=1/U$ and the sum is taken over the sites $m_y,n_{y}$ of the two transverse sections.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{AL2D_LambdaM-U_SmallM.eps}
\caption{ Reduced localization length of the pair plotted as a function of the interaction strength for increasing values of the transverse size $M=8, 10, 12, 16, 20$ of the grid. The results are obtained by averaging over $N_{tr}$ different disorder realizations, varying from $N_{tr}=600\; (M=8)$ to $N_{tr}=1000\; (M=20)$. The disorder strength is fixed to $W=9$ and the pair has zero total energy, $E=0$,
implying that $\Lambda(-U)=\Lambda(U)$.
The different curves cross in the interval $0.75<U<1.1$, indicating a possible 2D delocalization transition, as claimed in previous investigations~\cite{Ortugno:AndLocTIPDeloc:EPL99,Roemer1999}. The 2D Anderson transition is actually a finite-size effect, as the crossing points disappear for larger values of $M$, see Fig.\ref{fig:TIP_2D_U-LM_HighM}.}
\label{fig:TIP_2D_U-LM_SmallM}
\end{figure}
For each disorder realization, we evaluate $F(n_x)$ at regular intervals along the bar and apply a linear fit to the data, $f_{fit}(n_x)=p n_x+q$. For a given value of the interaction strength, we evaluate the (disorder-averaged) Lyapunov exponent $\gamma=\gamma(M,U)$ as $\gamma=-\overline{p}/2$, where $\overline{p}$ is the average of the slope.
We then infer the localization properties of the system from the behavior of the reduced localization length, which is defined as $\Lambda=(M \gamma)^{-1}$. In the metallic phase $\Lambda$ increases as $M$ increases, whereas in the insulating phase the opposite trend is seen. At the critical point, $\Lambda$ becomes constant for values of $M$ sufficiently large. Hence the critical point $U=U_c$ of the Anderson transition can be identified by plotting the reduced localization length versus $U$ for different values of the transverse size $M$ and looking at their common crossing points.
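The extraction of the slope entering $\gamma$ can be sketched as follows (Python/NumPy; a single disorder realization, sites ordered column by column, and zero-based section indices are assumed):
\begin{verbatim}
import numpy as np

def slope_of_F(G_pair, M, samples):
    # Slope p of the linear fit of F(n_x), Eq. (logT), for one disorder
    # realization; G_pair = inv(I/U - K) on the M x L strip.
    F = [np.log(np.sum(np.abs(G_pair[:M, nx * M:(nx + 1) * M]) ** 2))
         for nx in samples]
    p, q = np.polyfit(samples, F, 1)
    return p

# gamma = -(average of p over disorder realizations) / 2, and the
# reduced localization length follows as Lambda = 1 / (M * gamma).
\end{verbatim}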
In Fig. \ref{fig:TIP_2D_U-LM_SmallM} we show the reduced localization length
$\Lambda$ as a function of the interaction strength for increasing values of
the strip width, ranging from $M=8$ to $M=20$. The length
of the grid is fixed to $L=400$. Notice that, since $E=0$, the reduced localization length is an even function of the interaction strength,
$\Lambda(-U)=\Lambda(U)$.
We see that $\Lambda$ exhibits a nonmonotonic dependence on $U$, as previously found
in one~\cite{Frahm:EigStructAL1DTIP16} and in three~\cite{Stellin:PRB2019} dimensions. In particular, interactions favor the
delocalization of the pair, the effect being more pronounced near $U=6$.
We also notice from Fig. \ref{fig:TIP_2D_U-LM_SmallM} that the curves corresponding to different values of $M$ intersect each other around $U=1$, suggesting a possible phase transition, as previously reported in Refs.~\cite{Ortugno:AndLocTIPDeloc:EPL99,Roemer1999}. A closer inspection of the data, however, reveals that the crossing points are spread out in the interval $0.73 \lesssim U \lesssim 1.1$; in particular, they drift to stronger interactions as the system size increases, in analogy with the three-dimensional case~\cite{Stellin:PRB2019}.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{AL2D_LambdaM-U_HighM.eps}
\caption{Same plot as in Fig.\ref{fig:TIP_2D_U-LM_SmallM} but for larger grids with transverse sizes $M=30, 40, 50$
obtained by averaging over $N_{tr}=3600\; (M=30), 4400\; (M=40)$, and $N_{tr}=2850\; (M=50)$ different disorder realizations.
Notice that all crossing points have disappeared, indicating that the pair is ultimately localized by the disorder for any value of the
interaction strength.}
\label{fig:TIP_2D_U-LM_HighM}
\end{figure}
A key question is whether a further increase of the strip's width $M$ will only cause a (possibly large) shift of the critical point, or
rather, the localized phase will ultimately take over for any value of the interaction strength. To answer this question, we have performed additional calculations using larger grids, corresponding to $M=30, 40, 50$. In order to guarantee a sufficiently large aspect ratio, the
length of the bar was fixed to $L=500$. The obtained results are displayed in Fig.\ref{fig:TIP_2D_U-LM_HighM}.
We notice that the crossing points have completely disappeared and the pair localizes in an infinite lattice irrespective of the specific value of $U$.
This leads us to conclude that the results of Refs.~\cite{Ortugno:AndLocTIPDeloc:EPL99,Roemer1999} were plagued by severe finite-size effects, due to the limited computational
resources, and no Anderson transition can actually take place for a pair in a disordered lattice of infinite size.
\section{Pair localization length}
\label{sec:loclength}
Although the pair cannot fully delocalize in two dimensions,
interactions can lead to a drastic enhancement of the two-particle localization length.
This quantity can be estimated using the one-parameter scaling
ansatz $\Lambda=f(\tilde \xi/M)$, stating that the reduced localization length
depends solely on the ratio between two quantities: the width $M$ of the strip and a characteristic length $\tilde \xi=\tilde \xi(U,W,E)$, which instead depends on the model parameters and on the total energy of the pair (but not on the system sizes $L, M$). This latter quantity coincides, up to a multiplicative numerical constant $a$, with the pair localization length, $\xi=a\tilde \xi$.
We test the scaling ansatz for our effective model (\ref{integral}) using the numerical data for $M=30,40, 50$ displayed in Fig.\ref{fig:TIP_2D_U-LM_HighM}, corresponding to the largest system sizes.
Let $U_j$, with $j=1,2,\ldots,N_U$, be the values of the interaction strength
used to compute the reduced localization length (in our case $N_U=44$).
We then determine the corresponding unknown parameters $\tilde \xi(U=U_j)$ through a least-squares fit, following the procedure developed in Ref.~\cite{McKinnonKramer:TransferMatrix:ZPB83}.
Plotting our data in the form $\ln \Lambda(M,U)$ vs $\ln M$ results in multiple data curves, each of them containing three data points connected by straight lines (corresponding to linear interpolation).
Let $\Lambda_i$ be one of the $(3N_U)$ numerical values available for the reduced localization length. The horizontal line $\ln \Lambda=\ln \Lambda_i$ will generally intersect some of these curves. We find it convenient to introduce
a matrix $\eta$ which keeps track of such events: if the curve $U=U_j$ is crossed,
we set $\eta_{ij}=1$ and call $\ln M_{ij}$ the corresponding point; otherwise we set $\eta_{ij}=0$.
The unknown parameters are then obtained by minimizing the variance of the difference $\ln M-\ln \tilde \xi$, yielding the
following set of equations (see Ref.~\cite{McKinnonKramer:TransferMatrix:ZPB83} for a detailed derivation):
\begin{multline}
\label{eqn:scaling}
\sum_{j}\left [\sum_{i}\eta_{ij}\biggl(\frac{1}{N_{i}^{2}}-\frac{\delta_{jk}}{N_{i}}\biggr)\right ]\ln{\tilde \xi (U_{j})}=\\=\sum_{j}\left [\sum_{i}\eta_{ij}\biggl(\frac{1}{N_{i}^{2}}-\frac{\delta_{jk}}{N_{i}}\biggr) \ln M_{ij} \hspace{0.1cm} \right ] ,
\end{multline}
where $N_i=\sum_j \eta_{ij}$ is the total number of crossing points obtained for each $\Lambda_i$ value.
Equation (\ref{eqn:scaling}) is of the form $AX=B$ and can be easily solved. Notice however that the solution is not unique because
the matrix $A$ is singular. Indeed the correlation length $\tilde \xi(U)$ is defined up to a multiplicative constant,
$\tilde \xi\rightarrow a \tilde \xi$, implying that $\ln \tilde \xi$ is defined up to an \emph{additive}
constant, $\ln \tilde \xi \rightarrow \ln \tilde \xi +\ln a$.
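For completeness, a sketch of this fitting procedure is given below (Python/NumPy). It assumes, for the inverse interpolation, that each curve $\ln \Lambda(\ln M)$ at fixed $U_j$ is monotonic over the available sizes; since the system matrix is singular, the least-squares solver returns one representative of the family of solutions related by the additive constant.
\begin{verbatim}
import numpy as np

def scaling_fit(lnM, lnLam):
    # Solve Eq. (scaling) for ln(xi_j), with lnLam[k, j] = ln Lambda(M_k, U_j)
    # and lnM[k] = ln M_k. The crossing points ln M_ij are obtained by
    # inverse linear interpolation of curve j at the level of every datum i.
    nM, nU = lnLam.shape
    levels = lnLam.ravel()                       # all available ln Lambda_i
    eta = np.zeros((levels.size, nU))
    lnMij = np.zeros((levels.size, nU))
    for j in range(nU):
        order = np.argsort(lnLam[:, j])
        hit = (levels >= lnLam[:, j].min()) & (levels <= lnLam[:, j].max())
        eta[hit, j] = 1.0
        lnMij[hit, j] = np.interp(levels[hit], lnLam[order, j], lnM[order])
    Ni = eta.sum(axis=1)                         # crossings per level (>= 1)
    A = np.zeros((nU, nU))
    B = np.zeros(nU)
    for k in range(nU):
        for j in range(nU):
            A[k, j] = np.sum(eta[:, j] * (1.0 / Ni**2 - (j == k) / Ni))
        B[k] = np.sum(eta / Ni[:, None]**2 * lnMij) \
             - np.sum(eta[:, k] / Ni * lnMij[:, k])
    ln_xi, *_ = np.linalg.lstsq(A, B, rcond=None)  # A singular: one solution
    return ln_xi
\end{verbatim}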
\begin{figure}
\includegraphics[width=\columnwidth]{scaling2D.eps}
\caption{ Double logarithmic plot of the reduced localization length
as a function of the ratio $\tilde \xi/M$, where $\tilde \xi$ is the unnormalized localization length obtained from the solution of Eq.~(\ref{eqn:scaling}) and $M$ is the width of the strip.
The different symbols correspond to the data for $M=30$ (up triangles), $M=40$ (circles) and $M=50$ (diamonds), shown in Fig.~\ref{fig:TIP_2D_U-LM_HighM}. All data approximately collapse on a single curve, verifying the scaling ansatz $\Lambda=f(\tilde \xi/M)$. }
\label{fig:TIP_2D_xiInvM-LM}
\end{figure}
In Fig.\ref{fig:TIP_2D_xiInvM-LM} we verify the correctness of the scaling ansatz, by plotting the reduced localization length as a function of the ratio
$\tilde \xi/M$, where $\tilde \xi$ is obtained from the solution of Eq.~(\ref{eqn:scaling}). We see that our numerical data for different values of the interaction strength and system size do collapse on a single curve, thus confirming the scaling hypothesis.
In the main panel of Fig. \ref{fig:TIP_2D_xi-U} we plot the unnormalized localization length of the pair as a function of the interaction strength. We see that $\tilde \xi$ varies over more than three orders of magnitude in the interval of $U$ values considered.
In particular, for weak interactions the growth is approximately exponential in $U$, as highlighted by the semi-logarithmic plot.
Based on analytical arguments, Imry suggested~\cite{Imry:CohPropTIP:EPL95} that the localization length of the pair in the weakly interacting regime should obey the relation $\xi \propto \xi_{\mathrm{sp}}\mathrm{e}^{b(U\xi_{\mathrm{sp}})^{2}}$,
where $\xi_{\mathrm{sp}}$ is the single-particle localization length of the Anderson model and $b$ is a numerical factor. This prediction grows exponentially in $U^2$, at variance with the approximately exponential growth in $U$ observed in our data.
A possible reason for the discrepancy is that the cited formula might apply only for relatively modest
values of the interaction strength, which were not explored in our numerics.
Further work will be needed to address this point explicitly.
\begin{figure}
\includegraphics[width=\columnwidth]{xi2Dv3.eps}
\caption{Unnormalized localization length $\tilde \xi$ of the pair plotted as a function of the interaction strength.
Notice the logarithmic scale in the $y$ axis, showing
that interactions can enhance the 2D localization length of the pair by more than three orders of magnitude. The inset displays the estimate of the multiplicative constant $a$, fixing the absolute scale of the localization length, plotted as a function of the interaction strength. The estimate is obtained by fitting the numerical data in Fig.\ref{fig:TIP_2D_U-LM_HighM} corresponding to weak interactions using Eq.~(\ref{eqn:finda}), from which we extract $a_\textrm{est}=\xi/\tilde \xi$. This quantity keeps increasing as $U$ diminishes, signaling that the strongly localized regime is not fully reached in our simulations.
}
\label{fig:TIP_2D_xi-U}
\end{figure}
The constant $a$, allowing to fix the absolute scale of the localization length of the pair,
is independent of the interaction strength. Its numerical value can in principle be inferred by fitting the data in the strongly localized regime,
according to
\begin{equation}
\label{eqn:finda}
\Lambda =\frac{\xi}{M}+c\biggl(\frac{\xi}{M}\biggr)^{2},
\end{equation}
where $c$ is a number.
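A fit of this form can be sketched in a few lines (Python/SciPy); here \texttt{Ms} and \texttt{Lams} are assumed to collect the available widths and the corresponding values of $\Lambda$ at fixed $U$:
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def fit_xi(Ms, Lams):
    # Fit Lambda = xi/M + c*(xi/M)**2, Eq. (finda), at fixed U.
    f = lambda M, xi, c: xi / M + c * (xi / M) ** 2
    p0 = [Ms[0] * Lams[0], 1.0]   # xi ~ Lambda*M from the leading term
    (xi, c), _ = curve_fit(f, np.asarray(Ms, float),
                           np.asarray(Lams, float), p0=p0)
    return xi, c

# a_est = xi / xi_tilde then gives the estimate shown in the inset.
\end{verbatim}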
In our case the most localized states are those at weak interactions, where the reduced localization length takes its minimum value.
For each value $U=U_j$ falling in this region, we fit our numerical data according to Eq.~(\ref{eqn:finda}), yielding $\xi=\xi(U)$.
The estimate of the multiplicative constant, which is defined as $a_\textrm{est}=\xi(U)/\tilde \xi (U)$, is displayed
in the inset of Fig.~\ref{fig:TIP_2D_xi-U}.
Since the estimate of $a$ does not saturate for small $U$, we conclude that, even for the weakest interactions and the largest system sizes considered, the pair has not yet entered the strongly localized regime underlying Eq.~(\ref{eqn:finda}). This asymptotic regime is typically achieved for $\Lambda \lesssim 0.1$, whereas our smallest value of the reduced localization length is $\Lambda(M=50,U=0.5)\simeq 0.2929$.
From the inset of Fig.~\ref{fig:TIP_2D_xi-U} we also see that $a_\textrm{est}$ increases as $U$ diminishes, suggesting that the result obtained for $U=0.5$ actually provides a lower bound for the multiplicative constant. This allows us to conclude that $a \geq 18.2$.
\section{Generality of the obtained results}
\label{general}
\begin{figure}
\includegraphics[width=\columnwidth]{Evar_2D_W9_M12_N400_Ns1000.eps}
\caption{Reduced localization length of the pair as a function of the interaction strength for $W=9$ and for different values of the total energy going from
$E=0$ (top curve) to $E=-12$ (bottom curve). The dimensions of the strip
are $M=12$ and $L=400$, while the number of different disorder realizations is $N_{tr}=1000$.
The data show that the pair state with zero total energy possesses the largest reduced localization length, see Eq.~(\ref{nonzeroE}), implying that for $W=9$ the pair remains localized for any nonzero total energy. }
\label{fig:Efinite}
\end{figure}
In Sec.~\ref{sec:absence} we have shown that all pair states with total energy $E=0$ are localized for $W=9$. A natural question is whether
the localization scenario changes at nonzero energy or at weak disorder.
Let us consider the two cases separately.
Our numerical results indicate that, for any value of $U$, $W$, and system size $M$, the reduced localization length always takes its maximum value for $E=0$:
\begin{equation}\label{nonzeroE}
\Lambda (E,M,U,W)\leq \Lambda(0,M,U,W).
\end{equation}
As an example, in Fig.\ref{fig:Efinite} we plot $\Lambda$ as a function of the interaction strength,
for $W=9$ and for different negative values of the energy (results for positive energies are simply obtained from
the corresponding data at energy $-E$ by reversing the sign of the interaction strength, $U\rightarrow -U$).
All calculations are performed on a strip with constant sizes $M=12$ and $L=400$.
When combined with the finite-size scaling analysis, the inequality~(\ref{nonzeroE}) implies that the pair remains localized for \emph{any} nonzero energy
with an even shorter localization length, thus excluding a delocalization transition.
The above inequality expresses the general fact that the pair can better spread when its total energy lies in the middle of the noninteracting two-particle
energy spectrum. For instance, in three dimensions, where genuine Anderson transitions for the pair do occur, we found~\cite{stellin2020twobody} that
metallic regions in the
interaction-disorder plane become progressively insulating as the energy of the pair departs from zero.
We note from Fig.\ref{fig:Efinite} that all data curves with $|E|\leq 8$ have an absolute minimum at $U=0$. Moreover, the largest enhancement of the reduced localization length
takes place for weaker interactions as $|E|$ increases. These are specific features of scattering states, whose energy lies inside the noninteracting two-body energy spectrum,
as already observed in one~\cite{Frahm:EigStructAL1DTIP16} and in three~\cite{stellin2020twobody} dimensions.
In the asymptotic regime $|E|\gg W$, pairs behave as pointlike molecules and the effective model $K$ takes the form of a single-particle Anderson model,
as discussed in Ref.~\cite{stellin2020twobody}, which again precludes the possibility of a delocalization transition in two dimensions.
Let us now discuss whether an Anderson transition for the pair can appear for weak disorder at fixed total energy, $E=0$. The effective single-particle model $K$ possesses both time reversal and spin rotational symmetries, suggesting that $K$ belongs to the same (orthogonal) universality class of the Anderson model $\hat H^\textrm{sp}$.
In Ref.~\cite{Stellin:PRB2019} we showed numerically that, in three dimensions, the Anderson transition for a pair with zero energy yields critical exponents in agreement with the predictions of the orthogonal class.
Since 2D Anderson transitions are generally forbidden in the orthogonal class, one expects that the pair is localized for \emph{any} finite disorder. For this reason,
the previous claims of 2D delocalization transitions for two particles are puzzling. Our numerics shows explicitly that these results were
biased by strong finite-size effects and there is no evidence of violation of the conventional localization scenario.
From the numerical point of view, the observation of the asymptotic 2D scaling behavior for $W=9$ required large system sizes as compared to
the 3D case studied in Ref.~\cite{Stellin:PRB2019}, where the finite-size scaling analysis was limited to system sizes up to $M=17$.
Verifying numerically the absence of the 2D transition for weaker disorder is very challenging, because
the reduced localization length will exhibit an apparent crossing for even larger values of $M$ as $W$ diminishes. To appreciate this point, we have repeated the
same finite-size scaling analysis for $W=10$ and plotted the results in Fig.\ref{fig:W=10}. We see that, already for $M=22$, the pair is localized for any value of the interaction strength, whereas for $W=9$
the same asymptotic behavior is reached for larger system sizes, between $M=30$ and $M=40$.
\begin{figure}
\includegraphics[width=\columnwidth]{U-LM_E0_W10.eps}
\caption{Finite-size scaling analysis for $W=10$ and $E=0$. The reduced localization length is plotted as a function of the interaction strength
for different system sizes $M=8$ (squares), $10$ (circles), $13$ (up triangles), $22$ (down triangles), and $38$ (right triangles). The length of the strip is $L=400$ for $M\leq 13$
and $L=500$ otherwise. Notice that the two-particle system exhibits an insulating
behavior already for $M=22$. The number of different disorder realizations is $N_{tr}=600$ for $M=38$ and $N_{tr}=1000$ otherwise. }
\label{fig:W=10}
\end{figure}
\section{Conclusion and outlook}
\label{sec:conclusions}
Based on an efficient mapping of the two-body Schr\"odinger equation, we have addressed the localization properties of two bosons or two spin-$1/2$ fermions in a singlet state obeying the 2D Anderson-Hubbard model.
We have found that no interaction-induced Anderson transition occurs for disordered lattices of infinite size in contrast with previous numerical works, which we have shown to be biased by finite-size effects. In this way we
reconcile the numerics with the one-parameter scaling theory of localization, which predicts the absence of a one-particle Anderson transition in two dimensions in the presence of both time-reversal and spin-rotational symmetries. Moreover, we found that the pair localization length exhibits a nonmonotonic behavior as a function of $U$, characterized by an exponential
growth for weak interactions.
We point out that the absence of the 2D mobility edge for the two-particle system has been established for the case of contact interactions; similar conclusions should apply also for short- but finite-range interactions. The case of true long-range (e.g.\ Coulomb) interactions is conceptually different and can lead to opposite conclusions~\cite{Cuevas:PRL1999,Shepelyanski:PRB2000}.
From the above discussion, we also expect that the 2D delocalization transition will appear when the two particles are exposed to spin-orbit couplings, driving the system towards the symplectic universality class, where single-particle metal-insulator transitions are generally allowed even in two dimensions~\cite{Evers:AndersonTransitions:RMP08}.
An interesting and compelling problem is to investigate the implications of our results for a 2D system at finite density of particles, where many-body delocalization transitions have instead been observed, both numerically and experimentally, in the strongly interacting regime.
We expect that, in the zero density limit, the many-body mobility edge disappears, irrespective of the bosonic or fermionic
statistics of the two particles.
Another interesting direction is to generalize our numerical approach to study the effect of disorder on the transport and spectral properties of excitons in 2D
semiconductors~\cite{C9CP04111G}.
\section*{ACKNOWLEDGEMENTS}
We acknowledge D. Delande, K. Frahm, C. Monthus, S. Skipetrov and T. Roscilde for fruitful discussions.
This project has received funding from the European Union's Horizon 2020 research and innovation programme under the
Marie Sklodowska-Curie grant agreement No 665850. This work was granted access to the HPC resources of CINES (Centre Informatique National de l'Enseignement Sup\' erieur) under the allocations 2017-A0020507629, 2018-A0040507629, 2019-A0060507629 and 2020-A0080507629 supplied by GENCI (Grand Equipement National de Calcul Intensif).
\bibliographystyle{apsrev}
\section{Data Specifications Table}
\begin{table}[htb]
\centering
\footnotesize
\label{DataSpecificationTable}
\begin{tabular}{|l|p{10cm}|}
\hline
\textbf{ Subject }& Management of Technology and Innovation. \\\hline
\textbf{ Specific subject area }& A focus area maturity model for API management. \\\hline
\textbf{ Type of data }& Text, literature references, and tables. \\\hline
\textbf{ How data were acquired }& Systematic literature review and expert interviews. \\\hline
\textbf{ Data format }& Raw, analyzed, and evaluated. \\\hline
\textbf{ Parameters for data collection }& The collected practices had to fit strict requirements in terms of having to be executable, implementable, and easily understandable by practitioners that are involved with API management within their organization. \\\hline
\textbf{ Description of data collection }& The initial data was collected through an SLR \cite{mathijssen2020identification}. Initially, the data was grouped according to topical similarity. Practices were categorized, analyzed and verified through discussion sessions with all involved researchers, inter-rater agreement and information gathered from grey literature. Capabilities and practices were then evaluated through 11 expert interviews. For information on the selection of the practitioners, we refer to the related research article \textit{(to be published)}. If at least two practitioners found a practice relevant and useful, it became part of the collection. Additionally, six discussion sessions among the researchers were conducted, during which all suggested changes (i.e. removal, addition, and relocation of practices and capabilities) were discussed, interpreted, and processed. The resulting practices and capabilities were then evaluated with 3 experts who were previously interviewed.
Finally, five case studies were conducted to evaluate different software products.
\\\hline
\textbf{ Data source location }& All included source literature can be reviewed in the associated research article~\cite{mathijssen2020identification}. \\\hline
\textbf{ Related research article }& Mathijssen, M., Overeem, M., \& Jansen, S. (2020). Identification of Practices and Capabilities in API Management: A Systematic Literature Review. arXiv preprint arXiv:2006.10481.\\\hline
\end{tabular}
\end{table}
\onecolumn
\section{Introduction}
\label{sec:introduction}
This data set describes the API Management Focus Area Maturity Model (API-m-FAMM).
The model supports organizations that expose their API(s) to third-party developers in structuring their API management activities.
Using the API-m-FAMM, organizations may assess and improve the maturity of their business processes regarding API management.
We define API Management as an activity that enables organizations to design, publish and deploy their APIs for (external) developers to consume. API Management encompasses capabilities such as controlling API lifecycles, access and authentication to APIs, monitoring, throttling and analyzing API usage, as well as providing security and documentation.
\begin{itemize}
\item The data may be used by API management researchers for evaluation, validation and extension of the model.
\item The data can be used by focus area maturity researchers to establish the vocabulary used in the field.
\item The data can be used by researchers as a basis for future research work in the domains of API management, versioning and evolution.
\item The data is reusable by consultants and practitioners to assess whether they have implemented a practice fully.
\end{itemize}
The research approach is explained in Section~\ref{sec:design}.
Section~\ref{sec:apimfamm} describes the final API-m-FAMM in full detail.
The different intermediate versions are described in Sections~\ref{sec:version01}, \ref{sec:version02}, \ref{sec:version03}, \ref{sec:version04}, \ref{sec:version05}, and \ref{sec:version10}.
\section{Experimental Design, Materials, and Methods}
\label{sec:design}
The Focus Area Maturity Model is constructed using the design methodology of \cite{van2010design} and \cite{de2005understanding}.
The development of the FAMM is done in five phases: \emph{Scope}, \emph{Design}, \emph{Populate}, \emph{Test}, and \emph{Deploy}.
These phases are executed through a SLR, expert interviews, case studies, and numerous discussions among the authors.
Between the execution of every method, the authors discussed the state of the model until consensus was reached on its contents and structure.
This was done using online \textit{Card Sorting}~\citep{nielsen1995}, with \textit{Google Drawings} as a tool.
Figure~\ref{fig:research-steps} shows which methods were used in each phase, by linking them to the different intermediate versions of the API-m-FAMM.
The intermediate versions including a changelog are described in Sections~\ref{sec:version01}, \ref{sec:version02}, \ref{sec:version03}, \ref{sec:version04}, \ref{sec:version05}, and \ref{sec:version10}.
\begin{figure*}[!h]
\centering
\includegraphics[page=1, clip, trim=1.0cm 12.5cm 2.1cm 0.8cm, width=\textwidth]{Figures/ResearchApproach.pdf}
\caption{The steps that were executed in constructing the API-m-FAMM and its various intermediate versions.}
\label{fig:research-steps}
\end{figure*}
\subsection{Scope, Design, Populate Phases}
The initial data was acquired through the SLR as described in \cite{mathijssen2020identification}.
Based on this SLR, a primary source was chosen~\cite{de2017api}.
Using this source as a starting point, the scope of the API-m-FAMM was determined and the initial model was constructed (\textbf{version 0.1}, Section~\ref{sec:version01}).
Subsequently, the SLR was used to populate the model, which resulted in a FAMM consisting of 114 practices and 39 capabilities that are categorized into 6 focus areas (\textbf{version 0.2}, Section~\ref{sec:version02}).
These practices and capabilities were then analyzed and verified through four validation sessions with all involved researchers, inter-rater agreement and information gathered from grey literature, such as online blog posts, websites, commercial API management platform documentation and third-party tooling (\textbf{version 0.3}, Section~\ref{sec:version03}).
\subsection{Test Phase}
The API-m-FAMM underwent two evaluation cycles.
First, 11 semi-structured interviews with experts were conducted.
During these interviews, experts were asked whether they agree with the inclusion of practices, capabilities, and focus areas as part of the API-m-FAMM, as well as whether they could suggest the addition of any new practices or capabilities.
Additionally, practices were ranked by these experts in terms of their perceived maturity in order to determine their respective maturity levels.
As a result of these interviews, many suggestions were made to either move practices to a different capability, remove them entirely, rename them, or newly add practices.
These suggestions were then analyzed, processed, and discussed through 6 discussion sessions with all involved researchers.
As a result, the model was quite substantially modified, with the existing body of practices and capabilities being narrowed down to 87 practices and capabilities, as well as numerous focus areas, capabilities, and practices being renamed.
Additionally, all practices were assigned to individual maturity levels within their respective capabilities (\textbf{version 0.4}, Section~\ref{sec:version04}).
The second evaluation cycle consisted of three unstructured interviews with experts originating from the sample of experts that were interviewed during the first evaluation cycle.
During these interviews, the changes made as a result of the previous evaluation cycle, as well as the newly introduced maturity assignments were presented and discussed.
Additionally, experts were asked to evaluate the model again with regards to the same criteria used in the first cycle.
The API-m-FAMM was not significantly changed after this second cycle (\textbf{version 0.5}, Section~\ref{sec:version05}).
\subsection{Deploy Phase}
Finally the API-m-FAMM was used to evaluate five different software products.
The evaluation was done by using a \emph{do-it-yourself} kit, which is available on \url{https://www.movereem.nl/api-m-famm.html}.
These evaluations led to some minor changes (\textbf{version 1.0}, Section~\ref{sec:version10}).
\section{API-m-FAMM}
\label{sec:apimfamm}
The API-m-FAMM and the practices and capabilities it consists of are divided into six focus areas. The focus areas are not equal in size, with the smallest focus area consisting of 2 capabilities and 11 practices, while the largest is composed of 5 capabilities and 18 practices. This is caused by the fact that the topic of API management is broad and not evenly distributed across its domains. For example, the \textit{Community} and \textit{Lifecycle Management} focus areas that are described below contain many practices, while \textit{Observability} is a domain consisting of a small but relevant amount of practices and capabilities.
We have defined capabilities as the ability to achieve a goal related to API Management through the execution of two or more interrelated practices. Combined, these practices and capabilities form the focus areas, which describe the functional domains that the topic of API management is composed of. A practice is defined as an action that has the express goal of improving, encouraging, and managing the usage of APIs. Furthermore, the practice has to be executable, implementable and verifiable by an employee of the organization.
Each individual practice is assigned to a maturity level within its respective capability. As mentioned earlier, these maturity levels were determined by having experts rank the practices according to their perceived maturity within their respective capabilities. Additionally, they were asked whether they could identify any dependencies with regard to the implementation of other practices. Practices cannot depend on practices that are part of another capability and have a higher maturity level. For example, practice 1.1.6 is dependent on the implementation of practices 1.3.3 and 4.2.3, resulting in a higher overall maturity level being assigned to this practice. The API-m-FAMM in its entirety, including the maturity level that each practice has been assigned to, is depicted visually in Figure~\ref{fig:api-m-famm}.\\
Section~\ref{subsec:areas} describes and defines the focus areas and capabilities. Section~\ref{subsec:practices} details the practices. Practices are described by using the following elements:
\begin{itemize}
\item \textbf{Practice code -} The practice code is made up of three numbers. The first number concerns the focus area, the second number the capability, and the third number the maturity level it has been assigned to.
\item \textbf{Practice -} The name of the practice, as it is mentioned in the API-m-FAMM.
\item \textbf{Focus area -} The focus area is mentioned to indicate the domain in which this practice is relevant.
\item \textbf{Description -} A paragraph of text is provided to
describe the practice in detail. The main reason for providing a lengthy description is internal validity: in future evaluations by third parties, they should be able to perform the evaluations independently.
\item \textbf{When implemented -} Provides a series of necessary conditions before this practice can be marked as implemented. Again, to strengthen internal validity of the API-m-FAMM.
\item \textbf{Literature -} Several references are included to articles that mention the practice. The literature can be found in the SLR~\cite{mathijssen2020identification}. References may also consist of online blog posts, websites, commercial API management platform documentation and third-party tooling.
\end{itemize}
\begin{figure*}
\centering
\includegraphics[page=1, clip, trim=0.5cm 0.5cm 0.5cm 0.5cm, width=\textwidth]{Figures/API-m-FAMMv1.0.pdf}
\caption{The API-m-FAMM model, showing all six focus areas, the capabilities, and the practices regarding API management. The columns correspond with the maturity level of the practice. }
\label{fig:api-m-famm}
\end{figure*}
\newpage
\subsection{Focus Areas \& Capabilities}
\label{subsec:areas}
\begin{enumerate}
\item \textbf{Lifecycle Management}: Generally speaking, an API undergoes several stages over the course of its lifetime; creation, publication, realization, maintenance and retirement \citedata{medjaoui2018continuous}. In order to control and guide the API through these stages, the organization must be able to perform a variety of activities. In order to maintain the API, the organization must decide on a versioning strategy, notification channels and methods in case of updates, as well as decouple their API from their application. In doing so, the organization is able to manage and maintain the versions the API goes through as it evolves over time.\\
\begin{enumerate}
\item [1.1] \textit{Version Management}: APIs evolve over time with newer business requirements. In order to cope with this, the organization should have a versioning strategy in place, such as managing multiple versions of an API to support existing consumers, or by avoiding breaking changes as part of an evolutionary strategy. Additionally, the organization should be able to deprecate and retire older versions of their API smoothly. With proper notice and period, deprecated APIs should be retired and removed so as to avoid any maintenance overheads \citedata{de2017api}. In order to guide this process, the organization may also have a deprecation protocol in place.
\item [1.2] \textit{Decoupling API \& Application}: When an organization creates an API to expose its data and services, it needs to ensure that the API interface is intuitive enough for developers to easily use \citedata{de2017api}. However, the interface for the API will most likely be different from that of the back-end services that it exposes. Therefore, the organization should be able to transform the API interface to a form that the back end can understand.
\item [1.3] \textit{Update Notification}: Changes made to an API may adversely affect its consumers. Hence, consumers must be notified of any planned updates of the API \citedata{de2017api}. The organization should have the ability to inform developers using the API of any changes by distributing change logs, using a communication channel such as email, the developer portal, or preemptively through the use warning headers or a versioning roadmap.\\
\end{enumerate}
\item \textbf{Security}: APIs provide access to valuable and protected data and assets \citedata{de2017api}. Therefore, security for APIs is necessary to protect the underlying assets from unauthenticated and unauthorized access. Due to the programmatic nature of APIs and their accessibility over the public cloud, they are also prone to various kinds of attacks. Hence, the organization should undertake various measures to prevent this from happening. For example, one of many available authentication and authorization protocols should be implemented, prevention for attacks such as DoS or SQL script injection attacks should be in place and sensitive data should be encrypted or masked.\\
\begin{enumerate}
\item [2.1] \textit{Authentication}: Authentication is the process of uniquely determining and validating the identity of a client \citedata{de2017api}. In order to achieve this, the organization may implement an authentication mechanism such as API keys or protocols such as WSS or OpenID Connect, or the Single Sign-on method.
\item [2.2] \textit{Authorization}: Authorization controls the level of access that is provided to an app making an API call and controls which API resources and methods that can invoke \citedata{de2017api}. The organization may implement authorization through access control or an industry-standardized authorization protocol such as OAuth 2.0.
\item [2.3] \textit{Threat Detection \& Protection}: The likelihood of bad actors making attacks using malicious content is high, in addition to common threats such as DoS attacks. Content-based attacks can be in the form of malformed XML or JSON, malicious scripts, or SQL within the payload \citedata{de2017api}. Therefore, the organization should be able to detect malformed request formats or malicious content within the payload and then protect against such attacks.
\item [2.4] \textit{Encryption}: Oftentimes, message payloads sent in API calls contain sensitive information that can be the target for man-in-the-middle attacks \citedata{de2017api}. Therefore, the organization should secure all communication between the client app and the API service through using techniques such as TLS encryption by default. Furthermore, it is desirable for the organization to prevent exposure of sensitive data by making utilizing methods such as masking or hashing.\\
\end{enumerate}
\item \textbf{Performance}: APIs are no longer exclusively seen as mechanisms for integration but have become mainstream for the delivery of data and services to end users through various digital channels \citedata{de2017api}. This increases the demand on APIs to perform well under loads. The overall performance of a client app is dependent on the performance of the underlying APIs powering the app. Hence, the importance of performance for APIs increases greatly. In order to ensure performance and stability of their APIs, organizations must be able to perform various activities. For example, enabling consumers to implement caching improves an API's performance through reduced latency and network traffic. Additionally, using rate limiting and throttling mechanisms to manage traffic and using load balancing to route traffic more effectively also improves the API's performance.\\
\begin{enumerate}
\item [3.1] \textit{Resource Management}: In order to improve the performance of their API(s), it is important for an organization to effectively manage the available resources. This may be accomplished through the use of mechanisms such as load balancing, scaling, or by having a failover policies in place.
\item [3.2] \textit{Traffic Management}: Another aspect of improving API performance is effectively managing incoming traffic. In order to do so, the organization may choose to implement mechanisms such as caching, rate limiting or throttling, or by prioritizing traffic based on customer characteristics.\\
\end{enumerate}
\item \textbf{Observability}: As an organization, it is necessary to have insight into the API program to make the right investments and decisions during its maintenance. Through various monitoring techniques, the organization is able to collect metrics which can shed light on the API's health, performance and resource usage. In turn, these metrics may be aggregated and analyzed to improve the decision making process on how to enhance the business value by either changing the API or by enriching it \citedata{de2017api}. Additionally, by being able to log API access, consumption and performance, input may be gathered for analysis, business value or monetization reports. These may be used to strengthen communication with consumers and stakeholders or check for any potential service-level agreement violations.\\
\begin{enumerate}
\item [4.1] \textit{Monitoring}: As an organization, it is important to be able to collect and monitor metrics and variables concerning the exposed API. For example, information regarding the health and performance of the API, as well as resources used by the API should be monitored so that it may be used as input for activities such as generating analysis reports and broadcasting the API's operational status.
\item [4.2] \textit{Logging}: In monitoring their API(s), it is helpful for the organization to be able to perform logging of consumer behavior and activities. This may include logging API access and usage, and reviewing historical information. A minimal access-logging sketch follows this list.
\item [4.3] \textit{Analytics}: As an organization, it is important to be able to analyze the metrics and variables that are collected through monitoring. For example, information regarding the health and performance of the API may be utilized to decide which features should be added to the API. Additionally, it is desirable for the organization to be able to extract custom variables from within the message payload for advanced analytics reporting.\\
\end{enumerate}
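To illustrate how the monitoring and logging capabilities might be realized in application code, the following Python sketch wraps an API handler so that every call emits one structured access-log line recording latency and outcome. The dictionary-based request and response shapes are assumptions for the sake of the example; an API management platform would normally provide this instrumentation out of the box.

\begin{verbatim}
import logging
import time

logging.basicConfig(level=logging.INFO)
access_log = logging.getLogger("api.access")

def observed(handler):
    """Wrap an API handler so each call is logged with its latency."""
    def wrapper(request):
        start = time.monotonic()
        status = 500
        try:
            response = handler(request)
            status = response.get("status", 200)
            return response
        finally:
            latency_ms = (time.monotonic() - start) * 1000
            # One line per call: who, what, outcome, duration.
            access_log.info("ip=%s path=%s status=%s latency_ms=%.1f",
                            request.get("ip"), request.get("path"),
                            status, latency_ms)
    return wrapper

@observed
def get_orders(request):
    return {"status": 200, "body": []}
\end{verbatim}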
\item \textbf{Community}: As an organization exposing APIs for external consumers and developers to consume, it is often desirable to foster, engage and support the community that exists around the API. For example, this entails offering developers the ability to register for the API and offering them access to test environments, code samples and documentation. Additionally, the organization may support developers in their usage of the API through a variety of communication channels, allowing them to communicate with the organization or with one another through a community forum or developer portal. Furthermore, it is desirable for developers to be able to freely browse through the API offering, review operational status updates regarding the API, create support tickets in the event of an error and to share knowledge, views and opinions with other developers.\\
\begin{enumerate}
\item [5.1] \textit{Developer Onboarding}: To start consuming APIs, developers must first register with the organization that is providing them. The sign-up process should be simple and easy, for example by supporting developers with resources such as (automatically generated) SDKs and testing tools such as an API console or sandbox environment.
\item [5.2] \textit{Support}: In order to strengthen the community around the API, the organization should support developers who are consuming it. This may be accomplished by establishing an appropriate communication channel, adequately managing issues and handling errors, should they present themselves.
\item [5.3] \textit{Documentation}: API documentation can help speed up the adoption, understanding and effectiveness of APIs \citedata{de2017api}. Hence, the organization must provide consumers of their API(s) with reference documentation. Additionally, they may be supplied with start-up documentation, code samples and FAQs to further accelerate understanding of the API.
\item [5.4] \textit{Community Management}: Oftentimes, app developers wish to know the views of other developers in the community. They may want to collaborate and share their API usage learnings and experiences with one another \citedata{de2017api}. In order to facilitate these wishes, the organization may choose to provide developers with a community forum or developer portal.
\item [5.5] \textit{Portfolio Management}: As an API providing organization, a platform to publicize and document APIs is needed. Hence, a discoverable catalog of APIs through which potential consumers are able to browse may be provided.\\
\end{enumerate}
\item \textbf{Commercial}: Organizations have been consuming third-party APIs to simplify and expand business partnerships. APIs provide faster integration and an improved partner/customer experience, enabling organizations to grow rapidly \citedata{de2017api}. Oftentimes, exposing and consuming APIs has a commercial aspect tied to it. For API consumers and providers, this is often embodied by legal business contracts for the use of the APIs which they are bound to. These business contracts, called service-level agreements (SLAs), govern the service levels and other aspects of API delivery and consumption. Another commercial aspect of API management is that of monetization. Considering APIs provide value to the consuming party, organizations often opt to monetize the services and APIs and build a business model for them \citedata{de2017api}. Utilizing the right monetization model for APIs enables organizations to reap the benefits of their investment in their APIs.\\
\begin{enumerate}
\item [6.1] \textit{Service-Level Agreements}: A service-level agreement (SLA) defines the API’s non-functional requirements, serving as a contract between the organization and consumers of their API. As such, the organization should ensure that the consumer of their API agrees with the SLA's contents. These may include matters such as terms and conditions for API usage, consumption quotas, uptime guarantees and maintenance or downtime information.
\item [6.2] \textit{Monetization Strategy}: APIs securely expose digital assets and services that are of value to consumers. Hence, the organization may wish to adopt a monetization strategy to enable monetization of the exposed services and APIs by constructing a business model around them. This may be accomplished through a monetization model which can be based on consumer characteristics such as their type of subscription, access tier or the amount of resources used; a minimal metering sketch follows this list.
\item [6.3] \textit{Account Management}: It is desirable to effectively manage accounts in order to foster a qualitative relationship with customers, stakeholders and the organization's management. This may be achieved by reporting on the API's business value internally through the use of business value reports, as well as externally by providing consumers of the API with subscription reports and training them in using the API as efficiently as possible. \\
\end{enumerate}
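To make the metering side of monetization and quota enforcement more tangible, the Python sketch below counts calls per consumer against a tier-based quota and derives a usage-based invoice. The price per call and the quota figures are purely illustrative assumptions, as is the two-tier subscription scheme.

\begin{verbatim}
from collections import defaultdict

PRICE_PER_CALL = 0.002                              # assumed rate
MONTHLY_QUOTA = {"free": 1000, "premium": 100000}   # assumed tiers

usage = defaultdict(int)   # calls made this billing period, per key

def record_call(api_key: str, tier: str) -> bool:
    """Meter a call against the quota; False means it is exhausted."""
    if usage[api_key] >= MONTHLY_QUOTA[tier]:
        return False
    usage[api_key] += 1
    return True

def invoice(api_key: str) -> float:
    """Metering-based monetization: pay for the calls actually made."""
    return usage[api_key] * PRICE_PER_CALL
\end{verbatim}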
\end{enumerate}
\subsection{Practices}
\label{subsec:practices}
\newarray\MyData
\readarray{MyData}
{
1.1.2 &
Implement Evolutionary API Strategy &
Version Management &
Lifecycle Management &
The organization utilizes an evolutionary strategy to continuously version their API over time. Using this strategy, the organization evolves a single API by avoiding the introduction of breaking changes. Optionally, this may be accomplished by adhering to the GraphQL specification \citedata{graphqlVersioning}. &
$\bullet$ The organization maintains one version of their API. \newline
$\bullet$ The organization utilizes an evolutionary API versioning strategy.
& \citedata{ploesserVersioning, icappsVersioning} &
&
6&
1.1.5 &
Implement Multiple API Versioning Strategy &
Version Management &
Lifecycle Management &
The organization has a versioning strategy in place which entails the process of versioning from one API to a newer version. In order to do so, the organization must be able to maintain multiple versions of (one of) their API(s) for a period of time. Possible strategies include URI/URL Versioning (possibly in combination with adherence to the Semantic Versioning specification), Query Parameter versioning, (Custom) Header versioning, Accept Header versioning or Content Negotiation. &
$\bullet$ The organization utilizes one of the following versioning strategies: URI/URL Versioning, Query Parameter versioning, (Custom) Header versioning, Accept Header versioning or Content Negotiation.
& \citedata{de2017api, redhatVersioning, anjiVersioning, rapidVersioning} &
&
6&
1.1.6 &
Implement API Deprecation Protocol &
Version Management &
Lifecycle Management &
The organization has a protocol in place that details what steps should be taken when deprecating one of their APIs. This includes determining the number of developers currently consuming the API through the use of monitoring, and then setting a threshold that details the number of developers that should have migrated to the new version of the API before commencing with deprecation of the old version. Furthermore, developers, including their contact information, should be identified so that they may be notified of the deprecation through their preferred communication channel. This notification should be accompanied by a migration period and deprecation date, so that consumers have a clear target to migrate their apps over to the new API version. Additionally, referrals to documentation and the new endpoint should be included. Furthermore, the protocol should detail what course of action should be taken to roll back to a previously deployed version of an API in the event of an incorrect deployment of the API. &
$\bullet$ The organization has implemented the 'Distribute Versioning Notification Through Channel(s)' (1.3.3) and 'Log Activity' (4.2.3) practices. \newline
$\bullet$ The organization has a deprecation protocol in place.
& \citedata{peterLifecycle} &
&
6&
1.1.7 &
Check Backwards Compatibility &
Version Management &
Lifecycle Management &
The organization has an approach in place with which it is able to detect breaking changes when versioning their API(s). Approaches include using a unit test suite, plugging an automated contract test suite into the CI/CD pipeline, or using the \emph{swagger-spec-compatibility} library to detect differences between two Swagger / OpenAPI specifications \citedata{swaggerComp}. &
$\bullet$ The organization has implemented the 'Implement Evolutionary API Strategy' (1.1.2) practice. \newline
$\bullet$ The organization has a backwards compatibility checking approach in place.
& \citedata{bhojwaniCheck} &
&
6&
1.2.1 &
Decouple API \& Software Versioning &
Decoupling API \& Application &
Lifecycle Management &
The organization has decoupled the version of their API(s) from its software implementation. The API version should never be tied to the software version of the back-end data/service. A new API version should be created only if there is a change in the contract of the API that impacts the consumer. &
$\bullet$ The organization has decoupled the version of their API(s) from its software implementation.
& \citedata{de2017api} &
&
6&
1.2.4 &
Decouple Internal \& External Data Model &
Decoupling API \& Application &
Lifecycle Management &
The organization has decoupled the data models that are used internally and externally from one another. Doing so is considered to be beneficial, since an application might use a normalized relational data model internally. While this data model is less suitable to expose through a public API, this separation of concerns allows the organization to evolve the relational data model at a different speed than the API.
& $\bullet$ The organization has decoupled the data models that are used internally and externally from one another. & None. &
&
6&
1.2.5 &
Decouple Internal \& External Data Format
&
Decoupling API \& Application &
Lifecycle Management &
The organization has decoupled the data formats that are used internally and externally from one another. Doing so is considered to be beneficial, since an application might use a data format such as XML internally, while using a data format such as JSON for the API(s). This separation of concerns grants the organization greater flexibility in designing and developing their APIs.
&
$\bullet$ The organization has decoupled the data formats that are used internally and externally from one another.
& None. &
&
6&
1.2.6 &
Decouple Internal \& External Transport Protocol &
Decoupling API \& Application &
Lifecycle Management &
The organization has decoupled the transport protocols that are used internally and externally from one another. An application might internally use a protocol such as SOAP or JDBC, which is less commonly used in modern APIs and less suitable for public APIs; the organization may therefore opt to use a different protocol for their API(s). This separation of concerns grants the organization greater flexibility in designing and developing their APIs.
&
$\bullet$ The organization has decoupled the transport protocols that are used internally and externally from one another.
& None. &
&
6&
1.3.2 &
Distribute Changelogs &
Update Notification &
Lifecycle Management &
The organization uses (automated) email services to distribute changelogs describing the versioning of their API(s) to consumers. Ideally, the organization offers consumers the ability to opt-in or opt-out of this service. &
$\bullet$ The organization uses (automated) email services to distribute changelogs describing the versioning of their API(s) to consumers. & \citedata{sandovalChange} &
&
6&
1.3.3 &
Distribute Versioning Notification Through Channel(s) &
Update Notification &
Lifecycle Management &
The organization has the ability to distribute versioning notifications among consumers of their API(s) through established communication channels. Possible channels include email, social media, and announcements within the developer portal or reference documentation. Ideally, the organization offers consumers of their API(s) the option to select the communication channel they prefer receiving versioning notifications through.
&
$\bullet$ The organization has implemented the 'Establish Communication Channel' (5.2.1) and 'Distribute Changelogs' (1.3.2) practices. \newline
$\bullet$ The organization has the ability to distribute versioning notifications among consumers of their API(s) through established communication channels.
& \citedata{de2017api, sandovalChange} &
&
6&
1.3.5 &
Extend API with Versioning Information &
Update Notification &
Lifecycle Management &
The organization has the ability to extend their API specification to incorporate warning headers into responses at run time. By doing so, consumers of the API are notified of its impending deprecation, and possibly requested to change their implementation. &
$\bullet$ The organization has the ability to introduce warning headers.
& \citedata{de2017api} &
&
6&
1.3.9 &
Announce Versioning Roadmap &
Update Notification &
Lifecycle Management &
The organization has announced a roadmap that details the planned dates on which the current (old) version of their API will be versioned to a new version, in order to notify consumers ahead of time. This may be done through email, social media, announcements within the developer portal or reference documentation.&
$\bullet$ The organization has implemented the 'Distribute Versioning Notification Through Channel(s)' (1.3.3) practice. \newline
$\bullet$ The organization has announced a versioning roadmap.
& \citedata{de2017api} &
&
6&
2.1.1 &
Implement Basic Authentication &
Authentication &
Security &
The organization has the ability to implement basic authentication in order to authenticate consumers of their API(s). This may be accomplished through the use of HTTP Basic Authentication, with which the consumer is required to provide a username and password to authenticate, or by issuing API keys to consumers of the API. An app is identified by its name and a unique UUID known as the API key, often serving as an identity for the app making a call to the API. &
$\bullet$ The organization has implemented HTTP Basic Authentication, or is able to issue API keys.
& \citedata{biehl2015api, de2017api, Zhao_2018, sandoval2018_2} &
&
6&
2.1.4 &
Implement Authentication Protocol &
Authentication &
Security &
The organization has implemented an authentication protocol or method in order to authenticate consumers of their API(s). In order to apply security for SOAP APIs, the usage of a WS-Security (WSS) protocol \citedata{wikipediaWS} may be opted for. This protocol specifies how integrity and confidentiality can be enforced on messages and allows the communication of various security token formats, such as Security Assertion Markup Language (SAML), X.509 and User ID/Password credentials. Consumers of REST APIs may be authenticated by using methods and protocols such as Client Certificate authentication, SAML authentication, or OpenID Connect \citedata{openIDConnect}. OpenID Connect 1.0 is an authentication protocol that builds on top of the OAuth 2.0 specs to add an identity layer. It extends the authorization framework provided by OAuth 2.0 to implement authentication.&
$\bullet$ The organization has implemented a WSS authentication protocol, or methods and protocols such as Client Certificate authentication, SAML authentication, or OpenID Connect.
& \citedata{de2017api, oracleWS, wikipediaWS} &
&
6&
2.1.7 &
Implement Single Sign-On &
Authentication &
Security &
The organization has implemented Single Sign-on (SSO), which is an authentication method that enables users to securely authenticate with multiple applications and websites by using one set of credentials. The user is then signed in to other applications automatically, regardless of the platform, technology, or domain the user is using.
&
$\bullet$ The organization has implemented the 'Implement Authentication Protocol' (2.1.4) practice. \newline
$\bullet$ The organization has implemented the Single Sign-on (SSO) authentication method.
& \citedata{de2017api, Onelogin, SSO} &
&
6&
2.2.2 &
Implement Access Control &
Authorization &
Security &
The organization has implemented an access control method in order to identify and authorize potential consumers of their API(s). In order to accomplish this, the Role-based Access Control (RBAC) method may be used, with which permissions may be assigned to users based on their role within the organization. Alternatively, the Attribute-based Access Control (ABAC) method may be used, with which permissions are granted based on an identity's attributes. Optionally, RBAC and ABAC policies may be expressed by using the eXtensible Access Control Markup Language (XACML).
&
$\bullet$ The organization has implemented the Role-based Access Control (RBAC) or Attribute-based Access Control (ABAC) method.
& \citedata{de2017api, hofman2014technical, thielens2013apis, WikiXACML} &
&
6&
2.2.4 &
Implement Token Management &
Authorization &
Security &
The organization provides consumers of their API(s) with the ability to perform (access) token and API key management. This is an activity that involves measures to manage (i.e. review, store, create and delete) the tokens and API keys that are required to invoke back-end APIs. &
$\bullet$ The organization allows consumers to manage their tokens and API keys.
& \citedata{de2017api, hofman2014technical} &
&
6&
2.2.6 &
Implement Standardized Authorization Protocol &
Authorization &
Security &
The organization has implemented an industry-standardized authorization protocol, such as the OAuth 2.0 Authorization protocol. OAuth is used as a mechanism to provide authorization to a third-party application for access to an end user's resources on their behalf. OAuth helps with granting authorization without the need to share user credentials. &
$\bullet$ The organization has an industry-standardized authorization protocol.
& \citedata{de2017api,gadge2018microservice,gamez2015towards,hohenstein2018architectural,matsumoto2017fujitsu,patni2017pro,thielens2013apis,hofman2014technical,Xu_2019,Zhao_2018} &
&
6&
2.2.7 &
Implement Authorization Scopes &
Authorization &
Security &
The organization has implemented an authorization scopes mechanism, such as the OAuth 2.0 Scopes mechanism \citedata{OAuthScopes}, to limit an application's access to their users' accounts. An application can request one or more scopes, after which this information is presented to the user in a consent screen. The access token that is issued to the application will then be limited to the scopes granted. &
$\bullet$ The organization has an authorization scopes mechanism in place.
& None. &
&
6&
2.3.1 &
Implement Allow \& Deny IP Address Lists &
Threat Detection \& Protection &
Security &
The organization has the ability to impose allow and deny list policies. Through these policies, specific IPs can either be excluded from requests, or separate quotas can be given to internal users by throttling access depending on their IP address or address range.
&
$\bullet$ The organization has the ability to impose allow and deny list policies.
& \citedata{gadge2018microservice, gamez2015towards, hohenstein2018architectural} &
&
6&
2.3.2 &
Implement Injection Threat Protection Policies &
Threat Detection \& Protection &
Security &
The organization has implemented injection threat protection security policies. Injection threats are common forms of attacks, in which attackers try to inject malicious code that, if executed on the server, can divulge sensitive information. These attacks may take the form of XML and JSON bombs or SQL and script injection.&
$\bullet$ The organization has injection threat policies in place against XML or JSON bombs or SQL or script injection.
& \citedata{de2017api, preibisch2018api, OWASPInjection} &
&
6&
2.3.5 &
Implement DoS Protection &
Threat Detection \& Protection &
Security &
The organization has protection against DoS attacks in place. Hackers may try to bring down back-end systems by pumping unexpectedly high traffic through the APIs. Denial-of-service (DoS) attacks are very common on APIs. Hence, the organization should be able to detect and stop such attacks. Identification of a DoS attack is done through Spike Arrest. &
$\bullet$ The organization has protection against DoS attacks in place.
& \citedata{de2017api, gadge2018microservice, gamez2015towards} &
&
6&
2.3.7 &
Implement Security Breach Protocol &
Threat Detection \& Protection &
Security &
The organization has a security breach protocol in place, which details what steps should be taken in the event where a security breach occurs. This protocol may include activities such as notifying stakeholders and consumers of the API, identifying the source of the breach by scanning activity logs, containing the breach by stopping the data leakage, and consulting third-party IT security and legal advice providers.
&
$\bullet$ The organization has a security breach protocol in place.
& \citedata{Reynold2020, Soliya2020} &
&
6&
2.3.9 &
Conduct Security Review &
Threat Detection \& Protection &
Security &
The organization has the ability to conduct security reviews that potential consumers of their API(s) must pass before being allowed to integrate the organization's API(s) into their application. This typically involves testing the degree to which customer data is protected and encrypted, and identifying security vulnerabilities that may be exploited, such as threats related to script injections and non-secure authentication and access control protocols.
&
$\bullet$ The organization has the ability to conduct security reviews.
& \citedata{Salesforce2020} &
&
6&
2.3.10 &
Implement Zero Trust Network Access (ZTNA) &
Threat Detection \& Protection &
Security &
The organization has implemented a Zero Trust Network Access (ZTNA) security architecture, where only traffic from authenticated users, devices, and applications is granted access to other users, devices, and applications within an organization. ZTNA may be regarded as a fine-grained approach to network access control (NAC), identity access management (IAM) and privilege access management (PAM), offering a replacement for VPN architectures. Optionally, a ZTNA may be implemented through third-party providers such as Akamai, Cloudflare, or Cisco.
&
$\bullet$ The organization has implemented a Zero Trust Network Access (ZTNA) security architecture.
& \citedata{ZTNAwiki2020} &
&
6&
2.4.1 &
Implement Transport Layer Encryption &
Encryption &
Security &
The organization has implemented current and up-to-date encryption protocols such as Transport Layer Security (TLS). It is always desirable to have TLS compliant endpoints to safeguard against man-in-middle attacks, and bi-directional encryption of message data to protect against tampering. &
$\bullet$ The organization has implemented a current and up-to-date transport layer encryption protocol.
& \citedata{de2017api, familiar2015iot, gadge2018microservice, hofman2014technical, preibisch2018api} &
&
6&
2.4.3 &
Implement Certificate Management &
Encryption &
Security &
The organization has the ability to manage its TLS certificates. This involves monitoring and managing the certificates' acquisition and deployment, tracking renewal, usage, and expiration of SSL/TLS certificates. &
$\bullet$ The organization has the ability to manage its TLS certificates.
& \citedata{de2017api,hohenstein2018architectural,sine2015api,thielens2013apis,gadge2018microservice} &
&
6&
3.1.2 &
Implement Load Balancing &
Resource Management &
Performance &
The organization has implemented load balancing to distribute API traffic to the back-end services. Various load balancing algorithms may be supported. Based on the selected algorithm, the requests must be routed to the appropriate resource that is hosting the API. Load balancing also improves the overall performance of the API. &
$\bullet$ The organization has implemented load balancing.
& \citedata{biehl2015api,ciavotta2017microservice,de2017api,gadge2018microservice,gamez2015towards,montesi2016circuit,nakamura2017fujitsu,Xu_2019,Zhao_2018} &
&
6&
3.1.5 &
Implement Scaling &
Resource Management &
Performance &
The organization has the ability to scale the amount of available resources up or down depending on traffic and API usage in a reactive manner. This may be done either manually or automatically, through the use of a load balancer. &
$\bullet$ The organization has implemented the 'Implement Load Balancing' (3.1.2) practice. \newline
$\bullet$ The organization has the ability to scale the amount of available resources up or down.
& \citedata{akbulut2019software,jacobson2011apis,gadge2018microservice,hofman2014technical} &
&
6&
3.1.6 &
Implement Failover Policies &
Resource Management &
Performance &
The organization has the ability to mitigate outages through the implementation of failover policies. This may be done by automatically deploying a service to a standby data center if the primary system fails, or is shut down for servicing. By being able to perform a failover, the particular service is guaranteed to be operational at one of the data centers. This is an extremely important function for critical systems that require always-on accessibility. &
$\bullet$ The organization has the ability to mitigate outages through the implementation of failover policies.
& \citedata{Barracuda2020} &
&
6&
3.1.10 &
Implement Predictive Scaling &
Resource Management &
Performance &
The organization has the ability to scale the amount of available resources up or down depending on traffic and API usage in a proactive manner. This may be done automatically, through the use of a load balancer, based on insights gained from predictive analytics. &
$\bullet$ The organization has implemented the 'Implement Load Balancing' (3.1.2) and 'Enable Predictive Analytics' (4.3.9) practices. \newline
$\bullet$ The organization has implemented predictive scaling.
& None. &
&
6&
3.2.1 &
Set Timeout Policies &
Traffic Management &
Performance &
The organization is able to set timeout policies, by detecting and customizing the amount of time that is allowed to pass before a connection times out and is closed. Using timeout policies, the organization is able to ensure that the API always responds within a given amount of time, even if a long-running process hangs. This is important in high-availability systems where response performance is crucial so errors can be dealt with cleanly. &
$\bullet$ The organization is able to set timeout policies on their API(s).
& \citedata{tykTimeout} &
&
6&
3.2.2 &
Implement Request Caching &
Traffic Management &
Performance &
The organization utilizes caching as a mechanism to optimize performance. As consumers of the API make requests on the same URI, the cached response can be used to respond instead of forwarding those requests to the back-end server. Thus, caching can help to improve an API's performance through reduced latency and network traffic. &
$\bullet$ The organization utilizes caching as a mechanism to optimize performance.
& \citedata{biehl2015api,de2017api,gadge2018microservice,gamez2015towards,indrasiri2018developing,patni2017pro,preibisch2018api,vsnuderl2018rate,vijayakumar2018practical,hofman2014technical,Zhao_2018} &
&
6&
3.2.3 &
Perform Request Rate Limiting &
Traffic Management &
Performance &
The organization has a mechanism in place with which limits may be imposed on the number of requests or faulty calls API consumers are allowed to make. Requests made within the specified limit are routed successfully to the target system. Those beyond the limit are rejected. &
$\bullet$ The organization has a rate limiting mechanism in place for their API(s).
& \citedata{de2017api,gamez2015towards,jacobson2011apis,lourencco2019framework,raivio2011towards,jayathilaka2015eager,vsnuderl2018rate,hofman2014technical,gadge2018microservice} &
&
6&
3.2.4 &
Perform Request Rate Throttling &
Traffic Management &
Performance &
The organization has a mechanism in place with which API requests may be throttled down, without the connection being closed. This can help to improve the overall performance and reduce impacts during peak hours. It helps to ensure that the API infrastructure is not slowed down by high volumes of requests from a certain group of customers or apps. &
$\bullet$ The organization has a rate throttling mechanism in place for their API(s).
& \citedata{de2017api,fremantle2015web,familiar2015iot,gadge2018microservice,hohenstein2018architectural,indrasiri2018developing,jacobson2011apis,thielens2013apis,weir2015oracle} &
&
6&
3.2.5 &
Manage Quota &
Traffic Management
&
Performance &
The organization has policies in place regarding the number of API calls that an app is allowed to make to the back end over a given time interval. Calls exceeding the quota limit may be throttled or halted. The quota allowed for an app depends on the business policy and monetization model of the API. A common purpose for a quota is to divide developers into categories, each of which has a different quota and thus a different relationship with the API. &
$\bullet$ The organization has implemented the 'Perform Request Rate Limiting' (3.2.3) practice or 'Perform Request Rate Throttling' (3.2.4) practice.\newline
$\bullet$ The organization has quota policies for their API(s) in place.
& \citedata{de2017api} &
&
6&
3.2.6 &
Apply Data Volume Limits &
Traffic Management &
Performance &
The organization has a mechanism in place with which the amount of data consumers of their API(s) are allowed to consume in one call may be limited. This can help to improve the overall performance and reduce impacts during peak hours. It helps to ensure that the API infrastructure is not slowed down by calls that transport unnecessarily high chunks of data volumes. &
$\bullet$ The organization has implemented the 'Monitor Resource Usage' (4.1.5) practice.\newline
$\bullet$ The organization has a data volume limiting mechanism in place.
& \citedata{DropboxDatalimiting} &
&
6&
3.2.9 &
Prioritize Traffic &
Traffic Management &
Performance &
The organization is able to give a higher priority in terms of processing API calls, based on certain customer characteristics and/or classes. This priority may be based on their subscription, customer relationships, or agreements made in the SLA. &
$\bullet$ The organization is able to prioritize traffic based on customer characteristics and/or classes.
&\citedata{de2017api} &
&
6&
4.1.1 &
Monitor API Health &
Monitoring &
Observability &
The organization is able to perform health monitoring on its API(s), possibly through a management platform, external monitoring tool/dashboard, functional testing or custom scripts and plugins. This should return basic information such as the operational status of the API, indicating its ability to connect to dependent services. &
$\bullet$ The organization is able to perform health monitoring on its API(s).
& \citedata{averdunkHealth, gadge2018microservice} &
&
6&
4.1.3 &
Monitor API Performance &
Monitoring &
Observability &
The organization is able to perform performance monitoring on its API(s), possibly through a management platform, external monitoring tool/dashboard, functional testing or custom scripts and plugins. Doing so should provide performance statistics that track the latency within the platform and the latency for back-end calls. This helps the organization in finding the source of any performance issues reported on any API. &
$\bullet$ The organization is able to perform performance monitoring on its API(s).
& \citedata{de2017api, Xu_2019} &
&
6&
4.1.5 &
Monitor Resource Usage &
Monitoring &
Observability &
The organization is able to perform resource monitoring on its API(s), possibly through a management platform, external monitoring tool/dashboard, functional testing or custom scripts and plugins. Doing so should provide insights into the amount of resources that are consumed as a result of calls made to the API(s). This may be done by measuring hardware metrics such as CPU, disk, memory, and network usage, or by using an indirect approximation of the amount of resources that are consumed by calls. &
$\bullet$ The organization is able to perform resource monitoring on its API(s).
& \citedata{KubernetesResources} &
&
6&
4.2.1 &
Log Errors &
Logging &
Observability &
The organization has the ability to internally log errors that are generated as a result of consumption of their APIs. Error logs should typically contain fields that capture information such as the date and time the error has occurred, the error code, and the client IP and port numbers.
&
$\bullet$ The organization has the ability to internally log errors.
& \citedata{andrey_kolychev_konstantin_zaytsev_2019_3256462, de2017api, medjaoui2018continuous} &
&
6&
4.2.2 &
Log Access Attempts &
Logging &
Observability &
The organization has the ability to generate access logs, in which HTTP requests/responses are logged, to monitor the activities related to an APIs usage. Access logs offer insight into who has accessed the API, by including information such as the consumer's IP address. &
$\bullet$ The organization is able to perform access logging.
& \citedata{wso2Access} &
&
6&
4.2.3 &
Log Activity &
Logging &
Observability &
The organization has the ability to perform basic logging of API activity, such as access, consumption, performance, and any exceptions. In doing so, it may be determined what initiated various actions to allow for troubleshooting any errors that occur. &
$\bullet$ The organization is able to perform activity logging.
& \citedata{de2017api, fremantle2015web, gadge2018microservice} &
&
6&
4.2.5 &
Audit User Activity &
Logging &
Observability &
The organization is able to perform user auditing. Doing so enables the organization to review historical information regarding API activity, to analyze who accesses an API, when it is accessed, how it is used, and how many calls are made from the various consumers of the API. &
$\bullet$ The organization is able to perform user auditing.
& \citedata{de2017api, gadge2018microservice} &
&
6&
4.3.2 &
Report Errors &
Analytics &
Observability &
The organization has the ability to report any errors to consumers that may occur during usage of their API(s). Error reports typically include information such as the error code and text describing why the error has occurred. &
$\bullet$ The organization has implemented the 'Log Errors' (4.2.1) practice.\newline
$\bullet$ The organization is able to report any errors to consumers.
& \citedata{andrey_kolychev_konstantin_zaytsev_2019_3256462, de2017api, medjaoui2018continuous} &
&
6&
4.3.3 &
Broadcast API Status &
Analytics &
Observability &
The organization broadcasts the status of its API(s) to consumers by providing them with operational information on the API in the form of an external status page, possibly on the developer portal or a website. The function of this status page is to let consumers know what is going on with the API at a technical level at any point in time. &
$\bullet$ The organization has implemented the 'Monitor API Health' (4.1.1) practice.\newline
$\bullet$ The organization broadcasts the operational status of its API(s) to consumers.
& \citedata{sandoval2018} &
&
6&
4.3.6 &
Generate Custom Analysis Reports &
Analytics &
Observability &
The organization is able to generate custom analysis reports on metrics of choice, possibly through an API management platform or monitoring tool. &
$\bullet$ The organization is able to generate custom analysis reports.
& \citedata{de2017api} &
&
6&
4.3.7 &
Set Alerts &
Analytics &
Observability &
The organization has the ability to set and configure alerts that should trigger in case of certain events occurring or thresholds being exceeded. Such events or thresholds may include resource limits being exceeded, or the occurrence of outages. Ideally, the organization is able to configure which persons should be alerted about the event, and through what communication channel they should be contacted. &
$\bullet$ The organization has implemented the 'Monitor API Health' (4.1.1), 'Monitor API Performance' (4.1.3), and 'Monitor Resource Usage' (4.1.5) practices.\newline
$\bullet$ The organization has the ability to set and configure alerts.
& \citedata{UptrendsAlerting} &
&
6&
4.3.9 &
Enable Predictive Analytics &
Analytics &
Observability &
The organization has the ability to perform predictive analytics, through techniques such as pattern recognition, data mining, predictive modelling, or machine learning, by analyzing current and historical facts to make predictions about future or otherwise unknown events. &
$\bullet$ The organization has implemented the 'Monitor API Performance' (4.1.3) and 'Monitor Resource Usage' (4.1.5) practices.\newline
$\bullet$ The organization has the ability to perform predictive analytics.
& None. &
&
6&
5.1.1 &
Facilitate Developer Registration &
Developer Onboarding &
Community &
The organization has a mechanism in place with which API consumers are able to register for the API so that they can obtain access credentials. Consumers can then select an API and register their apps to use it. &
$\bullet$ The organization has a mechanism in place with which API consumers are able to register for their API(s). &
\citedata{de2017api} &
&
6&
5.1.4 &
Provide SDK Support &
Developer Onboarding &
Community &
The organization offers API consumers the option to either download client-side SDKs for the API, or generate the SDK themselves from standard API definition formats such as OpenAPI (formerly known as Swagger). These functionalities are usually offered through the developer portal, where app developers often look for device-specific libraries to interact with the services exposed by the API. &
$\bullet$ The organization offers API consumers the option to download or generate client-side SDKs for their API(s).
&
\citedata{de2017api} &
&
6&
5.1.5 &
Implement Interactive API Console &
Developer Onboarding &
Community &
The organization provides API consumers with an interactive console. Using this console, developers are able to test the behavior of an API. &
$\bullet$ The organization provides API consumers with an interactive console. &
\citedata{biehl2015api} &
&
6&
5.1.8 &
Provide Sandbox Environment Support &
Developer Onboarding &
Community &
The organization provides API consumers with an environment that they can use to mimic the characteristics of the production environment and create simulated responses from all APIs the application relies on. &
$\bullet$ The organization provides API consumers with a sandbox environment.
&
\citedata{buidesign, jacobson2011apis, Mueller:2020, patni2017pro} &
&
6&
5.2.1 &
Establish Communication Channel &
Support &
Community &
The organization has established a communication channel between the API provider and consumer with which support may be provided to the consumer. Possible communication media include email, phone, form, web, community forum, blogs or the developer portal.&
$\bullet$ The organization has established one of the following communication channels with consumers of their API(s): email/phone/form/web/community forum/blog/developer portal. &
\citedata{de2017api, jacobson2011apis} &
&
6 &
5.2.4 &
Manage Support Issues &
Support &
Community &
The organization is able to manage any support issues with their API(s). API consumers must be able to report any issues, bugs or shortcomings related to the API. They should be able to raise support tickets and seek help regarding API usage. Additionally, the API provider must be able to track and prioritize support tickets. &
$\bullet$ The organization is able to manage any support issues with their API(s).
& \citedata{de2017api, jacobson2011apis} &
&
6&
5.2.6 &
Dedicate Developer Support Team &
Support &
Community &
The organization employs a dedicated developer support team that offers support to consumers of their API(s). This team should be well-trained and possess knowledge that enables them to assist consumers with any problems or difficulties they may experience during the usage or implementation of the API. &
$\bullet$ The organization has implemented the 'Establish Communication Channel' (5.2.1) practice. \newline
$\bullet$ The organization employs a dedicated developer team that offers support to consumers of their API(s).
& None. &
&
6&
5.3.1 &
Use Standard for Reference Documentation &
Documentation &
Community &
The organization provides consumers of their API(s) with basic reference documentation on their website, developer portal or an external, third-party documentation platform. This documentation should document every API call, every parameter, and every result so that consumers are informed on the API's functionality. Additionally, it must be specified using a documentation framework such as Swagger, RAML, API Blueprint, WADL, Mashery ioDocs, Doxygen, ASP.NET API Explorer, Apigee Console To-Go, Enunciate, Miredot, Dexy, Docco or TurnAPI. &
$\bullet$ The organization provides consumers of their API(s) with basic reference documentation.\newline
$\bullet$ The organization utilizes one of the following (or comparable) documentation tools to specify its API documentation: Swagger (OpenAPI), RAML, API Blueprint, WADL, Mashery ioDocs, Doxygen, ASP.NET API Explorer, Apigee Console To-Go, Enunciate, Miredot, Dexy, Docco or TurnAPI.
& \citedata{de2017api, jacobson2011apis, medjaoui2018continuous} &
&
6&
5.3.3 &
Provide Start-up Documentation \& Code Samples &
Documentation &
Community &
The organization provides consumers of their API(s) with start-up documentation on their website, developer portal or an external, third-party documentation platform. This type of documentation explains key concepts by summarizing the reference documentation, accelerating understanding as a result. Optionally, a list of Frequently Asked Questions and code samples that may be readily used in apps to invoke the API may be included.
&
$\bullet$ The organization has implemented the 'Use Standard for Reference Documentation' (5.3.1) practice. \newline
$\bullet$ The organization provides consumers of their API(s) with start-up documentation.
& \citedata{de2017api, jacobson2011apis} &
&
6&
5.3.5 &
Create Video Tutorials &
Documentation &
Community &
The organization is able to create video tutorials in order to provide consumers with visual information that details how to use the API and integrate it into their applications.
&
$\bullet$ The organization is able to create video tutorials.
& None. &
&
6&
5.4.1 &
Maintain Social Media Presence &
Community Engagement &
Community &
The organization is able to maintain their social media presence on platforms such as Facebook or Twitter. This may involve activities such as reporting on the API's status, announcing news and updates, responding to questions, or reacting to feedback.
&
$\bullet$ The organization is able to maintain their social media presence on platforms such as Facebook or Twitter.
& None. &
&
6&
5.4.3 &
Provide Community Forum &
Community Engagement &
Community &
The organization provides (potential) consumers of their API(s) with a community forum, possibly through a website or API management platform. This forum may assist in building and interconnecting a developer community, by providing them with a central hub they can use to communicate with one another and the organization. Additionally, it may serve as a repository with guides on API usage, documentation and support. &
$\bullet$ The organization provides API consumers with a community forum.
& \citedata{de2017api} &
&
6&
5.4.4 &
Provide Developer Portal &
Community Engagement &
Community &
The organization provides (potential) consumers of their API(s) with a developer portal. A developer portal provides the platform for an API provider to communicate with the developer community. Additionally, it typically offers functionality such as user registration and login, user management, documentation, API key management, test console and dashboards. &
$\bullet$ The organization has implemented a developer portal.
& \citedata{de2017api, fremantle2015web, medjaoui2018continuous, sine2015api} &
&
6&
5.4.7 &
Organize Events &
Community Engagement &
Community &
The organization is actively involved in organizing or participating in events that are aimed towards engaging and motivating the developer community to incorporate their API(s) into their applications. This may include events such as hackathons, conferences, or workshops. &
$\bullet$ The organization is actively involved in organizing or participating in developer community events.
& None. &
&
6&
5.4.9 &
Dedicate Evangelist &
Community Engagement &
Community &
The organization employs a dedicated API evangelist. This individual is responsible for evangelizing the API by gathering consumer feedback, and promoting the organization's API(s) by creating samples, demos, training materials and performing other support activities aimed towards maximizing the developer experience. &
$\bullet$ The organization employs a dedicated API evangelist.
& None. &
&
6&
5.5.1 &
Enable API Discovery &
Portfolio Management &
Community &
The organization provides potential consumers of their API(s) with a mechanism to obtain information, such as documentation and metadata, about their API(s). This mechanism may take the shape of an external website, hub or repository that consumers can freely browse through. &
$\bullet$ The organization has a mechanism in place with which their API(s) may be discovered.
& \citedata{biehl2015api, hofman2014technical} &
&
6&
5.5.4 &
Provide API Catalog &
Portfolio Management &
Community &
The organization provides API consumers with an API Catalog: a searchable catalog of APIs, also sometimes referred to as an API registry. API consumers should be able to search the catalog based on various metadata and tags. The catalog should document the API functionality, its interface, start-up documentation, terms and conditions, reference documentation, and so forth. &
$\bullet$ The organization has implemented the 'Enable API Discovery' (5.5.1) practice. \newline
$\bullet$ The organization provides API consumers with a searchable API catalog.
& \citedata{de2017api, lourencco2019framework, vijayakumar2018practical, hofman2014technical, medjaoui2018continuous} &
&
6&
5.5.5 &
Bundle APIs &
Portfolio Management &
Community &
The organization is able to combine two or more APIs into a bundle. This is a collection of API products that is presented to developers as a group, and typically associated with one or more rate plans for monetization. &
$\bullet$ The organization is able to combine two or more APIs into a bundle.
& \citedata{apigeebundling} &
&
6&
6.1.1 &
Publish Informal SLA &
Service-Level Agreements
&
Commercial &
The organization has the ability to publish and agree upon an informal, bare-bones SLA with consumers of their API(s). This type of SLA is minimalistic and loose in terms of the nature and amount of agreements it contains, as well as the consequences attached to these agreements should they be violated. This type of SLA is satisfactory for organizations that provide non-critical services and that have close relationships with their consumers and partners. &
$\bullet$ The organization has the ability to publish and agree upon an informal SLA with consumers.
& None. &
&
6&
6.1.3 &
Provide SLA &
Service-Level Agreements
&
Commercial &
The organization has the ability to provide and agree upon a formal, elaborate SLA with consumers of their API(s). This type of SLA is extensive and strict in terms of the nature and amount of agreements it contains, as well as the consequences attached to these agreements should they be violated. Typically, agreements regarding the guaranteed uptime of the API on a monthly or yearly basis are included in this type of SLA, along with guaranteed response times in the event of incidents, as well as policies regarding privacy, security, and possibly rate and data quotas. Additionally, when providing a formal SLA, the organization should have a plan in place that details what course of action should be taken in the event where agreements are failed to be upheld.
&
$\bullet$ The organization has the ability to provide and agree upon a formal SLA with consumers.
& \citedata{de2017api} &
&
6&
6.1.6 &
Proactively Monitor SLAs &
Service-Level Agreements
&
Commercial &
The organization is able to proactively monitor metrics that are relevant in checking whether the agreements made with API consumers are adhered to. Such metrics may include availability, performance and functional correctness. &
$\bullet$ The organization has implemented the 'Monitor Resource Usage' (4.1.5) practice.\newline
$\bullet$ The organization is able to perform SLA monitoring.
& \citedata{moizSLA} &
&
6&
6.1.7 &
Customize Personalized SLA &
Service-Level Agreements
&
Commercial &
The organization has the ability to provide consumers of their API(s) with personalized SLAs. This type of SLA is suitable for intensive consumers that utilize services offered by the API in such a way that requires customized agreements as compared to those that are offered as part of the organization's standard SLA. For example, some consumers may require minimal latency and response times for their calls, want to make large amounts of calls, or demand API uptime approaching 100\%. Additionally, a personalized SLA may be required due to the consumer being located in a different geographic location than other consumers, requiring customized agreements with regards to privacy laws and regulations. &
$\bullet$ The organization has implemented the 'Provide SLA' (6.1.3) practice.\newline
$\bullet$ The organization has the ability to provide consumers of their API(s) with personalized SLAs.
& \citedata{manualSLA} &
&
6&
6.2.6 &
Adopt Subscription-based Monetization Model &
Monetization Strategy &
Commercial &
The organization has adopted a monetization model that is based on a subscription basis. With this model, API consumers pay a flat monthly fee and are allowed to make a certain number of API calls per month. &
$\bullet$ The organization has implemented the 'Implement Subscription Management System' (6.3.2) and 'Manage Quota' (3.2.5) practices. \newline
$\bullet$ The organization has adopted a monetization model that is based on a subscription basis.
& \citedata{budzynskiMonetization} &
&
6&
6.2.8 &
Adopt Tier-Based Monetization Model &
Monetization Strategy &
Commercial &
The organization has adopted a monetization model that is based on tiered access. Typically, each tier has its own set of services and allowances for access to API resources, with increasing prices for higher tiers. &
$\bullet$ The organization has implemented the 'Prioritize Traffic' (3.2.9) and 'Manage Quota' (3.2.5) practices. \newline
$\bullet$ The organization utilizes a monetization model that is based on tiered access.
& \citedata{redhatMonetization, budzynskiMonetization} &
&
6&
6.2.9 &
Adopt Freemium Monetization Model &
Monetization Strategy &
Commercial &
The organization has adopted a monetization model that is based on freemium functionalities and access. This involves providing consumers with a limited part of the services and functionalities the API offers as a whole. Consumers that wish to utilize all services and functionalities are required to have an active, paid subscription to the API.
&
$\bullet$ The organization utilizes a monetization model that is based on freemium functionalities and access.
& \citedata{redhatMonetization, budzynskiMonetization} &
&
6&
6.2.10 &
Adopt Metering-Based Monetization Model &
Monetization Strategy &
Commercial &
The organization utilizes a monetization model that is based on metering. With this model, API consumers pay for the amount of resources they use. This may be measured in terms of bandwidth, storage or amount of calls made. &
$\bullet$ The organization has implemented the 'Monitor Resource Usage' (4.1.5) practice.\newline
$\bullet$ The organization utilizes a monetization model that is based on metering.
& \citedata{redhatMonetization, budzynskiMonetization} &
&
6&
6.3.2 &
Implement Subscription Management System &
Account Management &
Commercial &
The organization has a system in place with which it is able to manage existing subscriptions of consumers to their API. A subscription management system provides support for billing on a recurring basis, as well as providing insight into active subscriptions.
&
$\bullet$ The organization has implemented a subscription management system.
& \citedata{fremantle2015web, preibisch2018api, raivio2011towards} &
&
6&
6.3.7 &
Report on API Program Business Value &
Account Management &
Commercial &
The organization is able to generate business value reports associated with their API(s). Business value reports gauge the monetary value associated with the API program. Monetization reports of API usage provide information on the revenue generated from the API. Value-based reports should also be able to measure customer engagements. Engagements can be measured by the number of unique users, the number of developers registered, the number of active developers, the number of apps built using the APIs, the number of active apps, and many other items. Optionally, these metrics may be visualized in the form of dashboards, so that they may then easily be shared and presented to relevant internal stakeholders to communicate the API program's business value. &
$\bullet$ The organization has implemented the 'Generate Custom Analysis Reports' (4.3.6) practice. \newline
$\bullet$ The organization is able to generate business value reports associated with their API(s).
& \citedata{de2017api}&
&
6&
6.3.8 &
Provide Subscription Report to Customer &
Account Management &
Commercial &
The organization is able to generate subscription reports for consumers of their API(s). These reports contain metrics gathered through internal monitoring and analytics. Such metrics may include amount of calls made, performance, and status regarding remaining allowed quotas. &
$\bullet$ The organization has implemented the 'Generate Custom Analysis Reports' (4.3.6) and 'Implement Subscription Management System' (6.3.2) practices. \newline
$\bullet$ The organization is able to generate subscription reports for consumers of their API(s).
& \citedata{de2017api}&
&
6&
6.3.9 &
Proactively Suggest Optimizations to Customers &
Account Management &
Commercial &
The organization has the ability to train and help customers in using their API(s) as well and as efficiently as possible. This may be in the best interest of both parties, as optimizing inefficient calls may positively impact traffic load on the API infrastructure. &
$\bullet$ The organization has implemented the 'Monitor API Performance' (4.1.3) and 'Monitor Resource Usage' (4.1.5) practices. \newline
$\bullet$ The organization is able to proactively suggest optimizations to consumers of their API(s).
& \citedata{buidesign, de2017api}&
&
6&
}
\dataheight=9
\def\returnData(#1){\expandafter\checkMyData(#1)\cachedata}
\newcounter{deTeller}
\newcounter{volgendeStart}
\newcounter{volgendeStop}
\setcounter{deTeller}{1}
\setcounter{volgendeStart}{\value{deTeller}}
\newcounter{tempCount}
\newcounter{groteLoop}
\newcounter{loop}
\newcounter{loopPlusEen}
\newcounter{loopMinEen}
\newcounter{stopTeller}
\newcounter{oldStopTeller}
\newcommand{15.5cm}{15.5cm}
\forloop{groteLoop}{1}{\value{groteLoop}<21}{
\setcounter{oldStopTeller}{0}
\setcounter{stopTeller}{4}
\ifnum\value{deTeller} > \value{oldStopTeller} \setcounter{volgendeStop}{\value{stopTeller}} \fi
\setcounter{oldStopTeller}{\value{stopTeller}}
\addtocounter{stopTeller}{3}
\ifnum\value{deTeller} > \value{oldStopTeller} \setcounter{volgendeStop}{\value{stopTeller}} \fi
\setcounter{oldStopTeller}{\value{stopTeller}}
\addtocounter{stopTeller}{4}
\ifnum\value{deTeller} > \value{oldStopTeller} \setcounter{volgendeStop}{\value{stopTeller}} \fi
\setcounter{oldStopTeller}{\value{stopTeller}}
\addtocounter{stopTeller}{3}
\ifnum\value{deTeller} > \value{oldStopTeller} \setcounter{volgendeStop}{\value{stopTeller}} \fi
\setcounter{oldStopTeller}{\value{stopTeller}}
\addtocounter{stopTeller}{4}
\ifnum\value{deTeller} > \value{oldStopTeller} \setcounter{volgendeStop}{\value{stopTeller}} \fi
\setcounter{oldStopTeller}{\value{stopTeller}}
\addtocounter{stopTeller}{6}
\ifnum\value{deTeller} > \value{oldStopTeller} \setcounter{volgendeStop}{\value{stopTeller}} \fi
\setcounter{oldStopTeller}{\value{stopTeller}}
\addtocounter{stopTeller}{2}
\ifnum\value{deTeller} > \value{oldStopTeller} \setcounter{volgendeStop}{\value{stopTeller}} \fi
\setcounter{oldStopTeller}{\value{stopTeller}}
\addtocounter{stopTeller}{4}
\ifnum\value{deTeller} > \value{oldStopTeller} \setcounter{volgendeStop}{\value{stopTeller}} \fi
\setcounter{oldStopTeller}{\value{stopTeller}}
\addtocounter{stopTeller}{7}
\ifnum\value{deTeller} > \value{oldStopTeller} \setcounter{volgendeStop}{\value{stopTeller}} \fi
\setcounter{oldStopTeller}{\value{stopTeller}}
\addtocounter{stopTeller}{3}
\ifnum\value{deTeller} > \value{oldStopTeller} \setcounter{volgendeStop}{\value{stopTeller}} \fi
\setcounter{oldStopTeller}{\value{stopTeller}}
\addtocounter{stopTeller}{4}
\ifnum\value{deTeller} > \value{oldStopTeller} \setcounter{volgendeStop}{\value{stopTeller}} \fi
\setcounter{oldStopTeller}{\value{stopTeller}}
\addtocounter{stopTeller}{5}
\ifnum\value{deTeller} > \value{oldStopTeller} \setcounter{volgendeStop}{\value{stopTeller}} \fi
\setcounter{oldStopTeller}{\value{stopTeller}}
\addtocounter{stopTeller}{4}
\ifnum\value{deTeller} > \value{oldStopTeller} \setcounter{volgendeStop}{\value{stopTeller}} \fi
\setcounter{oldStopTeller}{\value{stopTeller}}
\addtocounter{stopTeller}{3}
\ifnum\value{deTeller} > \value{oldStopTeller} \setcounter{volgendeStop}{\value{stopTeller}} \fi
\setcounter{oldStopTeller}{\value{stopTeller}}
\addtocounter{stopTeller}{3}
\ifnum\value{deTeller} > \value{oldStopTeller} \setcounter{volgendeStop}{\value{stopTeller}} \fi
\setcounter{oldStopTeller}{\value{stopTeller}}
\addtocounter{stopTeller}{5}
\ifnum\value{deTeller} > \value{oldStopTeller} \setcounter{volgendeStop}{\value{stopTeller}} \fi
\setcounter{oldStopTeller}{\value{stopTeller}}
\addtocounter{stopTeller}{3}
\ifnum\value{deTeller} > \value{oldStopTeller} \setcounter{volgendeStop}{\value{stopTeller}} \fi
\setcounter{oldStopTeller}{\value{stopTeller}}
\addtocounter{stopTeller}{4}
\ifnum\value{deTeller} > \value{oldStopTeller} \setcounter{volgendeStop}{\value{stopTeller}} \fi
\setcounter{oldStopTeller}{\value{stopTeller}}
\addtocounter{stopTeller}{4}
\ifnum\value{deTeller} > \value{oldStopTeller} \setcounter{volgendeStop}{\value{stopTeller}} \fi
\setcounter{oldStopTeller}{\value{stopTeller}}
\addtocounter{stopTeller}{4}
\ifnum\value{deTeller} > \value{oldStopTeller} \setcounter{volgendeStop}{\value{stopTeller}} \fi
\setcounter{loopPlusEen}{\value{loop}}
\setcounter{loopMinEen}{\value{loop}}
\addtocounter{loopPlusEen}{1}
\addtocounter{loopMinEen}{-1}
\begin{table}[ht!]
\footnotesize
\begin{tabular}{|p{.1cm}|p{.1cm}|ll|ll|}
\hline
\multirow{15}{*}{\rotatebox[origin=c]{90}{\returnData(\value{deTeller},4)}} &
\multirow{15}{*}{\rotatebox[origin=c]{90}{\returnData(\value{deTeller},3)}} &
\forloop{loop}{\value{volgendeStart}}{\value{loop}<\value{volgendeStop}}{
\textbf{Practice Code}: & \returnData(\value{deTeller},1) & \textbf{Practice Name}: & \returnData(\value{deTeller},2) \\\cline{3-6}
&&\multicolumn{4}{p{15.5cm}|}{\textbf{\textit{Description: }}\returnData(\value{deTeller},5)}\\\cline{3-6}
&&\multicolumn{4}{p{15.5cm}|}{\textbf{\textit{Implemented when:}} \newline \returnData(\value{deTeller},6)}\\\cline{3-6}
&&\multicolumn{4}{p{15.5cm}|}{Literature: \returnData(\value{deTeller},7)}\\\cline{3-6}
&&\multicolumn{4}{|p{15.5cm}}{}\\\cline{3-6}
&&
\addtocounter{deTeller}{1}
}
\setcounter{volgendeStart}{\value{deTeller}}
\textbf{Practice Code}: & \returnData(\value{deTeller},1) & \textbf{Practice Name}: & \returnData(\value{deTeller},2) \\\cline{3-6}
&&\multicolumn{4}{p{15.5cm}|}{\textbf{\textit{Description: }}\returnData(\value{deTeller},5)}\\\cline{3-6}
&&\multicolumn{4}{p{15.5cm}|}{\textbf{\textit{Implemented when:}} \newline \returnData(\value{deTeller},6)}\\\cline{3-6}
&&\multicolumn{4}{p{15.5cm}|}{Literature: \returnData(\value{deTeller},7)}\\\hline
\end{tabular}
\end{table}
\addtocounter{deTeller}{1}
}
\newpage
\section{Version 0.1}
\label{sec:version01}
This version was populated using the primary source~\cite{de2017api}.
It consisted of four focus areas.
Further details are omitted because of the intermediate state of the model.
\begin{table}[h]
\centering
\begin{tabular}{l|c}
Focus Area & Number of capabilities\\
\hline
\textbf{Developer Enablement} & 4 \\
\textbf{Security and Communication} & 5 \\
\textbf{Lifecycle} & 2 \\
\textbf{Auditing and Analysis} & 3 \\
\end{tabular}
\caption{API-m-FAMM version 0.1}
\label{tab:version01}
\end{table}
\section{Version 0.2}
\label{sec:version02}
This version was populated using the SLR~\cite{mathijssen2020identification}.
The relocation of practices and capabilities was primarily driven by the decision to split the \textit{security and communication} focus area up into two separate focus areas: \textit{security} and \textit{communication}.
This decision was made because security was found to be a substantial and integral topic of API management in itself.
Moreover, it was decided that the communication focus area, which was later renamed to \textit{performance}, comprises capabilities such as \textit{service routing} that are unrelated to security.
Furthermore, the decision was made to split the \textit{auditing and analytics} focus area up into technical management, which was later renamed to \textit{monitoring}, and business-side, which was later renamed to \textit{commercial}.
This was done due to the difference in nature between capabilities such as \textit{monetization} and \textit{analytics}, which were originally grouped together.
This difference was further compounded by the decision to split the traffic management capability into two separate capabilities, with one capturing the business-level aspect of this capability and the other encompassing operational aspects.
The former capability was then moved to the new commercial focus area along with the monetization capability, while the latter was moved to the performance focus area.
\begin{table}[h]
\centering
\begin{tabular}{l|c}
Focus Area & Number of capabilities\\
\hline
\textbf{Community Engagement} & 4 \\
\textbf{Security} & 2 \\
\textbf{Communication} & 2 \\
\textbf{Lifecycle} & 5 \\
\textbf{Technical Management} & 4 \\
\textbf{Business Side} & 3 \\
\end{tabular}
\caption{API-m-FAMM version 0.2}
\label{tab:version02}
\end{table}
\section{Version 0.3}
\label{sec:version03}
More information was needed to determine whether practices and capabilities were suited to be included in the model with regards to their scope and relevance.
In order to resolve this, the collection of practices and capabilities was verified by using information gathered from grey literature such as online blog posts, websites, commercial API management platform documentation and third-party tooling.
Doing so resulted in the following changes made with regards to the contents of the API-m-FAMM:
\begin{itemize}
\item \textit{Removal} of several practices that were found to be irrelevant, redundant, or too granular. For example, \textit{filtering spam calls}, which was originally uncovered as part of the SLR, was found to be redundant as this practice is already covered by practices such as \textit{DoS protection} and \textit{rate limiting}. Consequently, such practices were removed.
\item \textit{Addition} of several practices that were newly identified. For example, \textit{predictive analytics} was found to be a practice that is offered by multiple commercial API management platform providers. Similarly, \textit{including change logs} was found to be a practice that is recommended by practitioners as a best practice when updating APIs. Consequently, such practices were added to the API-m-FAMM.
\item \textit{Merging} of several practices that were found to be irrelevant, redundant, or too granular. For example, practices that were originally uncovered through the SLR, such as \textit{email-based support}, \textit{phone-based support}, and \textit{form-based support} were found to be redundant, as no significant difference with regards to their maturity may be discerned among these practices. Consequently, these practices were merged into one practice: \textit{establish communication channel}.
\item \textit{Splitting} of practices that were found to be compound, and whose constituent parts were thought to warrant separate, individual practices. For example, the \textit{black or whitelist IP addresses} practice was split up into the \textit{blacklist IP addresses} and \textit{whitelist IP addresses} practices because these were found to be relevant practices on their own.
\item \textit{Relocation} of practices to different capabilities than those they were originally assigned to. For example, the \textit{OAuth 2.0 authorization} practice was moved from the \textit{authentication} capability to the newly introduced \textit{authorization} capability, as OAuth is considered to be an authorization protocol.
\item \textit{Renaming} of several practices, as well as updating or newly formulating practice descriptions that were previously missing or incomplete. For example, the \textit{provide code samples} practice was renamed to \textit{provide FAQ with code samples} because it was found that these two practices often go hand in hand. Additionally, this practice's description was updated.
\item \textit{Identification} of dependencies among practices, either among practices within the same capabilities or among practices across different capabilities or focus areas. Some dependencies were found to be relatively straightforward, such as the \textit{multiple API versioning strategy} practice depending on the implementation of the \textit{maintain multiple APIs} practice. However, dependencies between practices belonging to different capabilities such as \textit{quota management} depending on \textit{rate limiting} or \textit{rate throttling} were also identified.
\item \textit{Arrangement} of practices based on their interrelated maturity with regards to the other practices in the capability they are assigned to. At this point in time, this was performed on a mostly subjective and empirical basis, and thus should be regarded as a first attempt to discern practices with regards to their relative maturity.
\item \textit{Formulation} of implementation conditions corresponding to each practice, which are aimed at providing practitioners with an overview of the necessary conditions that must be met before a practice may be marked as implemented.
\end{itemize}
The number of practices and capabilities that were added, removed, merged, split, relocated, or renamed as a result of the supplemental material validation process and the aforementioned discussion session is shown in Table~\ref{tab:ResultsSupplemental} below.
However, it should be noted that some practices that were added as a result of the online verification process were later removed as a result of the discussion session.
As such, numbers corresponding to the \textit{added} and \textit{removed} operations presented in Table~\ref{tab:ResultsSupplemental} are slightly inflated.
\begin{table}[h]
\centering
\begin{tabular}{l|c|c|c|c|c|c}
\textbf{Component} & \textbf{Added} & \textbf{Removed} & \textbf{Merged} & \textbf{Split} & \textbf{Relocated} & \textbf{Renamed}\\
\hline
Practice & 17 & 27 & 39 & 4 & 12 & 93 \\
Capability & 1 & 1 & 1 & 0 & 1 & 2 \\
\end{tabular}
\caption{Number of practices and capabilities added, removed, merged, split, relocated or renamed as a result of the supplemental material validation process and the discussion session.}
\label{tab:ResultsSupplemental}
\end{table}
At this stage of the design process, the model is grounded in literature, and is verified and supplemented by using grey literature.
As a result of these activities, the initial body of 114 practices and 39 capabilities that was extracted as a result of the SLR was refined and narrowed down to 87 practices and 23 capabilities, which are divided among six focus areas.
The contents of this version of the API-m-FAMM can be found in \emph{version2} of this published source document on arXiv~\cite{mathijssen2021source}.
The general structure of the API-m-FAMM version 0.3 is presented in Figure~\ref{fig:api-m-famm03}. As shown, each individual practice is assigned to a maturity level within its respective capability. Additionally, it should be noted that practices cannot depend on practices belonging to another capability that have a higher maturity level. For example, practice 1.4.4 is dependent on the implementation of practice 1.2.3, resulting in a higher maturity level being assigned to the former of these practices.
Figure~\ref{fig:api-m-famm03} also shows that at this stage, 17 practices were added in addition to those extracted through the SLR. Furthermore, 14 new practices were introduced as a result of merging 39 former practices, as shown in Table~\ref{tab:ResultsSupplemental}. Moreover, descriptions based on grey literature were formulated for 18 practices for which adequate descriptions could not be identified in academic literature. Lastly, 6 practices are accompanied by descriptions that were formulated by the researchers themselves, based on empirical knowledge. Even though suitable descriptions could not be identified for these practices in academic literature or grey literature, they were included in this version of the API-m-FAMM because they were hypothesized to be relevant for practitioners. Among other things, this hypothesis is tested through expert interviews, which are part of the next phase in constructing the API-m-FAMM.
\begin{figure*}
\centering
\includegraphics[page=1, clip, trim=0cm 0cm 0cm 0cm, width=\textwidth]{Figures/API-m-FAMMv0.3.pdf}
\caption{Version 0.3 of the API-m-FAMM and the focus areas, capabilities, and practices it consists of. Additionally, it is shown which capabilities and practices were newly introduced between API-m-FAMM v0.2 and v0.3, as well as for which practices descriptions were formulated based on supplemental material. Please consult the legend on the top left-hand side of the figure for more information regarding the differently shaped and/or colored components.}
\label{fig:api-m-famm03}
\end{figure*}
\section{Version 0.4}
\label{sec:version04}
Eleven expert interviews were conducted.
During these interviews, many additions and changes in terms of the API-m-FAMM's structure and contents were suggested by experts, who were encouraged to elaborate on their motivation regarding these suggestions.
By transcribing and processing the recordings of all interviews, the numerous suggestions that were made by experts to either add, remove, merge, split, relocate, or rename several focus areas, capabilities, and practices, are compiled.
The number of times these suggested changes occurred is shown in Table~\ref{tab:EvaluationChanges} below, grouped by the type of suggested change as well as the type of component they apply to. Additionally, these changes are visually represented in their entirety in Figure~\ref{fig:api-m-famm04a}, along with the number of experts that suggested a specific change to be made. Evidently, the number of practices that were suggested to be added is relatively high. It should be noted that while a large part of these practices were explicitly mentioned by experts, some were also indirectly extracted from transcripts as a result of comments the experts had made. Additionally, no suggestions are rejected at this point; hence, all suggestions that were made by experts are taken into account and incorporated into Table~\ref{tab:EvaluationChanges} and Figure~\ref{fig:api-m-famm04a}.
\begin{table}[h]
\centering
\begin{tabular}{l|c|c|c|c|c|c}
\textbf{Component} & \textbf{Added} & \textbf{Removed} & \textbf{Merged} & \textbf{Split} & \textbf{Relocated} & \textbf{Renamed}\\
\hline
\textbf{Practice} & 50 & 5 & 3 & 3 & 9 & 3 \\
\textbf{Capability} & 7 & 0 & 0 & 2 & 2 & 2 \\
\textbf{Focus Area} & 1 & 0 & 0 & 0 & 0 & 3\\
\end{tabular}
\caption{Number of practices, capabilities, and focus areas that were suggested to be added, removed, merged, split, relocated or renamed by experts during interviews.}
\label{tab:EvaluationChanges}
\end{table}
\begin{figure*}
\centering
\includegraphics[page=1, clip, trim=7cm 4cm 9cm 0.5cm, width=0.8\textwidth]{Figures/API-m-FAMMv0.4a.pdf}
\caption{API-m-FAMM version 0.3 plus all suggested changes that were made by experts during interviews. Please consult the legend on the left-hand side of the figure for more information regarding the manner in which the colored outlines should be interpreted. Practices and capabilities that were not directly categorized by the expert during interviews are placed in the 'undecided' box on the top-left hand side.}
\label{fig:api-m-famm04a}
\end{figure*}
After having compiled all suggestions made by experts, extensive discussion sessions are held among all authors to analyze, discuss, and interpret them.
All suggested changes to either a focus area itself, or the capabilities or practices it consists of are then analyzed and interpreted through the help of the transcribed arguments that were provided by experts during the interviews.
As a result, numerous modifications are made to the API-m-FAMM, which are visualized in its entirety in Figure \ref{fig:api-m-famm04b}.
Additionally, some fundamental decisions are made with regards to the scope and contents of the API-m-FAMM.
\begin{itemize}
\item Firstly, it was decided that all practices that are contained in the model should be implementable \textit{without} the usage of an API management platform. This decision was made due to several reasons. First of all, it was found that among the organizations at which the consulted experts are employed, only a small portion actively utilizes a third-party platform to manage their API(s). When asked, experts whose organizations have not incorporated an API management platform cited arguments such as wanting to avoid vendor lock-in, high costs, or simply not having a need for many of the functionalities provided by such management platforms. Oftentimes, the latter argument was tied to the organization currently exclusively using internal APIs, thus removing the need for using a management platform to manage and expose any partner or public APIs altogether. Considering that it may reasonably be hypothesized that these arguments also apply to other organizations wishing to consult the API-m-FAMM to evaluate and improve upon their API management-related practices, any practices or capabilities that were found to be directly tied to the usage of an API management platform were removed from the model. For example, this was the case for the \textit{Visual Data Mapping} practice, which is exclusively provided by the \textit{Axway} API management platform\footnote{\url{https://www.axway.com/en/products/api-management}}, as well as the practices corresponding to the newly suggested \textit{Error Handling} capability, which are implementable through the use of the \textit{Apigee} platform\footnote{\url{https://cloud.google.com/apigee/api-management?hl=nl}}.
An additional reason for excluding such capabilities and practices is that they are likely to evolve throughout the coming years, which would in turn require the API-m-FAMM to be updated as well. In order to prevent this, the API-m-FAMM and the practices it comprises should be platform-independent. Lastly, the purpose of the API-m-FAMM is not to guide practitioners in selecting an appropriate commercial API management platform for their organization. Instead, the API-m-FAMM aims to guide organizations in assessing and evaluating their current maturity in terms of those processes that are considered to be best practices and are at the core of API management, so that they may then develop a strategy towards implementing practices that are not currently implemented but are desirable for further maturing the organization in terms of API management.
\item Secondly, many practices were deemed to be too granular, specific, or irrelevant to be included. Consequently, such practices were either removed, or merged into a practice that is composed of these smaller practices. Examples of practices that were found to be too granular include the newly suggested \textit{Event Participation}, \textit{Event Hosting}, and \textit{Organize Hackathons} practices. Additionally, since determining a difference among these practices in terms of their maturity was found to be unfeasible, they were instead merged into the \textit{Organize Events} practice and included in its description.
\item Thirdly, some practices that describe a specific protocol were renamed to be more generic and protocol-agnostic. For example, the former \textit{OAuth 2.0 Authorization} practice was renamed to \textit{Standardized Authorization Protocol}, with a referral to the OAuth 2.0 protocol being included in its description instead. This was done to ensure that the API-m-FAMM remains functional and applicable in the future, since it is likely that new protocols will be developed and adopted across the industry. These concerns also applied to suggested practices corresponding to individual authentication methods such as client certificate and SAML authentication, which were ultimately merged into the \textit{Implement Authentication Protocol} practice and included in its description. An additional reason for doing so in the case of these authentication methods is that they each have their individual strengths and weaknesses, with one not always necessarily being 'better' or more mature than the other. Furthermore, some methods may be more appropriate for some use cases than others.
\item Furthermore, some capabilities and their corresponding practices that were thought to apply to most organizations in general, including those not necessarily involved with API management, were excluded from the model. An example of this is the \textit{Financial Management} capability that was suggested to be added. Considering that practices such as \textit{Automated Billing}, \textit{Third-Party Payment Provider Integration}, and \textit{Revenue Sharing} are best practices that apply to commercially oriented organizations in general, they were removed. This decision was made to ensure that the contents of the API-m-FAMM are exclusively composed of practices that are directly tied to API management.
\item During interviews focused on the \textit{Lifecycle} focus area, experts were asked to elaborate on the manner in which their organization has implemented \textit{Governance}. Based on the answers given however, it became clear that capturing processes related to governance in the form of practices is not feasible. This may largely be attributed to the observation that such processes seem to be inherent to specific characteristics of the organization, such as its culture, size, usage of a third party API management platform, as well as the amount of APIs that are used or exposed by the organization.
Some practices were suggested for addition, such as \textit{Define Naming Conventions}, \textit{Define Best Practices}, and \textit{Define Integration Patterns}. However, after having discussed these with experts in subsequent interviews, it was decided that these practices are too abstract and insufficiently concrete in comparison with other practices, considering that they may be interpreted in different ways by practitioners due to the varying organizational characteristics mentioned earlier. Hence, the \textit{Governance} capability that was originally part of the \textit{Lifecycle} focus area was removed, along with the \textit{Design-time Governance} and \textit{Run-time Governance} practices it was composed of.
\item A valuable suggestion that was made by experts is the addition of monitoring in terms of the amount of resources that calls to the API consume, such as CPU, disk, memory, and network usage. Considering that this monitoring perspective was previously missing alongside performance and health monitoring, as well as it being suggested by multiple experts independently from one another, the \textit{Resource Monitoring} practice was newly added. Similarly, this resource perspective was also found to be missing among the \textit{Traffic Management} capability, alongside the \textit{Request Limiting} and \textit{Request Throttling} practices. Hence, the \textit{Data Volume Limiting} practice was newly added.
\item Another fundamental change that was made to the API-m-FAMM is the renaming of the former \textit{Monitoring} focus area to \textit{Observability}. This renaming was independently suggested by two experts, who argued that observability better describes the focus area, considering that the \textit{Analytics} capability was split into two capabilities: \textit{Monitoring} and \textit{Analytics}. This decision was made because experts were of the opinion that monitoring is concerned with gathering (real-time) metrics related to the API's health, performance, and resource usage, while analytics is concerned with aggregating these metrics so that insights may be formed and subsequent action may be taken based on them. As a result, the monitoring capability was added, and practices related to either monitoring or analytics were moved to the capabilities they are associated with.
\item Moreover, some practices that were originally posed from a passive perspective, were changed with the intention of being conducted in an active manner. For example, the \textit{Include Changelogs} practice was renamed to \textit{Distribute Changelogs}, and its description was changed so that its focus is changed from passive inclusion of changelogs in the reference documentation, to active distribution of changelogs to consumers of the API. Similarly, the \textit{Provide API Status Page} was renamed to \textit{Broadcast API Status}, as well as its description being changed to signify the operational status of the API being broadcasted to consumers in an active manner, as opposed to providing an API status page in a passive fashion. These changes were made due to the fact that when phrased in a passive manner, these practices were deemed to be too irrelevant to be included in the API-m-FAMM, considering that the level of maturity required to implement these practices is too low when compared to other practices. When phrased from an active perspective however, these practices can be considered to be best practices that an organization should strive to implement.
\item Finally, a major fundamental change was made with regards to the \textit{Lifecycle Control} capability. While practices belonging to this capability such as \textit{API Endpoint Creation}, \textit{API Publication}, and \textit{Import Pre-existing API} are considered to be an integral aspect of API management in both literature as well as the industry, the decision was made to exclude these practices from the API-m-FAMM. This choice was made due to the fact that being able to design, create, publish, and deploy an API is a precondition for implementing all other practices the model consists of. Moreover, during interviews it became clear that it was difficult for experts to rank these practices in terms of their maturity, considering that they are often performed in chronological order.
\end{itemize}
\begin{figure*}[!h]
\centering
\includegraphics[page=1, clip, trim=7cm 4cm 2cm 0.5cm, width=0.8\textwidth]{Figures/API-m-FAMMv0.4b.pdf}
\caption{API-m-FAMM v0.4, including all suggested changes that were made by experts during interviews, as well as the manner in which they were subsequently interpreted and applied by the researchers. Please consult the legend on the top left-hand side of the figure for more information regarding the manner in which the colored outlines and fills should be interpreted.}
\label{fig:api-m-famm04b}
\end{figure*}
Next, the practices are assigned to individual maturity levels.
This is done by using the results of the maturity ranking exercises during the interviews.
First however, all dependencies between practices are identified, which are depicted in Figure \ref{API-m-FAMM Dependencies}.
In this context, a dependency entails that the practices on which the practice in question depends must be implemented before that practice itself may be implemented.
These dependencies may occur: (1) between practices within the same capability; (2) between practices that are assigned to different capabilities within the same focus area; or (3) between practices that are assigned to different capabilities and focus areas.
In total, 34 dependencies are identified by analyzing literature stemming from the SLR and online supplemental material, as well as input received through expert interviews and the discussion sessions that were conducted among the researchers. The number of dependencies identified is shown for each focus area in Table~\ref{tab:DependenciesTable}, as well as for each of the three dependency types mentioned.
\begin{table}[h]
\centering
\begin{tabular}{l c c c r}
\hline
\textbf{Focus Area} & \textbf{Within Capability} & \textbf{Within Focus Area} & \textbf{Between Focus Areas} & \textbf{Total} \\
\hline
Community & 3 & 0 & 0 & 3 \\
Security & 2 & 0 & 0 & 2 \\
Lifecycle Management & 3 & 1 & 2 & 6 \\
Observability & 0 & 6 & 0 & 6 \\
Performance & 4 & 0 & 2 & 6 \\
Commercial & 2 & 1 & 8 & 11 \\
\hline
\textbf{Total} & 14 & 8 & 12 & 34
\end{tabular}
\caption{The number of identified dependencies per focus area and per dependency type.}
\label{tab:DependenciesTable}
\end{table}
As an example of a dependency between practices within the same capability, implementation of the \textit{Implement Load Balancing} practice is required before the \textit{Implement Scaling} practice may be implemented.
An example of a dependency between practices that are assigned to different capabilities within the same focus area is the dependency between \textit{Enable Predictive Analytics} and \textit{Performance Monitoring}. The former practice belongs to the \textit{Analytics} capability, while the latter practice belongs to the \textit{Monitoring} capability, but both capabilities are contained within the \textit{Observability} focus area. An example of a dependency between practices that are assigned to different capabilities and focus areas may be observed in the case of the dependency between the \textit{Adopt Metering-based Monetization Model} and \textit{Resource Monitoring} practices. The former practice is assigned to the \textit{Monetization Strategies} capability within the \textit{Commercial} focus area, while the latter practice is assigned to the \textit{Monitoring} capability within the \textit{Observability} focus area.
\begin{figure*}[!h]
\centering
\includegraphics[page=1, clip, trim=1cm 3cm 8cm 1cm, width=0.7\textwidth]{Figures/API-m-FAMMv0.4dependencies.pdf}
\caption{The API-m-FAMM v0.4 after all changes had been applied, showing all dependencies that were identified between practices. In order to improve legibility, practices are not ranked in terms of their maturity in this figure.}
\label{API-m-FAMM Dependencies}
\end{figure*}
After having identified all dependencies between practices, all 34 practices that have one or more dependencies are juxtaposed in a matrix.
This is done by adhering to the constraint that practices cannot depend on practices that have a higher maturity level.
As a result, the foundation of the API-m-FAMM is formed, with practices ranging from maturity levels 1 to 10.
Using this structure as a base, all other practices are subsequently assigned to individual maturity levels within their respective capabilities.
These assignments are performed by using the results of the maturity ranking exercises that were performed by experts as one of the main sources of input.
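Although the assignments themselves were made through discussion, the dependency constraint admits a simple computational reading: each practice must sit strictly above the practices it depends on. The following Python sketch is purely illustrative (it is not the procedure used in this study) and operates on a hypothetical subset of the identified dependencies:
\begin{verbatim}
# Illustrative sketch only; maps "practice -> practices it depends on",
# using a hypothetical subset of the dependencies named in the text.
deps = {
    "Implement Scaling": ["Implement Load Balancing"],
    "Enable Predictive Analytics": ["Performance Monitoring"],
    "Adopt Metering-based Monetization Model":
        ["Monitor Resource Usage"],
}

cache = {}
def lowest_level(practice):
    # A practice must sit strictly above all practices it depends on.
    if practice not in cache:
        cache[practice] = 1 + max(
            (lowest_level(p) for p in deps.get(practice, [])),
            default=0)
    return cache[practice]

for p in deps:
    print(p, "-> maturity level >=", lowest_level(p))
\end{verbatim}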
By again using the \textit{Logging} capability as an example, the interpretation of such a maturity ranking exercise is visualized in Figure \ref{Maturity_Ranking_Interpretation}.
In this figure, it can be seen that the \textit{Activity Logging}, \textit{Access Logging}, and \textit{User Auditing} practices were ranked by 3 experts in terms of their perceived maturity.
An additional practice, \textit{Application Logging}, was suggested for addition.
However, this practice was removed because the decision was made to exclude applications in terms of abstraction from the API-m-FAMM, which is why it is outlined in red.
Additionally, the decision was made to include and move the \textit{Error Logging} practice to the \textit{Logging} capability.
Hence, this practice is outlined in green, and is included in this ranking exercise by incorporating this practice in the figure, along with the capability it was originally categorized with by the expert.
Furthermore, the \textit{Error Reporting} practice was moved to the \textit{Analytics} capability (as can be seen in Figure~\ref{fig:api-m-famm04b}), which is why it is outlined in purple and excluded from this maturity ranking exercise.
Lastly, the remaining 3 practices that were suggested to be added are excluded, along with the \textit{Error Handling} capability as a whole, which is denoted by the red outlines.
\begin{figure}[h]
\centering
\includegraphics[page=1, clip, trim=1cm 0cm 1cm 0cm, width=0.7\textwidth]{Figures/API-m-FAMMv0.4maturityranking.pdf}
\caption{Conceptual overview representing a rough approximation of the way in which the expert's maturity rankings were interpreted and used as a starting point for performing the maturity level assignments.}
\label{Maturity_Ranking_Interpretation}
\end{figure}
Arrows are included that range from the lowest to the highest maturity ranking that each practice received. Dotted lines are attached to each practice, which are then connected to these arrows with a small circle in order to highlight and compare the maturity assignments of each expert with one another. Subsequently, dashed lines are used to indicate a rough estimate of the average of these assignments, which are then mapped on the maturity levels.
However, it should be noted that Figure~\ref{Maturity_Ranking_Interpretation} was made for illustrative purposes, in order to provide the reader with a conceptual idea of the manner in which the maturity assignments were performed.
In practice, the maturity assignment of practices was done in a pragmatic manner, through discussion sessions among the researchers during which the expert's varying maturity rankings and their accompanying motivation and arguments were discussed and interpreted. Based on the outcome of these discussions, decisions were then made to assign practices to individual maturity levels, while taking the experts' opinions and maturity rankings into account.
Finally, all practices are renamed to fit a uniform syntactic structure, which starts with a verb, followed by one or more nouns.
For example, \textit{User Auditing} is renamed to \textit{Audit Users}, and \textit{Resource Monitoring} is renamed to \textit{Monitor Resource Usage}.
Furthermore, descriptions of the practices that are included in the API-m-FAMM after all changes had been applied are updated.
When possible, this is done using information and input that was provided by experts during interviews.
Ultimately, these activities produced a second, updated version of the API-m-FAMM, which is shown in Figure \ref{API-m-FAMM_2.4} and consists of 6 focus areas, 20 capabilities, and 81 practices.
These descriptions are available through \emph{version3} of this published source document on arXiv~\cite{mathijssen2021source}.
\begin{figure*}[!h]
\centering
\includegraphics[page=1, clip, trim=0.5cm 0.5cm 0.5cm 0.5cm, width=\textwidth]{Figures/API-m-FAMMv0.4.pdf}
\caption{API-m-FAMM v0.4, which includes the assignment of all practices to their respective maturity levels, which range from level 1 to level 10.}
\label{API-m-FAMM_2.4}
\end{figure*}
\section{Version 0.5}
\label{sec:version05}
After having updated the API-m-FAMM to incorporate all findings from the interviews, a second evaluation cycle was conducted.
This is done as a means for evaluating and verifying whether experts agree with the fundamental decisions that were made, as well as gathering feedback on the way suggestions made by experts were interpreted and the maturity levels that practices had been assigned to.
This second evaluation cycle consists of unstructured interviews with three experts originating from the same sample of experts that were interviewed during the first evaluation cycle.
During these interviews, the changes made as a result of the previous evaluation cycle, as well as the newly introduced maturity assignments are presented and discussed.
Since all experts agreed with the fundamental decisions that were made, no further major adjustments were made to the API-m-FAMM as a result of this evaluation cycle.
\section{Version 1.0}
\label{sec:version10}
The final phase of the API-m-FAMM construction, the \emph{Deploy} phase, was executed through case studies.
These case studies were conducted by evaluating six software products.
Some additional changes were made to practices as a result of the discussion sessions held with practitioners after the evaluation.
One practice was removed altogether, and the descriptions of six practices were modified. Specifically, the following changes were made:
\begin{itemize}
\item \textbf{Perform Request Rate Limiting}: this practice was extended to also comprise error limiting. In the case of AFAS Profit, this is implemented by placing consumers on a temporary denylist when they perform an excessive number of faulty calls within a predefined time span.
\item \textbf{Prevent Sensitive Data Exposure}: this practice was removed. During discussions, this practice caused confusion due to the observation that this practice is already captured by the \textit{Implement Transport Layer Encryption} and \textit{Decouple Internal \& External Data Model} practices. Additionally, after further investigation this practice was deemed to be out of scope, considering that the scope of this practice involves app data storage in general, as opposed to API management.
\item \textbf{Implement Predictive Scaling}: the description of this practice was modified. Originally, the description mentioned that this practice may be implemented 'manually or automatically', which caused confusion due to the fact that these methods are already captured in the \textit{Implement Scaling} practice. Because predictive scaling is envisioned by practitioners and the researchers to be done automatically, the manual element was removed from the description.
\item \textbf{Monitor Resource Usage}: the description of this practice was expanded. During discussions, it became clear that monitoring resources does not always specifically involve metrics such as CPU and disk usage. Instead, rough approximations may be used to determine resource usage, which is why the description was expanded to clarify this.
\end{itemize}
In addition to these changes, a small number of changes were made as a result of practitioners identifying errors such as typos.
The final version of the model can be seen in Figure~\ref{fig:api-m-famm}.
\clearpage
\bibliographystyledata{elsarticle-num}
\bibliographydata{apimanagement}
\clearpage
\bibliographystyle{elsarticle-num}
The prospect of achieving non-reciprocity in elastic systems is becoming increasingly appealing to the physics and engineering communities~\cite{nassar2020}. This is motivated by the potential exploitation of this effect to realize mechanical diodes and other uni-directional devices~\cite{Boechler2011, Maznev2013, Sklan2015, Devaux2015, Zhou2019, Brandenbourger2019}. In non-reciprocal systems, wave-like excitations propagate with markedly different amplitudes in one direction and the opposite. One way to achieve this effect is by modulating the properties of the system in space and time~\cite{Lurie97}. The dynamic behavior of mechanical systems with time-varying parameters has attracted the interest of the scientific community for more than a century~\cite{Rayleigh87,Raman}. However, the simultaneous variation of the elastic or inertial properties of a medium in both time and space has not received much attention in the mechanics community, partly due to the infeasibility of the required experiments.
Only recent advances in smart structures~\cite{Airoldi2011, Hatanaka2014, Bilal2017}, together with fundamental studies on spatio-temporally modulated periodic media~\cite{Lurie97,Swinteck2015, Trainiti2016,Nassar2017jmps, Nassar2017prsa}, have allowed the realization of such systems in the context of periodic materials.
The phenomenon of time modulation-based non-reciprocity can be effectively explained with a one-dimensional example. Consider a 1D phononic crystal generated by periodically arranging an array of unit cells. Assume that the properties of each cell (stiffness and/or mass) can be independently varied in time. If we coordinate this variation in neighboring units to generate a wave-like pattern of properties that varies in space and time, we create a pump or modulating wave. Under specific frequency and wavelength constraints, mechanical waves that propagate in this system can interact with the modulating wave. In turn, this can lead to the appearance of asymmetric Bragg scattering bandgaps located at different frequency ranges for waves propagating from left to right and from right to left, and to non-reciprocal propagation~\cite{Swinteck2015, Trainiti2016, Deymier2017, Yi2018}. In physical terms, this spatio-temporal modulation breaks time-reversal symmetry. Similar considerations apply to locally-resonant metamaterials featuring an elastic wave-carrying medium equipped with a set of auxiliary resonators~\cite{Liu2000}. In this case, a wave-like modulation of the properties of the resonators causes the appearance of additional asymmetric features within the dispersion relation, such as bandgaps and veering points~\cite{Nassar2017prsa, Nassar2017eml, Attarzadeh2018, Chen2019, Huang2019}. Exciting a modulated metamaterial at specific frequencies leads to phenomena such as non-reciprocal wave filtering and frequency conversion of transmitted/reflected waves~\cite{Nassar2017eml}.
So far, investigations on elastic wave non-reciprocity via time-modulated resonators have been limited to axial and flexural waves in either discrete phononic systems~\cite{Wang2018} or beam-like metamaterials~\cite{Chen2019, Attarzadeh2020, Marconi2020}. However, it is of interest to extend this concept to elastic waves propagating near the surface of a semi-infinite medium, also known as surface acoustic waves (SAW). In this context, metamaterials can be realized by arrays of resonators located on the free surface, and are therefore known as \emph{elastic metasurfaces}~\cite{Colquitt2017}. To the best of our knowledge, surface wave non-reciprocity has been so far demonstrated only in semi-infinite structured media with a gyroscopic architecture~\cite{Zhao2020}. Achieving surface wave non-reciprocity on elastic half-spaces via metasurfaces could lead to the realization of novel SAW devices for high-frequency applications where phononic systems have already shown their promise, from acoustifluidics and particle manipulation~\cite{Guo2015, Collins2016} to mechanical signal processing~\cite{Hatanaka2014, Cha2018Nano}.
In this work, we study how surface waves of the Rayleigh type interact with spatio-temporally modulated metasurfaces, as illustrated in the schematic in Fig.~\ref{f:met}.
We use a combination of analytical tools and numerical simulations to investigate the effects of temporal stiffness modulations on an isolated resonator, and to identify ranges of modulation parameters where a small-modulation approximation is valid. We leverage this understanding to derive analytical solutions for the dispersion relation of Rayleigh surface waves interacting with a spatio-temporally modulated metasurface. In particular, we describe the interaction between the incident and scattered fields generated by the modulated resonators and predict the appearance of directional wave responses.
Additionally, by means of a first-order asymptotic analysis, we estimate how the modulation parameters affect the extent of the non-reciprocal wave features.
We confirm our analytical findings via numerical simulations, and demonstrate non-reciprocal wave effects such as one-way filtering and frequency conversion for transmitted and reflected signals. While our work is entirely theoretical, we envision that our analysis could guide the experimental realization of modulated metasurfaces, featuring, for example, electromechanical~\cite{Alan2019, Marconi2020} or tunable contact resonators~\cite{Palermo2019}.
\begin{figure}[!htb]
\centering
\includegraphics[scale=1.0]{Fig_Metasurface.pdf}
\caption{Schematic of a time-modulated metasurface, depicting the non-reciprocal propagation of surface waves. A sinusoidal space-time evolution of the stiffness function of the resonators, $K(x,t)$, is illustrated. The inset is a close-up on one of the $N$ identical resonators placed on the free surface of the semi-infinite elastic medium.}
\label{f:met}
\end{figure}
The rest of the article is organized as follows. In Section~\ref{s:sdof}, we analyze the free response, stability and response to base excitation of a single time-modulated resonator. In Section~\ref{s:saw}, we study the interaction of Rayleigh waves with arrays of modulated surface resonators and obtain the dispersion curves. In Section~\ref{s:nr}, we use numerical analyses to further study the effects of spatio-temporal modulations on non-reciprocal propagation of surface waves. The conclusions and outlook of our work are reported in Section~\ref{s:concl}.
\section{Dynamics of a modulated resonator}
\label{s:sdof}
We begin by focusing on the dynamics of a single resonator. Two scenarios are captured by this analysis: a fixed, rigid substrate (Section~\ref{s:sfree}) and an oscillating, rigid substrate (Section~\ref{s:base}). These analyses allow us to better understand the interaction between the surface waves and an array of modulated resonators. By comparing analytical predictions and numerical simulations on a single resonator, we gain an understanding on the effects of stiffness modulations, we evaluate the quality of our analytical predictions, and we explore the stability of these modulated systems. This information allows us to set bounds on the choice of modulation parameters to be used for the surface wave-metasurface analysis.
\subsection{Free vibrations}
\label{s:sfree}
We first consider a single, clamped resonator with mass $m$, damping coefficient $c$ and time-varying stiffness $K(t)$ (see the inset in Fig.~\ref{f:met}). We assume $K(t)$ to be:
\begin{equation}
K(t)=K_0+2dK \cos{\left( \omega_m t \right)},
\label{e:kdef}
\end{equation}
where $K_0$ is the average stiffness, $2dK$ is the modulation amplitude and $\omega_m$ is the modulation frequency. Note that the modulation can have the form of any periodic function~\cite{Trainiti2016,Nassar2017eml}; we choose a sinusoidal one for simplicity. For future reference, we define $\omega_r=\sqrt{K_0/m}$ and choose a small damping ratio $\xi=c/(2 m\omega_r)=0.001$. Ignoring the motion of the substrate, the equation governing the displacement $V(t)$ reads:
\begin{equation}
m\frac{d^2V}{dt^2}+c\frac{dV}{dt}+K(t)V=0.
\label{e:eom}
\end{equation}
This is equivalent to assuming that the substrate is fixed and rigid. As commonly done in the literature~\cite{Vila2017, Nassar2017eml}, we assume that the restoring force exerted by the time-modulated spring is obtained by multiplying stiffness and displacement at the same time instant. Since the stiffness in Eq.~\ref{e:eom} is time-periodic, we re-write it in complex Fourier series form:
\begin{equation}
K(t)=\sum_{p=-\infty}^{\infty}\hat{K}_p\,e^{i p \omega_m t},
\label{e:k}
\end{equation}
with Fourier coefficients defined as:
\begin{equation}
\hat{K}_p=\frac{\omega_m}{2\pi}\int_{-\frac{\pi}{\omega_m}}^{\frac{\pi}{\omega_m}} K(t)\,e^{-ip\omega_mt} dt.
\label{e:kh}
\end{equation}
For the specific choice of $K(t)$ in Eq.~\ref{e:kdef}, we are effectively truncating the sum such that $|p| \le P=1$ and the only Fourier coefficients we obtain are $\hat{K}_0=K_0$, $\hat{K}_{+1}=\hat{K}_{-1}=dK$. From now on, we adopt the truncated notation for $p$. We also assume a harmonic solution with time-modulated amplitude and expand it in Fourier series, obtaining:
\begin{equation}
V(t)=\left(\sum_{n=-\infty}^{\infty}\hat{V}_n\,e^{i n \omega_m t}\right)e^{i \omega t},
\label{e:V}
\end{equation}
with $\omega$ being an unknown frequency at this stage, and with $\hat{V}_n$ being the Fourier coefficients of the wave amplitude.
Differentiating $V(t)$, plugging it into Eq.~\ref{e:eom} together with $K(t)$ and simplifying $e^{i\omega t}$, yields:
\begin{equation}
\sum_{n=-\infty}^{\infty} \left[-m\left( \omega +n\omega_m \right)^2+ic\left( \omega +n\omega_m \right)\right]\hat{V}_n\,e^{in\omega_mt}+
\sum_{n=-\infty}^{\infty}\sum_{p=-P}^{P}\hat{K}_p\hat{V}_n\,e^{i (n+p) \omega_m t}=0.
\end{equation}
To simplify this expression, we pre-multiply it by $e^{ih\omega_m t}\omega_m/(2\pi)$, where $h$ is an arbitrary integer, and we integrate the result over the modulation period, from $-\pi/\omega_m$ to $\pi/\omega_m$. This averaging procedure is a standard method to study the dynamics of systems with time-varying properties, and has been adopted by others in the context of modulated media~\cite{Trainiti2016, Vila2017, Attarzadeh2018}.
Leveraging the orthogonality of harmonic functions, we drop the summation in $n$ and obtain the following equation, valid for all values of $h$:
\begin{equation}
\left[-m\left( \omega +h\omega_m \right)^2+ic\left( \omega +h\omega_m \right)\right]\hat{V}_h+\!\!\sum_{p=-P}^{P}\hat{K}_p\hat{V}_{h-p}=0.
\label{e:eig}
\end{equation}
This system of equations needs to be solved for all integer values of $h$ to obtain an exact solution. Here, we intend to verify the validity of a truncated expansion of the solution by setting $|h| \le H=1$. Under this assumption, and recalling that $P=1$ for our choice of stiffness modulation function, Eq.~\ref{e:eig} reduces to the system of three equations:
\begin{equation}
\left(\begin{bmatrix}
\hat{K}_0 & \hat{K}_{-1} & 0\\
\hat{K}_{+1} & \hat{K}_0 & \hat{K}_{-1}\\
0 & \hat{K}_{+1} & \hat{K}_0
\end{bmatrix}-m\begin{bmatrix}
\left( \omega -\omega_m \right)^2 & 0 & 0\\
0 & \omega^2 & 0\\
0 & 0 & \left( \omega +\omega_m \right)^2
\end{bmatrix}\right.
+
\left.ic\begin{bmatrix}
\omega -\omega_m & 0 & 0\\
0 & \omega & 0\\
0 & 0 & \omega +\omega_m
\end{bmatrix}\right)
\left[\begin{matrix}
\hat{V}_{-1}\\
\hat{V}_{0}\\
\hat{V}_{+1}
\end{matrix} \right]
=
\left[ \begin{matrix}
0\\
0\\
0
\end{matrix} \right],
\label{e:eig3}
\end{equation}
which can be written in compact form as $\mathbf{D}(\omega)\,\mathbf{\hat{V}}=\mathbf{0}$. The approximate resonance frequencies of damped vibrations are the local minima of the determinant $|\mathbf{D}(\omega)|$, as shown in Fig.~\ref{f:free}(a) for parameters $dK/K_0=0.1$, $\omega_m/\omega_r=0.25$ and $\xi=0.001$.
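For concreteness, this scan of $|\mathbf{D}(\omega)|$ can be reproduced with a short script. The following Python sketch is an illustration rather than the implementation used for the figures; the normalized units $m=K_0=1$ are an assumption introduced here for simplicity.
\begin{verbatim}
import numpy as np

# Normalized units (assumption for this sketch): m = 1, K0 = 1.
m, K0, xi = 1.0, 1.0, 0.001
w_r = np.sqrt(K0 / m)
c = 2 * xi * m * w_r
dK, w_m = 0.1 * K0, 0.25 * w_r

def D(w):
    # Truncated (H = 1) dynamic matrix of the eigenvalue problem.
    shifts = np.array([w - w_m, w, w + w_m])
    K = np.array([[K0, dK, 0.0],
                  [dK, K0, dK],
                  [0.0, dK, K0]], dtype=complex)
    return K - m * np.diag(shifts**2) + 1j * c * np.diag(shifts)

# Scan |det D(w)|; its local minima approximate the damped
# resonances near w_r and w_r -/+ w_m.
ws = np.linspace(0.5 * w_r, 1.5 * w_r, 20001)
det = np.abs([np.linalg.det(D(w)) for w in ws])
idx = np.where((det[1:-1] < det[:-2]) & (det[1:-1] < det[2:]))[0] + 1
print("minima at omega/omega_r =", ws[idx] / w_r)
\end{verbatim}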
\begin{figure*}[!htb]
\centering
\includegraphics[scale=1.0]{Fig_SDOF_FreeNew}
\caption{Dynamics of a resonator with time-modulated stiffness, Eq.~\ref{e:eom}. (a) Analytical evaluation of the determinant of the dynamic matrix for $dK/K_0=0.1$, $\omega_m/\omega_r=0.25$ and $\xi=0.001$. The markers indicate the minima. (b) Fourier Transform of the response to an initial velocity for the same parameters used in (a). The markers indicate the resonance peak and its side-bands. (c) Stability diagram, as a function of the modulation parameters. The stability contours are given for three values of damping ratio $\xi$. The unstable (U) regions for $\xi=0.001$ are shaded in gray. The star marker indicates that parameters $dK/K_0=0.1$ and $\omega_m/\omega_r=0.25$ yield stable (S) results.}
\label{f:free}
\end{figure*}
The choice of a harmonically-modulated stiffness and a truncated solution at $|h|\le H=1$ yields three resonance frequencies for damped vibrations; these are a central frequency $\omega_r$ and two shifted ones near $\omega_r+\omega_m$ and $\omega_r-\omega_m$.
To verify the validity of the analytical approach, we solve Eq.~\ref{e:eom} numerically using a central difference scheme, in the $0 \leq t \leq 600\,T_r$ time range, with $T_r=2\pi/\omega_r$ and time increment $dt=T_r/(10\pi)$. We choose initial conditions $[V,dV/dt]_{t=0}=[0,1]$. The normalized spectrum of the steady-state portion of the displacement signal is shown in Fig.~\ref{f:free}(b). It features a central resonance peak and multiple sidebands, as expected for modulated oscillators~\cite{Minkov2017}. One can see two main differences between the analytical and numerical results. First, the numerical results yield more peaks than the analytical approximation in Eq.~\ref{e:eig3} predicted: in addition to the sidebands near $\omega_r+\omega_m$ and $\omega_r-\omega_m$, there are others near $\omega_r+2\omega_m$ and $\omega_r-2\omega_m$. Moreover, the numerical sidebands are slightly shifted in frequency when compared to their respective eigenvalues (although this is not easy to appreciate from the figure). These inconsistencies are attributed to the
truncation of the analytical results.
This is discussed in more detail in Section~\ref{s:base}.
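A minimal Python sketch of the central difference scheme described above is reported below for the interested reader; it is an illustrative implementation (again assuming normalized units $m=K_0=1$), not the one used to produce Fig.~\ref{f:free}(b).
\begin{verbatim}
import numpy as np

m, K0, xi = 1.0, 1.0, 0.001          # normalized units (assumption)
w_r = np.sqrt(K0 / m)
c = 2 * xi * m * w_r
dK, w_m = 0.1 * K0, 0.25 * w_r
K = lambda t: K0 + 2 * dK * np.cos(w_m * t)

T_r = 2 * np.pi / w_r
dt = T_r / (10 * np.pi)              # time step used in the text
t = np.arange(0.0, 600 * T_r, dt)
V = np.zeros_like(t)

# Initial conditions V(0) = 0, dV/dt(0) = 1, with a standard
# central-difference starter for the fictitious step V(-dt).
v0, vdot0 = 0.0, 1.0
a0 = (-c * vdot0 - K(0.0) * v0) / m
prev = v0 - dt * vdot0 + 0.5 * dt**2 * a0
V[0] = v0
for n in range(len(t) - 1):
    nxt = (2 * m * V[n] - (m - 0.5 * c * dt) * prev
           - dt**2 * K(t[n]) * V[n]) / (m + 0.5 * c * dt)
    prev, V[n + 1] = V[n], nxt

# Spectrum of the steady-state portion (second half of the record).
tail = V[len(V) // 2:]
spec = np.abs(np.fft.rfft(tail))
freq = 2 * np.pi * np.fft.rfftfreq(len(tail), dt) / w_r
print("dominant peak at omega/omega_r =", freq[np.argmax(spec)])
\end{verbatim}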
\subsection{Stability}
\label{s:stab}
When the modulations have a cosine profile in time, Eq.~\ref{e:eom} is known as Mathieu's equation. It is well known that some combinations of parameters can lead to instabilities in systems governed by this equation~\cite{Kovacic2018}. Here, we determine the regions of the modulation parameter space for which the motion of the resonator remains stable. First, we select a range of variables of interest: $0.01 \leq \omega_m/\omega_r \leq 1$ and $0 \leq dK/K_0 \leq 1$. For each $\omega_m/\omega_r$ and $dK/K_0$ couple, we solve Mathieu's equation, obtained from Eq.~\ref{e:eom} via a change of variables:
\begin{equation}
\frac{d^2V}{d\tau^2}+\bar{c}\frac{dV}{d\tau}+\left( \delta+\epsilon \cos{\tau} \right)\,V=0,
\label{e:Mat}
\end{equation}
where, for our specific problem:
\begin{equation}
\tau=\omega_mt,\,\,\,\bar{c}=2\xi\frac{\omega_r}{\omega_m},\,\,\,\delta=\frac{\omega_r^2}{\omega_m^2},\,\,\,\epsilon=2\frac{dK}{K_0}\frac{\omega_r^2}{\omega_m^2}.
\end{equation}
Eq.~\ref{e:Mat} is solved numerically for $\tau \in [0,2\pi]$, for two sets of initial conditions: (i) $[V,dV/d\tau]_{\tau=0}=[1,0]$, which yields displacement $V_1(\tau)$; (ii) $[V,dV/d\tau]_{\tau=0}=[0,1]$, which yields displacement $V_2(\tau)$. For each pair of $\omega_m/\omega_r$ and $dK/K_0$, according to Ref.~\cite{Kovacic2018}, the system is stable if:
\begin{equation}
\left|
\mathrm{Tr}
\begin{bmatrix}
V_1(\tau) & V_2(\tau)\\
dV_1(\tau)/d\tau & dV_2(\tau)/d\tau
\end{bmatrix}_{\tau=2\pi}
\right| < 2,
\end{equation}
where $\mathrm{Tr}$ is the trace operator.
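For illustration, this trace criterion can be implemented in a few lines; the Python sketch below (which relies on \texttt{scipy} and is not necessarily the implementation used for Fig.~\ref{f:free}(c)) evaluates the stability of a single parameter pair.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

def is_stable(wm_over_wr, dK_over_K0, xi=0.001):
    # Coefficients of Mathieu's equation for the given parameters.
    cbar = 2 * xi / wm_over_wr
    delta = 1.0 / wm_over_wr**2
    eps = 2 * dK_over_K0 / wm_over_wr**2
    rhs = lambda tau, y: [y[1],
        -cbar * y[1] - (delta + eps * np.cos(tau)) * y[0]]
    # Integrate over one modulation period for the two sets of
    # initial conditions and apply the trace criterion.
    s1 = solve_ivp(rhs, (0, 2 * np.pi), [1.0, 0.0],
                   rtol=1e-10, atol=1e-12)
    s2 = solve_ivp(rhs, (0, 2 * np.pi), [0.0, 1.0],
                   rtol=1e-10, atol=1e-12)
    return abs(s1.y[0, -1] + s2.y[1, -1]) < 2.0

# Parameters of the star marker in the stability diagram: stable.
print(is_stable(0.25, 0.1))
\end{verbatim}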
The stability diagram as a function of the modulation frequency ratio $\omega_m/\omega_r$ and the modulation amplitude ratio $dK/K_0$ is illustrated in Fig.~\ref{f:free}(c). The shaded regions between the tongues represent the unstable regions for the damping of choice, $\xi=0.001$. One can see that the parameters used in Fig.~\ref{f:free}(a,b), corresponding to the red star-like marker in Fig.~\ref{f:free}(c), yield a stable response. The contours of the unstable regions are strongly dependent on damping. Increasing damping shrinks the unstable regions, while decreasing damping expands them. When damping is 0, the unstable tongues can extend to $dK/K_0=0$; however, one can appreciate that even an extremely small damping can guarantee stability for a wide range of parameters. This stability diagram represents an important tool to properly choose the modulation parameters.
\subsection{Base excitation}
\label{s:base}
To bridge the gap between single resonator dynamics and surface wave-metasurface interactions, we incorporate the harmonic motion of the substrate into our model. In fact, a resonator on a semi-infinite medium subject to Rayleigh waves exchanges stresses with the substrate, and these stresses are a function of the relative displacement between the base and the resonator~\cite{Garova1999,Boechler2013}. At this stage, we ignore the interaction between the resonators through the substrate, and focus on the response of a single modulated oscillator to a base excitation. This is equivalent to treating the substrate as rigid; we will consider the full problem in Section~\ref{s:saw}.
The base excitation problem can be analyzed similarly to the free vibrations case. Here, the forced equation of motion is a non-homogeneous version of Eq.~\ref{e:eom} and reads:
\begin{equation}
m\ddot{V}+c\dot{V}+K(t)V=c\dot{v}+K(t)v,
\label{e:eomb}
\end{equation}
where $v(t)=v_0\,e^{i\Omega t}$ is the harmonic base displacement, $\Omega$ the corresponding frequency of excitation and the overdot indicates a time derivative. Following the same steps detailed in Section~\ref{s:sfree}
leads to the following system of equations:
\begin{equation}
\left(\begin{bmatrix}
\hat{K}_0 & \hat{K}_{-1} & 0\\
\hat{K}_{+1} & \hat{K}_0 & \hat{K}_{-1}\\
0 & \hat{K}_{+1} & \hat{K}_0
\end{bmatrix}-m\begin{bmatrix}
\left( \Omega -\omega_m \right)^2 & 0 & 0\\
0 & \Omega^2 & 0\\
0 & 0 & \left( \Omega +\omega_m \right)^2
\end{bmatrix}\right.
+
\left.ic\begin{bmatrix}
\Omega -\omega_m & 0 & 0\\
0 & \Omega & 0\\
0 & 0 & \Omega +\omega_m
\end{bmatrix}\right)
\left[ \begin{matrix}
\hat{V}_{-1}\\
\hat{V}_{0}\\
\hat{V}_{+1}
\end{matrix} \right]
=
\left[ \begin{matrix}
\hat{K}_{-1}v_0\\
(\hat{K}_{0}+ic\,\Omega)v_0\\
\hat{K}_{+1}v_0
\end{matrix} \right],
\label{e:eig3b}
\end{equation}
which can be written in a compact form as $\mathbf{D}(\Omega)\,\mathbf{\hat{V}}=\mathbf{F}_b$. This expression can be solved to find the three Fourier coefficients $\hat{V}_j$ for any excitation frequency $\Omega$. Coefficient $\hat{V}_0$ corresponds to frequency $\Omega$, $\hat{V}_{-1}$ to $\Omega-\omega_m$, and $\hat{V}_{+1}$ to $\Omega+\omega_m$. To quantify the accuracy of this analytical solution, we solve Eq.~\ref{e:eomb} using the same numerical procedure as in Section~\ref{s:sfree}. This process is illustrated in Fig.~\ref{f:base} and explained in the following.
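Since the system in Eq.~\ref{e:eig3b} is only $3\times3$, it can be solved directly. A minimal sketch in normalized units, assuming the cosine stiffness modulation so that $\hat{K}_{\pm1}=dK$ and $\hat{K}_0=K_0$:
\begin{verbatim}
import numpy as np

def fourier_coeffs(Om, wm_wr=0.45, dK_K0=0.1, xi=0.001, v0=1.0):
    m, K0 = 1.0, 1.0                        # normalized units (assumed)
    wr = np.sqrt(K0 / m)
    c, dK, wm = 2.0 * xi * wr * m, dK_K0 * K0, wm_wr * wr
    w = np.array([Om - wm, Om, Om + wm])    # shifted frequencies
    Kmat = np.array([[K0, dK, 0.0],
                     [dK, K0, dK],
                     [0.0, dK, K0]], dtype=complex)
    D = Kmat - m * np.diag(w**2) + 1j * c * np.diag(w)
    Fb = np.array([dK, K0 + 1j * c * Om, dK], dtype=complex) * v0
    return np.linalg.solve(D, Fb)           # [Vhat_-1, Vhat_0, Vhat_+1]
\end{verbatim}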
\begin{figure*}[!htb]
\centering
\includegraphics[scale=1.0]{Fig_SDOF_BaseNew}
\caption{Base excitation response of a single resonator with time-modulated stiffness. (a) Normalized Fourier transform of the numerical response of a system with $dK/K_0=0.1$, $\omega_m/\omega_r=0.45$ and $\xi=0.001$ to a harmonic base excitation of frequency $\Omega=\omega_r$. The cross markers indicate the peaks of the response, and the circular markers indicate the relative amplitudes of the Fourier coefficients. (b) Response to various base frequencies $\Omega$, where we track the numerical maxima $\bar{V}_j/\bar{V}_0$ and the relative Fourier coefficients $\hat{V}_j/\hat{V}_0$. Note that (a) is a slice of (b), and that the same legend applies to (a) and (b). (c) Evolution of the maxima of the numerical responses, and of the relative Fourier coefficients, as a function of $\Omega$. From (c), we extract the discrepancy between analytical and numerical results in predicting the frequency location of the side peaks. (d) Frequency discrepancy map.
The star markers indicate modulation parameters of interest.
}
\label{f:base}
\end{figure*}
First, we compute the numerical response to a base excitation of frequency $\Omega$. The Fourier transform of the steady-state part of the response to an excitation at $\Omega/\omega_r=1$ is shown as a continuous line in Fig.~\ref{f:base}(a), for parameters $dK/K_0=0.1$, $\omega_m/\omega_r=0.45$ and $\xi=0.001$. According to Fig.~\ref{f:free}(c), the free response of the resonator is stable for this choice of parameters. This frequency response shows several peaks: one at $\Omega/\omega_r$, and side-bands at $(\Omega+\omega_m)/\omega_r$ and $(\Omega-\omega_m)/\omega_r$. Other side-bands are also present in the numerical solution, but they are not captured by the analytical solution in Eq.~\ref{e:eig3b}. The response is normalized by the amplitude of the peak at $\Omega/\omega_r$. The peaks of interest are highlighted with cross markers in Fig.~\ref{f:base}(a). Then, we plot the analytically-derived Fourier coefficients $\hat{V}_0$, $\hat{V}_{-1}$, $\hat{V}_{+1}$, normalized by $\hat{V}_0$, at their corresponding frequencies. These are indicated as circular markers.
We compute the response of the resonator for other values of $\Omega/\omega_r$, as presented in the waterfall plot in Fig.~\ref{f:base}(b). To quantify the discrepancy between the numerical and analytical evaluation of the frequency location of the side-bands, we track the maxima of the numerical response (cross markers) and the Fourier coefficients (circles), as a function of $\Omega/\omega_r$. This is shown in Fig.~\ref{f:base}(c), from which we calculate the discrepancy in frequency as $\max(\Delta\omega_{-1}, \Delta\omega_{+1})$, where $\Delta\omega_{-1}$ and $\Delta\omega_{+1}$ are the discrepancies between the two sets of peaks.
This procedure is repeated for all modulation parameters of interest. We restrict our analysis to $0 \leq dK/K_0 \leq 0.3$ and $0.1 \leq \omega_m/\omega_r \leq 0.5$, all within the stable region for $\xi=0.001$. As a result, we obtain the discrepancy map of Fig.~\ref{f:base}(d). This map can be used to evaluate the error introduced by the truncated expansion of the analytical solution. It shows that there are wide parameter regions where the truncated expansion is accurate, with frequency discrepancies below 5\%.
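A possible implementation of the discrepancy extraction, assuming the arrays \texttt{freq} and \texttt{spec} of a numerical run as in the earlier single-resonator sketch (frequencies normalized by $\omega_r$):
\begin{verbatim}
import numpy as np

def side_peak(freq, spec, center, half_width=0.05):
    m = (freq > center - half_width) & (freq < center + half_width)
    return freq[m][np.argmax(spec[m])]       # numerical peak near a sideband

def frequency_discrepancy(freq, spec, Om, wm):
    # the analytical coefficients sit exactly at Om - wm and Om + wm
    d = [abs(side_peak(freq, spec, Om + s * wm) - (Om + s * wm))
         for s in (-1.0, 1.0)]
    return max(d)    # normalize as desired to quote a percentage
\end{verbatim}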
In light of these results, we choose the following parameters to perform the surface wave-metasurface interaction analysis: (i) $dK/K_0=0.1$ and $\omega_m/\omega_r=0.25$, which yield a discrepancy in frequency of 5\%; (ii) $dK/K_0=0.1$ and $\omega_m/\omega_r=0.45$, which yield a discrepancy in frequency of 2\%. Both sets of parameters correspond to resonators with stable responses.
\section{Surface wave dispersion in modulated metasurfaces}
\label{s:saw}
Now that we have studied the dynamics of a single resonator and learned about the acceptable ranges for the modulation parameters, we tackle the problem of a spatio-temporally modulated metasurface. Here, we couple the motion of an elastic substrate with the array of modulated resonators using an effective medium approach~\cite{Garova1999} and a truncated plane-wave expansion of the solution~\cite{Vila2017}. To quantify the dispersion characteristics of the modulated metasurface, we use a first-order asymptotic analysis~\cite{Nassar2017prsa, Nassar2017eml}.
\subsection{Analytical dispersion relation of non-modulated metasurfaces}
\label{s:metasurf}
We begin our investigation by recalling the dynamics of vertically polarized surface waves (of the Rayleigh type) propagating in an isotropic, elastic, homogeneous medium of infinite depth, decorated with an array of vertically-vibrating resonators. We restrict our analysis to plane waves propagating in the $x,z$ plane (see Fig.~\ref{f:met}), and we assume plane-strain conditions. The displacements along $x$ and $z$ are called $u$ and $v$, respectively. In the absence of body forces, pressure and shear waves propagating in the substrate are described by the wave equations~\cite{Graff1991}:
\begin{subequations}
\begin{equation} \label{e:bulk 1}
\nabla^{2} \Phi=\frac{1}{c_{L}^{2}} \frac{\partial^{2} \Phi}{\partial t^{2}},
\end{equation}
\begin{equation} \label{e:bulk 2}
\nabla^{2} \Psi_{y}=\frac{1}{c_{S}^{2}} \frac{\partial^{2} \Psi_{y}}{\partial t^{2}},
\end{equation}
\end{subequations}
where the dilational $\Phi$ and the transverse $\Psi_{y}$ potentials are introduced via Helmholtz decomposition of the substrate displacement field, $u=\frac{\partial\Phi}{\partial x}-\frac{\partial\Psi_y}{\partial z}$ and $v=\frac{\partial\Phi}{\partial z}+\frac{\partial\Psi_y}{\partial x}$. The pressure ($c_{L}$) and shear ($c_{S}$) wave velocities are given as:
\begin{equation}
c_{L}=\sqrt{\frac{\lambda+2\mu}{\rho}}, \quad c_{S}=\sqrt{\frac{\mu}{\rho}},
\end{equation}
where $\lambda$ and $\mu$ are the elastic Lam\'e constants and $\rho$ is the mass density of the substrate. Following a standard approach for the derivation of the Rayleigh wave dispersion relation, we assume the following form of the potentials:
\begin{subequations}
\begin{equation} \label{e:pot 1}
\Phi=A_{0}\,e^{\sqrt{k^2-{\omega^{2}}/{c_{L}^{2}}}\,z}\,e^{i(\omega t-kx)},
\end{equation}
\begin{equation} \label{e:pot 2}
\Psi_{y}=B_{0}\,e^{\sqrt{k^2-{\omega^{2}}/{c_{S}^{2}}}\,z}\,e^{i(\omega t-kx)},
\end{equation}
\label{e:pot}
\end{subequations}
with $k$ being the wavenumber along $x$.
In parallel, we account for the presence of the surface resonators. This is done by considering the equation of motion of an undamped resonator placed on the free surface (corresponding to $z=0$) and excited by the substrate motion $v(x,0,t)=v_{0}$:
\begin{equation}
m\ddot{V}+K_0(V-v_{0})=0.
\label{e:eom2}
\end{equation}
Following the procedure adopted in Ref.~\cite{Garova1999}, we assume a harmonic motion $V=V_0\,e^{i(\omega t-kx)}$ for the resonator and consider the normal stress exerted by the resonator at the surface as its inertial force divided by the footprint area $A=s^2$, where $s$ is the distance between resonators, i.e., the unit cell size of the array. This stress is defined as:
\begin{equation} \label{e:average stress}
\sigma_{zz,r}=-\frac{m}{A}\ddot{V}=\frac{m}{A}\omega^2 V.
\end{equation}
By using this assumption, often referred to as the effective medium approach~\cite{Boechler2013}, we restrict our analysis to wave propagation regimes where the surface wavelengths are much larger than the characteristic resonator spacing $s$. The average stress in Eq.~\eqref{e:average stress} can be used as a boundary condition for the normal stress of the elastic half-space at $z=0$:
\begin{subequations}
\begin{equation} \label{e:normal stress bc at z=0}
\sigma_{zz}=\sigma_{zz,r},
\end{equation}
together with the free stress condition on the tangential component:
\begin{equation} \label{e:tang. stress bc at z=0}
\sigma_{zx}=0.
\end{equation}
\end{subequations}
For a linear elastic and isotropic material, the stresses can be related to the potentials $\Phi$ and $\Psi_y$ using the constitutive relations~\cite{Graff1991}:
\begin{subequations}
\begin{align}
\label{sigzx}
\sigma_{zx} &= \mu \left(2\frac{\partial^2\Phi}{\partial x \partial z} + \frac{\partial^2\Psi_y}{\partial x^2 } - \frac{\partial^2\Psi_y}{\partial z^2 }\right),
\\
\label{sigzz}
\sigma_{zz} &= (\lambda+2\mu) \left(\frac{\partial^2\Phi}{\partial z^2 }+ \frac{\partial^2\Psi_y}{\partial x \partial z}\right) + \lambda \left(\frac{\partial^2\Phi}{\partial x^2 } - \frac{\partial^2\Psi_y}{\partial x \partial z}\right).
\end{align}
\label{e:sig}
\end{subequations}
At this stage, using Eq.~\eqref{e:sig}, we express the boundary conditions in Eq.~\eqref{e:normal stress bc at z=0} and Eq.~\eqref{e:tang. stress bc at z=0} in terms of surface wave potentials in Eqs.~\eqref{e:pot}, and obtain the expressions:
\begin{subequations}
\begin{equation}
\left[-2i\mu \, \sqrt{k^{2}-\frac{\omega^{2}}{c_{L}^{2}}}\,k A_0 + \mu\left(\frac{\omega^2}{c_S^2} - 2k^2\right) B_0\right]\,e^{i(\omega t-kx)}=0,
\end{equation}
\begin{equation}
\left[\left(2\mu k^{2}-2\mu\frac{\omega^{2}}{c_{L}^{2}} - \lambda\frac{\omega^2}{c_{L}^2}\right)A_0 -2i\mu k \sqrt{k^{2}-\frac{\omega^{2}}{c_{S}^{2}}}\,B_0 -m\frac{\omega^2}{A}V_0\right]\,e^{i(\omega t-kx)}=0.
\end{equation}
\end{subequations}
Coupling these two equations with the equation of motion of the resonator, Eq.~\eqref{e:eom2}, and dropping the exponential $e^{i(\omega t-kx)}$, we obtain:
\begin{equation}
\label{e:metasurf}
\left[\begin{array}{ccc}
{-2i\mu k\sqrt{k^{2}-\frac{\omega^{2}}{c_{L}^{2}}}} & {\mu(\frac{\omega^2}{c_S^2} - 2k^2)} & {0} \\
{2\mu (k^{2}-\frac{\omega^{2}}{c_{L}^{2}}) - \lambda\frac{\omega^2}{c_{L}^2}} & {-2i\mu k \sqrt{k^{2}-\frac{\omega^{2}}{c_{S}^{2}}} } & {-m\frac{\omega^2}{A}} \\
{-K_{0}\sqrt{k^{2}-\frac{\omega^{2}}{c_{L}^{2}}}} & {i K_0 k} &{-m\omega^2 + K_0}
\end{array}\right]\left[\begin{array}{ccc}{A_0}\\{B_0}\\{V_0}\end{array}\right]=
\left[\begin{array}{ccc}{0}\\{0}\\{0}\end{array}\right].
\end{equation}
This system of three equations can be written in compact form as $\boldsymbol{\Pi}(k,\omega)\,\mathbf{q}_0=\mathbf{0}$. It represents the necessary condition for the plane-wave solutions to hold.
Non-trivial solutions of Eq.~\ref{e:metasurf} are found by setting $|\boldsymbol{\Pi}(k,\omega)|=0$, which yields the non-modulated metasurface dispersion relation. An example of this dispersion relation is given by the solid black lines in Fig.~\ref{f:disp}(a), for an elastic substrate with $c_L/c_S=1.5$ and a metasurface with mass ratio $m \omega_r/(A \rho c_S)=0.15$.
Note that the coupling between Rayleigh waves and surface resonators induces a subwavelength bandgap in the surface wave spectrum. This gap covers the frequency range $\omega_r < \omega < \omega_r(\beta+\sqrt{\beta^2+1})$, where $\beta=\frac{m\omega_r}{2\rho A c_S}\sqrt{1-c_S^2/c_L^2}$~\cite{Palermo2016}. Further details about the dispersive features of a non-modulated metasurface can be found in Refs.~\cite{Garova1999, Boechler2013, Palermo2016}.
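The roots of $|\boldsymbol{\Pi}(k,\omega)|=0$ are also easy to locate numerically. The following Python sketch uses normalized units $c_S=\rho=\omega_r=1$ (assumed) together with the mass ratio and $c_L/c_S$ quoted above; the third row of $\boldsymbol{\Pi}$ is scaled by $1/m$, which does not move the zeros of the determinant.
\begin{verbatim}
import numpy as np

cS, cL, wr, rho = 1.0, 1.5, 1.0, 1.0      # normalized units (assumed)
mu = rho * cS**2
lam = rho * cL**2 - 2.0 * mu              # from cL = sqrt((lam + 2 mu)/rho)
mA = 0.15 * rho * cS / wr                 # m/A from the mass ratio 0.15

def Pi_mat(k, w):
    qL = np.sqrt(k**2 - (w / cL)**2 + 0j)
    qS = np.sqrt(k**2 - (w / cS)**2 + 0j)
    return np.array([
        [-2j * mu * k * qL, mu * (w**2 / cS**2 - 2.0 * k**2), 0.0],
        [2.0 * mu * (k**2 - w**2 / cL**2) - lam * w**2 / cL**2,
         -2j * mu * k * qS, -mA * w**2],
        [-wr**2 * qL, 1j * wr**2 * k, wr**2 - w**2]])  # row scaled by 1/m

k = 2.0 * wr / cS
w = np.linspace(1e-3, 0.999 * cS * k, 4000)   # stay below the sound cone
d = np.abs([np.linalg.det(Pi_mat(k, wi)) for wi in w])
print(w[np.argmin(d)] / wr)                   # a surface-wave branch at this k
\end{verbatim}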
\subsection{Analytical dispersion relation of modulated metasurfaces}
\label{s:modmetasurf}
We consider a plane-wave spatio-temporal modulation of the stiffness of the resonators:
\begin{equation}
K(t)=K_0+2dK \cos{\left( \omega_m t -k_m x \right)},
\label{e:kdef meta}
\end{equation}
where $k_m$ is the modulation wavenumber. The key difference between Eqs.~\ref{e:kdef} and~\ref{e:kdef meta} is the presence of a spatially-varying phase term, $k_mx$. Note that such a one-dimensional modulation restricts our investigation to scenarios where the surface wave is collinear with the direction of the stiffness modulation.
This spatial modulation of the stiffness parameter, on its own, results in the appearance of a symmetric frequency gap in the dispersion relation of the surface waves (symmetric with respect to $k$). When combined with the temporal modulations, these frequency gaps occur at different frequencies for forward- and backward-traveling waves, i.e., non-reciprocal propagation emerges~\cite{Swinteck2015,Trainiti2016,nassarPRB}.
Based on the results of Section~\ref{s:sdof}, we choose a modulation amplitude $dK/K_0$ and frequency $\omega_m/\omega_r$ such that the response of the resonators remains stable, and the truncated approximation of the response is acceptable.
To ensure stability in the presence of spatio-temporal modulations, we need to additionally check that the modulation wave speed is smaller than the phase velocity of the medium~\cite{Cassedy1963}, i.e., $\omega_m/k_m<c\,(\omega)$. This condition might not be respected near $\omega_r$ if the resonant bandgap is at very low frequencies. Note, however, that our results on the stability of a single resonator in Fig.~\ref{f:free}(c) already warned us to stay away from values of the modulating frequency that are close to $\omega_r$.
The modulating wave generates a scattered wavefield, here described by the vector of amplitudes $\mathbf{q}_j=[\hat{A}_j, \hat{B}_j, \hat{V}_j]^T$, where $j$ is a non-zero integer. These amplitudes are associated with the substrate potentials:
\begin{subequations} \label{e:potential function j}
\begin{equation}
\Phi_j=\hat{A}_{j}\,e^{\sqrt{k_j^{2}-{\omega_j^{2}}/{c_{L}^{2}}}\,z}\,e^{i(\omega_j t-k_j x)},
\end{equation}
\begin{equation}
\Psi_{y,j}=\hat{B}_{j}\,e^{\sqrt{k_j^{2}-{\omega_j^{2}}/{c_{S}^{2}}}\,z}\,e^{i(\omega_j t-k_j x)},
\end{equation}
\end{subequations}
and to the resonator displacement:
\begin{equation}
V_{j}=\hat{V}_{j}\,e^{i(\omega_j t-k_j x)},
\end{equation}
where the shifted frequencies and wavenumbers are defined as:
\begin{equation}
\omega_j=\omega+j\omega_m, \quad k_j=k+j k_m.
\end{equation}
The scattered field has a non-negligible amplitude only when the phase matching condition $|\boldsymbol{\Pi}(k,\omega)|=|\boldsymbol{\Pi}(k_j,\omega_j)|=0$ is met~\cite{Nassar2017prsa}, namely at the crossing points between the original dispersion curves $|\boldsymbol{\Pi}(k,\omega)|=0$ and the shifted curves $|\boldsymbol{\Pi}(k+j k_m,\omega+j \omega_m)|=0$. A graphical representation of two shifted curves for $j=\pm 1$ is provided in Fig.~\ref{f:disp}(a) for a metasurface modulated with frequency $\omega_m/\omega_r=0.25$ and wavenumber $k_m/k_r=2.5$, where $k_r=\omega_r/c_S$.
\begin{figure*}[!htb]
\centering
\includegraphics[scale=1]{Fig_SAW_Dispersion}
\caption{Dispersion properties of modulated and non-modulated metasurfaces. (a) Dispersion curves. The solid black curves represent the non-modulated dispersion relation, while the dashed red and blue lines are the shifted curves for $j=-1$ and $j=+1$, respectively, for modulation parameters $\omega_m/\omega_r=0.25$ and $k_m/k_r=2.5$. The crossing points are highlighted with circular markers. The thin gray lines connect phase-matched points of the original dispersion curves. (b), (c) Details of the crossing points that are highlighted by boxes in (a). The dark regions of the colormap follow the minima of the determinant of Eq.~\eqref{e:metasurf mod}, while the circular red markers indicate the asymptotic evaluation of the modulated dispersion. The thin dotted line represents the sound cone. All cases correspond to modulation amplitude $dK/K_0=0.05$. (b) A case of veering, where no frequency band gap is found. (c) A case of locking that features a frequency bandgap of normalized width $2\delta \omega/\omega_r$. (d) Evolution of the width of the bandgap in (c) as a function of the modulation amplitude.}
\label{f:disp}
\end{figure*}
The asymmetric positioning of the crossing points between regions with positive and negative wavenumbers suggests the occurrence of direction-dependent phenomena within the metasurface. We predict the dispersion properties of the modulated metasurface near these crossing points using a truncated plane-wave expansion. In particular, we assume that the surface wave potentials have the following form, comprising non-modulated and scattered amplitudes:
\begin{subequations}
\begin{equation} \label{e:pot 1 PW}
\Phi=\hat{A}_{0}\,e^{\sqrt{k^2-{\omega^{2}}/{c_{L}^{2}}}\,z} \,e^{i(\omega t-k x)} + \sum^{1}_{\substack{j=-1 \\ j \neq 0}}\hat{A}_{j}\,e^{\sqrt{k_j^{2}-{\omega_j^{2}}/{c_{L}^{2}}}\,z} \,e^{i(\omega_j t-k_j x)},
\end{equation}
\begin{equation} \label{e:pot 2 PW}
\Psi_y=\hat{B}_{0}\,e^{\sqrt{k^2-{\omega^{2}}/{c_{S}^{2}}}\,z} \,e^{i(\omega t-k x)} + \sum^{1}_{\substack{j=-1 \\ j \neq 0}}\hat{B}_{j}\,e^{\sqrt{k_j^{2}-{\omega_j^{2}}/{c_{S}^{2}}}\,z} \,e^{i(\omega_j t-k_j x)},
\end{equation}
\end{subequations}
and a resonator displacement:
\begin{equation} \label{e: res PW}
V=\hat{V}_{0}\,e^{i(\omega t-k x)} + \sum^{1}_{\substack{j=-1 \\ j \neq 0}}\hat{V}_{j}\,e^{i(\omega_j t-k_j x)}.
\end{equation}
The choice of $j=\pm1$ is a direct consequence of using a harmonic plane-wave modulation in Eq.~(\ref{e:kdef meta}); otherwise, higher-order terms need to be included.
Following the same procedure adopted for the
non-modulated case, we substitute the expanded potentials, Eq.~\eqref{e:pot 1 PW} and Eq.~\eqref{e:pot 2 PW}, into the constitutive equations, Eq.~\eqref{sigzx} and Eq.~\eqref{sigzz}. Similarly, we use the truncated resonator displacement, Eq.~\eqref{e: res PW}, in the governing equation of the resonator, Eq.~\eqref{e:eom2}, and boundary condition, Eq.~\eqref{e:average stress}. The result is finally substituted into the boundary conditions, Eq.~\eqref{e:normal stress bc at z=0} and Eq.~\eqref{e:tang. stress bc at z=0}. After collecting and simplifying the common exponential in each equation, we obtain:
\begin{equation}
\label{e:metasurf mod}
\left[\begin{array}{ccc}
{\boldsymbol{\Pi}(k_{-1},\omega_{-1})}&{\boldsymbol{\Gamma}(k,\omega)} &\mathbf{0}\\
{\boldsymbol{\Gamma}(k_{-1},\omega_ {-1})}&{\boldsymbol{\Pi}(k,\omega)}&{\boldsymbol{\Gamma}(k_{+1},\omega_{+1})}\\
\mathbf{0}&{\boldsymbol{\Gamma}(k,\omega)} &{\boldsymbol{\Pi}(k_{+1},\omega_{+1})}
\end{array}\right]\left[\begin{array}{ccc}{\mathbf{q}_{-1}}\\{\mathbf{q}_0}\\ \mathbf{q}_{+1}\end{array}\right]=\mathbf{0},
\end{equation}
where the submatrix $\boldsymbol{\Pi}$ is defined in Eq.~\eqref{e:metasurf}, and the submatrix $\boldsymbol{\Gamma}$ is defined as:
\begin{equation} \label{e:Gamma}
\boldsymbol{\Gamma}(k,\omega)=\left[\begin{array}{ccc}
{0} & {0} & {0} \\
{0} & {0} & {0} \\
{-dK\sqrt{k^{2}-\frac{\omega^{2}}{c_{L}^{2}}}} & {i\,dK\,k} & {dK}
\end{array}\right].
\end{equation}
\noindent Note that the operator $\boldsymbol{\Gamma}(k_j,\omega_j)$ describes the coupling, introduced by the stiffness modulation of the resonators, between the scattered ($j$) and fundamental ($0$) wave fields.
The expression in Eq.~\eqref{e:metasurf mod}, written in compact form as $\mathbf{\Lambda}(k,\omega)\,\mathbf{q}=\mathbf{0}$, describes the relation between the Rayleigh waves and the modulation-induced scattered waves. This relation is valid when the scattered field interacts strongly with the main field, i.e., near the crossings of non-modulated and translated dispersion curves, as indicated in Fig.~\ref{f:disp}(a).
Nontrivial solutions of Eq.~\eqref{e:metasurf mod} are obtained by setting the determinant of the $9\times9$ matrix equal to 0, $|\mathbf{\Lambda}(k,\omega)|=0$.
The resulting equation describes the dispersion relation of the modulated system in the vicinity of the crossing points between the fundamental and the shifted dispersion curves. We refrain from seeking a closed-form expression of its roots. Nevertheless, by evaluating the determinant $|\mathbf{\Lambda}(k,\omega)|$ in the neighborhood of the crossing points, and finding its local minima, we can identify the dispersion branches for the modulated system. Examples of modulated branches are provided in Fig.~\ref{f:disp}(b,c), where the magnitude of $|\mathbf{\Lambda}(k,\omega)|$ near the two crossing points is displayed as a colormap, with the minima being darker. In the neighborhood of the crossing points, the modulated branches are characterized by frequency ($\delta \omega$) and wavenumber ($\delta k$) shifts with respect to the intersection of the fundamental ($|\mathbf{\Pi}(k,\omega)|=0$) and translated ($|\boldsymbol{\Pi}(k+j k_m,\omega+j \omega_m)|=0$) dispersion curves. These shifts result from the repulsion between the two interacting modes.
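In practice, the minima can be traced by assembling the $9\times9$ matrix of Eq.~\eqref{e:metasurf mod} block by block. A sketch, reusing \texttt{Pi\_mat} and the material constants of the earlier dispersion sketch (the same $1/m$ row scaling is applied to $\boldsymbol{\Gamma}$):
\begin{verbatim}
import numpy as np

def Gamma_mat(k, w, dK_K0=0.05):
    g = dK_K0 * wr**2                     # dK/m, matching the 1/m row scaling
    qL = np.sqrt(k**2 - (w / cL)**2 + 0j)
    G = np.zeros((3, 3), dtype=complex)
    G[2] = [-g * qL, 1j * g * k, g]
    return G

def det_Lambda(k, w, km, wm):
    Z = np.zeros((3, 3), dtype=complex)
    Lam = np.block([
        [Pi_mat(k - km, w - wm), Gamma_mat(k, w), Z],
        [Gamma_mat(k - km, w - wm), Pi_mat(k, w), Gamma_mat(k + km, w + wm)],
        [Z, Gamma_mat(k, w), Pi_mat(k + km, w + wm)]])
    return np.linalg.det(Lam)

# minimizing |det_Lambda| on a fine (k, w) grid around each crossing point
# traces the modulated branches shown in panels (b, c)
\end{verbatim}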
The pair $(\delta k,\delta \omega)$ can be calculated as the leading-order correction to $(k,\omega)$ in an asymptotic analysis of the problem~\cite{Nassar2017prsa,Hinch}.
For this purpose, we expand the surface wave potentials and the resonator displacement around the crossing point of interest, as shown in the following:
\begin{subequations}
\begin{equation} \label{e:pot 1 cor}
\tilde{\Phi}=\left(\tilde{A}_{0}\,e^{\sqrt{(k+\delta k)^2-{(\omega+\delta \omega)^{2}}/{c_{L}^{2}}}\,z} \,e^{i(\omega t-k x)}+\tilde{A}_{j} \,e^{\sqrt{(k_j+\delta k)^{2}-{(\omega_j+\delta \omega)^{2}}/{c_{L}^{2}}}\,z}\,e^{i(\omega_j t-k_j x)}\right)\,e^{i(\delta\omega t-\delta k x)},
\end{equation}
\begin{equation} \label{e:pot 2 cor}
\tilde{\Psi}_y=\left(\tilde{B}_{0}\,e^{\sqrt{(k+\delta k)^2-{(\omega+\delta \omega)^{2}}/{c_{S}^{2}}}\,z} \,e^{i(\omega t-k x)}+\tilde{B}_{j} \,e^{\sqrt{(k_j+\delta k)^{2}-{(\omega_j+\delta \omega)^{2}}/{c_{S}^{2}}}\,z}\,e^{i(\omega_j t-k_j x)}\right)\,e^{i(\delta\omega t-\delta k x)},
\end{equation}
\begin{equation} \label{e: res cor}
\tilde{V}=\left(\tilde{V}_{0}\,e^{i(\omega t-k x)}+\tilde{V}_{j}\,e^{i(\omega_j t-k_j x)}\right)\,e^{i(\delta\omega t-\delta k x)},
\end{equation}
\end{subequations}
where $j$ is either $+1$ or $-1$, depending on which shifted branch satisfies the phase matching condition with the fundamental dispersion curve.
With these ansatzes, and replicating the procedure we used to obtain the dispersion relation for the modulated metasurface, we obtain:
\begin{equation}
\label{e:metasurf correction}
\left[\begin{array}{ccc}
{\boldsymbol{\Pi}}(k+\delta k,\omega+\delta \omega) & {\boldsymbol{\Gamma}(k_j+\delta k,\omega_j+\delta \omega)} \\
{{\boldsymbol{\Gamma}}(k+\delta k,\omega+\delta \omega) } & {\boldsymbol{\Pi}(k_j+\delta k,\omega_j+\delta \omega)}
\end{array}\right]\left[\begin{array}{ccc}{\mathbf{q}_0}\\ \mathbf{q}_j\end{array}\right]=\mathbf{0}.
\end{equation}
We can then find the corrections $\delta k$ and $\delta \omega$ by setting the determinant of the $6\times6$ matrix in Eq.~\eqref{e:metasurf correction} to zero.
Further details on this computation are given in~\ref{a:analy}.
Examples of corrected portions of the dispersion relation are shown in Fig.~\ref{f:disp}(b,c) as red dotted curves. We can see that the corrections are non-zero only in the neighborhood of the crossing points, and that they show an excellent agreement with the minima of the determinant of the matrix in Eq.~\eqref{e:metasurf mod}.
\subsection{Physical insight on the modulated dispersion relation}
From Fig.~\ref{f:disp}(b,c), we observe that the presence of a spatio-temporal modulation causes the fundamental and shifted dispersion curves to repel each other. Two distinct phenomena are observed depending on whether the fundamental and shifted branches propagate along the same direction or not, i.e., whether the group velocities $c_g={\partial \omega}/{\partial k}$ and $c_{gj}={\partial \omega_j}/{\partial k_j}$ satisfy $c_{g}c_{gj}>0$ or $c_{g}c_{gj}<0$, respectively. For a pair of co-directional branches like those shown in Fig.~\ref{f:disp}(b), the interacting modes veer without crossing as a result of the repulsion between the fundamental and scattered modes. No significant frequency shift is found and consequently no directional band gaps are generated.
Conversely, for a pair of contra-directional branches, as shown in Fig.~\ref{f:disp}(c), the repulsion between the pair of coupled modes results in a branch locking phenomenon~\cite{mace2012} and, on some occasions, in the opening of a directional bandgap. We quantify the branch repulsion by evaluating the bandgap width at the locking point, $2\delta \omega$, as a function of the modulation amplitude, $dK$. As expected from the first-order nature of the correction terms in Section~\ref{s:modmetasurf}, the width of a directional bandgap is proportional to the modulation amplitude; see Fig.~\ref{f:disp}(d).
We remark that for any crossing point $(k^*,\,\omega^*)$ at the intersection of $|\boldsymbol{\Pi}(k,\omega)|=0$ and $|\boldsymbol{\Pi}(k+ k_m,\omega+\omega_m)|=0$, we can identify a crossing point $(k^*+k_m,\,\omega^*+\omega_m)$, e.g., at the intersection of $|\boldsymbol{\Pi}(k,\omega)|=0$ and $|\boldsymbol{\Pi}(k- k_m,\omega- \omega_m)|=0$, that is phase-matched to $(k^*,\,\omega^*)$ via the pumping wave~\cite{Nassar2017eml}. In Fig.~\ref{f:disp}(a), all crossing points connected by thin gray lines are phase-matched, being only separated by a $\pm(k_m,\,\omega_m)$ translation. According to Eq.~\eqref{e:pot 1 cor} and Eq.~\eqref{e:pot 2 cor}, we expect that, for a surface wave traveling within the modulated metasurface with wavenumber $k^*$ and frequency $\omega^*$, a scattered field is generated at the shifted wavenumber and frequency $(k^*+k_m,\,\omega^*+\omega_m)$.
Similarly, for a fundamental surface wave at $(k^*+k_m,\,\omega^*+\omega_m)$, a scattered field at $(k^*,\,\omega^*)$ is expected.
In other words, if we send a wave at a frequency near one of the crossings, the metasurface will generate waves at the frequency of the corresponding phase-matched point~\cite{Nassar2017prsa}. Numerical evidence of this intriguing dynamic behavior, which hints at the possibility of using modulated metasurfaces as frequency converters for surface waves, is provided in Section~\ref{s:nr}.
\section{Surface wave non-reciprocity and other modulation-induced effects}
\label{s:nr}
We now resort to finite element (FE) simulations to analyze the propagation of surface waves in a modulated metasurface and to validate the directional behavior predicted by our analytical model. Our 2D plane-strain FE model, implemented in COMSOL Multiphysics, consists of a portion of an elastic substrate of depth $H=4\lambda_0$, where $\lambda_0=2\pi c_R/\omega_r$ and $c_R$ is the Rayleigh wave velocity in the substrate. One of our models is sketched in Fig.~\ref{f:disp_num}(a).
\begin{figure*}[!htb]
\centering
\includegraphics[scale=1]{Fig_SAW_Num1}
\caption{Numerical reconstruction of the modulated dispersion curves. (a) Schematic of the numerical models for right-going and left-going surface waves, with a right-going modulating wave. (b) Time history and (c) frequency content of the point force applied at the source. (d) Dispersion curves reconstructed via a 2D-DFT of the space-time evolution of the vertical displacement on the surface underneath the resonators, $v(x,0,t)$. The system has modulation parameters $dK/K_0=0.1$, $\omega_m/\omega_r=0.25$ and $k_m/k_r=2.5$. The colormap is scaled with respect to its maximum value. The analytical dispersion, shown as a thick red line, is obtained by tracing the local minima of $|\mathbf{\Lambda}(k,\omega)|$ in a range $\pm 0.1 k$ and $\pm 0.1 \omega$ around each crossing point. The dispersion curves of the non-modulated metasurface, $|\boldsymbol{\Pi}(k,\omega)|=0$, and its shifted twins, $|\boldsymbol{\Pi}(k+k_m,\omega+\omega_m)|=0$ and $|\boldsymbol{\Pi}(k-k_m,\omega-\omega_m)|=0$, are shown as black, red and blue dashed lines, respectively. (e) Same as (d), for modulation parameters $dK/K_0=0.1$, $\omega_m/\omega_r=0.45$ and $k_m/k_r=2.5$.}
\label{f:disp_num}
\end{figure*}
The substrate features an array of resonators mounted on its free surface with spacing $s=\lambda_0/23$. All edges of the domain, apart from the one decorated with resonators, are characterized by low-reflecting boundary conditions. A convergent mesh of quadratic Lagrangian elements is used to discretize the substrate and to ensure that the wave field is accurately captured in the frequency range of interest. The stiffness of each resonator varies in space and time according to Eq.~\eqref{e:kdef meta}. Based on the previous considerations on accuracy and stability in Section~\ref{s:sdof}, we choose modulation parameters $dK=0.1\,K_0$, $k_m=2.5\,k_r$ and either $\omega_m=0.25\,\omega_r$ or $\omega_m=0.45\,\omega_r$.
\subsection{Numerical dispersion reconstruction}
We perform transient simulations to numerically reconstruct the dispersion properties of the modulated metasurface, using the models shown in Fig.~\ref{f:disp_num}(a). We excite the medium with a vertical sine-sweep point force having frequency content $0.5\,\omega_r<\omega<2\,\omega_r$, as shown in Fig.~\ref{f:disp_num}(b,c). We record the vertical surface displacement $v(x,0,t)$ at 1000 equally-spaced locations along a length $L_a=15\lambda_0$ for a normalized time $0 < \bar{t} < 125$, where $\bar{t}=t/T_r$ and $T_r=2\pi/\omega_r$. To reconstruct the dispersion branches for $k>0$ and $k<0$, we simulate both a right-propagating (top panel of Fig.~\ref{f:disp_num}(a)) and a left-propagating wave (bottom panel), with a modulating wave that is always right-propagating. In both cases, the source is placed at a distance $d_s=5\lambda_0$ from the closest recording point. The recorded space-time traces are then transformed via 2D Discrete Fourier Transform (2D-DFT) to obtain the wavenumber-frequency spectrum $\bar{v}(k,0,\omega)$. By following the higher-amplitude regions of this two-dimensional spectrum, we can identify the numerical dispersion branches.
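The reconstruction step itself amounts to a windowed two-dimensional FFT. A minimal sketch, assuming the surface displacement has been exported from the FE solver as an \texttt{(nt, nx)} array \texttt{v} sampled at \texttt{dt} and \texttt{dx}:
\begin{verbatim}
import numpy as np

def dispersion_spectrum(v, dt, dx):
    nt, nx = v.shape
    win = np.hanning(nt)[:, None] * np.hanning(nx)[None, :]
    spec = np.fft.fftshift(np.fft.fft2(v * win))
    om = 2.0 * np.pi * np.fft.fftshift(np.fft.fftfreq(nt, dt))
    k = 2.0 * np.pi * np.fft.fftshift(np.fft.fftfreq(nx, dx))
    return k, om, np.abs(spec) / np.abs(spec).max()
\end{verbatim}
Right- and left-going energy separates into opposite $(k,\omega)$ quadrants, which is why the two simulations of Fig.~\ref{f:disp_num}(a) are used to fill the $k>0$ and $k<0$ half-planes.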
The reconstructed dispersion for modulation parameters $dK=0.1\,K_0$, $\omega_m=0.25\,\omega_r$ and $k_m=2.5\,k_r$ is shown as a colormap in Fig.~\ref{f:disp_num}(d). The analytical dispersion, shown as a thick red line, is obtained by tracing the minima of $|\mathbf{\Lambda}(k,\omega)|$ near the crossing points. For convenience, we also replicate on the same figure the original (non-modulated) dispersion curve and its shifted analogs (thin dashed lines). This plot unequivocally illustrates that the dispersive features observed in the numerical results are consistent with the analytical predictions. In particular, one can see that the numerical results clearly indicate the presence of several modulation-induced features: (i) two coupled directional bandgaps of narrow extent at $0.69\,\omega_r$ for left-propagating and $0.93\,\omega_r$ for right-propagating waves; (ii) two coupled veering points at $0.73\,\omega_r$ and $0.98\,\omega_r$, both for right-propagating waves; (iii) two coupled and relatively-wide directional gaps at $0.92\,\omega_r$ and $1.17\,\omega_r$ for left- and right-propagating waves, respectively.
We repeat this reconstruction procedure for different modulation parameters: $dK=0.1\,K_0$, $\omega_m=0.45\,\omega_r$ and $k_m=2.5\,k_r$. The results are shown in Fig.~\ref{f:disp_num}(e), and they display a similar consistency with the analytical predictions as for the previous configuration. In this case, the features of interest are two coupled directional gaps at the locking frequencies $0.86\,\omega_r$ and $1.31\,\omega_r$, for left- and right-propagating waves, respectively. These gaps are of interest because they are characterized by a significant reduction in spectral amplitude.
\subsection{Non-reciprocal transmission and conversion-by-reflection}
\label{s:nrtr}
To verify the characteristics of the scattered field responsible for directional wave propagation, we perform transient simulations with narrow-band waveforms centered at those frequencies. For these analyses, we use the models shown in Fig.~\ref{f:TR}(a,b), cf. Fig.~\ref{f:disp_num}(a). In both cases, we have two substrate-only regions separated by a region of length $L_a=12.5\,\lambda_0$ that features a large number of surface resonators (286) spaced at $s=\lambda_0/23$. The response is recorded at locations $x_l$ and $x_r$, which mark the left and right edges of the region with resonators, respectively. In both configurations, the point source is located on the free surface at a distance $d_s=3.5\,\lambda_0$ from the corresponding edge of the resonators region. In all cases, the modulating wave is right-propagating, with $dK=0.1\,K_0$, $\omega_m=0.45\,\omega_r$ and $k_m=2.5\,k_r$. This corresponds to the dispersion curve in Fig.~\ref{f:disp_num}(e).
\begin{figure*}[!htb]
\centering
\includegraphics[scale=1]{Fig_SAW_Num2}
\caption{Transient FE simulations of the propagation of narrow-band signals centred at the directional gap frequencies ($0.86\,\omega_r$ and $1.31\,\omega_r$, Fig.~\ref{f:disp_num}(e)) through a modulated metasurface. Schematic of the numerical setup for (a) right-propagating and (b) left-propagating surface waves. Spectral content of the vertical surface wave field recorded at the left and right edges of the resonators array for: (c) right-propagating waves at $\Omega=1.31\,\omega_r$, (e) right-propagating waves at $\Omega=0.86\,\omega_r$, (g) left-propagating waves at $\Omega=1.31\,\omega_r$, (i) left-propagating waves at $\Omega=0.86\,\omega_r$. Radon transform of time-space surface wave records computed along the resonator array for: (d) right-propagating waves at $\Omega=1.31\,\omega_r$, (f) right-propagating waves at $\Omega=0.86\,\omega_r$, (h) left-propagating waves at $\Omega=1.31\,\omega_r$, (j) left-propagating waves at $\Omega=0.86\,\omega_r$.}
\label{f:TR}
\end{figure*}
We begin our investigation by considering a right-propagating surface wave (i.e., incident to the array at $x_l$) at frequency $\Omega=1.31\,\omega_r$. The spectra of the time signals recorded at $x_l$ and $x_r$ are shown in Fig.~\ref{f:TR}(c). The spectrum at $x_r$ (blue line), corresponding to a wave transmitted through the array of resonators, shows a significant amplitude reduction at $\Omega=1.31\,\omega_r$, in agreement with the directional gap predicted by our analysis. The amplitude gap is accompanied by the generation of a side peak at the twin locking frequency $0.86\,\omega_r$. This frequency content appears even more markedly in the spectrum of the signal recorded at the $x_l$ location (red line). This second peak corresponds to the reflected field caused by the modulated array of resonators. To support this claim, we compute the two-dimensional Radon transform (wave speed $c$ versus frequency $\omega$) of the time-space data matrix recorded within the array of resonators. By means of this transform, we determine if a signal with a certain frequency content is right-propagating (positive $c$) or left-propagating (negative $c$). The amplitude of this spectrum, shown as a colormap in Fig.~\ref{f:TR}(d), confirms that the signal content at $0.86\,\omega_r$ travels from right to left, opposite to the direction of the incident signal at $1.31\,\omega_r$. This indicates that the modulated metasurface can convert an incident wave into a reflected wave with a different frequency content---shifted from the original frequency by the modulating one~\cite{Nassar2017eml}. To verify non-reciprocity, we send a left-propagating wave with frequency centered at $1.31\,\omega_r$. In this case, the signal travels undisturbed through the metasurface, as confirmed by the spectra at $x_l$ and $x_r$, shown in Fig.~\ref{f:TR}(g). Moreover, no evidence of reflected waves is found in the Radon transform shown in Fig.~\ref{f:TR}(h).
We replicate these analyses for left- and right-propagating surface waves excited at the phase-matched locking frequency $\Omega=0.86\,\omega_r$. In this case, left-propagating waves travel almost undisturbed within the metasurface, as confirmed by the spectral contents in Fig.~\ref{f:TR}(e) that feature waves at the carrier frequency only, and by the Radon transform in Fig.~\ref{f:TR}(f). Conversely, the directional gap for right-propagating waves causes an attenuation of the transmitted signal at $0.86\,\omega_r$, as shown by the red line of Fig.~\ref{f:TR}(i). This phenomenon is accompanied by a back-scattering of the coupled frequency $1.31\,\omega_r$, as indicated by the blue line in Fig.~\ref{f:TR}(i) and by the Radon transform in Fig.~\ref{f:TR}(j).
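The directional filtering used above can be scripted as a slant-stack (linear Radon) transform. A sketch, with \texttt{v} the \texttt{(nt, nx)} record along the array, \texttt{t} and \texttt{x} the sampling vectors, and \texttt{c\_axis} a vector of signed trial velocities (zero excluded):
\begin{verbatim}
import numpy as np

def radon_c_omega(v, t, x, c_axis):
    nt, nx = v.shape
    rows = []
    for cc in c_axis:                 # shift each trace by x/c, then stack
        stack = np.zeros(nt)
        for j in range(nx):
            stack += np.interp(t + x[j] / cc, t, v[:, j],
                               left=0.0, right=0.0)
        rows.append(np.abs(np.fft.rfft(stack * np.hanning(nt))))
    om = 2.0 * np.pi * np.fft.rfftfreq(nt, t[1] - t[0])
    return om, np.array(rows)         # |R|(c, omega); c > 0 is right-going
\end{verbatim}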
While this section has been dedicated to the response to excitation frequencies within the directional bandgaps, the reader can find details on the response of a metasurface excited at a veering point in~\ref{a:transm}.
\subsection{Surface-bulk wave conversion}
It is known that surface waves can convert into bulk waves upon interaction with a metasurface~\cite{Colquitt2017}. To evaluate how this phenomenon can influence the directional response of a modulated metasurface, we analyze the full wavefield in the substrate at different time instants. We consider the case of a left-propagating narrow-band signal with carrier frequency $\Omega=0.86\,\omega_r$. This case corresponds to the results in Fig.~\ref{f:TR}(i,j). The time-space evolution of the displacement field along the surface is illustrated in Fig.~\ref{f:WF}(a).
\begin{figure*}[!htb]
\centering
\includegraphics[scale=1]{Fig_SAW_Num3}
\caption{(a) Time-space evolution of the surface displacement for a left-propagating wave at $0.86\,\omega_r$. The dashed line indicates the beginning of the region that features resonators. The thick horizontal lines indicate the time instants of interest. (b) The wavefield at $\bar{t}=5$, showing how waves propagate along and below the surface. (c) The wavefield at $\bar{t}=20$. The arrows and letters indicate wave features of interest.}
\label{f:WF}
\end{figure*}
The wavefields corresponding to time instants $\bar{t}=5$ and $\bar{t}=20$ are shown in Fig.~\ref{f:WF}(b,c), respectively. In particular, the wavefield at $\bar{t}=20$ presents several interesting features. First, it is clearly visible that the transmitted and reflected surface waves have different wavelength contents, as a result of the frequency conversion shown in Fig.~\ref{f:TR}(i). This is an example of conversion by reflection due to spatio-temporal modulations~\cite{Nassar2017eml}. The conversion does not take place exactly at the edge of the resonators region, but rather at a location within the resonator array.
If we focus our attention on the reflected waves, we can also see that not all waves are reflected along the surface. As indicated by the arrow pointing towards the bottom-right of Fig.~\ref{f:WF}(c), a part of the scattered field is converted into waves that propagate towards the bulk. It would be interesting to quantify the surface-to-bulk wave conversion mechanism and determine the penetration length of the fundamental wave into the metasurface. These aspects, which have practical implications for the design of surface wave converters and filters, deserve a separate treatment.
\section{Conclusions}
\label{s:concl}
We have provided a detailed analytical and numerical account of the non-reciprocal propagation of surface waves of the Rayleigh type in a dynamically modulated metasurface. We have first bridged the gap between the single-resonator dynamics and wave-resonator interactions, by providing a detailed description of the dynamics of a time-modulated resonator. We have then developed an analytical framework to describe the dispersion properties of spatio-temporally varying metasurfaces, and illustrated their asymmetric features.
By means of numerical simulations, we have demonstrated the occurrence of non-reciprocal surface wave attenuation, frequency conversion by reflection and by transmission. We have also shown that surface waves interacting with the modulated metasurface can leak as bulk waves into the substrate. Our findings and the tools we have provided can serve as guidelines for future experiments on the topic, and can play an important role in developing practical designs of SAW devices with unprecedented wave manipulation capacity.
\section*{Acknowledgments}
AP acknowledges the support of DICAM at the University of Bologna. PC acknowledges the support of the Research Foundation at Stony Brook University. CD acknowledges support from the National Science Foundation under EFRI Grant No.\ 1741565. The authors wish to thank Lorenz Affentranger and Yifan Wang for useful discussions.
\section*{Note Added:}
\noindent After this letter was accepted for publication, we became aware of
the work of Girvin and MacDonald \cite{girvin}, where they
showed that the gauge-transformed Laughlin
wave-function [eq.~(7) of their paper] exhibits off-diagonal long-range order.
It then immediately follows that the Calogero-Sutherland ground state
wave-function in two dimensions as given by \eq{grst} [which is identical to
eq. (7) of \cite{girvin}] also exhibits off-diagonal long-range order.
\section*{Introduction}
Recently there has been important activity in the study of {$N=2$} supersymmetric
hierarchies (KP \cite{popo1,aratyn,dasb1,ghosh,dasp}, generalizations of KdV
\cite{bokris,ikrim}, Two Bosons \cite{dasb2}, NLS \cite{kriso,krisoto,dasb3},
etc.). The most usual tools in this field are the algebra of
$N=1$ pseudo-differential operators and Gelfand-Dickey type Poisson brackets
\cite{geldik}.
Although these systems have {$N=2$} supersymmetry, a formulation in extended superspace is known only for a very few of them, with a very low number of fields.
It is the purpose of this paper to partially fill this gap. The formalism which
we shall present here partly originates from the article \cite{delma}.
It turns out that in order to construct the Lax operators of
{$N=2$} supersymmetric hierarchies,
one should not use the whole algebra of {$N=2$} pseudo-differential operators, but
rather the subalgebra of pseudo-differential operators preserving chirality.
These operators were first considered in \cite{popo3}.
They will be defined in section \ref{main}, where we also study the KP Lax
equations and the two associated Hamiltonian structures. It turns out that the
first (linear) bracket
is associated with a non-antisymmetric $r$ matrix
\cite{semenov}. Because of that, the second
(quadratic) bracket is not of pure Gelfand-Dickey type. The main result of this
paper is that we find two
possibilities for this quadratic bracket. In fact, we show that there
is an invertible map in the KP phase space which sends one of the
quadratic Poisson structure into the other. However, this map does
not preserve the Hamiltonians.
In section \ref{reduc}, we study the possible reductions of
the KP hierarchy by looking for Poisson subspaces in the phase space. These
are different depending on the quadratic bracket which is used.
Among these reductions, there are two different hierarchies with the {$N=2$}
classical super-${\cal W}_n$ algebra \cite{lupope} as a hamiltonian structure.
In particular, two of the three known {$N=2$} supersymmetric extensions
of the KdV hierarchy \cite{Mathieu1} are found.
They correspond to $a=-2$ and $a=4$ in
the classification of Mathieu. These and some other examples are described in section \ref{examples}. Notice that from the known cases with a low number of fields \cite{Mathieu1,math2,popo2,yung1,Ivanov1,yung2},
one expects for any $n$ three hierarchies with
super-${\cal W}_n$ as a hamiltonian structure. So our construction does not exhaust the possible cases.
We also found two hierarchies which Poisson structure
is the classical ``small" $N=4$ superconformal algebra. In one case the
evolution equations are $N=4$ supersymmetric, while in the other they
are only {$N=2$} supersymmetric.
Finally, in section \ref{n1susy} we give the relation of our formulation with the usual formulation of the {$N=2$} supersymmetric KP Lax equations in $N=1$ superspace \cite{inami,dasb1,dasp}.
\setcounter{equation}{0}
\section{N=2 KP hierarchy \label{main}}
\paragraph{{$N=2$} supersymmetry}
We shall consider an {$N=2$} superspace with space coordinate $x$ and two
Grassmann coordinates $\theta$, $\bar\theta$. We shall use the
notation ${\underline x}$ for the triple of coordinates
$(x,\theta,\bar\theta)$. The supersymmetric covariant derivatives
are defined by
\begin{equation}
\partial\equiv{\partial\over\partial x},\,\,D={\partial\over\partial\theta}
+{1\over 2}\bar\theta\partial,\,\,\bar D={\partial\over\partial\bar\theta}
+{1\over 2}\theta\partial,\,\,D^2=\bar D^2=0,\,\,\{ D,\bar D\}=\partial
\label{n2alg}\end{equation}
Beside ordinary superfields $H({\underline x})$ depending
arbitrarily on Grassmann coordinates, one can also define chiral
superfields $\varphi({\underline x})$ satisfying
$D\varphi =0$ and antichiral superfields $\bar\varphi({\underline x})$
satisfying $\bar D\bar\varphi =0$.
We define the integration over the {$N=2$} superspace to be
\begin{equation}
\int d^3{\underline x}\, H(x,\theta,\bar\theta)= \int dx\bar DDH(x,\theta,\bar\theta)
\vert_{\theta=\bar\theta=0}.
\end{equation}
The elements of the associative algebra of {$N=2$} pseudo-differential operators ($\Psi$DOs) are the operators
\begin{equation}
P = \sum_{i <M} ( a_{i} +b_i[D,\bar D]+\alpha_{i} D + \beta_{i} \overline{D} )\partial^{i}
\label{pdo}\end{equation}
where $a_{i}$, $b_{i}$ and $\alpha_{i}$, $\beta_{i}$ are respectively even and odd {$N=2$} superfields.
However, this algebra is not very manageable.
In particular, the set of strictly pseudo-differential operators ($M=0$ in \reff{pdo}) is not
a proper subalgebra, but only a Lie subalgebra.
Also, there are too many fields in these operators. We expect the phase space of the
{$N=2$} KdV hierarchies to consist of the supercurrents of the {$N=2$}
${\cal W}_n$ algebras. In extended superspace, these supercurrents are bosonic superfields,
and there is one such superfield for a given integer dimension.
But in \reff{pdo}, each power of $\partial$ corresponds to four superfields, two even ones
of integer dimension and two odd ones of half-integer dimension. It is thus clear that one
has to restrict suitably the form of the {$N=2$} operators. It turns out
that a possible
restriction is to define the set $\cal C$ of
pseudo-differential operators $L$ preserving chirality of the form
\begin{footnote}
{Operators of this type were first considered in \cite{popo3}}
\end{footnote}
\begin{equation}
L=D{\cal L}\bar D,\,\,\,\,\,\,{\cal L}= \sum_{i <M} u_{i}\partial^{i}
\label{cpdo}\end{equation}
The coefficient functions $u_i$ are bosonic {$N=2$} superfields. These operators satisfy
$DL=L\bar D=0$.
The product of two chiral operators is again a
chiral operator. The explicit product rule is easily worked out
\begin{equation}
LL'= D \left(
{\cal L}\partial {\cal L'} +(D.{\cal L})(\bar{D}.{\cal L'}) \right) \bar{D},
\end{equation}
where we have used the notation
\begin{equation}
(D.{\cal L})=\sum_{i <M}(Du_{i})\partial^{i}.
\end{equation}
Notice that $I=D \partial^{-1}\bar{D}$ is the unit of the algebra $\cal C$.
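As a simple consistency check, the coefficient of $I$ is the constant function $1$, so $(D.\partial^{-1})=(\bar D.\partial^{-1})=0$ and the product rule gives, for any $L'=D{\cal L}'\bar D$,
\begin{equation*}
IL'=D\left(\partial^{-1}\partial{\cal L}'+(D.\partial^{-1})(\bar D.{\cal L}')\right)\bar D
=D{\cal L}'\bar D=L',
\end{equation*}
and similarly $L'I=L'$.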
We could have used as well the algebra $\bar{\cal C}$ of $\Psi$DOs
satisfying $\bar D\bar L=\bar L D=0$. Notice that the product of an element in ${\cal C}\,\,$ by
an element in $\bar{\cal C}$ vanishes. In fact ${\cal C}\,\,$ and $\bar{\cal C}$
are related by transposition, $L^t=-\bar D{\cal L}^tD\in \bar{\cal C}$.
Although the transposition leads from ${\cal C}$ to $\bar{\cal C}$,
there exists an anti-involution which acts inside ${\cal C}$. It is
given by
\begin{equation}
\tau(L)=DL^t\partial^{-1}\bar
D,\,\,\,\tau(L_{1}L_{2})=\tau(L_{2})\tau(L_{1}).
\end{equation}
Notice that it does not make sense in the algebra $\cal C$ to multiply a
$\Psi$DO by a function. However, it is possible to multiply on the left by a chiral
function $\phi$, $D\phi=0$
\begin{equation}
\phi L= D\phi{\cal L}\bar D={\lambda}(\phi) L,\,\,\,{\lambda}(\phi)\equiv
D\phi\partial^{-1}\bar D,
\end{equation}
and on the right by an antichiral function $\bar\phi$, $\bar D\bar\phi=0$
\begin{equation}
L\bar\phi = D{\cal L}\bar\phi\bar D=L{\bar\lambda}(\bar\phi),\,\,\,
{\bar\lambda}(\bar\phi)\equiv D\partial^{-1}\bar\phi\bar D.
\end{equation}
We define the residue of the pseudo-differential operator $L$ by
$\mbox{\rm res} L=u_{-1}$ \cite{Mathieu1}.
The residue of a commutator is a total derivative,
$\mbox{\rm res}[L,L']=D\bar\omega+\bar D\omega$. The trace of $L$ is the integral of the
residue
\begin{equation}
\mbox{\rm Tr}L=\int d^3{\underline x}\,\mbox{\rm res}L,\,\,\,\,
\mbox{\rm Tr}[L,L']=0.
\end{equation}
$\cal C$ can be divided into two proper subalgebras
${\cal C} = {\cal C}_{+} \oplus {\cal C}_{-}$, where $L$ is in ${\cal C}_{+}$ if $\cal L$ is a differential operator and $L$ is in ${\cal C}_{-}$ if $\cal L$ is a
strictly pseudo-differential operator ($M=0$ in \reff{cpdo}). We shall write
\begin{equation}
L=L_++L_-, \,\,\,L_+=D{\cal L}_+\bar D\in {\cal C}_+,\,\,\,
L_-=D{\cal L}_-\bar D\in{\cal C}_-.
\end{equation}
Here an important difference with the usual bosonic and $N=1$ cases occurs. For
any two $\Psi$DOs $L$ and $L'$ in $\cal C$ one has $\mbox{\rm Tr}(L_{-}L'_{-})= \int d^3{\underline x}\,\,\mbox{\rm res}(L)\,\,\mbox{\rm res}(L') \neq 0$. While ${\cal C}_{+}$ is an isotropic subalgebra, ${\cal C}_{-}$
is not. One important consequence of this fact is that if one defines the
endomorphism $R$ of $\cal C$ by $R(L)={1\over 2}(L_+-L_-)$, then $R$ is a non-antisymmetric classical $r$ matrix,
\begin{equation}
\mbox{\rm Tr} (R(L)L'+LR(L'))=-\int d^3{\underline x}\,\mbox{\rm res} L\,\mbox{\rm res} L'.
\end{equation}
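This identity follows from the elementary operator identity
\begin{equation*}
R(L)L'+LR(L')={1\over 2}(L_+-L_-)(L'_++L'_-)+{1\over 2}(L_++L_-)(L'_+-L'_-)
=L_+L'_+-L_-L'_-,
\end{equation*}
together with $\mbox{\rm Tr}(L_+L'_+)=0$ and
$\mbox{\rm Tr}(L_-L'_-)=\int d^3{\underline x}\,\mbox{\rm res} L\,\mbox{\rm res} L'$.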
Notice that a non-antisymmetric $r$ matrix in the context of bosonic KP Lax
equations first appeared in \cite{kuper}.
\paragraph{KP equations}
Let us now write the evolution equations of the {$N=2$} supersymmetric KP hierarchy.
We consider operators $L=D{\cal L}\bar D$ in $\cal C$ of the form
\begin{equation}
{\cal L}=\partial^{n-1}+\sum_{i=1}^{\infty}V_i\partial^{n-i-1}.
\label{KPop}\end{equation}
$L$ has a unique $n$th root in $\cal C$ of the form
\begin{equation}
L^{1\over n}=D(1+\sum_{i=1}^{\infty}W_i\partial^{-i})\bar D,
\end{equation}
and we are led to consider the commuting flows (see \cite{popo3})
\begin{equation}
{\partial\over\partial t_k}L=[(L^{k\over n})_+,L]=[R(L^{k\over n}),L].
\label{KPeq}\end{equation}
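The second equality follows from $[L^{k\over n},L]=0$: writing $R(M)=M_+-{1\over 2}M$, one finds $[R(L^{k\over n}),L]=[(L^{k\over n})_+,L]-{1\over 2}[L^{k\over n},L]=[(L^{k\over n})_+,L]$.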
There are symmetries of these equations which may be described as
follows. Let us first introduce a chiral, Grassmann even superfield $\varphi$ which
satisfies
\begin{equation}
{\partial\over\partial t_k}\varphi=(L^{k\over n})_+.\varphi
\label{fieq}\end{equation}
where the right-hand side is the chiral field obtained by acting with
the differential operator $(L^{k\over n})_+$ on the field $\varphi$.
Then the transformed operator
\begin{equation} s(L)= \lambda(\varphi^{-1})L\lambda(\varphi)
\label{simil}\end{equation}
satisfies an evolution equation of the same form \reff{KPeq} as that
of $L$.
We may also consider an antichiral, Grassmann odd superfield
$\bar\chi$ which satisfies
\begin{equation}
{\partial\over\partial t_k}\bar\chi=-(L^{k\over n})^t_+.\bar\chi
\label{chieq}\end{equation}
Then the transformed operator
\begin{equation} \sigma(L)=(-1)^n\lambda((D\bar\chi)^{-1})\tau(L)\lambda(D\bar\chi)
\label{chitra}\end{equation}
satisfies an evolution equation of the same form \reff{KPeq} as that
of $L$, with the direction of time reversed.
\paragraph{Poisson brackets}
The Lax equations \reff{KPeq} are bi-hamiltonian with respect to two compatible Poisson brackets
which we now exhibit.
Let $X$ be some $\Psi$DO in $\cal C$ with coefficients independent of
the phase space fields $\{V_i\}$, then define the linear functional
$l_{X}(L) = \mbox{\rm Tr}(LX)$. The generalization of the first Gelfand-Dickey bracket
is obvious and reads
\begin{equation}
\{ l_{X},l_{Y} \}_{(1)} (L) = \mbox{\rm Tr} \left( L[X_{+},Y_{+}]-L[X_{-},Y_{-}] \right).
\label{pb1}\end{equation}
This is nothing but the linear bracket associated with the matrix $R$.
Now we turn to the construction of the second bracket. It will turn out
more complicated than the standard Gelfand-Dickey bracket because of the non-antisymmetry of the $r$ matrix. An analogous situation in the bosonic case is studied in \cite{Oevel}. In the end, we found two different
possibilities.
In order to write them down, we need to be able to separate the residue of a $\Psi$DO in
$\cal C$ into a chiral and an antichiral part. For an arbitrary
superfield $H({\underline x})$, we define
\begin{equation}
H=\Phi[H]+\bar\Phi[H],\,\,\, D\Phi[H]=0,\,\,\, \bar D\bar\Phi[H]=0.
\end{equation}
This is not a local operation in $\cal C$. An explicit form may be
chosen as
\begin{equation}
\Phi[H]=D\bar D\int d^{3}{\underline x}'\Delta({\underline
x}-{\underline x}')H({\underline x}'),\,\,
\bar \Phi[H]=\bar DD\int d^{3}{\underline x}'\Delta({\underline
x}-{\underline x}')H({\underline x}'),
\label{chipro}\end{equation}
where $\Delta$ is the distribution
\begin{eqnarray}
&\Delta({\underline x}-{\underline x}')=
(\theta-\theta')(\bar\theta-\bar\theta')\epsilon(x-x'),&\label{distri}\\
&\partial\epsilon(x-x')=\delta(x-x'),\,\,\, \epsilon(x-x')=-\epsilon(x'-x).
\nonumber\end{eqnarray}
In the following, we shall use the short-hand notations
$\Phi[\,\mbox{\rm res}[L,X]]=\Phi_X$, $\bar\Phi[\,\mbox{\rm res}[L,X]]=\bar\Phi_X$.
In general, $\Phi_{X}$ will not satisfy the same boundary conditions as the
phase space fields do. However, we noted earlier that in the case of a commutator,
the residue is a total derivative,
$\,\mbox{\rm res} [L,X]=D\bar\omega+\bar D\omega$.
Here $\omega$ and $\bar\omega$ are differential polynomials in the fields. Then
one easily shows that $\Phi_{X}=D\bar\omega+\alpha$,
$\bar\Phi_{X}=\bar D\omega-\alpha$,
where $\alpha$ is a constant reflecting the arbitrariness in the
definition of $\Phi$, $\bar\Phi$. Up to this constant, $\Phi_{X}$ will
respect the boundary conditions.
We are now in a position to write the two possibilities for the second bracket
as
\begin{footnote}
{The Poisson brackets (\ref{pb2p},\ref{pb2m}) may be put in the
general form introduced in \cite{Maillet}
\begin{equation}
\{ l_{X},l_{Y} \}_{(2)}^{a,b} (L) =\mbox{\rm Tr} \left( LXa(LY)+XLb(LY)-LXc(YL)-XLd(YL)
\right)\end{equation}
However, the price to pay is that $a$, $b$, $c$, $d$ are non-local endomorphisms of $\cal C$. As an example, for the first quadratic bracket one finds
\begin{equation}
a(X)={1\over 2}(X_++\lambda(\Phi[\,\mbox{\rm res} X]))-{1\over 2}
(X_--\lambda(\Phi[\,\mbox{\rm res} X])),\,\,\, b(X)=\bar\lambda(\bar\Phi[\,\mbox{\rm res} X]).
\end{equation}
One easily checks in particular that $a$ is a non-local antisymmetric $r$ matrix.}
\end{footnote}
\begin{equation}
\{ l_{X},l_{Y} \}_{(2)}^a (L) =\mbox{\rm Tr} \left( LX(LY)_{+}-XL(YL)_{+}+
\Phi_Y LX+XL\bar\Phi_Y\right),
\label{pb2p}\end{equation}
and
\begin{equation}
\{ l_{X},l_{Y} \}_{(2)}^b (L) =\mbox{\rm Tr} \left( LX(LY)_{+}-XL(YL)_{+}+
\Phi_Y XL+LX\bar\Phi_Y\right).
\label{pb2m}\end{equation}
These expressions do not depend on the arbitrary constant $\alpha$.
Checking the antisymmetry
of the Poisson brackets and the Jacobi identity can be done with
a little effort. As usual,
the first bracket is a linearization of the two quadratic ones, that is to say
\begin{equation}
\{ l_{X},l_{Y} \}_{(2)}^{a,b} (L+zD\partial^{-1}\bar D)
=\{ l_{X},l_{Y} \}_{(2)}^{a,b} (L)
+z\{ l_{X},l_{Y} \}_{(1)} (L),
\end{equation}
and the linear bracket is compatible with each of the two quadratic brackets.
Introducing
the hamiltonians
${\cal H}_{k} = {n\over k}\mbox{\rm Tr}(L^{k\over n})$, the KP evolution equations \reff{KPeq}
may be written as
\begin{equation}
\partial_{t_k} \left( l_{X}(L) \right)
= \{ l_{X},{\cal H}_{k+n} \}_{(1)} (L)
= \{ l_{X},{\cal H}_{k} \}_{(2)}^{a,b} (L)
\end{equation}
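By the standard Lenard--Magri argument, this recursion between the two compatible brackets implies that the hamiltonians ${\cal H}_k$ are in involution with respect to both Poisson structures, so that the flows \reff{KPeq} commute.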
\paragraph{Poisson maps}
Before turning to the study of the reductions of the KP hierarchies,
let us exhibit some relations between the two quadratic brackets. We
will use the invertible map in $\cal C$
\begin{equation}
p(L)=\partial^{-1}\tau(L)=D\partial^{-1}L^t\partial^{-1}\bar D.
\label{poimap}\end{equation}
Then a straightforward calculation leads to
\begin{equation}
\{ l_{X}\circ p,l_{Y}\circ p\}_{(2)}^a=-\{ l_{X},l_{Y}\}_{(2)}^b\circ
p,
\end{equation}
which shows that \reff{pb2p} and \reff{pb2m} are equivalent Poisson
brackets. However there is no relation between the hamiltonians
$\mbox{\rm Tr}(L^{k\over n})$ and $\mbox{\rm Tr}(p(L)^{k\over n-1})$.
There is another relation between the two brackets, which involves the
chiral superfield $\varphi$ satisfying the evolution equation
\reff{fieq}. Let us introduce the linear functional
$l_t=\int d^3{\underline x}(t\varphi)$, where $t({\underline x})$ is a Grassmann
even superfield. We consider an enlarged phase space including $\varphi$,
and extend the Poisson bracket \reff{pb2p} to this phase space by
\begin{equation}
\{ l_t,l_{Y} \}_{(2)}^a (L,\varphi)=\int d^3{\underline x} t((LY)_+.\varphi
+\Phi_Y\varphi),\,\,\,\{ l_{t},l_{t'} \}_{(2)}^a=0.
\end{equation}
Then one finds
\begin{equation}
\{ l_{X}\circ s,l_{Y}\circ s\}_{(2)}^a=\{ l_{X},l_{Y}\}_{(2)}^b\circ s,
\end{equation}
where the transformation $s$ has been defined in \reff{simil}. Notice
that the hamiltonians are invariant under the transformation
$s$: $\mbox{\rm Tr}(L^{k\over n})=\mbox{\rm Tr}(s(L)^{k\over n})$.
A last relation uses the antichiral superfield $\bar\chi$ satisfying
the evolution \reff{chieq}. Let us introduce the linear functional
$l_{\bar t}=\int d^3{\underline x}(\bar t\bar\chi)$,
where $\bar t({\underline x})$ is a Grassmann
odd superfield. We consider an enlarged phase space including $\bar\chi$,
and extend the Poisson bracket \reff{pb2p} to this phase space by
\begin{eqnarray}
&\{ l_{\bar t},l_{Y} \}_{(2)}^a (L,\bar\chi)=\int d^3{\underline x}
{\bar t}(-(LY)^t_+.\bar\chi
+\Phi_Y\bar\chi),&\\ &
\{ l_{\bar t_1},l_{\bar t_2} \}_{(2)}^a=-2\int d^{3}{\underline
x}\bar t_1\bar\chi\bar\Phi[\bar t_2\bar\chi],&
\end{eqnarray}
where $\Phi$, $\bar\Phi$ are defined in equations (\ref{chipro},\ref{distri}).
Notice that this is a non-local Poisson bracket. One finds
\begin{equation}
\{ l_{X}\circ\sigma,l_{Y}\circ\sigma\}_{(2)}^a=-\{
l_{X},l_{Y}\}_{(2)}^b\circ\sigma,
\end{equation}
where the transformation $\sigma$ has been defined in \reff{chitra}.
\setcounter{equation}{0}
\section{Reductions of the KP hierarchy \label{reduc}}
In order to obtain consistent reductions of the KP hierarchy, we need to find Poisson
submanifolds of the KP phase space. Considering first the
quadratic bracket \reff{pb2p}, we rewrite it as
\begin{eqnarray}
&\{ l_{X},l_{Y} \}_{(2)}^a (L) =\mbox{\rm Tr} X\xi_{l_Y}^a,&\nonumber\\
&\xi_{l_Y}^a=(LY)_{+}L-L(YL)_{+}+\Phi_Y L+L\bar\Phi_Y.&
\label{hvf}\end{eqnarray}
$\xi_{l_Y}^a$ is the hamiltonian vector field associated with the function $l_Y$. One easily checks that if $L$ has the form \reff{KPop}, then for any $Y$,
$\xi_{l_Y}^a$ has the form $D(\sum_{i<n-1}\xi_i\partial^i)\bar D$.
It is obvious from \reff{hvf} that for any $Y$, if $L$ is in ${\cal C}_+$,
then $\xi_{l_Y}^a$ is also in ${\cal C}_+$. This means that the constraint
\begin{equation}L=L_+
\label{kdv}\end{equation}
defines a Poisson submanifold. The hierarchies obtained in this way are
the {$N=2$} supersymmetric KdV hierarchies studied by Inami and Kanno
\cite{inami}, and the Lax operators \reff{kdv} already appeared in
\cite{popo3}. The lowest order cases will be presented in the next section.
Another possible reduction is to take $L$ of the form
\begin{equation} L=L_++D\,\varphi\partial^{-1}\bar\varphi\bar D,\,\,\,\,\,
D\varphi=\bar D\bar\varphi=0,
\label{nls}\end{equation}
where $\varphi$ is a chiral and $\bar\varphi$ an antichiral superfield, both Grassmann even or both Grassmann odd.
With $L$ of the form \reff{nls} and $Y$ arbitrary, one finds
\begin{equation}
(\xi_{l_Y}^a)_-=D((LY)_+.\varphi+\Phi_Y\varphi)\partial^{-1}\bar\varphi
+\varphi\partial^{-1}(-(YL)_+^t.\bar\varphi+\bar\Phi_Y\bar\varphi))\bar D.
\end{equation}
Noticing that $(LY)_+.\varphi$ is a chiral superfield and
$(YL)_+^t.\bar\varphi$ an antichiral superfield, it is easily checked that
$\xi_{l_Y}^a$ is indeed tangent to the submanifold defined by the constraints \reff{nls}.
It is possible to consider an enlarged phase space whose coordinates are the fields in $L$
and $\varphi$, $\bar\varphi$. Let us introduce the linear functionals
\begin{equation}l_t=\int d^3{\underline x}(\varphi t),\,\,
l_{\bar t}=\int d^3{\underline x}(\bar t\bar\varphi),
\end{equation}
where $ t$ and $\bar t$ are general superfields, of the same
Grassmann parity as $\varphi$ and $\bar\varphi$.
In this enlarged phase space, the second Poisson bracket, in the case when
$\varphi$ and $\bar\varphi$ are Grassmann even, is defined by \reff{pb2p}
and
\begin{eqnarray}
&\{ l_t,l_{Y} \}_{(2)}^a (L,\varphi,\bar\varphi)=\int d^3{\underline x} ((LY)_+.\varphi
+\Phi_Y\varphi)t, &\label{lfi}\\
&\{l_{\bar t},l_{Y} \}_{(2)}^a (L,\varphi,\bar\varphi)=\int d^3
{\underline x}\,\bar t(-(YL)_+^t.\bar\varphi
+\bar\Phi_Y\bar\varphi),&
\nonumber\end{eqnarray} and
\begin{eqnarray}
&\{ l_t,l_{\bar t} \}_{(2)}^a (L,\varphi,\bar\varphi)=
\int d^3{\underline x}\, ({ L}_+.\bar t)t,&\label{bose}\\&
\{ l_{t_1},l_{t_2} \}_{(2)}^a=0,\,\,\,\,
\{ l_{\bar t_1},l_{\bar t_2} \}_{(2)}^a=0.&
\nonumber\end{eqnarray}
In the case when
$\varphi$ and $\bar\varphi$ are Grassmann odd,
the last two lines should be modified to
\begin{eqnarray}
&\{ l_t,l_{\bar t} \}_{(2)}^a (L,\varphi,\bar\varphi)=
\int d^3{\underline x} (({ L}_+.\bar t)t-2\varphi t\Phi[\bar t\bar\varphi]),
\label{fermi}&\\&
\{ l_{t_1},l_{t_2} \}_{(2)}^a=2\int d^3{\underline x}\,\varphi t_1
\Phi[\varphi t_2],\,\,\,\,
\{ l_{\bar t_1},l_{\bar t_2} \}_{(2)}^a=-2\int d^3{\underline x}
\,\bar t_1\bar\varphi
\bar\Phi[\bar t_2\bar\varphi],&
\nonumber\end{eqnarray}
where the applications $\Phi$ and $\bar\Phi$ have been defined in
\reff{chipro}.
The lowest order case is $L=D(1+\varphi\partial^{-1}\bar\varphi)\bar D$. Then
if $\varphi$ and $\bar\varphi$ are odd, the equation
${d\over dt}L=[L^2_+,L]$ is the {$N=2$} supersymmetric extension of the NLS
equation \cite{roelo}. The next-to-lowest order case is $L=D(\partial +H+\varphi\partial^{-1}\bar\varphi)\bar D$.
If $\varphi$ and $\bar\varphi$ are even, the hamiltonian structure \reff{pb2p} reduces in
this case to the classical version of the ``small'' $N=4$ superconformal algebra. Although
the Poisson algebra contains $4$ supersymmetry
generators, the evolution equations \reff{KPeq} have only {$N=2$} supersymmetry.
This case was first obtained by another method which will be
given, as part of a detailed study, in \cite{dgi}.
We now turn to the second quadratic bracket \reff{pb2m}. We rewrite it as
\begin{eqnarray}
&\{ l_{X},l_{Y} \}_{(2)}^b (L) =\mbox{\rm Tr} X\xi_{l_Y}^b,&\nonumber\\
&\xi_{l_Y}^b=(LY)_{+}L-L(YL)_{+}+L{\lambda}(\Phi_Y)+{\bar\lambda}(\bar\Phi_Y)L.&
\label{hvfm}\end{eqnarray}
It is easily seen that neither the condition \reff{kdv} nor the more complicated condition
\reff{nls} is an admissible reduction in this case. The easiest way to
find Poisson subspaces for the bracket \reff{pb2m} is to apply the
map \reff{poimap} to the Poisson subspaces of the first quadratic
bracket. From \reff{kdv}, we are then led to the restriction:
\begin{equation}
L=L_++D\bar D\partial^{-1}H\partial^{-1}D\bar D
\label{a4}\end{equation}
With $L$ of the form \reff{a4} and $Y$ arbitrary, one finds
\begin{equation}
(\xi_{l_Y}^b)_-=D\bar D\partial^{-1}((LY)_+.H-(YL)_+^t.H+\,\mbox{\rm res}[L,Y]H)
\partial^{-1}D\bar D,
\end{equation}
which directly shows that condition \reff{a4} defines a Poisson submanifold for the
Poisson bracket
\reff{pb2m}. It turns out that \reff{a4} also defines a Poisson submanifold for
the linear Poisson
bracket \reff{pb1}. To show this we rewrite the linear bracket as
\begin{equation}\{ l_{X},l_{Y} \}_{(1)} (L) =\mbox{\rm Tr} X\eta_{l_Y},\,\,\,
\eta_{l_Y}=[L,Y]_+-[L,Y_+]+{\lambda}(\Phi_Y)+{\bar\lambda}(\bar\Phi_Y).
\end{equation}
With $L$ of the form \reff{a4} and $Y$ arbitrary, one finds
\begin{equation}
(\eta_{l_Y})_-=D\bar D\partial^{-1}((Y_+-Y_+^t).H+\,\mbox{\rm res}[L,Y])\partial^{-1}D\bar D.
\end{equation}
Thus the reduced hierarchies defined by condition \reff{a4} are
bi-hamiltonian. The lowest order cases will be studied in the next section.
Notice that the transformation \reff{simil} maps the systems satisfying the condition \reff{nls} with Grassmann even fields $\varphi$ and $\bar\varphi$
into systems satisfying condition \reff{a4} with
\begin{equation}
H= \varphi\bar\varphi+\varphi^{-1}L_+.\varphi.
\end{equation}
Analogously, the transformation \reff{chitra} maps the systems satisfying the condition \reff{nls} with Grassmann odd fields $\varphi$ and $\bar\varphi$
into systems satisfying condition \reff{a4} with
\begin{equation}
H= (-1)^n\left(\bar\varphi\varphi+(D\bar\varphi)^{-1}D(L_+^t.\bar\varphi)\right).
\end{equation}
Such transformations may be found in \cite{kriso,bokris}.
Finally we may consider the image of the Poisson subspace defined by
\reff{nls} under the map $p$. One finds the condition
\begin{equation}
L=L_++D\bar D\partial^{-1}(H+\bar\varphi\partial^{-1}\varphi)\partial^{-1}D\bar D.
\label{n4}\end{equation}
The lowest order case is when $L_{+}=D\bar D$. The hamiltonian structure \reff{pb2m}
reduces in
this case to the classical version of the ``small'' $N=4$ superconformal algebra.
The equation ${d\over dt}L=[(L^{3})_{+}, L]$ becomes, after suitable
redefinitions, the $N=4$ supersymmetric extension of the KdV equation
derived in \cite{delivan} and written in {$N=2$} superspace in
\cite{dik}.
One can again consider an enlarged phase space whose coordinates are
the fields in $L$ and $\varphi$, $\bar\varphi$. The second quadratic bracket
in this phase space is obtained by applying the map
$p$ to the first quadratic bracket \reff{pb2p}. $p$ acts as the identity on $\varphi$ and
$\bar\varphi$. As a consequence the Poisson brackets \reff{bose} and \reff{fermi} keep the same form, whereas \reff{lfi} should be modified to
\begin{eqnarray}
&\{ l_t,l_{Y} \}_{(2)}^b (L,\varphi,\bar\varphi)=\int d^3{\underline x} (\,\mbox{\rm res}\left(\tau((YL)_+)\lambda(\varphi)\right)
+\Phi_Y\varphi)t, &\\
&\{l_{\bar t},l_{Y} \}_{(2)}^b (L,\varphi,\bar\varphi)=\int d^3
{\underline x}\,\bar
t(-\,\mbox{\rm res}\left(\bar\lambda(\bar\varphi)\partial^{-1}\tau((LY)_+)\partial\right)
+\bar\Phi_Y\bar\varphi).&
\nonumber\end{eqnarray}
\setcounter{equation}{0}
\section{Examples and comparison with other works \label{examples}}
This section is devoted to the presentation of the simplest integrable equations obtained using our formalism.
Considering first the condition \reff{kdv}, the simplest example is the Lax operator $L = D (\partial + W) \bar{D}$. Then the
evolution equation
\begin{equation}
{d\over dt}L=[L^{3\over 2}_+,L],
\end{equation}
leads to the equation
\begin{equation}
8\partial_{t} W = 2W_{xxx}+6\left( (DW)(\overline{D}W) \right)_{x}-
\left( W^3 \right)_{x},
\end{equation}
which coincides, after the redefinition $W=2i\Phi$, with the $a=-2$ {$N=2$}
extension of the KdV equation in the classification of
Mathieu \cite{Mathieu1,math2}. The Lax operator given in \cite{Mathieu1} may be
obtained from $L$ in the following way. Let us consider the operator
\begin{equation}
L_{-2} = L+L^t=\partial^2+W[D,\bar D]+(DW)\bar D-(\bar D W)D.
\label{eqam2}\end{equation}
$L$ is in $\cal C$ and $L^t$ is in $\bar{\cal C}$. If we remember that the product of an element in $\cal C$ and an element in $\bar{\cal C}$ always vanishes, we immediately get that a square root of $L_{-2}$ with highest derivative term equal to $\partial$ is $(L_{-2})^{1\over 2}=L^{1\over 2}-(L^{1\over 2})^t$. From this we deduce the relation $(L_{-2})^{3\over 2}=L^{3\over 2}
-(L^{3\over 2})^t$. As a consequence $L_{-2}$ satisfies the evolution equation
\begin{equation}{d\over dt}L_{-2}=[L^{3\over 2}_+,L]+([L^{3\over 2}_+,L])^t
=[(L_{-2})^{3\over 2},L_{-2}],\end{equation}
which is thus an equivalent Lax representation for equation \reff{eqam2}.
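For completeness, a quick check of the square-root identity used above: the cross terms in the square vanish, and, since $L^{1\over 2}$ is even, $\left((L^{1\over 2})^t\right)^2=\left((L^{1\over 2})^2\right)^t=L^t$, so that
\begin{equation}
\left(L^{1\over 2}-(L^{1\over 2})^t\right)^2=L+L^t=L_{-2}.
\end{equation}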
As the next example, we consider the Lax operator
\begin{equation}
L=D(\partial^2+V\partial+W)\bar D
\end{equation}
Then the evolution equation ${d\over dt}L=[L^{2/3}_{+},L]$ should coincide, after suitable redefinitions, with one of the three {$N=2$} supersymmetric extensions of the Boussinesq equation derived in \cite{Ivanov1}. Indeed
one can check that the Lax operator they give for the $\alpha = -1/2$
equation may be written as
$L^{(1)}=L+\bar D\partial^2 D$. Then one easily obtains
$(L^{(1)})^{2\over 3}=L^{2\over 3}+\bar D\partial D$, and the evolution equation for $L^{(1)}$ is easily deduced from that of $L$
\begin{equation}
{d\over dt}L^{(1)}={d\over dt}L=[(L^{(1)})^{2\over 3},L^{(1)}].
\end{equation}
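As a consistency check of the relation $(L^{(1)})^{2\over 3}=L^{2\over 3}+\bar D\partial D$ (using $D^2=\bar D^2=0$, $\{D,\bar D\}=\partial$ and the vanishing of the mixed products): from $\bar D\partial^{k}D\,\bar D\,\partial^{l}D=\bar D\partial^{k}(\partial-\bar D D)\partial^{l}D=\bar D\partial^{k+l+1}D$ one finds
\begin{equation}
\left(L^{2\over 3}+\bar D\partial D\right)^3=L^2+(\bar D\partial D)^3
=L^2+\bar D\partial^{5}D=L^2+(\bar D\partial^{2}D)^2=(L^{(1)})^2.
\end{equation}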
Turning now to condition \reff{a4}, the lowest order case corresponds to
the Lax operator
$L=D\bar D+D\bar D\partial^{-1}W\partial^{-1}D\bar D$. Then the
equation ${d\over dt}L=[(L^{3})_{+}, L]$ becomes, after suitable
redefinitions, the {$N=2$} supersymmetric extension of the KdV equation
with parameter $a=4$,
\begin{equation}
\partial_{t}W = W_{xxx} + \frac{3}{2}\left( [ D ,\overline{D} ] W^2 \right)_{x} - 3\left( (DW)(\overline{D}W) \right)_{x} + (W^3)_{x}
\end{equation}
Notice that all integer powers of $L$ define
conserved charges in this case (an alternative Lax operator with the
same property was derived in \cite{krisoto}).
The last example that we shall study is the Lax operator
\begin{equation}
L = D \left( \partial + V \right) \overline{D} + D\overline{D}\partial^{-1}W\partial^{-1}D\overline{D}.
\end{equation}
Then the equation
\begin{equation}
\partial_{2} L = [L_{+},L]
\end{equation}
explicitly reads
\begin{eqnarray}
\partial_{2} V &=& 2 W_{x}\\
\partial_{2} W &=& [ D, \overline{D} ]W_{x} +VW_{x} +(DV)( \overline{D}W)+( \overline{D}V)(DW).
\end{eqnarray}
This equation is identical, up to a rescaling of time, to the {$N=2$} supersymmetric extension
of the Boussinesq equation with parameter $\alpha = -2$
derived in \cite{Ivanov1}.
\setcounter{equation}{0}
\section{From $N=2$ to $N=1$ superspace \label{n1susy}}
$N=2$ extensions of the KP and KdV hierarchies
have been studied in several articles
\cite{inami,ghosh,dasb1,dasp} using an $N=1$ superspace formalism. In this section we wish to relate the KP
hierarchies that we described in section \ref{main} to those given in
the literature. The first step will be to relate our $N=2$ algebra ${\cal C}$
of pseudo-differential operators to the $N=1$ algebra of pseudo-differential operators.
An operator $L=D{\cal L}\bar D$ in ${\cal C}$ should be considered as acting on a chiral
object $\Psi$, $D\Psi=0$, and this action reads
\begin{equation}
L .\Psi=D{\cal L}\bar D .\Psi={\cal L}\partial .\Psi+(D .{\cal L})\bar D .\Psi.\label{R1}
\end{equation}
We shall use the following combinations of the chiral derivatives
\begin{equation}
D_1=D+\bar D,\,\, D_2=-D+\bar D,\,\, D_1^2=-D_2^2=\partial,\,\,
\{ D_1,D_2\}=0.\end{equation}
Then the action of $L$ on $\Psi$ is
\begin{equation}
L .\Psi=({\cal L}\partial+(D .{\cal L})D_1) .\Psi.
\end{equation}
We then choose to associate to the $N=2$ pseudo-differential operator $L$
the $N=1$ pseudo-differential operator $\underline L$ given by
\begin{equation}
{\underline L}={\cal L}\vert_{\theta_2=0}\partial+
(D .{\cal L})\vert_{\theta_2=0}D_1.
\end{equation}
It is easily checked that this correspondence respects the product,
$\underline{LL'}=\underline{L}\,\,\underline{L'}$. It also has the property
\begin{equation}
\underline{L_+}={\underline L}_{>0}.
\end{equation}
That is to say that the image of an $N=2$ differential operator is a strictly differential
$N=1$ operator, without the non-derivative term. Notice also the useful relations
\begin{eqnarray}
& \,\mbox{\rm res} (\underline{L})=(D. {\,\mbox{\rm res}}(L))\vert_{\theta_2=0},\,\, &\\
&\mbox{\rm Tr}(L)=\mbox{\rm Tr}(\underline{L})\equiv
\int d^2{\underline x} {\,\mbox{\rm res}}(\underline{L}),\,\,\,\int
d^2{\underline x}\equiv\int dxd\theta_1&
\end{eqnarray}
where the residue of the operator $\underline L$ is the coefficient of
$D_1^{-1}\equiv D_1\partial^{-1}$. From now on, all expressions will be written in $N=1$
superspace, and we drop the index of $D_1$ and $\theta_1$. The KP hierarchy described in
section \ref{main} may be described in $N=1$ superspace as follows. We consider an operator
$\underline L$ of the form
\begin{equation}
\underline{L}=D^{2n}+\sum_{p=1}^\infty w_pD^{2n-p-1}
\end{equation}
and consider evolution equations
\begin{equation}
{\partial\over\partial t_k}\underline{L}=[\underline{L}^{k\over n}_{>0},\underline{L}]
\label{R9}\end{equation}
This is nothing but the non-standard supersymmetric KP hierarchy described in \cite{ghosh,dasb1}. The evolution equations (\ref{R9}) admit
the conserved quantities $H_p=\mbox{\rm Tr}(\underline{L}^{p\over n})$, and they are bi-hamiltonian.
The first Poisson bracket is easily deduced from its $N=2$
counterpart (\ref{pb1}). With
$l_{\underline{X}}=\mbox{\rm Tr}(\underline{L}\,\,\underline{X})$, we have
\begin{equation}
\{ l_{\underline{X}},l_{\underline{Y}}
\}_1=\mbox{\rm Tr}\,\underline{L}([\underline{X}_{>0},\underline{Y}_{>0}]-
[\underline{X}_{\leq 0},\underline{Y}_{\leq 0}])
\end{equation}
As in the $N=2$ formalism, this is a standard bracket associated with a non-antisymmetric
$r$ matrix. As a consequence, the two quadratic brackets
deduced from (\ref{pb2p}) and \reff{pb2m} are
quite complicated. They involve the quantity $\psi_{\underline{X}}$ defined up to a constant by
$D\psi_{\underline{X}}= {\,\mbox{\rm res}}[{\underline{L}}\, ,{\underline{X}}]$. The first one is
\begin{eqnarray}
&\{ l_{\underline{X}},l_{\underline{Y}}
\}_2^a(\underline{L})=\mbox{\rm Tr}(\underline{L}\,\,\underline{X}(\underline{L}\,\,\underline{Y})_+
-\underline{X}\,\,\underline{L}(\underline{Y}\,\,\underline{L})_+)
+\int d^2{\underline x} (-\psi_{\underline{Y}}\, {\,\mbox{\rm res}}[{\underline{L}},{\underline{X}}]\nonumber&\\&
+{\,\mbox{\rm res}}[{\underline{L}}\, ,{\underline{Y}}]\,
{\,\mbox{\rm res}}(\underline{X}\,\,\underline{L}\,D^{-1})
- {\,\mbox{\rm res}}[{\underline{L}}\, ,{\underline{X}}]\,
{\,\mbox{\rm res}}(\underline{Y}\,\,\underline{L}\,D^{-1})
).&
\end{eqnarray}
The Poisson bracket \reff{pb2m} becomes
\begin{eqnarray}
&\{ l_{\underline{X}},l_{\underline{Y}}
\}_2^b(\underline{L})=\mbox{\rm Tr}(\underline{L}\,\,\underline{X}(\underline{L}\,\,\underline{Y})_+
-\underline{X}\,\,\underline{L}(\underline{Y}\,\,\underline{L})_+)
+\int d^2{\underline x} (\psi_{\underline{Y}}\, {\,\mbox{\rm res}}[{\underline{L}},{\underline{X}}] \nonumber&\\&
+{\,\mbox{\rm res}}[{\underline{L}}\, ,{\underline{Y}}]\,
{\,\mbox{\rm res}}(\underline{L}\,\,\underline{X}\,D^{-1})
- {\,\mbox{\rm res}}[{\underline{L}}\, ,{\underline{X}}]\,
{\,\mbox{\rm res}}(\underline{L}\,\,\underline{Y}\,D^{-1})
),&
\end{eqnarray}
and already appeared in \cite{dasp}.
It is not a difficult task to obtain the $N=1$ restrictions which correspond to the {$N=2$} conditions (\ref{kdv},\ref{nls},\ref{a4},\ref{n4}). Some of the Lax
operators obtained in this way are already known, in particular those satisfying \reff{kdv} from \cite{inami} and the lowest order operator coming from
\reff{nls} with odd $\varphi$ and $\bar\varphi$, which is the super-NLS Lax operator obtained in \cite{dasb1}.
\setcounter{equation}{0}
\section{Conclusion}
An easy generalization of the hierarchies presented in this article would be to consider multi-component KP hierarchies, that is to say, to replace the fields
$\varphi$ and $\bar\varphi$ in \reff{nls} and \reff{n4} by a set of $n+m$ fields
$\varphi_i$ and $\bar\varphi_i$, $n$ of them being Grassmann even and the other
$m$ being Grassmann odd.
For the lowest order case of equation \reff{nls}, such a generalization has been considered in \cite{bokris}. The Lax representation that we propose for such hierarchies has the advantage that one does not need to modify the definition of the residue. For the next to lowest order case of equation \reff{nls}, and
the lowest order case of equation \reff{n4}, it should be possible to obtain
in this way hierarchies based on $\cal W$-superalgebras with an arbitrary number of supersymmetry charges.
Little is known about the matrix Lax formulation of the hierarchies presented here. In the case of operators satisfying
condition \reff{kdv},
such a matrix Lax formulation was constructed in $N=1$ superspace
by Inami and Kanno \cite{inami,ina1}. It involves the loop superalgebra based on $sl(n\vert n)$. What we know about the matrix Lax formulation in {$N=2$} superspace for hierarchies based on Lax operators satisfying conditions \reff{kdv} or \reff{nls} will be reported elsewhere. Notice that we obtained the form \reff{kdv} of the scalar Lax operators from a matrix Lax representation, and only later became aware of reference \cite{popo3} where these operators also appear.
\section{Introduction}
\vspace*{-0.5pt}
\noindent
The production of heavy quarkonium states in high-energy collisions
provides an important tool to study the interplay between perturbative
and non-perturbative QCD dynamics. While the creation of heavy quarks
in a hard scattering process can be calculated in perturbative
QCD\cite{CSS86}, the subsequent transition to a physical bound state
introduces non-perturbative aspects. A rigorous framework for treating
quarkonium production and decays has recently been
developed.\cite{BBL95} The factorization approach is based on the use
of non-relativistic QCD\cite{CL86} (NRQCD) to separate the
short-distance parts from the long-distance matrix elements and
explicitly takes into account the complete structure of the quarkonium
Fock space. This formalism implies that so-called color-octet
processes, in which the heavy-quark antiquark pair is produced at
short distances in a color-octet state and subsequently evolves
non-perturbatively into a physical quarkonium, should contribute to
the cross section. It has recently been argued\cite{TEV1,CL96} that
quarkonium production in hadronic collisions at the Tevatron can be
accounted for by including color-octet processes and by adjusting the
unknown long-distance color-octet matrix elements to fit the data.
In order to establish the phenomenological significance of the
color-octet mechanism it is necessary to identify color-octet
contributions in different production processes. Color-octet
production of $J/\psi$ particles has also been studied in the context
of $e^+e^-$ annihilation\cite{BC95}, $Z$ decays\cite{CKY95}, hadronic
collisions at fixed-target experiments\cite{fthad,BR} and $B$
decays\cite{KLS2}. Here, I review the impact of color-octet
contributions and higher-order QCD corrections on the cross section
for $J/\psi$ photoproduction. The production of $J/\psi$ particles in
photon-proton collisions proceeds predominantly through photon-gluon
fusion. Elastic/diffractive mechanisms\cite{ELASTIC} can be eliminated
by measuring the $J/\psi$ energy spectrum, described by the scaling
variable $z = {p\cdot k_\psi}\, / \, {p\cdot k_\gamma}$, with $p,
k_{\psi,\gamma}$ being the momenta of the proton and $J/\psi$,
$\gamma$ particles, respectively. In the proton rest frame, $z$ is the
ratio of the $J/\psi$ to $\gamma$ energy, $z=E_{\psi}/E_\gamma$. For
elastic/diffractive events $z$ is close to one; a clean sample of
inelastic events can be obtained in the range $z\;\rlap{\lower 3.5 pt \hbox{$\mathchar \sim$}} \raise 1pt \hbox {$<$}\;0.9$.
According to the NRQCD factorization formalism, the inclusive cross
section for $J/\psi$ photoproduction can be expressed as a sum of
terms, each of which factors into a short-distance coefficient and a
long-distance matrix element:
\begin{equation}\label{eq_fac}
\mbox{d}\sigma(\gamma+g \to J/\psi +X) =
\sum_n \mbox{d}\hat{\sigma}(\gamma+g \to c\bar{c}\, [n] + X)\,
\langle {\cal{O}}^{J/\psi}\,[n] \rangle
\end{equation}
Here, $\mbox{d}\hat{\sigma}$ denotes the short-distance cross section
for producing an on-shell $c\bar{c}$-pair in a color, spin and
angular-momentum state labelled by $n$. The NRQCD matrix elements
$\langle {\cal{O}}^{J/\psi} \, [n] \rangle \equiv \langle 0 |
{\cal{O}}^{J/\psi} \, [n] | 0 \rangle$ give the probability for a
$c\bar{c}$-pair in the state $n$ to form the $J/\psi$ particle. The
relative importance of the various terms in (\ref{eq_fac}) can be
estimated by using NRQCD velocity scaling rules.\cite{LMNMH92} For
$v\to 0$ ($v$ being the average velocity of the charm quark in the
$J/\psi$ rest frame) each of the NRQCD matrix elements scales with a
definite power of $v$ and the general expression (\ref{eq_fac}) can be
organized into an expansion in powers of $v^2$.
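To give a rough sense of the hierarchy (using the canonical charmonium estimate $v^2\simeq 0.3$): the color-octet matrix elements relevant for $S$-wave quarkonia obey
\begin{equation}
\frac{\langle {\cal{O}}^{J/\psi}\,[\mbox{$\underline{8}$},{}^{2S+1}L_{J}] \rangle}
{\langle {\cal{O}}^{J/\psi}\,[\mbox{$\underline{1}$},{}^3S_{1}] \rangle}
= {\cal{O}}(v^4) \simeq 10^{-1},
\end{equation}
so that color-octet channels can become relevant only where their short-distance coefficients are correspondingly enhanced, as discussed in the following sections.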
\vspace*{1pt}\baselineskip=13pt
\section{Color-singlet contribution}
\vspace*{-0.5pt}
\noindent
At leading order in $v^2$, eq.(\ref{eq_fac}) reduces to the standard
factorization formula of the color-singlet model\cite{CS}. The
short-distance cross section is given by the subprocess
\begin{equation}\label{eq_cs}
\gamma + g \to c\bar{c}\, [\mbox{$\underline{1}$},{}^3S_{1}] + g
\end{equation}
shown in Fig.\ref{fig_1}a, with $c\bar{c}$ in a color-singlet state
(denoted by \mbox{$\underline{1}$}), zero relative velocity, and
spin/angular-momentum quantum numbers $^{2S+1}L_J = {}^3S_1$. Up to
corrections of ${\cal{O}}(v^4)$, the color-singlet NRQCD matrix
element is related to the $J/\psi$ wave function at the origin through
$\langle {\cal{O}}^{J/\psi}\,[\mbox{$\underline{1}$},{}^3S_{1}] \rangle \approx
(9/2\pi)|\varphi(0)|^2$ and can be extracted from the measurement of
the $J/\psi$ leptonic decay width or calculated within potential
models. Relativistic corrections due to the motion of the charm quarks
in the $J/\psi$ bound state enhance the large-$z$ region, but can be
neglected in the inelastic domain.\cite{REL} The calculation of the
higher-order perturbative QCD corrections to the short-distance cross
section (\ref{eq_cs}) has been performed recently.\cite{KZSZ94,MK95}
Generic diagrams which build up the cross section in next-to-leading
order (NLO) are depicted in Fig.\ref{fig_1}. Besides the usual
self-energy diagrams and vertex corrections for photons and gluons
(b), one encounters box diagrams (c), the splitting of the final-state
gluon into gluon and light quark-antiquark pairs, as well as diagrams
renormalizing the initial state parton densities (e).
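Schematically (the standard anatomy of such a calculation): the ultraviolet divergences of the virtual corrections (b,c) are removed by renormalization of the coupling and the quark mass, the soft singularities cancel between virtual and real-emission contributions (d), and the remaining initial-state collinear singularities are absorbed into the parton distributions by mass factorization (e), leaving a finite correction of relative order $\mbox{$\alpha_{\mbox{\scriptsize s}}$}$.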
Inclusion of the NLO corrections reduces the scale dependence of the
theoretical prediction and increases the cross section significantly,
depending in detail on the $\gamma p$ energy and the choice of
parameters.\cite{MK95} Details of the calculation and a comprehensive
analysis of total cross sections and differential distributions for
the energy range of the fixed-target experiments and for $J/\psi$
photoproduction at \mbox{HERA} can be found elsewhere.\cite{MK95}
\begin{figure}[t]
\vspace*{2.75cm}
\begin{picture}(7,7)
\special{psfile=fig1a.ps voffset=100 hoffset=30 hscale=35
vscale=37 angle=-90 }
\end{picture}
\vspace*{5.25cm}
\begin{picture}(7,7)
\special{psfile=fig1b.ps voffset=100 hoffset=30 hscale=35
vscale=37 angle=-90 }
\end{picture}
\vspace*{0.45cm}
\fcaption{\label{fig_1} Generic diagrams for $J/\psi$
photoproduction via the color-singlet channel: (a) leading order
contribution; (b) vertex corrections; (c) box diagrams; (d)
splitting of the final state gluon into gluon or light
quark-antiquark pairs; (e) diagrams renormalizing the initial-state
parton densities.}
\vspace*{-5mm}
\end{figure}
\newpage
\vspace*{1pt}\baselineskip=13pt
\section{Color-octet contributions}
\vspace*{-0.5pt}
\noindent
Color-octet configurations are produced at leading order in
$\mbox{$\alpha_{\mbox{\scriptsize s}}$}$ through the $2\to 1$ parton
processes\cite{CK96,AFM,KLS}
\begin{eqnarray}\label{eq_oc0}
\gamma + g &\! \to \!& c\bar{c}\, [\mbox{$\underline{8}$},{}^1S_{0}]
\nonumber \\
\gamma + g &\! \to \!& c\bar{c}\, [\mbox{$\underline{8}$},{}^3P_{0,2}]
\end{eqnarray}
shown in Fig.\ref{fig_2}a. Due to kinematical constraints, the leading
color-octet terms will only contribute to the upper endpoint of the
$J/\psi$ energy spectrum, $z\approx 1$ and $p_\perp\approx 0$,
$p_\perp$ being the $J/\psi$ transverse momentum. Color-octet
configurations which contribute to inelastic $J/\psi$ photoproduction
$z \le 0.9$ and $p_\perp \ge 1$~GeV are produced through the
subprocesses\cite{CK96,KLS}
\begin{eqnarray}\label{eq_oc2}
\gamma + g &\! \to \!& c\bar{c}\, [\mbox{$\underline{8}$},{}^1S_{0}]
+ g \nonumber \\
\gamma + g &\! \to \!& c\bar{c}\, [\mbox{$\underline{8}$},{}^3S_{1}]
+ g \nonumber \\
\gamma + g &\! \to \!& c\bar{c}\, [\mbox{$\underline{8}$},{}^3P_{0,1,2}] + g
\end{eqnarray}
as shown in Fig.\ref{fig_2}b. Light-quark initiated contributions are
strongly suppressed at \mbox{HERA} energies and can safely be
neglected.
\begin{figure}[t]
\vspace*{2.5cm}
\begin{picture}(7,7)
\special{psfile=fig2.ps voffset=100 hoffset=30 hscale=32
vscale=35 angle=-90 }
\end{picture}
\vspace*{3cm}
\fcaption{\label{fig_2}
Generic diagrams for $J/\psi$ photoproduction via color-octet
channels: (a) leading color-octet contributions; (b) color-octet
contributions to inelastic $J\!/\!\psi$ production.}
\end{figure}
The transition of the color-octet $c\bar{c} \,
[\mbox{$\underline{8}$},{}^{2S+1}L_{J}]$ pair into a physical $J/\psi$
state through the emission of non-perturbative gluons is described by
the long-distance matrix elements $\langle {\cal{O}}^{J/\psi} \,
[\mbox{$\underline{8}$},{}^{2S+1}L_{J}] \rangle$. They have to be
obtained from lattice si\-mu\-la\-ti\-ons\cite{BSK96} or measured
directly in some production process. According to the velocity
scaling rules of NRQCD, the color-octet matrix elements associated
with $S$-wave quarkonia should be suppressed by a factor of $v^4$
compared to the leading color-singlet matrix element.\footnote{In the
case of $P$-wave quarkonia, color-singlet and color-octet matrix
elements contribute at the same order in $v$.\cite{BBL92}
Photoproduction of $P$-wave states is, however, suppressed compared
with $J/\psi$ states, by two orders of magnitude at
\mbox{HERA}.\cite{MA,CKHERA}} Color-octet contributions to $J/\psi$
photoproduction can thus become important only if the corresponding
short-distance cross sections are enhanced as compared to the
color-singlet process. Color-octet matrix elements have been fitted to
prompt $J/\psi$ data from \mbox{CDF}\cite{CDF} and found to be
\mbox{${\cal O}(10^{-2}$~GeV$^3)$}, consistent with the NRQCD velocity
scaling rules.\cite{TEV1,CL96} Meanwhile, fit values for color-octet
matrix elements have also been obtained from analyses of quarkonium
production in hadronic collisions at fixed-target
experiments\cite{BR}, $J/\psi$ photoproduction at the elastic
peak\cite{AFM} and $J/\psi$ production in $B$ decays\cite{KLS2}.
The results seem to indicate that the values for the color-octet
matrix elements extracted from the Tevatron data at moderate $p_\perp$
are too large; they should however be considered with some caution
since significant higher-twist corrections are expected to contribute
in the small-$p_\perp$ region probed at fixed target experiments and
in elastic $J/\psi$ photoproduction. Moreover, the comparison between
the different analyses is rendered difficult by the fact that the
color-octet matrix elements can in general only be extracted in
certain linear combinations which depend on the reaction under
consideration, see Sec.4.
\vspace*{1pt}\baselineskip=13pt
\section{$J/\psi$ photoproduction at HERA}
\vspace*{-0.5pt}
\noindent
The production of $J/\psi$ particles in high energy $ep$ collisions at
\mbox{HERA} is dominated by photoproduction events where the electron
is scattered at a small angle, producing photons of almost zero
virtuality. The measurements at \mbox{HERA} provide information on the
dynamics of $J/\psi$ photoproduction in a wide kinematical region,
$30~\mbox{GeV} \; \rlap{\lower 3.5 pt \hbox{$\mathchar \sim$}} \raise 1pt \hbox {$<$} \; \sqrt{s\hphantom{tk}}\!\!\!\!\! _{\gamma
p}\;\rlap{\lower 3.5 pt \hbox{$\mathchar \sim$}} \raise 1pt \hbox {$<$}\; 200~\mbox{GeV}$, corresponding to initial photon
energies in a fixed-target experiment of $450~\mbox{GeV} \; \rlap{\lower 3.5 pt \hbox{$\mathchar \sim$}} \raise 1pt \hbox {$<$} \;
E_\gamma \; \rlap{\lower 3.5 pt \hbox{$\mathchar \sim$}} \raise 1pt \hbox {$<$} \; 20,000~\mbox{GeV}$. Due to kinematical
constraints, the leading color-octet processes (\ref{eq_oc0})
contribute only to the upper endpoint of the $J/\psi$ energy spectrum,
\mbox{$z\approx 1$} and $p_\perp\approx 0$. The color-singlet and
color-octet predictions (\ref{eq_oc0}) have been compared to
experimental data\cite{H1} obtained in the region $z\ge 0.95$ and
$p_\perp \le 1$~GeV.\cite{CK96} Since the fac\-to\-ri\-za\-tion
approach cannot be used to describe the exclusive elastic channel
$\gamma + p \to J/\psi + p$, elastic contributions had been subtracted
from the data sample. It was shown that the large cross section
predicted by using color-octet matrix elements as extracted from the
Tevatron fits appears to be in conflict with the experimental data.
It is, however, difficult to put strong upper limits on the octet
terms from a measurement of the total cross section in the region
$z\approx 1$ and $p_\perp\approx 0$ since the overall normalization of
the theoretical prediction depends strongly on the choice for the
charm quark mass and the QCD coupling. Moreover, diffractive
production mechanisms which cannot be calculated within perturbative
QCD might contaminate the region $z\approx 1$ and make it difficult to
extract precise information on the color-octet contributions. Finally,
it has been argued that sizable higher-twist effects are expected to
contribute in the region $p_\perp\; \rlap{\lower 3.5 pt \hbox{$\mathchar \sim$}} \raise 1pt \hbox {$<$}\; 1$~GeV, which cause the
breakdown of the factorization formula (\ref{eq_fac}).\cite{BFY}
It is therefore more appropriate to study $J/\psi$ photoproduction in
the inelastic region $z \le 0.9$ and $p_\perp \ge 1$~GeV where no
diffractive channels contribute and where the general factorization
formula (\ref{eq_fac}) and perturbative QCD calculations should be
applicable. Adopting the NRQCD matrix elements as extracted from the
fits to prompt $J/\psi$ data at the Tevatron one finds that
color-octet and color-singlet contributions to the inelastic cross
section are predicted to be of comparable size.\cite{CK96,KLS} The
short-distance factors of the $[\mbox{$\underline{8}$},{}^{1}S_{0}]$
and $[\mbox{$\underline{8}$},{}^{3}P_{0,2}]$ channels are strongly
enhanced as compared to the color-singlet term and partly compensate
the ${\cal{O}}(10^{-2})$ suppression of the corresponding
non-perturbative matrix elements. In contrast, the contributions from
the $[\mbox{$\underline{8}$},{}^{3}S_{1}]$ and
$[\mbox{$\underline{8}$},{}^{3}P_{1}]$ states are suppressed by more
than one order of magnitude. Since color-octet and color-singlet
processes contribute at the same order in
$\mbox{$\alpha_{\mbox{\scriptsize s}}$}$, the large size of the
$[\mbox{$\underline{8}$},{}^{1}S_{0}]$ and
$[\mbox{$\underline{8}$},{}^{3}P_{0,2}]$ cross sections could not have
been anticipated from naive power counting and demonstrates the
crucial dynamical role played by the bound state quantum
numbers.\cite{BR83} As for the total inelastic cross section, the
linear combination of the color-octet matrix elements $\langle
{\cal{O}}^{J/\psi} \, [\mbox{$\underline{8}$},{}^{1}S_{0}] \rangle$
and $\langle {\cal{O}}^{J/\psi} \,
[\mbox{$\underline{8}$},{}^{3}P_{0}] \rangle$ that is probed at
\mbox{HERA} is almost identical to that extracted from the Tevatron
fits at moderate $p_\perp$, independent of
$\sqrt{s\hphantom{tk}}\!\!\!\!\! _{\gamma p}$.\footnote{At leading
order in $v^2$, the $P$-wave matrix elements are related by
heavy-quark spin symmetry, $\langle {\cal{O}}^{J/\psi}
\,[\mbox{$\underline{8}$},{}^{3}P_{J}] \rangle \approx \mbox{$(2J+1)$} \, \langle
{\cal{O}}^{J/\psi} \, [\mbox{$\underline{8}$},{}^{3}P_{0}] \rangle$.} The Tevatron
results can thus be used to make predictions for color-octet
contributions to the total inelastic $J/\psi$ photoproduction cross
section without further ambiguities. However, taking into account the
uncertainty due to the value of the charm quark mass and the strong
coupling, the significance of color-octet contributions cannot be
deduced from the analysis of the absolute $J/\psi$ production rates.
In fact, the experimental data can be accounted for by the
color-singlet channel alone, once higher-order QCD corrections are
included and the theoretical uncertainties due to variation of the
charm quark mass and the strong coupling are taken into account, as
demonstrated at the end of this section. The same statement holds true
for the transverse momentum spectrum, since, at small and moderate
$p_\perp$, both color-singlet and color-octet contributions are almost
identical in shape. At large transverse momenta, $p_\perp \;
\rlap{\lower 3.5 pt \hbox{$\mathchar \sim$}} \raise 1pt \hbox {$>$} \;
10$~GeV, charm quark fragmentation dominates over the photon-gluon
fusion process.\cite{SA94,GRS95} In contrast to what was found at the
Tevatron\cite{PT_TEV}, gluon fragmentation into color-octet states is
suppressed over the whole range of $p_\perp$ in the inelastic region
$z\;\rlap{\lower 3.5 pt \hbox{$\mathchar \sim$}} \raise 1pt \hbox {$<$}\; 0.9$.\cite{GRS95}
A distinctive signal for color-octet processes should, however, be
visible in the $J/\psi$ energy distribution
$\mbox{d}\sigma/\mbox{d}{}z$.\cite{CK96} The linear combination of
color-octet matrix elements that is probed by the $J/\psi$ energy
distribution does, however, depend on the value of $z$. Therefore, one
cannot directly use the Tevatron fits but has to allow the individual
color-octet matrix elements to vary in certain ranges, constrained by
the value extracted for the linear combination. It has in fact been
argued that the color-octet matrix element $\langle {\cal{O}}^{J/\psi}
\,[\mbox{$\underline{8}$},{}^{3}P_{0}] \rangle$ could be negative due to the
subtraction of power ultraviolet divergences.\cite{EBpriv} In
contrast, the matrix element $\langle {\cal{O}}^{J/\psi}
\,[\mbox{$\underline{8}$},{}^{1}S_{0}] \rangle$ is free of power divergences and its
value is thus always positive. Accordingly, I have allowed $\langle
{\cal{O}}^{J/\psi} \,[\mbox{$\underline{8}$},{}^{3}P_{0}] \rangle / m_c^2$ to vary in
the range $[-0.01,0.01]$~GeV$^3$ and determined the value of the
matrix element $\langle {\cal{O}}^{J/\psi} \,[\mbox{$\underline{8}$},{}^{1}S_{0}]
\rangle$ from the linear combination extracted at the
Tevatron.\footnote{Note that, given $\langle {\cal{O}}^{J/\psi}
\,[\mbox{$\underline{8}$},{}^{1}S_{0}] \rangle \;\rlap{\lower 3.5 pt \hbox{$\mathchar \sim$}} \raise 1pt \hbox {$<$}\; 0.1$~GeV$^3$ as required
by the velocity scaling rules, a value $\langle {\cal{O}}^{J/\psi}
\,[\mbox{$\underline{8}$},{}^{3}P_{0}] \rangle / m_c^2 \;\rlap{\lower 3.5 pt \hbox{$\mathchar \sim$}} \raise 1pt \hbox {$<$}\; -0.01$~GeV$^3$
would be in contradiction with the Tevatron
fits.} The result is shown in Fig.\ref{fig_3}(a) where I have plotted
\begin{figure}[ht]
\vspace*{3cm}
\begin{picture}(7,7)
\special{psfile=fig3a.ps voffset=100 hoffset=35 hscale=35
vscale=35 angle=-90 }
\end{picture}
\vspace*{5.75cm}
\begin{picture}(7,7)
\special{psfile=fig3b.ps voffset=100 hoffset=35 hscale=35
vscale=35 angle=-90 }
\end{picture}
\vspace*{3.25cm}
\fcaption{\label{fig_3} Color-singlet and color-octet contributions to
the $J\!/\!\psi$ energy distribution $\mbox{d}\sigma/\mbox{d}{}z$ at the
photon-proton centre of mass energy $\sqrt{s\hphantom{tk}}\!\!\!\!\!
_{\gamma p}\,\, = 100$~GeV integrated in the range (a) $p_\perp \ge
1$~GeV and (b) $p_\perp \ge 5$~GeV compared to experimental
data\cite{H1,ZEUS}.}
\vspace*{-5mm}
\end{figure}
(leading-order) color-singlet and color-octet contributions at a
typical \mbox{HERA} energy of $\sqrt{s\hphantom{tk}} \!\!\!\!\!
_{\gamma p}\,\, = 100$~GeV in the restricted range $p_\perp \ge
1$~GeV, compared to recent experimental data from \mbox{H1}\cite{H1}
and preliminary data from ZEUS\cite{ZEUS}. The hatched error band
indicates how much the color-octet cross section is altered if
$\langle {\cal{O}}^{J/\psi} \,[\mbox{$\underline{8}$},{}^{3}P_{0}] \rangle / m_c^2$
varies in the range $[-0.01,0.01]$~GeV$^3$, where the lower bound
corresponds to $\langle {\cal{O}}^{J/\psi} \,[\mbox{$\underline{8}$},{}^{3}P_{0}]
\rangle / m_c^2 = -0.01$~GeV$^3$. Since the shape of the distribution
is almost insensitive to higher-order QCD corrections or to the
uncertainty induced by the choice for $m_c$ and
$\mbox{$\alpha_{\mbox{\scriptsize s}}$}$, the analysis of the $J/\psi$
energy spectrum $\mbox{d}\sigma/\mbox{d}{}z$ should provide a clean
test for the underlying production mechanism. From Fig.\ref{fig_3}
one can conclude that the shape predicted by the color-octet
contributions is not supported by the experimental data. The
discrepancy with the data can only be removed when reducing the
relative weight of the color-octet contributions by at least a factor
of five.\cite{CK96} Let me emphasize that the rise of the cross
section towards large $z$ predicted by the color-octet mechanism is
not sensitive to the small-$p_\perp$ region and thus not affected by
the collinear divergences which show up at the endpoint $z=1$ and
$p_\perp=0$. This is demonstrated in Fig.\ref{fig_3}(b) where I show
color-singlet and color-octet contributions to the $J/\psi$ energy
distribution for $p_\perp > 5$~GeV. It will be very interesting to
compare these predictions with data to be expected in the future at
\mbox{HERA}. Let me finally mention that the shape of the $J/\psi$
energy distribution could be influenced by the emission of soft gluons
from the intermediate color-octet state.\cite{BR} While this effect,
which cannot be predicted within the NRQCD factorization approach,
might be significant at the elastic peak, it is by no means clear if
and in which way it could affect the inelastic region $z \;\rlap{\lower 3.5 pt \hbox{$\mathchar \sim$}} \raise 1pt \hbox {$<$}\;
0.9$ and $p_\perp\;\rlap{\lower 3.5 pt \hbox{$\mathchar \sim$}} \raise 1pt \hbox {$>$}\;1$~GeV. In fact, if soft gluon emission
were important, it should also result in a feed-down of the leading
color-octet contributions (\ref{eq_oc0}) into the inelastic domain,
thereby increasing the discrepancy between the color-octet cross
section and the data in the large-$z$ region.
For the remainder of this section, I will demonstrate that the
experimental results on differential distributions and total cross
sections are well accounted for by the color-singlet channel alone
including higher-order QCD corrections. This can e.g.\ be inferred
from Fig.\ref{fig_4} where I compare the NLO color-singlet prediction
for the $J/\psi$ transverse momentum distribution\cite{MK95} with
recent results from \mbox{H1}\cite{H1}.
\begin{figure}[htbp]
\vspace*{3cm}
\begin{picture}(7,7)
\special{psfile=fig4.ps voffset=100 hoffset=35 hscale=35
vscale=35 angle=-90 }
\end{picture}
\vspace*{3.25cm}
\fcaption{\label{fig_4} LO and NLO color-singlet prediction for the
$J\!/\!\psi$ transverse momentum spectrum $\mbox{d}\sigma/\mbox{d}{}p_\perp^2$
at the photon-proton centre of mass energy
$\sqrt{s\hphantom{tk}}\!\!\!\!\! _{\gamma p}\,\, = 100$~GeV
integrated in the range $z \le 0.9$ compared to experimental
data\cite{H1}.}
\end{figure}
Note that the inclusion of higher-order QCD corrections is crucial to
describe the shape of the $p_\perp$ distribution. However, a detailed
analysis of the transverse momentum spectrum reveals that the
fixed-order perturbative QCD calculation is not under proper control
in the limit $p_\perp \to 0$, Fig.\ref{fig_4}. No reliable prediction
can be made in the small-$p_\perp$ domain without resummation of large
logarithmic corrections caused by multiple gluon emission. If the
region $p_\perp \le 1$~GeV is excluded from the analysis, the
next-to-leading order color-singlet prediction accounts for the energy
dependence of the cross section and for the overall normalization,
Fig.~\ref{fig_5}. The sensitivity of the prediction to the
small-$x$ behaviour of the gluon distribution is, however, not very
pronounced, since the average momentum fraction of the partons
$\langle x \rangle$ is shifted to larger values when excluding the
small-$p_\perp$ region.
\begin{figure}[htbp]
\vspace*{3cm}
\begin{picture}(7,7)
\special{psfile=fig5.ps voffset=100 hoffset=35 hscale=35
vscale=35 angle=-90 }
\end{picture}
\vspace*{3.5cm}
\fcaption{\label{fig_5} NLO color-singlet prediction for the total
inelastic $J\!/\!\psi$ photoproduction cross section as a function
of the photon-proton energy for different
parametrizations\cite{pdfs} of the parton distribution in the proton
compared to experimental data\cite{H1,ZEUS}.}
\end{figure}
\vspace*{1pt}\baselineskip=13pt
\section{Conclusion}
\vspace*{-0.5pt}
\noindent
I have discussed color-singlet and color-octet contributions to the
production of $J/\psi$ particles in photon-proton collisions,
including higher-order QCD corrections to the color-singlet channel.
A comparison with photoproduction data obtained at fixed-target
experiments\cite{MK95} and the $ep$ collider \mbox{HERA} reveals that
the $J/\psi$ energy spectrum and the slope of the transverse momentum
distribution are adequately accounted for by the next-to-leading order
color-singlet prediction in the inelastic region $p_\perp
\;\rlap{\lower 3.5 pt \hbox{$\mathchar \sim$}} \raise 1pt \hbox {$>$}\;1$~GeV and $z\;\rlap{\lower 3.5 pt \hbox{$\mathchar \sim$}} \raise 1pt \hbox {$<$}\;0.9$. Taking into account the
uncertainty due to variation of the charm quark mass and the strong
coupling, one can conclude that the normalization too appears to be
under semi-quantitative control. Higher-twist effects\cite{HT} must be
included to improve the quality of the theoretical analysis further.
Distinctive signatures for color-octet processes should be visible in
the shape of the $J/\psi$ energy distribution. However, these
predictions appear at variance with recent experimental data obtained
at \mbox{HERA} indicating that the values of the color-octet matrix
elements $\langle {\cal{O}}^{J\!/\!\psi} \, [\mbox{$\underline{8}$},{}^{1}S_{0}]
\rangle$ and $ \langle {\cal{O}}^{J\!/\!\psi} \, [\mbox{$\underline{8}$},{}^{3}P_{0}]
\rangle$ are considerably smaller than suggested by the fits to
Tevatron data at moderate $p_\perp$. Support is added to this result
by recent analyses on $J/\psi$ production in hadronic collisions at
fixed-target energies\cite{BR} and $B$ decays\cite{KLS2}. Clearly,
much more effort, both theoretical and experimental, is needed to
establish the phenomenological significance of color-octet
contributions to $J/\psi$ production and to prove the applicability of
the NRQCD factorization approach to charmonium production in hadronic
collisions at moderate transverse momentum.
\nonumsection{Acknowledgements}
\noindent
I wish to thank Martin Beneke, Eric Braaten, Matteo Cacciari, Sean Fleming
and Arthur Hebecker for useful discussions.
\nonumsection{References}
\noindent
\section{THE IMAGES OF NGC~7582}
NGC~7582 was observed in the
near infrared $J,H$ and $K$ bands and in the visible $B,V$ and $R$ bands
as part of a survey
of $\sim 250$ galaxies
currently being carried out at the Ohio State University.
The survey's goal is to produce a library of photometrically
calibrated images of late-type galaxies from $0.4$ to $2.2 \mu$m.
For notes on the observation and reduction techniques see
\cite{pog96}, or for individual examples \cite{qui94} and \cite{qui95}.
All the images were obtained
at the Cerro Tololo Inter-American Observatory.
The galaxy was observed in $BVR$
at the 0.9m telescope on 1994 October 27 using the Tek\#2
$1024\times1024$ pixel CCD with a spatial scale of $0.40''$/pixel.
Total on source exposure times were 20, 15 and 10 minutes for $B,V$ and
$R$ respectively.
The $JHK$ images were obtained at the 1.5m telescope
on 1994 October 23 using a NICMOS 3 $256\times256$
pixel infrared array camera with a spatial scale of $1.16''$/pixel.
Total on source exposure times were 16.0, 15.0 and 29.2 minutes at $J,H$
and $K$ bands respectively.
All images were observed during clear but non-photometric conditions.
The visible images were calibrated with snapshots of the galaxy taken
during photometric conditions
on 1994 November 1 with the same camera and telescope.
The infrared images were calibrated on the CTIO/CIT system
using aperture photometry of the galaxy from \cite{fro82}.
\section {RESULTS}
Figure 1 shows grayscale images of NGC~7582 overlaid with isophotes in
the $J$ and $V$ bands.
Figure 2 shows the $B-R$ and $V-H$ color maps.
Extinction from dust is higher on the north-east side than on
the opposite side
(see the $V-H$ color map).
The near infrared images of the bulge and disk are nearly symmetrical about
the galaxy's major axis. They clearly show a boxy/peanut morphology for
the bulge of the galaxy
with boxy isophotes outside of peanut-shaped isophotes (see Figure 2).
The visible images, on the other hand, show no evidence for a
boxy or peanut
shape on the heavily obscured north-east side but peanut-shaped
isophotes can
be seen on the opposite side where there is less extinction.
NGC~7582 is a moderately inclined ($i \sim 65^\circ$) barred
galaxy with spiral arms visible outside the bar (see Figure 2a).
The galaxy is classified as SB(s)ab in the {\it RC3} \cite{dev91}.
Most galaxies in which boxy/peanut-shaped bulges are seen have
such a high inclination that the presence of a bar is ambiguous.
This galaxy, like NGC~4442 \cite{bet94}, is remarkable in that
both the bar and boxy/peanut-shaped bulge can be observed
simultaneously.
In moderately inclined galaxies, such as NGC~7582, where only one side
of the galaxy is free of extinction, peanut-shaped isophotes observed
on one side alone do not allow one to differentiate between
structure in the plane of the galaxy and a boxy/peanut-shaped bulge.
The symmetry observed above and below the plane of the galaxy
in the near infrared images makes it possible to clearly identify
the boxy/peanut in NGC~7582 as being part of the bulge and not structure
in the plane of the galaxy.
NGC~7582 is a Seyfert 2 galaxy (\cite{war80}) and an X-ray source
(\cite{ver81}). It may
have suffered an interaction with one of its two close
companions - NGC~7590 and NGC~7599 - which, together with NGC~7552
make up the Grus Galaxy Quartet.
The appearance of the bar suggests the presence of a considerable
amount of dust.
Its infrared flux, star formation rate and molecular gas mass
as traced in carbon monoxide are all higher than normal
(\cite{hec89}, \cite{cla92}).
The large gas mass ($M_{H_2} \sim 10^{10} M_\odot$; \cite{cla92}) is
associated with a large dust extinction, also
inferred from the deep 10 micron
silicate absorption measured in front of its nucleus (\cite{fro82}).
The $J-K$ color increases by $\sim 0.1$ mag near the plane
of the galaxy compared to colors above and below this plane.
The peanut-shaped bulge was not previously recognized in visible images
because of this large extinction.
In the bulge (within $15''$ of the nucleus),
the colors at distances greater than $3''$ from the bar major axis
on the south-west side of the galaxy (where there is little extinction)
are approximately constant in all
bands as a function of distance from this axis.
These colors are
$J-K = 0.88$, $J-H = 0.62$, $B-V = 0.95$, $V-H = 2.75$, and $V-R = 0.56$
with errors of $\pm 0.08$ from photometric calibration.
These colors are similar to those of other Sb galaxies (\cite{fro88})
and arise from light from an older stellar population.
This would be consistent with the peanut formation theories mentioned
above where normal disk stars are kicked above the plane of the galaxy.
On the north-east side the colors are redder because of extinction
but regain the values of the south-west side for distances
greater than $20''$ from the bar major axis (outside the main
band of extinction).
The near constant colors in the bulge itself are not inconsistent
with colors observed in other boxy/peanut shaped bulged galaxies
(e.g. \cite{sha93}).
We fit ellipses to the $K$ band isophotes using the ellipse routine
in the STSDAS package of IRAF, which uses an
iterative method described by \cite{jed87}. The results
of this fit are shown in Figure 3. The A4, or $\cos(4\theta)$, amplitude
is divided by the semi-major axis and the local intensity gradient
and measures the isophote's deviations from perfect ellipticity.
A positive term represents diskiness
and a negative term boxiness. Figure 3c shows that the bulge
is indeed quite boxy with deviations as large as $-5\%$. The bulge
achieves peak boxiness at a semi-major axis of $\sim 22''$.
The boxy/peanut
galaxies studied in a similar fashion by \cite{sha93} have a
mean A4 of $-3.3\%$, with strongly boxy bulges at $\sim -4\%$.
The peak value we measure in NGC~7582 is perhaps slightly
stronger than the strongest boxy/peanuts seen in \cite{sha93}'s sample
of galaxies.
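For reference, in this convention the intensity residuals along each fitted ellipse are expanded as $I(\theta) - I_{\rm ell} = \sum_{k \geq 3} [A_k \cos(k\theta) + B_k \sin(k\theta)]$, and the quoted A4 is the dimensionless ratio $A_4/(a\,|dI/da|)$, which roughly measures the fractional radial deviation of the isophote from a perfect ellipse.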
As mentioned in the introduction, many theoretical scenarios for
the formation of a boxy/peanut-shaped bulge require the presence
of a bar. Bars can also cause starbursts by funneling gas into the center
of the galaxy.
According to this scenario, the starburst begins either during
the gas inflow phase along
the bar as gas is concentrated into shocks aligned with the bar,
or in the nucleus (or nuclear ring) once the gas concentrated there
reaches sufficiently high densities. In either event
the starburst is expected to begin less than a couple of bar rotation
periods after the bar has been formed (e.g. \cite{fri95}; \cite{hel94}).
The bar is about $140''$ long (or 14.7~kpc, using a
Hubble constant of 75 km s$^{-1}$ Mpc$^{-1}$, which gives a distance to NGC~7582
of 21 Mpc). For a circular velocity of 200 km/s (\cite{mor85})
the bar has a rotation period of $\sim 3 \times 10^8$ years.
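(For concreteness, a rough estimate: with corotation at or just beyond the bar's semi-length, $R \sim 7-10$ kpc, one has $T = 2\pi R/v_c \approx (2-3) \times 10^8$ years.)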
It is likely that
the starburst in NGC~7582 was mediated by the bar.
This implies that the bar formed recently.
Starbursts last at most a few times $10^7$ years
(e.g. Larson 1987, \cite{rie93}), and bars only require
one or two bar rotation periods to concentrate gas; therefore,
it is likely that the bar formed less than a few $\times 10^8$ years ago.
If the boxy peanut-shaped bulge (in addition to the starburst)
was caused by the bar
then the boxy peanut also formed less than
a few $\times 10^8$ years ago.
However, predicted timescales for creating a boxy bulge range
from the order of 10 bar rotation periods
for resonant heating
(\cite{com90}; \cite{pfe90}) to a few bar
rotation periods for the bending or firehose instability
(\cite{rah91}; \cite{mer94}).
Because we suspect that the peanut shape has formed recently
in NGC~7582, it is reasonable to suggest that it formed by one of
the faster proposed mechanisms.
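In numbers, with $T \sim 3 \times 10^8$ years, resonant heating over $\sim 10$ bar periods would require $\sim 3$ Gyr, roughly an order of magnitude longer than the bar age inferred above, whereas the bending/firehose instability at a few periods requires only $\sim 10^9$ years.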
We expect that near infrared imaging surveys will discover more
boxy/peanut-shaped
bulges, analogous to the larger percentage of bars identified in near infrared
surveys than visible ones (e.g. \cite{her96}).
All existing surveys designed to detect boxy or peanut-shaped
bulges have been carried out at optical wavelengths.
These surveys find that a substantial fraction of galaxies may
have boxy/peanut-shaped bulge morphologies
(\cite{sha87}, \cite{det89}, \cite{jar86}, and \cite{dd87}); for example,
\cite{sha87} finds that $20\%$ of highly inclined galaxies
in the RC2 exhibit boxy or peanut-shaped bulges on the
POSS B and R or the ESO/SERC J survey plates.
Surveys that include near infrared as well as visible data will provide better
statistics on the frequencies of both bars and boxy/peanut-shaped
bulges. In moderately inclined systems such as NGC~7582 more systems
with both bars and peanuts will be identified.
These statistics should determine if the two
features are commonly related.
As the mechanisms for boxy/peanut bulge formation show, bars can
provide a major source of stellar heating (i.e., increased random motions of stars).
Studies which determine
the fraction of galaxies with bars and boxy/peanut shaped bulges and the
connection between bars and boxy/peanut shaped bulges should
shed light on the role of bars on secular evolution in disk galaxies.
As in the case of NGC~7582, boxy/peanut bulges are more
likely to be discovered in infrared surveys in galaxies
with large dust extinction and active star formation.
By investigating systems that feature both boxy/peanut bulges and
short-lived phenomena such as starbursts, it may be possible to place
limits on the timescale for vertical heating in barred galaxies.
\acknowledgments
We thank the referee for constructive criticism which resulted in
a much clearer paper.
We are grateful to Roberto Aviles for obtaining the $B,V$ and $R$
images for us.
We acknowledge helpful discussions and correspondence with G. Rieke.
The OSU galaxy survey is being supported in part by NSF grant AST 92-17716.
A.C.Q. acknowledges the support of a Columbus fellowship.
J.A.F. thanks
Roger Davies for his hospitality at Durham University and PPARC for
partial support via a Visiting Senior Research Fellowship.
\clearpage
\section{The (late?) $R_b$ problem}
As of one day before this talk was given, there appeared to be a
serious discrepancy with the LEP measurement of $R_b$, the ratio of the
partial widths $\Gamma(Z\to b\bar b)$ versus $\Gamma(Z\to {\rm
hadrons})$. Naturally, this inspired a plethora of possible
theoretical explanations. In trying to judge the relative merits of
one extension of the Standard Model versus another, it would be useful
to have some kind of unified framework in which to view them, so as to
be able to answer questions like ``how special is this particular
model?'' or ``could another similar model do the job better?'' \ We
were thus led to consider the more general question ``What kind of new
physics can increase the $Zb\bar b$ vertex?'' and to try to answer it
in as model-independent a way as possible.
Let me first review the experimental situation as it stood just after the
Moriond 1996 conference.\cite{lep} Normalizing the $Zb\bar b$ couplings as
\begin{equation}
\frac{e}{\sin2\theta_{\scriptscriptstyle W}}\bar b \gamma^\mu\left( g^b_L (1-\gamma_5) +
g^b_R (1+\gamma_5) \right) b\, Z_\mu,
\label{bcouplings}
\end{equation}
their Standard Model values can be expressed as
\begin{eqnarray}
g^b_L &=& -\frac12 + \frac13\sin^2\theta_{\scriptscriptstyle W} = -0.4230;\nonumber\\
g^b_R &=& + \frac13\sin^2\theta_{\scriptscriptstyle W} = +0.0770.
\label{smcouplings}
\end{eqnarray}
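For definiteness, these numbers correspond to an effective weak mixing angle
$\sin^2\theta_{\scriptscriptstyle W}\simeq 0.2310$ (an input we quote here so
that the entries can be checked):
\begin{displaymath}
g^b_L = -\frac12 + \frac{0.2310}{3} = -0.4230, \qquad
g^b_R = \frac{0.2310}{3} = +0.0770 .
\end{displaymath}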
Allowing for deviations $\delta g^b_L$ and $\delta g^b_R$ of these
couplings from their Standard Model values, possibly due to new
physics, and performing a global fit to the $Z$-pole data, one obtains
the results\cite{peter} summarized in table 1. Column 1 shows which
parameters among $\delta g^b_L$ and $\delta g^b_R$ are allowed to
vary, so that row 1 (where neither varies) represents the
Standard Model, which is a poor fit to the data. This is due to the
measured $R_b = 0.2219\pm 0.0017$ having been larger than the Standard
Model value of $R_b^{\scriptscriptstyle SM}=0.2156$. Allowing $\delta g^b_L$ or $\delta
g^b_R$ to vary increases the confidence level of the fit from 2.5\% to
$26-37$\%, a quite substantial improvement. From a statistical point
of view, it is sufficient to adjust either of the couplings, not
necessarily both, to get an acceptable fit.
Table 1 shows that it takes a much bigger relative change in the
right-handed than the left-handed coupling to change $R_b$ by the
amount that was needed: $\delta g^b_R/g^b_R\sim 40$\% versus $\delta
g^b_L/g^b_L\sim 2$\%. This immediately tells us that we need
tree-level physics, for example mixing of $b_R$ with new quarks, to get
a big enough change in $g^b_R$. On the other hand, one-loop effects
{\it or} small tree-level mixing can give a sufficiently large change
in the left-handed coupling. There is a further possibility we do not
discuss, but which has been pursued by others,\cite{mangano} mixing of
the $Z$ boson with a $Z'$.
\begin{table}\begin{center}\caption{Global fits to Moriond $Z$-pole
data.$^3$}
\vspace{0.4cm}
\begin{tabular}{|c|c|c|c|c|}
\hline
vary & $\delta g^b_L$ & $\delta g^b_R$ & c.l. of fit & $\chi^2/$d.o.f \\
\hline
none & 0 & 0 & 2.5\% & $24.7/13$ \\
$\delta g^b_L$ & ${-0.0063\atop\pm 0.0020}$ & 0 & 26\% & $14.7/12$ \\
$\delta g^b_R$ & 0 & ${0.034\atop\pm 0.010}$ & 37\% & $13.0/12$ \\
both & ${-0.0029\atop\pm 0.0037}$ & ${0.022\atop \pm 0.018}$ & 32\% & $12.5/11$ \\
\hline
\end{tabular}
\end{center}
\end{table}
\section{Quark mixing}\subsection{$b$-$b'$ mixing}
A simple way to change the $Zb\bar b$ couplings is to allow mixing
between the $b$ and a new exotic $b'$ with both left- and right-handed
components. The couplings become $g^b_{\scriptscriptstyle L,R} \to
\cos^2\theta_{\scriptscriptstyle L,R}\, g^b_{\scriptscriptstyle L,R} + \sin^2\theta_{\scriptscriptstyle L,R}\,
g^{b'}_{\scriptscriptstyle L,R}$, where the mixing angles come from the similarity
transformation $M\to O^T_L M O_R$ needed to diagonalize the $2\times 2$
Dirac mass matrix $M$ for $b$ and $b'$. We were able to simplify the
range of possibilities by making three reasonable assumptions. First
of all one must assume that $M_{12}$ or $M_{21}$ is nonvanishing in
order to have any mixing at all. Second, we expect the $b'$ to be
heavy, which is only possible if $M_{22}$, the entry corresponding to
the $b'$ mass in the case of zero mixing, is large. Finally we exclude
Higgs boson representations higher than doublets, since these tend to
make undesirably large contributions to the rho parameter.
Under these assumptions there are only twelve possible isospin
assignments for the chiral components of the $b'$ quark, which are
listed in Table 2, along with the mixing angle needed ($s^2_{\scriptscriptstyle L,R}
\equiv \sin^2\theta_{\scriptscriptstyle L,R}$) to change $g^b_L$ or $g^b_R$ by the desired
amount. In almost all of these models, only one of the two mixing
angles is relevant, the other one being suppressed by powers of the
first. However for the model with $I'_L = I'_{3L} = 0$ and $I'_R =
- I'_{3R} = 1/2$, a combination of both left- and right-handed mixing
angles is possible, illustrated in figure 1.
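Most entries in Table 2 can be understood from a simple small-mixing rule (a
back-of-the-envelope reading, not a substitute for the actual fits): for a
charge $-1/3$ exotic quark the $-Q\sin^2\theta_{\scriptscriptstyle W}$ pieces
of $g^b$ and $g^{b'}$ coincide, so that
\begin{displaymath}
\delta g^b_L = s^2_L\left({I'_3}_L + \frac12\right), \qquad
\delta g^b_R = s^2_R\, {I'_3}_R .
\end{displaymath}
The small-angle entries then all correspond to
$\delta g^b_L \simeq -0.0056$ or $\delta g^b_R \simeq +0.026$
(central values slightly different from those in Table 1, since each model is
fit separately), with the required angle scaling inversely with the isospin
factor. The large-angle solutions ($s^2_L = 0.8515$,
$s^2_R \simeq 0.18$--$0.36$) instead drive the coupling through zero to
$\simeq -(|g^b| + |\delta g^b|)$, which fits equally well because the
$Z$-pole observables depend essentially on $(g^b_L)^2$ and $(g^b_R)^2$.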
\begin{table}
\caption{Twelve models of $b$-$b'$ mixing that solve the $R_b$ problem,
labeled by the isospins of the $b'_L$ and $b'_R$.}
\vspace{0.4cm}
\begin{center}
\begin{tabular}{|c|c|c|c|c|}
\hline
$I'_L$ & ${I'_3}_L$ & $I'_R$ & ${I'_3}_R$ & {angle needed}\\
\hline
$1$ & $-1$&$1$ & $-1$ &
$s_L^2=0.0111\pm 0.0032$\\
$1$ & $-1$&$\sfrac12$ & $-\sfrac12$& " \\
$\sfrac32$ & $-\sfrac32$&$1$ & $-1$& $s_L^2=0.0056\pm 0.0016$\\
$\sfrac12$ & $\sfrac12$&$\sfrac12$ & $\sfrac12$&
$s_R^2=0.052 {+ 0.013\atop -0.014}$\\
$0$ & $0$&$\sfrac12$ & $\sfrac12$&"\\
$\sfrac12$ & $\sfrac12$&$1$ & $1$& $s_R^2=0.026 {+ 0.006\atop -0.007}$\\
$\sfrac12$ & $\sfrac12$&$0,1$ & $0$& $s_L^2=0.8515\pm 0.0016$\\
$\sfrac32$ & $\sfrac12$&$1$ & $0$&"\\
$0$ & $0$&$\sfrac12$ & $-\sfrac12$& $s_R^2{\roughly >} 0.361$\\
$\sfrac12$ & $-\sfrac12$&$\sfrac12$ & $-\sfrac12$&$s_R^2=0.361
{+ 0.013\atop -0.014}$ \\
$\sfrac12$ & $-\sfrac12$&$1$ & $-1$&$s_R^2=0.180\pm 0.007$\\
\hline \end{tabular} \end{center}
\end{table}
\begin{figure}
\psfig{figure=Rb-peter-mirrorfig.eps,height=1.5in}
\caption{Allowed values of mixing angles in the $s^2_R$-$s^2_L$ plane for
model number 10 of table 2.}
\label{fig:fig1}
\end{figure}
\subsection{$t$-$t'$ mixing}
Another possibility is that the top quark mixes with an exotic $t'$,
which alters the $g^b_L$ coupling through the loop diagrams of figure
\ref{fig:tp}. These are the same as the diagrams of the Standard
Model, except that the top quark must be replaced by two linear
combinations of $t$ and $t'$, one for each chirality. Making the same
assumptions about the $t$-$t'$ mass matrix as we did for that of $b$
and $b'$ above, there are again twelve possible isospin assignments for
the $t'$ quark, enumerated in Table 3.
\begin{figure}
\psfig{figure=Rb-david-smgraphs.eps,height=3in}
\caption{Top quark loop diagrams that affect $g^b_L$. }
\label{fig:tp}
\end{figure}
\begin{table}
\caption{Twelve possible models of $t$-$t'$ mixing,
labeled by the isospins of the $t'_L$ and $t'_R$.}
\vspace{0.4cm}
\begin{center}
\begin{tabular}{|c|c|c|c|}
\hline
$I'_L$ & ${I'_3}_L$ & $I'_R$ & ${I'_3}_R$ \\
\hline
$3 / 2$ & $+{1 / 2}$ & 1 & $ +1 \> $\\
$1 / 2$ & $+{1 / 2}$ & 1 & $ +1,0 \> $\\
$1 / 2$ & $+{1 / 2}$ & $1 / 2$ & $+{1 / 2} $\\
$1 / 2$ & $+{1 / 2}$ & $ 0 $ & $ 0 $ \\
$1 / 2$ & $-{1 / 2}$ & 1 & $ 0,-1 $ \\
$1 / 2$ & $-{1 / 2}$ & $1 / 2$ & $-{1 / 2} $\\
$1 / 2$ & $-{1 / 2}$ & $ 0 $ & $ 0 $\\
0 & 0 & $1 / 2$ & $\pm{1 / 2} $\\
0 & 0 & $ 0 $ & $ 0 $ \\
\hline \end{tabular} \end{center}
\end{table}
The resulting change in $g^b_L$ can be expressed rather simply in the
limit where both top quarks are much heavier than the $W$ boson, if we
concentrate on values of $m_{t'}$ so as to maximize the magnitude of
$g^b_L$. This turns out to occur when $m_{t'}\ll m_t$, in which case a
single term dominates the shift in $g^b_L$ beyond its Standard Model
value,
\begin{equation}
\delta g^b_L \cong {\alpha\over 16\pi s^2_w} V^2_{t'b}\left(
{m^2_{t'} - m^2_t\over m^2_W} + 6\ln{m_{t'}\over m_t}\right).
\end{equation}
Here $V_{t'b}$ is the element from the extended CKM matrix which
includes the $t'$ quark in addition to $u$, $c$ and $t$. The best case
is when $m_{t'}$ is at its experimental lower limit of 135 GeV, in
which case $\delta g^b_L = -0.0021$. Although this is too small to
explain the previous LEP data which needed $\delta g^b_L = -0.0063$,
the recent results of ALEPH announced at this conference\cite{Tomalin}
have reduced the size of the discrepancy between theory and the world
average measurement\cite{monig} of $R_b=0.2178\pm 0.0011$ such that $-0.0021$
is now sufficient.
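As a rough numerical check of this formula (taking $\alpha \simeq 1/137$,
$s^2_w \simeq 0.23$, $m_W = 80$ GeV, $m_t = 175$ GeV and $V_{t'b}=1$; these
inputs are our illustrative choices), $m_{t'} = 135$ GeV gives
\begin{displaymath}
\delta g^b_L \approx \frac{1/137}{16\pi (0.23)}
\left[\frac{135^2 - 175^2}{80^2} + 6\ln\frac{135}{175}\right]
\approx (6.3\times 10^{-4})(-1.94 - 1.56) \approx -0.0022 ,
\end{displaymath}
in agreement with the $-0.0021$ quoted above.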
\section{General Loop Effects}\subsection{Diagonal $Z$ Couplings}
In the previous subsection we had to consider loops involving the $t'$
as a special case because they were inextricably tangled up with usual
Standard Model contributions, but here we want to consider the effects
of a general fermion $f$ and scalar $\phi$ coupling to the $b_L$ via the
interaction ${\cal L}_y = y \bar b_L\phi f +$ h.c., and to the $Z$ boson
via
\begin{equation}
{2e\over\sin2\theta_W}\left(\bar f\gamma_\mu(g^f_L P_L+
g^f_R P_R)f + ig^\phi\phi^\dagger\raise1.5ex\hbox{$\leftrightarrow$}\mkern-16.5mu \partial_\mu\phi\right) Z^\mu.
\label{Zcouplings}
\end{equation}
The relevant diagrams which modify $g^b_L$ are shown in figure
\ref{fig:loops}. There are also vacuum polarization diagrams as in
figure \ref{fig:vacpol}, but these cancel to a good approximation in the
ratio that defines $R_b$. The result of evaluating the first four
diagrams has a remarkably simple expression in the approximation of
ignoring $M_Z$:
\begin{equation}
\delta g^b_L = {y^2 n_c\over 16\pi^2}(g^f_L-g^f_R){\cal
F}(m^2_f/m^2_\phi),
\label{diag}
\end{equation}
where $n_c$ is a color factor that is unity if $\phi$ is colorless and
$f$ is a color triplet, for example. The kinematic function given by
${\cal F}(x) = x/(x-1) - x\ln x/(x-1)^2$ is always positive and reaches
its maximum value of 1 when $m_f/m_\phi\to\infty$. The $M_Z=0$
approximation turns out to be better than one would expect, since the
first order correction in powers of $M^2_Z/m^2_f$ is typically only
10\% of eq.~(\ref{diag}), even when $M^2_Z/m^2_f=1$.
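For reference, the limiting behavior of ${\cal F}$ (leading terms only) is
\begin{displaymath}
{\cal F}(x) \simeq x \ln(1/x) \ \ (x\ll 1), \qquad
{\cal F}(1) = \frac12, \qquad
{\cal F}(x) \simeq 1 - \frac{\ln x - 1}{x} \ \ (x\gg 1),
\end{displaymath}
so the loop is most effective when the fermion is heavy compared to the
scalar, as stated above.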
A notable feature of eq.~(\ref{diag}) is that its sign is completely
determined by the isospins of $f_L$ and $f_R$ since $g^f_L-g^f_R =
I^f_{3L} - I^f_{3R}$. Thus one can immediately see why two-Higgs
doublet models, as well as the top quark contribution in the Standard
Model, tend to decrease $R_b$: if $f$ is the top quark then $I^f_{3L} -
I^f_{3R} = 1/2$, which has the opposite sign to the tree level value of
$g^b_L$, reducing the effective coupling. In two-Higgs models with
very large $\tan\beta$, so that the bottom quark itself can make an
appreciable contribution in the loop, the sign is such as to increase
$R_b$.
\begin{figure}
\psfig{figure=loop.eps,width=3in}
\caption{The important generic loops for $g^b_L$}
\label{fig:loops}
\end{figure}
\begin{figure}
\psfig{figure=vacpol.eps,width=3in}
\caption{Vacuum polarization contributions which tend to cancel in $R_b$.}
\label{fig:vacpol}
\end{figure}
\subsection{Nondiagonal $Z$ Couplings}
It might happen that the flavor states with the largest coupling to
$b_L$, which we shall now denote by $\phi_1$ and $f_1$, mix with other
flavor states $\phi_2$ and $f_2$ to form mass eigenstates $\phi,\phi'$
and $f,f'$. Then our previous formula (\ref{diag}) for $\delta g^b_L$
must be generalized to something more complicated. Nevertheless in
certain limiting cases it becomes relatively simple again. First, if
the masses are degenerate such that $m_f = m_{f'}$ and $m_\phi=m_{\phi'}$,
we get
\begin{equation}
\delta g^b_L = {y^2 n_c\over 16\pi^2}(U_R U^\dagger_L g^f_L U_L
U^\dagger_R -g^f_R)_{11}{\cal
F}(m^2_f/m^2_\phi).
\label{nondiag1}
\end{equation}
Here $g^f_{L,R}$ are the $2\times 2$ matrix generalizations of the
fermion couplings to the $Z$ boson, and $U^\dagger_L M U_R$ is the
similarity transformation that diagonalizes the $f_1$-$f_2$ mass
matrix. Second, if the fermions are much heavier than the scalars,
we get eq.~(\ref{nondiag1}) again, with the addition of two extra terms
that tend to cancel each other, at least in the supersymmetric case we
will discuss below. Third, if the scalars are much heavier than the
fermions we obtain
\begin{equation}
\delta g^b_L = {y^2 n_c\over 16\pi^2}\sin^2 2\theta_\phi
(g^\phi_{11}-g^\phi_{22}){\cal
F}'(m^2_f/m^2_\phi),
\label{nondiag2}
\end{equation}
where $\theta_\phi$ is the scalar mixing angle and ${\cal F}'(x)
={x+1\over 2(x-1)}
\ln x -1$ is
another positive function like ${\cal F}$. Thus it is once again
rather easy to determine the sign and magnitude of $\delta R_b$.
An example is the supersymmetric Standard Model where, because of the
large top quark yukawa coupling, $\phi_1$ is the right-handed top
squark $\tilde t_R$, and $f_1$ is the higgsino $\tilde h_2^-$, or
alternatively the fermion and scalar are the usual top quark and one of
the Higgs bosons, $\phi_1 = h_2^-$ and $f_1 = t_R$. We already
explained that the quark-Higgs contribution has the wrong sign for
increasing $R_b$, so one wants to minimize these loops by taking the
Higgs bosons to be very heavy. Concentrating on the squark and
higgsino, therefore, one must take into account their mixing with the
left-handed top squark, $\phi_2=\tilde t_L$ and the Wino, $f_2 = \tilde
W^-$. The charge matrices are $g^f_L = g^f_R = \diag(-1/2,-1)$ for the
fermions and $g^\phi=\diag(0,1/2)$ for the scalars. Applying the
simplifying cases 1 or 2 mentioned above, one finds
\begin{equation}
\delta g^b_L \cong - {y^2 n_c\over 32\pi^2}\sin^2(\theta_L
-\sign(m_f/m_{f'})\theta_R){\cal F},
\end{equation}
and therefore to have a large shift in $g^b_L$, the combination of
chargino mixing angles $|\theta_L -\sign(m_f/m_{f'})\theta_R|$ must be
large. From this and the form of the chargino mass matrix it is easy
to deduce that both charginos must be rather light, and that
$\tan\beta$ must be close to unity. Furthermore the third simplifying
limit above, that of heavy scalars, gives the wrong sign for increasing
$R_b$ because $g^\phi_{11} - g^\phi_{22} = -1/2$. Thus one needs at
least one light scalar, which naturally enough turns out to be
$\tilde t_R$ because this is the one that couples directly to the
left-handed $b$ quark.
\section{Summary}
We have shown that a significant increase of $R_b$ from its Standard
Model value can be caused by new physics that changes $g^b_R$ by quark
mixing, or $g^b_L$ either by mixing or one loop contributions to the
$Zb\bar b$ coupling. In the category of $b$-$b'$ mixing we found 12
possible models using modest restrictions. The Standard Model $t$
quark loop corrections to $g^b_L$ can also be altered by $t$-$t'$
mixing, but only by as much as $-0.0021$, corresponding to a maximum
increase of $0.0017$ in $R_b$.
For more general kinds of loop corrections to $R_b$, we have found
simple approximate formulas which give insight into the sign and
magnitude of the change. Although the original motivation was to make
it easier to build models explaining the former discrepancy, perhaps
they will in the future be more useful in constraining models that
arise in other contexts. It should however be kept in mind that the
world average value of $R_b$ as of this writing is still two standard
deviations higher than the Standard Model value, despite the recent
results of ALEPH which now agree with the Standard Model.
\section{MOTIVATIONS FOR STRING INFLATION}
Despite impressive success, the Standard Model of big--bang cosmology
is known to suffer from two `naturalness' problems: the observed homogeneity
and spatial flatness of the present universe cannot be explained in a
natural way. Rather, they point to a period of inflationary expansion in the past.
The evolution of Friedmann-Robertson-Walker (FRW) scale factor is governed
by the Einstein equations
\begin{eqnarray}
\Big({\dot a \over a} \Big)^2 &=& {8 \pi \over 3} G \rho - {k \over a^2}
\label{hubble} \\
{\ddot a \over a} &=& -{4 \pi \over 3} G (\rho + 3p)
\label{graveqn}
\end{eqnarray}
where $\rho$ and $p$ are the energy density and the pressure of the matter
and $k$ measures the spatial curvature.
The spatial flatness problem is solved naturally if the matter density has
grown much larger than the spatial curvature.
Thus inflation is characterized by a period
during which the ratio ${8 \pi \over 3} G \rho / (1/a^2) = {\dot a}^2 + k$
has increased with time, viz., $\dot a > 0$ and $\ddot a > 0$.
Eq.(\ref{graveqn}) shows that such an accelerated expansion is
possible only for exotic matter satisfying
$ \rho + 3p < 0$.
Inflation solves the horizon problem automatically. The physical distance
for a fixed comoving separation scales as $a$.
The cosmic horizon is inversely proportional to the Hubble parameter.
Thus, during inflation, their ratio $a / (a / {\dot a}) = \dot a$
grows with time since $\ddot a > 0$.
This implies that the physical distance scale is stretched outside
the horizon so that the correlation encompasses an enormous spatial volume,
hence, solves the horizon problem.
There are three possible types of inflation. The first,
de Sitter inflation $a(t) = \exp (H t)$, arises from nonzero vacuum energy
during weakly first-order phase transition.
The second, power-law inflation $a(t) = t^p$ for $t> 0, p >1$ arises in
many models of supergravity with exponential potential. The third,
super-inflation $a(t) = (-t)^p$, $t> 0, p < 0$ is the least familiar one
but arises for Brans-Dicke-type gravity theories including string theory.
The novelty of the super-inflation is that it is driven by kinetic energy
rather than vacuum potential energy as is required for the former two
inflations. This is gratifying since it
has been known that the vacuum potential energy has to be fine-tuned
in order to achieve an observationally successful inflation.
As such, for inflations driven by potential energy, the naturalness problems
of observational cosmology have been traded for the naturalness problems of
underlying microscopic physics.
\section{KINEMATICS OF STRING SUPER-INFLATION}
Classical string dynamics at sub-Planck scale is described by a Brans-Dicke
type effective Lagrangian expressed as a power--series expansion of spacetime
derivatives:
\begin{eqnarray}
L &=& e^{-2\phi} [- R - 4 (\nabla \phi)^2 + (\nabla T)^2 + \cdots]
\nonumber \\
&+& L_{\rm matter}.
\end{eqnarray}
The ellipses denote (classical) higher-derivative terms,
$L_{\rm eff}$
is the Lagrangian describing matter coupling, and
$\phi$ and $T$ are dilaton and moduli fields (associated with
the size and the shape of compactified space) respectively.
Eq.(3) indicates that,
in string theory, the Newton's constant is not a fixed quantity but
determined dynamically by the dilaton $\phi$: $G = e^{2 \phi} / 16 \pi$.
As first pointed out by Veneziano (Veneziano, 1991), string theory gives
rise to super-inflation naturally.
Veneziano has shown that Einstein equation and $\phi, T$ equations of
motion derived from Eq.(3) have always two cosmological branches.
For example, for vacuum without matter, the first branch
exhibits decelerating expansion and growing Newton's constant
\begin{equation}
a(t) = t^{+ 1/\sqrt 3}, \hskip0.3cm e^\phi = t^{-1 + \sqrt 3},
\hskip0.5cm t > 0,
\end{equation}
while the second branch represents accelerating expansion and growing
Newton's constant
\begin{equation}
a(t) = (-t)^{-1/\sqrt 3}, \hskip0.3cm e^\phi = (-t)^{-1-\sqrt 3},
\hskip0.5cm t < 0.
\end{equation}
Similarly, for $p = \pm \rho/3$ matter,
the first branch ($p = +\rho/3$) represents a radiation-dominated FRW
universe and frozen Newton's constant
\begin{equation}
a(t) = t^{1/2}, \hskip0.3cm \phi = {\rm constant}, \hskip0.5cm t > 0.
\end{equation}
The second branch ($p = - \rho/3$) represents the universe with accelerated
expansion and growing Newton's constant
\begin{equation}
a(t) = (-t)^{-1/2}, \hskip0.3cm \phi = -3 \log (-t), \hskip0.5cm t < 0.
\end{equation}
Veneziano has shown that the two branches are related to
each other by simultaneous time-reversal $t \rightarrow -t$ and
`scale-factor duality':
$a \rightarrow 1/a, \,\, \phi \rightarrow \phi - 6 \log a$ and
$(p/\rho) \rightarrow - (p/\rho)$.
The duality is a consequence of the underlying string theory
symmetries, hence,
is the most distinguishing feature of string cosmology from the others.
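As an explicit check of the duality on the vacuum solutions above, applying
$a \rightarrow 1/a$, $\phi \rightarrow \phi - 6 \log a$ to the first branch
gives
\begin{displaymath}
a \rightarrow t^{-1/\sqrt 3}, \qquad
\phi \rightarrow (\sqrt 3 - 1)\log t - \frac{6}{\sqrt 3}\log t
= -(1 + \sqrt 3)\log t ,
\end{displaymath}
which, upon $t \rightarrow -t$, is precisely the second (super-inflationary)
branch.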
More interestingly the scale-factor duality may offer a stringy mechanism
for exiting from inflation: by duality
the branch can flip from the super-inflation to the FRW-type one.
For physically sensible branch change, metric, Newton's constant and all
other physical quantities should interpolate smoothly across the moment of
fixed point of the scale-factor duality $t^*$: $a(t^*) = 1/a(-t^*), \,\,
\phi (-t^*) = \phi(t^*) - 6 \log a(t^*)$.
Kinematical distinction of the super-inflation compared to the other
two inflations is most clearly seen from the behavior of the cosmic horizon
(Fabbri et al., 1985).
The particle horizon given by the inverse Hubble parameter
$R_H = 1/H = (-t)/|p|$ shrinks to zero size asymptotically as
$t \rightarrow 0^-$.
In de Sitter inflation $R_H = 1/H$ remains frozen, while
in power-law inflation $R_H = t/p$ grows large. This distinction bears
an important consequence to the generation of the primordial density and
metric perturbations.
The physics behind generating these perturbations is the quantum
fluctuation produced inside the subluminal horizon. Once produced, the quantum
fluctuation is stretched outside the horizon and behaves as classical
density and metric perturbations (Rey, 1987).
Since the horizon for super-inflation shrinks during inflation,
quantum fluctuations are parametrically squeezed
toward shorter wavelengths. More precisely, the first horizon-crossing
condition implies that the higher frequency modes cross the horizon at later
time. With increasing Hubble parameter in time, the power spectra of
the higher frequency modes are amplified with respect to the lower frequency
ones. This has an important bearing on the formation of the large-scale
structures as discussed later.
\section{GRACEFUL EXIT VIA QUANTUM BACK REACTION}
As mentioned the string theory offers an extremely attractive
possibility of exit from inflation utilizing the scale-factor duality.
Unfortunately, extensive study
has shown this to be impossible (Brustein and Veneziano, 1994) because the
curvature and Newton's constant turn out to be discontinuous and
divergent at the fixed point moment $t^* = 0$.
This problem seems very generic and is now known as the `graceful exit problem'
of string inflationary cosmology.
On a closer look, however, the problem arises entirely within the
classical approximation. Near the fixed point moment, both the curvature and
the Newton's constant grow indefinitely so that higher-order curvature
and quantum corrections become important. Therefore, whether a smooth
transition between the two branches is possible or not should be determined
only after a full-fledged quantum stringy effect is taken into account.
Antoniadis, Rizos and Tamvakis (Antoniadis et al., 1994)
have initiated the study of the quantum effect to the
graceful exit problem. They have found that under certain initial conditions
the quantum back reaction of the fluctuating matter leads to a smooth
transition from the super-inflation branch to the FRW-type branch.
Their analysis has been further extended to nonzero spatial curvature
(Easther and Maeda, 1996), reaching essentially the same conclusion.
In fact, it is
possible to obtain an `exact' quantum analysis for a two-dimensional
truncation (Rey, 1996), which still captures
all the underlying essential physics of four dimensions.
After the truncation, the classical Lagrangian is given by
\begin{equation}
L_{classical} = e^{-2 \phi} (-R - 4 (\nabla \phi)^2)
+ {1 \over 2} (\nabla {\vec f})^2
\label{2daction}
\end{equation}
where $\phi$ and ${\vec f}$ refer to the dilaton and the $N$-component
(Ramond-Ramond) matter field.
By solving the equations of motion, two branches are found exactly. The
super-inflation branch:
\begin{equation}
(ds)^2 = [d \tau^2 - ({{\tilde M} \over - \tau})^2 dx^2];
\hskip0.3cm -\infty < \tau \le 0
\end{equation}
with a growing dilaton: $\phi = - \log (-2\tau)$, and
the FRW-type branch:
\begin{equation}
(ds)^2 = [ d\tau^2 - (M \tau)^2 dx^2];
\hskip0.3cm 0 \le \tau < \infty
\end{equation}
with a frozen dilaton.
The two branches also show discontinuous and divergent
curvature and Newton's constant at the fixed point moment $\tau^* = 0$,
hence, a two-dimensional version of the graceful exit problem.
In two dimensions, the quantum corrections are entirely specified by
the conformal anomaly arising at one loop only, hence, exactly solvable
for a given spin and multiplicity content of the massless fields. It is
found that
\begin{equation}
L_{\rm quantum} = L_{\rm classical}
+ {\kappa \over 2} \big[ R {1 \over \partial^2 } R
+ 2 \phi R \big]
\label{effaction}
\end{equation}
where $\kappa = (N-24)/24$ and is assumed negative definite (this last
condition has been relaxed for a more general class of truncations
(Gasperini and
Veneziano, 1996)). For detailed analysis of the quantum effects,
we refer to the original work (Rey, 1996). Here, we sketch
the main result. Due to the quantum corrections, both
the scale factor and the dilaton evolve beyond the classically allowed
region and interpolate between the two branches.
A straightforward calculation shows the scalar curvature evolves as
\begin{equation}
R= 16 e^{2 \phi} / (1 + |\kappa| e^{2 \phi}/2)^3,
\end{equation}
which vanishes at past and future infinity at which $\phi \rightarrow \pm
\infty$. Hence, the classical singularity at $\tau = 0$ is now completely
erased, and the inflation has ended gracefully! The $\kappa$ dependence
of the curvature clearly indicates that the graceful exit is a quantum
mechanical effect.
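In fact the curvature above is bounded: a one-line extremization shows that
the maximum occurs at $|\kappa| e^{2\phi} = 1$, where
\begin{displaymath}
R_{\rm max} = \frac{16/|\kappa|}{(3/2)^3} = \frac{128}{27 |\kappa|} ,
\end{displaymath}
so the maximal curvature scales as $1/|\kappa|$ and diverges only when the
anomaly coefficient is switched off.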
A similar conclusion is reached for the quantum corrected string
vacua in four dimensions.
The scalar curvature again vanishes at asymptotic past/future
infinity but approaches a finite positive maximum at the fixed point moment
$\tau = 0$. We thus conclude that quantum corrected string theory
resolves the classical singularity and exit super-inflation
gracefully.
\section{POWER SPECTRA OF DENSITY AND GRAVITATIONAL WAVES}
Further indication that the quantum back reaction is an essential element
for a successful string inflation
comes from the constraints of the large-scale
observational cosmology.
As emphasized above, if the quantum back reaction effect is ignored,
the cosmic horizon shrinks with time during the super-inflation
epoch. The shrinking horizon amplifies quantum fluctuations parametrically
as they are stretched outside the horizon. Because of this effect, it is
expected that the power spectra of primordial scalar and tensor perturbations
are characteristically tilted toward higher frequency relative to the spectra
for de Sitter or power-law inflations. Explicit calculations
(Brustein et al., 1995, Hwang, 1996) have confirmed this expectation.
The power spectrum at the moment of re-entrance inside the horizon
$aH|_{\rm HC} = k$ during the matter-dominated epoch is given by
\begin{equation}
P(k, t_{\rm HC}) = {(a H)^4 \over 2 \pi^2} {|\delta (k)|^2 \over k}.
\end{equation}
Here $|\delta (k, t)|^2 \equiv A(t) k^n$ denotes the conventionally normalized
power spectrum of the density contrast $\delta \equiv \delta \rho / \rho_o$.
Up to logarithmic corrections, the spectral index is found to be $n=4$, hence,
strongly tilted to higher frequency modes. This should be contrasted to the
observationally supported near-Harrison-Zeldovich spectrum $n \approx 1$.
Density perturbations with such a high spectral index are problematic as
seeds for large-scale structure formation. For instance, consider the temperature
fluctuation of the cosmic microwave background radiation
$\delta T(\Omega_2) / T = \sum_{l,m} a_{lm} Y_{lm}(\Omega_2)$
induced by scattering off the gravitational potential perturbations.
Power spectrum of the $l$-th spherical mode is given by
\begin{equation}
|a_l|^2 = \pi \int_0^\infty {dk \over k} \Big(j_l ({2k \over H_0})\Big)^2
P(k).
\end{equation}
For the spectral index $n=4$ the short wavelength contribution is
so large that Eq.(14) diverges for all $l$. Recent COBE observations clearly
contradict this, hence seem to rule out the string super-inflation as a seed
for the large-scale structure formation.
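The divergence follows from simple power counting: for $x = 2k/H_0 \gg 1$ the
spherical Bessel functions average to $\langle j^2_l(x) \rangle \simeq
1/2x^2$, so with $P(k) \propto k^{n-1}$ from Eq.(13) the integrand of
Eq.(14) behaves as
\begin{displaymath}
\frac{dk}{k}\, k^{-2}\, k^{n-1} = k^{n-4}\, dk ,
\end{displaymath}
which diverges at high frequency for $n \ge 3$ (linearly for $n=4$) but
converges for the near-Harrison-Zeldovich value $n \approx 1$.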
However, the above calculations have not taken into account the
important quantum mechanical
effects to the dynamics of the horizon during inflation.
Especially, since the relevant density perturbations at the present
large-scale observations have left the horizon near the very end of the
super-inflation epoch, the quantum back reactions should have
become significant by then. Eq.(12) shows that the back reaction tends to
retard the rate the horizon shrinks. It is now easy to understand that any
change of horizon dynamics affects directly the shape of the power spectra.
The super-inflation started out essentially classically. Therefore
low-frequency fluctuations generated during the earlier stage should show
the characteristic $n=4$ spectral index.
On the other hand, toward the end of inflation, the
back reaction has slowed down significantly the rate the horizon shrinks
to the point $\dot H \rightarrow 0$, so that the evolution is essentially de Sitter-like.
Therefore the spectral index of higher frequency quantum fluctuations that
have left the horizon at this later stage should be close to that of the
de Sitter inflation, viz., $n \approx 1$.
The crossover from the classical, low-frequency regime to the quantum
mechanical, high frequency regime takes place at some intermediate scale
$k = k^*$, and is model-dependent.
As a result, the fully quantum corrected power spectra of density perturbation
should exhibit frequency-dependent spectral index $n(k \ll k^*) \approx 4$,
$n(k \gg k^*) \approx 1$, which
interpolates monotonically between the two limits.
Consequently the quantum effects keep the CMBR partial-wave
power spectra of Eq.(14) from diverging at high frequency.
The gravity wave power spectrum can be calculated in a similar
manner and classically exhibits a strongly tilted spectrum with $n=4$.
Again the quantum back reaction will curb the spectral index
in the high-frequency regime; hence the actual gravitational wave
signal would not be as strong as the classical power spectrum suggests.
The above discussions clearly point to the importance of calculating
fully quantum corrected power spectra for density and metric
perturbations. In addition,
stochastic dynamics (Rey, 1987) of the inflaton during the super-inflation
exhibits non-Gaussian signals distinct from those of the de Sitter or
power-law inflations. A full exposition of these calculations will be reported
elsewhere.
In this talk I have summarized basic features of the string inflationary
cosmology. I have emphasized that an essential element to string cosmology
is the full-fledged quantum back reaction effect.
The features should confront present and future observations
and experiments. The prospect is quite exciting for
both string theorists and observational cosmologists.
For string theorists, observational cosmology offers the first direct
observation of relic cosmological string effects.
For observational cosmologists, string theory offers the first natural
model of inflationary cosmology and unique signature of relic density
and gravitational waves.
\vskip 3ex plus .8ex minus .4ex {\center{ACKNOWLEDGEMENTS}\vskip 0.5ex}
I acknowledge discussions with M. Gasperini, J.-C. Hwang and G. Veneziano.
This work was supported in part by NSF-KOSEF Bilateral Grant, KOSEF
Purpose-Oriented Grant 94-1400-04-01-3 and SRC Program, KRF International
Collaboration Grant and Non-Directed
Research Grant, Ministry of Education BSRI 95-2418,
and Seoam Foundation Fellowship.
\section{
Introduction
}
The transient accretion-powered pulsar ($\nu=2.14$ Hz) GRO~J1744-28
was discovered by the Burst and Transient Source Experiment ({\em
BATSE\/}) aboard the Compton Gamma Ray Observatory ({\em CGRO\/})
during a day of rapid bursting (about twenty bursts per hour) on 1995
December 2 (Kouveliotou et al.\ 1996a; Finger et al.~1996a). Finger
et al.\ (1996a) measured the 11.8 day orbit and found that GRO
J1744-28 was spinning up at a rate $\dot \nu=(3.5$--$12.2)\times
10^{-12} \,{\rm s}^{-2}$ between 1995 December 15 and 1996 January 23.
The pulsar subsequently settled into a regime of hourly bursting with
burst durations of 2--7 seconds, as seen by the PCA instrument on the
Rossi X-Ray Timing Explorer ({\em RXTE\/}) (Swank 1996; Giles et al.\
1996) and the OSSE instrument aboard {\em CGRO\/} (Strickman et al.\
1996). A qualitatively similar burst has been seen from another
accreting pulsar (see our discussion of SMC X-1 in \S2), but
recurrent bursts of this nature have never been observed.
The bursting behavior has two obvious energy sources: accretion and
thermonuclear burning. Lewin et al.\ (1996) presented the case for
accretion-powered bursts via analogy to the Rapid Burster. The rapid
bursts (one every few minutes) are responsible for up to 50\% of the
total time averaged luminosity above 20 keV and thus cannot have a
thermonuclear origin (Kouveliotou et al.\ 1996a). The hourly bursts
(over 3000 observed by {\em BATSE\/} [Kouveliotou et al.\ 1996c]) are
responsible for a much smaller fraction of the total luminosity, at
least above 20 keV. The average burst fluence was $7\times 10^{-7}
\,{\rm erg}\,{\rm cm}^{-2}$ (20--50 keV) on 1996 January 15, when there were 40 per
day and the persistent luminosity was 2.5 Crab in the 20--100 keV band
(1996 January 16 [Fishman et al.\ 1996]) and $4.4\pm 0.3$ Crab in the
8--20 keV band (1996 January 14--1996 January 15 [Sazonov \& Sunyaev
1996]). Above 20 keV, this gives a time-averaged burst luminosity
60--100 times smaller than the steady-state accretion luminosity. The
burst energetics below 20 keV (where most of the energy is emitted) is
still uncertain due to dead-time corrections in the PCA. Preliminary
indications are that these corrections are large enough so that the
bursts might be responsible for more than 5\% of the $<20$ keV
emission (Jahoda 1996, private communication).
The burst energies were marginally consistent with nuclear energy
release and motivated our theoretical study of thermonuclear burning
on this unusual X-ray pulsar. Additional motivations were the
matchings of the characteristic decay time (2--10 seconds [Strickman
et al.\ 1996]) with the cooling time at the nuclear burning depth and
the mean recurrence time (about 30 minutes for the 260 bursts seen by
OSSE from 1996 January 16 to 1996 January 30 [Strickman et al.\ 1996])
with the time to accumulate enough fuel for an instability. The
primary observational evidence against {\em all\/} of these bursts
having a thermonuclear origin is (1) the independence of the
recurrence time from the accretion rate (at least when it is brighter
than 200--400 mCrab [Giles et al.\ 1996]), (2) the existence of
``foreshocks'' before the bursts (Giles et al.\ 1996), (3) the lack of
spectral evolution during the burst, and (4) the global burst
energetics from the PCA instrument (if the dead-time corrections are
understood).
We consider here the possibility that this X-ray pulsar has an
unusually low field and therefore might exhibit different
thermonuclear burning behavior than conventional X-ray pulsars. It is
unlikely that the thermonuclear burning was unstable at the peak of
the outburst. Hence we concentrate on lower accretion rates, for which
the burning will most likely be thermally unstable and will manifest
itself as Type I X-ray bursts or flares of a few minutes duration.
Since the presence and character of a thermonuclear instability
depends on the neutron star's magnetic field ($B$) and global
accretion rate ($\dot M$), we begin in \S2 by summarizing the
indirect inferences about these quantities. The spin behavior points to
an especially weak ($\ll 10^{12} \,{\rm G}$) dipolar magnetic field
component and constrains the global accretion rate as well. We also
compare GRO~J1744-28's bursting behavior at the peak of the outburst
to that of SMC X-1, for which similar arguments point to a low dipole
field. In \S3, we constrain the binary properties, infer a
time-averaged mass transfer rate, and speculate on the origin of the
long-term transient behavior. The thermonuclear stability of the
accreted hydrogen and helium is discussed in \S4 for a range of $\dot
M$'s. Section 5 is a discussion of the magnetic field's role in the
stability and character of nuclear burning in accreting X-ray
pulsars. In particular, we discuss in detail how the star will behave
as $\dot M$ decreases. We conclude, in \S6, by describing what a
successful identification of a thermonuclear instability implies about
the neutron star's properties.
\section{
Properties of the Accreting Neutron Star and Comparison to SMC X-1
}
In addition to the bursting behavior, this pulsar is unusual because
of its unusually high spin frequency and its steady spin-up over a
large range of accretion rates. In the context of magnetic accretion,
these facts imply a dipole field lower than most other accreting
pulsars. The magnetosphere is located at $r_m=\xi r_A$ (Ghosh \& Lamb
1979), where $r_A$ is a characteristic length found by equating
magnetic and fluid stresses, and $\xi$ is a model-dependent
dimensionless number. Estimates of $\xi$ range from $\approx 0.52$
(Ghosh \& Lamb 1979) to $\approx 1$ (Arons 1993; Ostriker \& Shu 1995;
Wang 1995). A measurement of $\xi$ obtained from the observed quasi-periodic
oscillations in the accreting pulsar A0535+26 (Finger, Wilson, \&
Harmon 1996b) gave $\xi\approx 1$. Since the neutron star is spinning
up, the magnetospheric radius, $r_m$, must be less than the
co-rotation radius, $r_{\rm co}$, which implies an {\em upper\/} limit
on the surface strength of the dipolar componenet of the magnetic
field
\begin{equation}\label{eq:rm<rco}
B\lesssim 3\times 10^{11}\,{\rm G}\;\xi^{-7/4}
\left(\frac{\dot{M}}{\ee{8}{-9} M_{\odot} \,{\rm yr}^{-1}}\right)^{1/2}
\left(\frac{10\,{\rm km}}{R}\right)^3,
\end{equation}
in agreement with previous estimates (Finger et al.~1996a; Sturner \&
Dermer 1996). The magnetospheric radius is presently unknown;
continual spin-up at lower $\dot M$'s will reduce this upper
limit. This upper bound is less than the typical X-ray pulsar field
strength, which, as we discuss in \S5, changes the nature of the
convection; GRO~J1744-28 is the first high accretion rate X-ray pulsar
beneath this limit.
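For reference, the bound follows from setting $\xi r_A = r_{\rm co}$ with the
conventional Alfv\'en radius $r_A = (\mu^4/2GM_x\dot M^2)^{1/7}$ and dipole
moment $\mu = BR^3$ (standard definitions, quoted here for completeness).
Solving for $B$,
\begin{displaymath}
B = \xi^{-7/4}\, \frac{(2GM_x)^{1/4}\, \dot M^{1/2}\, r_{\rm co}^{7/4}}{R^3}
\approx 3\times 10^{11}\ {\rm G}
\end{displaymath}
for $\xi=1$, $\dot M = \ee{8}{-9} M_{\odot}\,{\rm yr}^{-1}$,
$M_x = 1.4 M_{\odot}$, $R = 10$ km, and $r_{\rm co} = 1.0\times 10^8$ cm,
reproducing equation~(\ref{eq:rm<rco}).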
The X-ray spectrum above 20 keV falls very steeply (Kouveliotou et
al.\ 1996a; Strickman et al.\ 1996) and is consistent with being above
the characteristic cut-off energy found by {\em RXTE}/PCA ($\approx
15\mbox{--}20\,{\rm keV}$ [Swank 1996; Giles et al.\ 1996]). Daumerie et al.\
(1996) argue that this spectrum and the increase in pulse fraction
with energy imply a surface field $\sim 10^{12} \,{\rm G}$. This argument
is at odds with our upper limit on the dipolar component (for $\xi=1$)
and might imply that higher order magnetic moments are present.
We use the measured spin-up rate $\dot \nu$ to constrain the accretion
rate onto the neutron star. The maximum specific angular momentum of
the accreted matter is $l_{\rm max}\equiv(GM_xr_{\rm co})^{1/2}$,
where $r_{\rm co}=1.0\times 10^8 \,{\rm cm}\approx 100 R$ is the co-rotation
radius (where the Kepler frequency equals the neutron star's spin
frequency) for a $M_x=1.4M_{\odot}$ neutron star. Because the observed
torque $2\pi I \dot \nu$ must be less than $\dot M l_{\rm max}$ (Ghosh
\& Lamb 1979; Chakrabarty et al.\ 1993), we set a lower bound to
$\dot M$,
\begin{equation}
\dot M > 8\times 10^{-9} M_{\odot}\,{\rm yr}^{-1}
\left(\frac{\dot{\nu}}{10^{-11}\,{\rm s}^{-2}}\right)
\left(\frac{R}{10\,{\rm km}}\right)^2,
\end{equation}
where $I=0.4 M_x R^2$ is the neutron star's moment of inertia (a good
approximation for our choice of mass and radius [Ravenhall \& Pethick
1994]). The spherically averaged local accretion rate,
\begin{equation}
\dot{m}_{\rm sph} \equiv \frac{\dot{M}}{4\pi R^2} > \ee{4}{4}
\,{\rm g}\,{\rm cm}^{-2}\,{\rm s}^{-1}
\left(\frac{\dot{\nu}}{10^{-11} \,{\rm s}^{-2}}\right),
\end{equation}
is then {\em independent\/} of the neutron star's radius. Using a rough
bolometric flux for 1996 January 15 of $10^{-7} \,{\rm erg} \,{\rm cm}^{-2}
\,{\rm s}^{-1}$, we infer a minimum distance $d=3\,{\rm kpc}$. As we
note later, the extinction and position in the galaxy most likely
place the object at least 8 kpc away.
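For concreteness (taking $M_x = 1.4 M_{\odot}$ and $R = 10$ km, so that
$I \approx 1.1\times 10^{45}\,{\rm g\,cm^2}$ and
$l_{\rm max} \approx 1.4\times 10^{17}\,{\rm cm^2\,s^{-1}}$), the numbers
work out as
\begin{displaymath}
\dot M > \frac{2\pi I \dot\nu}{l_{\rm max}}
\approx \frac{7\times 10^{34}\,{\rm g\,cm^2\,s^{-2}}}
{1.4\times 10^{17}\,{\rm cm^2\,s^{-1}}}
\approx 5\times 10^{17}\,{\rm g\,s^{-1}}
\end{displaymath}
for $\dot\nu = 10^{-11}\,{\rm s}^{-2}$, and the corresponding minimum
accretion luminosity, $GM_x\dot M/R \approx 10^{38}\,{\rm erg\,s^{-1}}$,
combined with the bolometric flux above gives
$d \gtrsim (L/4\pi F)^{1/2} \approx 3\,{\rm kpc}$.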
The only other accreting pulsar which has always been spinning up
($\dot \nu=2.4\times 10^{-11} \,{\rm s}^{-2}$) and for which our earlier
arguments yield a comparable field strength is SMC X-1 ($\nu=1.4 \,
{\rm Hz}$). For a typical accretion rate $\dot M\approx 4\times
10^{-8} M_{\odot} \,{\rm yr}^{-1}$ (Levine et al.\ 1993), we infer $B<10^{12}
\,{\rm G}$. Angelini, Stella, and White (1991) discovered an X-ray burst
from SMC X-1 during an {\em EXOSAT\/} observation on 18 October 1984
when the 1-16 keV luminosity was $L\approx 4\times 10^{38} \,{\rm erg}
\,{\rm s}^{-1}$. This burst is {\em very\/} similar to the large hourly
bursts seen from GRO~J1744-28. The burst rose by a factor of three
within one second, lasted for about 80 seconds, and was followed by a
35\% decline in the persistent flux. There was no evidence for
spectral changes during or after the burst, and the pulse fraction and
phase remained constant. The recurrence time must be long, as only one
burst was seen in $\approx 20$ hours of observation. Angelini et al.\
(1991) noted the similarities to the Rapid Burster and argued for
accretion as the energy source for these events. They also noted that
the variability of the source has a strong underlying quasi-period of
a few minutes, which was present in all {\em EXOSAT\/} observations
except for the five hours following the burst. About 10\% of the total
luminosity of the source is in these variations.
\section{
Properties of the Binary
}
Following the {\em ROSAT\/} positioning of this object (Kouveliotou et
al.\ 1996b), Augusteijn et al.\ (1996) identified the variable
infra-red counterpart, which was present in an earlier image (8
February 1996) of Blanco, Lidman, and Glazebrook (1996) at
$m_K=15.7\pm 0.3$ and was undetected and at least a magnitude fainter
on 28 March 1996. This light is most likely X-rays reprocessed by
either the accretion disk or companion (Augusteijn et al.\ 1996). The
most obvious Roche-lobe filling object is a first-ascent red giant
branch star with a degenerate helium core of mass $M_{\rm He}$ and an
overlying hydrogen envelope (Finger et al.~1996a; Sturner \& Dermer
1996). Hydrogen shell burning via the CNO cycle supplies a luminosity
strongly dependent on only the helium core mass, which allows us to
estimate the stellar luminosity and expected IR magnitude. In the
absence of mass or angular momentum loss, this binary evolves due to
the expansion of the red giant as the helium core mass grows.
\subsection{
Constraints on the Optical Companion and Neutron Star Mass
}
We solve for the companion mass, $M_c$, by using the orbital
parameters $P_{\rm orb}=11.83 \,{\rm d}$ and $a_x \sin i=2.63
\,{\rm lt\mbox{-}sec}$ (Finger et al.~1996a) and the
core-mass/luminosity relations of Webbink, Rappaport, \& Savonije
(1983). We presume that the giant fills the Roche lobe estimated by
Eggleton (1983) and fix $M_x=1.4 M_{\odot}$. For metallicities
$Z=0.02(0.0001)$, the first allowed solution (corresponding to an
unrealistic hydrogen envelope mass of zero) has $M_c/M_{\odot} =
0.216\,(0.232)$ and inclinations less than 18--19 degrees ($\cos
i>0.95$). The solution in the middle of the allowed range (i.e.,
$\cos i=0.975$) has $M_c=0.334\,M_{\odot}$, $M_{\rm He}=0.22\,(0.24)
M_{\odot}$, $L\approx 12 L_\odot$, an orbital separation of $26
R_\odot$, a stellar radius of $6.9R_\odot$, and a projected companion
velocity $K\approx 20 \,{\rm km}\,{\rm s}^{-1}$. The helium core mass is
slightly larger for the metal-poor case to compensate for fewer
catalysts. The strong dependence of the giant radius on $M_{\rm He}$
makes the inferred core mass (shown by the dotted line in Figure 1)
nearly independent of the inclination angle and always close to
$0.22\, (0.24) M_{\odot}$. The dashed line in Figure 1 shows $M_c$ as a
function of $\cos i$ for $Z=0.02$ and $M_x=1.4M_{\odot}$. The resulting
mass transfer rate (Webbink et al.\ 1983) is given by the solid line
in Figure 1. For the ``typical'' solution presented above, the
average mass transfer rate is $\dot{M}\approx 10^{-9} M_{\odot} \,{\rm yr}^{-1}$,
and the hydrogen envelope mass is $0.1 M_{\odot}$. This implies a
lifetime, nearly independent of the metallicity, of $10^8 \,{\rm yr}$. Most
of the giant's envelope goes onto the neutron star (for conservative
evolution).
Unless we are looking at the system nearly pole-on, we must conclude
that $M_c\approx 0.3 M_{\odot}$. If this low-mass star followed a normal
evolutionary track, then it must have started mass transfer as a
$\approx M_{\odot}$ star. If all the departed mass accreted onto the
compact object, then either the neutron star is more massive than its
``birthweight'' ($1.2$--$1.7 M_{\odot}$ [Timmes, Woosley, \& Weaver
1996]), or the neutron star was a white dwarf that underwent
accretion-induced collapse. Considering more massive neutron stars
does not alleviate the companion's substantial mass loss but does
increase the inclination angle (e.g., $\cos i=0.9$ for $M_x=2.0
M_{\odot}$, a doubling of the allowed phase space).
Depending on the inclination angle, an infrared measurement of the
orbit of this system might actually constrain the neutron star
mass. The projected companion velocity is
\begin{equation}
K = \left(\frac{2\pi a_x \sin i}{P_{\rm orb}}\right)
\left(\frac{M_x}{M_c}\right) = 4.85 \,{\rm km}\,{\rm s}^{-1}
\left(\frac{M_x}{M_c}\right).
\end{equation}
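The numerical coefficient follows directly from the orbital elements quoted
above: with $a_x \sin i = 2.63\,{\rm lt\mbox{-}sec} = 7.9\times
10^{10}\,{\rm cm}$ and $P_{\rm orb} = 11.83\,{\rm d} = 1.02\times
10^6\,{\rm s}$,
\begin{displaymath}
\frac{2\pi a_x \sin i}{P_{\rm orb}}
= \frac{2\pi\, (7.9\times 10^{10}\ {\rm cm})}{1.02\times 10^6\ {\rm s}}
\approx 4.85\ {\rm km}\,{\rm s}^{-1} .
\end{displaymath}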
We find the minimum companion mass by requiring (1) that the star be
luminous enough so that it fills the Roche lobe, and (2) that the
overlying hydrogen envelope have a minimum mass of $10^{-2}
M_{\odot}$ so as to live for at least $10^{7} \,{\rm yr}$ and to be fully
convective. The minimum companion mass ($M_c=0.22 M_{\odot}$) is then
basically independent of $M_x$, so that the maximum $K$ for the
Roche-lobe filling giant hypothesis is $K_{\rm max}=21 \,{\rm km}
\,{\rm s}^{-1}(M_x/M_{\odot})$. Hence, a measurement of $K$ can
potentially constrain the neutron star mass. For example, if $M_x=2
M_{\odot}$ and we presume that $\cos i$ is uniformly distributed between
0.9 and 1.0, then there is a 50\% chance of measuring a velocity
larger than the maximum for the $1.4 M_{\odot}$ neutron star case. This
will be a tough job. We find that the stellar effective temperature is
in the range 4000--4200$\,{\rm K}$, which, for the luminosity derived above,
gives $M_K\approx -0.2$ (Bessell \& Brett 1988). If the extinction is
$A_V\approx 25$, as implied by the $N_H\approx (5$--$6)\times 10^{22}
\,{\rm cm}^{-2}$ measurement of Dotani et al.\ (1996), then $A_K\approx 3$ and for
a distance of 8 kpc, we expect $m_K\approx 17$.
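Explicitly, this estimate is just the distance modulus plus extinction (with
our rounding):
\begin{displaymath}
m_K \approx M_K + 5\log\left(\frac{d}{10\ {\rm pc}}\right) + A_K
\approx -0.2 + 14.5 + 3 \approx 17 .
\end{displaymath}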
\subsection{
The Long-Term Accretion History
}
Consistent with the transient nature of this binary, the inferred
long-term mass transfer rate is less than the current accretion rate.
This possibly indicates that the present outburst (which so far has
lasted for over 300 days) arises from a thermal instability in the
accretion disk similar to that occurring in dwarf novae. Indeed, the
implied long-term mass transfer rate is much less than the rate
needed for stable mass transfer (Shafter 1992). Our orbital
calculations give the distance from the neutron star to where the
accreting matter strikes the accretion disk as $R_r\approx 4 R_\odot$
(Lubow \& Shu 1975). The thermal instability occurs when the column
density of matter ($\Sigma$) accumulated at this radius exceeds
$\gtrsim 10^3 \,{\rm g} \,{\rm cm}^{-2}$ (Cannizzo \& Wheeler 1984), in which
case the accumulated mass is $\gtrsim R_r^2 \Sigma \sim \ee{5}{-8}
M_{\odot} $. This requires over 50 years of mass transfer from the giant
and supplies more than enough material to power the large outburst.
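The quoted numbers follow from $R_r \approx 4 R_\odot \approx 2.8\times
10^{11}$ cm:
\begin{displaymath}
R_r^2\, \Sigma \approx (2.8\times 10^{11}\ {\rm cm})^2\, (10^3\ {\rm g\,cm^{-2}})
\approx 8\times 10^{25}\ {\rm g} \approx \ee{4}{-8}\, M_{\odot}
\end{displaymath}
(the same as the $\ee{5}{-8} M_{\odot}$ above to within order-unity geometric
factors), which at the inferred transfer rate of $\approx 10^{-9}
M_{\odot}\,{\rm yr}^{-1}$ takes roughly 50 years to accumulate.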
The accretion will most likely be halted by a propeller effect at low
accretion rates (Illarionov \& Sunyaev 1975). However, even when not
accreting, the neutron star has a luminosity from the cooling core (at
temperature $T_c$) given by Gudmundsson, Pethick, and Epstein (1983) as
\begin{equation}\label{cool}
L_{\rm core} = \ee{6}{32}\,{\rm erg}\,{\rm s}^{-1}
\left({M_x\over 1.4 M_{\odot}}\right)\left({T_{c}\over 10^{8} \,{\rm K}} \right)^{2.2}.
\end{equation}
This luminosity will probably not quench the thermal instability of
the accretion disk as fully exposed matter at $R_r$ only has an
effective temperature of $T_{\rm eff}=1800 \,{\rm K}(T_c/10^{8}\,{\rm K})^{0.55}$.
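This effective temperature is simply the blackbody temperature of matter at
$R_r$ intercepting the core luminosity of equation~(\ref{cool}), assuming,
as stated, that the matter is fully exposed:
\begin{displaymath}
\sigma T_{\rm eff}^4 = \frac{L_{\rm core}}{4\pi R_r^2}
\;\Longrightarrow\;
T_{\rm eff} \approx \left[\frac{\ee{6}{32}\,{\rm erg\,s^{-1}}}
{4\pi\, (2.8\times 10^{11}\,{\rm cm})^2\, \sigma}\right]^{1/4}
\left(\frac{T_c}{10^8\,{\rm K}}\right)^{0.55}
\approx 1800\ {\rm K} \left(\frac{T_c}{10^8\,{\rm K}}\right)^{0.55} .
\end{displaymath}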
It is the unusual combination of large orbital separation, low
inferred mass transfer rates, and low neutron star luminosity that
allows for such instabilities in this neutron star binary. For
time-averaged accretion rates $\approx 10^{-9} M_{\odot} \,{\rm yr}^{-1}$, the
outburst intervals are longer than the history of X-ray and
$\gamma$-ray monitoring. The system's extinction ($A_V>25$) also
makes optical identification of a prior outburst unlikely.
\section{
The Nuclear Burning of Accreted Matter
}
The thermal stability and appearance of nuclear burning on steadily
accreting neutron stars is well studied. For comparable metallicities
and magnetic fields weak enough so as not to affect the opacities
($<10^{13} \,{\rm G}$), the only residual difference between this neutron
star and others accreting at comparable local rates is a colder
core. This might allow for unstable hydrogen/helium ignition at
slightly higher instantaneous local accretion rates ($\dot m$) than on
a steadily accreting star.
Constant accretion at these $\dot M$'s conductively heats the neutron
star core, which attains an equilibrium temperature of
$T_c=(2\mbox{--}4)\times 10^8 \,{\rm K}$ in about $10^3\mbox{--}10^4 \,{\rm yr}$ by
balancing this heating with neutrino cooling (Ayasli \& Joss
1982). The time-averaged $\dot M$ in GRO~J1744-28 is comparable to
many X-ray binaries, whereas the instantaneous $\dot M$ can clearly be
much higher. The cycling of $\dot M$ leads to a colder equilibrium
core than that of a neutron star steadily accreting at the same
time-averaged $\dot M$. This is because the thermal timescale of the
deep ocean and crust (which is where substantial energy release occurs
as matter is forced through the electron capture boundaries [Haensel
\& Zdunik 1990]) is $\sim 10\mbox{--}100$ years, much longer than the
outburst duration. There is thus insufficient time to heat the crust
to a temperature profile favorable for sending a large luminosity
$L\approx (\dot{M}/m_p)(1\,{\rm MeV})$ into the core. In addition, between
outbursts, the cold outer envelope conducts away the accumulated heat
in the deep ocean and crust. For temperatures below $T_c\sim
\ee{2}{8}\,{\rm K}$ the core cools radiatively between outbursts, much as a
young neutron star still hot from birth.
\subsection{
Settling and Nuclear Burning of the Accreting Matter
}
As described in \S3, the neutron star is accreting the hydrogen-rich
envelope of an evolved giant. Because the giant has already lost an
appreciable amount of mass, the matter presently being transferred was
most likely processed in the companion's interior during its main
sequence lifetime. This would increase the helium content. The
$\beta$-decay limitations and high temperatures on the neutron star
fix the hydrogen burning (via the CNO cycle) rate at the $\beta$-decay
limited value of the ``hot'' CNO cycle, $\epsilon_H=5.8\times 10^{15}
Z_{\rm CNO} \,{\rm erg}\,{\rm g}^{-1}\,{\rm s}^{-1}$ where $Z_{\rm CNO}$ is the mass
fraction of the CNO elements. For high accretion rates, this burning
never consumes the accreted hydrogen before helium ignition, so that
unstable helium burning occurs in a hydrogen-rich environment,
which enhances the nuclear reaction chains and energy release (Lamb \& Lamb
1978; Taam \& Picklum 1979; Fujimoto et al.\ 1981; Taam 1982). Because
of the complicated rp process (Wallace
\& Woosley 1981), the exact composition of the ashes is unknown except
for a few limited cases (Van Wormer et al.\ 1994).
Prior to helium ignition, the accreted material is settling onto the
neutron star and is stably burning hydrogen. Since the thermal time is
always less than the time to accrete to a given depth, $t_{\rm
accr}=y/\dot m$, we find the temperature structure by solving the
time-independent entropy and flux equations,
\begin{equation}\label{eq:atmos}
T\dot m{ds\over dy} ={dF \over dy }+ \epsilon_H,
\hskip 10 pt F={c\over 3 \kappa }{d a T^4\over dy},
\end{equation}
where the local accretion rate is written as $\dot m=\dot m_4 10^4
\,{\rm g}\,{\rm cm}^{-2}\,{\rm s}^{-1}$, $s$ is the specific entropy, $F$ is the
outward heat flux, and $y$ is the column depth. The opacity in the
upper atmosphere is given by $1/\kappa=1/\kappa_{\rm
rad}+1/\kappa_{\rm cond}$, where the radiative opacity is the sum of
electron scattering (we use Paczy\'nski's [1983] fitting formulae
for the degeneracy and high temperature corrections) and free-free
absorption. We use the conductivity from Yakovlev \& Urpin (1980).
These settling solutions are valid until the helium burns fast enough
to appreciably change either the temperature or the helium
abundance. Fushiki and Lamb (1987) defined the boundary of stable
helium burning in the $y$-$T$ plane (the ignition curve) by setting
\begin{equation}
{d\epsilon_{3\alpha}\over dT}={d\epsilon_{c}\over dT},
\end{equation}
where
$\epsilon_c=acT^4/3\kappa y^2$ is a local representation of the
conductive cooling and $\epsilon_{3\alpha}$ is the helium burning
rate. These derivatives are taken at constant pressure, as the
instability grows faster than the pressure changes. The heavy
dot-dashed line in Figure 2 denotes this ignition curve for
$Y=0.3$. Regions to the right of this curve are thermally unstable to
helium burning. We also define the depletion curve by equating the
helium lifetime to the $3\alpha$ reaction with $t_{\rm accr}$ (Fushiki
\& Lamb 1987); the heavy solid lines in Figure 2 denote this condition
for $\dot m_4=3,\ 7.5,\ {\rm and}\ 30$.
We derive the settling solutions by varying the flux exiting the
atmosphere until the flux is zero at the depth where the ignition
curve (or depletion curve, whichever is first) is met. We take zero
flux at the bottom to mimic the cold core. These settling solutions
all strike the ignition curve before the depletion curve, so that we
find the column density $y_{\rm ign}$ accumulated on the star prior to
unstable helium ignition.
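The shooting construction just described can be sketched numerically. Below is
a minimal illustration (not the production code behind Figure 2), assuming a
fully ionized ideal gas with constant Thomson opacity and the hot CNO heating
rate quoted above; degeneracy, free-free and conductive corrections are
omitted, so only qualitative agreement should be expected.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

# Constants (cgs) and simplified microphysics -- illustrative assumptions.
a_rad, c = 7.566e-15, 3.0e10    # radiation constant, speed of light
kB, mp   = 1.38e-16, 1.67e-24
kappa    = 0.34                 # Thomson opacity (cm^2 g^-1)
mu       = 0.6                  # mean molecular weight, ionized H/He
mdot     = 7.5e4                # local accretion rate (g cm^-2 s^-1)
eps_H    = 5.8e15 * 0.01        # hot-CNO rate for Z_CNO = 0.01
y_ign    = 3.0e8                # depth where the ignition curve is met

def rhs(y, u):
    T, F = u
    dTdy = 3.0 * kappa * F / (4.0 * a_rad * c * T**3)  # radiative transport
    cP = 2.5 * kB / (mu * mp)                          # ideal-gas entropy terms
    dFdy = mdot * (cP * dTdy - kB * T / (mu * mp * y)) - eps_H
    return [dTdy, dFdy]

def flux_at_base(F0, y0=1.0e4):
    T0 = (3.0 * kappa * F0 * y0 / (a_rad * c))**0.25   # radiative-zero start
    sol = solve_ivp(rhs, (y0, y_ign), [T0, F0], method="LSODA", rtol=1e-8)
    return sol.y[1, -1]

# Bisect on the exit flux until F vanishes at the ignition depth.
lo, hi = 1.0e21, 1.0e23
for _ in range(60):
    mid = np.sqrt(lo * hi)
    lo, hi = (mid, hi) if flux_at_base(mid) < 0.0 else (lo, mid)
print("exit flux ~ %.2e erg cm^-2 s^-1" % np.sqrt(lo * hi))
\end{verbatim}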
For Population I type metallicities, the temperature of the settling
material is mostly set by the slight amount of hydrogen
burning. Indeed, as is evident in the three light solid lines that
display these settling solutions in Figure 2, $y_{\rm ign}$ depends
only mildly on $\dot m$ for the $Z_{\rm CNO}=0.01$ case. We obtain
$y_{\rm ign}/10^8 \,{\rm g}\,{\rm cm}^{-2}=3.1,\ 2.95,\ {\rm and} \ 2.65$ for
$\dot m_4= 3, \ 7.5,\ {\rm and}\ 30$; these correspond to recurrence
times of 2.9 hours, 1.1 hours, and 15 minutes. The flux exiting the
atmosphere is $F/(10^{22}\,{\rm erg}\,{\rm cm}^{-2}\,{\rm s}^{-1}) = 2.0, \ 2.42,\ {\rm
and}\ 4.93$ for these $\dot m$'s.
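(As a quick consistency check, these recurrence times follow from
$t_{\rm accr}=y/\dot m$ evaluated at ignition: for $\dot m_4=3$,
\begin{equation*}
t_{\rm recur}={y_{\rm ign}\over \dot m}
={3.1\times 10^{8}\,{\rm g}\,{\rm cm}^{-2}\over 3\times 10^{4}\,{\rm g}\,{\rm cm}^{-2}\,{\rm s}^{-1}}
\approx 1.0\times 10^{4}\,{\rm s}\approx 2.9\ {\rm hours},
\end{equation*}
and similarly for the other two rates.)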
Lower CNO abundances are relevant if the companion is an older
Population II giant and/or if spallation of the incident nuclei occurs
in the accretion shock (Bildsten, Salpeter, \& Wasserman 1992). At
lower $Z_{\rm CNO}$, the settling solutions are more sensitive to the
accretion rate via the gravitational compression terms. The two
bottom thin solid lines in Figure 2 are settling solutions when
$Z_{\rm CNO}=10^{-4}$ for $\dot m_4=7.5\ {\rm and}\ 30$, giving
recurrence times of 6.6 hours and 0.5 hours.
\subsection{
When is the Burning Time Dependent?
}
The crucial question to answer is, ``At what accretion rate is the
burning unstable?'' The simplest way to address this is to construct
the steady-state solution (represented by the dotted lines in Figure 2
for the $\dot m$'s given above) that burns the material as fast as it
accretes. If this solution does not consume the fuel before reaching
the instability curve, then it is unstable. For an accretion rate
below $\dot m_{\rm crit}\approx 3\times 10^4 \,{\rm g}\,{\rm cm}^{-2}\,{\rm s}^{-1}$,
the burning is unstable (for $Y=0.3$). This critical accretion rate
increases by a factor of two if the helium mass fraction is 0.5.
These estimates also indicate that the burning is stable when $\dot m
> \dot m_{\rm crit}$.\footnote{There have been indications in the past
that the $\dot m_{\rm crit}$ obtained from a time dependent simulation
might actually be higher than our estimate. In particular, Ayasli and
Joss (1982) argued that the steady state solution would not be reached
until the time to reach the ignition depth became shorter than the
local thermal time. This requires accretion rates 2--5 times higher
than our estimate. However, in the absence of a full time-dependent
calculation, we will stick to our present, and maybe overly conservative,
estimate.} {\em Since $\dot m_{\rm crit}$ is smaller than our minimum
estimated accretion rate at the outburst peak, it is unlikely that the
hourly bursts during the brightest parts of the outburst have a
thermonuclear origin.}
However, the burning becomes unstable when $\dot m< \dot m_{\rm crit}$
(either due to a reduction in the overall accretion rate or spreading
away from the polar cap; see \S5). When the fuel spreads over the
whole star prior to ignition, the maximum luminosity (and hence flux)
at which an instability occurs is $L\approx 7\times 10^{37}
\,{\rm erg}\,{\rm s}^{-1}$ (for $\dot m_{\rm crit}=3\times 10^4
\,{\rm g}\,{\rm cm}^{-2}\,{\rm s}^{-1}$ and $R=10 \,{\rm km}$). This corresponds to
a bolometric flux
\begin{equation}\label{eq:fmin}
F_{\rm unstable}<10^{-8} \,{\rm erg}\,{\rm cm}^{-2}\,{\rm s}^{-1}\left(L\over
7\times 10^{37} \,{\rm erg} \,{\rm s}^{-1}\right)\left( 8 \,{\rm kpc} \over
d\right)^2.
\end{equation}
We convert this to $\,{\rm c\,s^{-1}}$ in the PCA instrument aboard {\em RXTE\/} by
using the conversion factor (Giles et al.~1996) of $4\times 10^{-8}
\,{\rm erg}\,{\rm cm}^{-2}\,{\rm s}^{-1}$ (2\mbox{--}60 keV) for $10^4\,{\rm c\,s^{-1}}$.
For $d=8 \,{\rm kpc}$, unstable burning signatures will most likely
not appear until the PCA count rate is $\lesssim 2500\,{\rm c\,s^{-1}}$.
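Explicitly, this count rate follows from a simple scaling of the conversion
factor,
\begin{equation*}
\dot N_{\rm PCA}\approx 10^4\,{\rm c\,s^{-1}}\,
{F_{\rm unstable}\over 4\times 10^{-8}\,{\rm erg}\,{\rm cm}^{-2}\,{\rm s}^{-1}}
\approx 2500\,{\rm c\,s^{-1}}
\left({F_{\rm unstable}\over 10^{-8}\,{\rm erg}\,{\rm cm}^{-2}\,{\rm s}^{-1}}\right).
\end{equation*}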
When the burning is unstable, the intersection of the settling
solutions with the helium ignition curve defines $y_{\rm ign}$. For
typical metallicities, $y_{\rm ign}> \ee{1.5}{8}\,{\rm g}\,{\rm cm}^{-2}$, which
at $\dot m_{\rm crit}$ gives a recurrence time of just over an
hour. Once the helium ignites, the temperature rises rapidly and
starts a propagating combustion front. If the whole star is ignited
(see next section) the maximum burst energy would be $4\pi R^2 y_{\rm
ign} (7 \,{\rm MeV}/m_p)$. This is the maximum, as time-dependent
calculations of the hydrogen/helium burning flash often found
incomplete burning (Taam et al.\ 1993) and the whole star need not
ignite. The resulting fluence would be
\begin{equation}
\mbox{Maximum Burst Fluence}=1.5 \times 10^{-6} \,{\rm erg}\,{\rm cm}^{-2}
\left(y_{\rm ign}\over \ee{1.5}{8} \,{\rm g}\,{\rm cm}^{-2}\right)\left(R\over
10\,{\rm km}\right)^2\left(8 \,{\rm kpc} \over d\right)^2.
\end{equation}
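(The prefactor can be checked directly: with
$7\,{\rm MeV}/m_p\simeq 6.7\times 10^{18}\,{\rm erg}\,{\rm g}^{-1}$ and
$(R/d)^2\simeq 1.6\times 10^{-33}$ for $R=10\,{\rm km}$ and $d=8\,{\rm kpc}$,
the fluence is $y_{\rm ign}(7\,{\rm MeV}/m_p)(R/d)^2\approx
1.5\times 10^{8}\times 6.7\times 10^{18}\times 1.6\times 10^{-33}
\approx 1.6\times 10^{-6}\,{\rm erg}\,{\rm cm}^{-2}$, consistent with the
expression above.)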
The peak luminosity depends on the combustion front's propagation
speed through the fuel-rich regions, as we now discuss.
\section{
The Role of the Magnetic Field
}
Conventional Type I X-ray bursts are not seen from highly magnetized
($B\gtrsim 10^{12} \,{\rm G}$) accreting X-ray pulsars. This was at
first surprising because they accrete at rates comparable to X-ray
burst sources that are not obviously magnetic. Joss and Li (1980)
explained the lack of bursts by {\em stabilizing\/} the nuclear burning
with two mechanisms. For this pulsar, the relevant mechanism is the
increased local accretion rate on the polar cap. The magnetic field
funnels the accretion onto the polar cap and confines the accretion
mound until the ignition pressure is reached. These constraints are
typically satisfied for steadily accreting X-ray pulsars with $\dot{M}
\gtrsim 10^{-10} M_{\odot} \,{\rm yr}^{-1}$, as the fractional area of the polar
cap only needs to satisfy $A_{\rm cap}/4\pi R^2 \lesssim 0.01$. This
is well within the estimates obtained by either following the field
lines from the magnetospheric radius to the star (Lamb, Pethick, \&
Pines 1973),
\begin{equation}\label{cap}
A_{\rm cap} \approx 10^{11}\,{\rm cm}^2 \xi^{-1}
\left(\frac{B}{10^{11}\,{\rm G}}\right)^{-4/7}
\left(\frac{\dot{M}}{10^{-8} \ M_{\odot} \,{\rm yr}^{-1}}\right)^{2/7}
\left(\frac{R}{10\,{\rm km}}\right)^{9/7},
\end{equation}
or allowing the
matter to penetrate through the magnetopause via a Rayleigh-Taylor
instability and attach to field lines at smaller radii (Arons \& Lea
1976; Elsner \& Lamb 1977).
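A minimal numerical illustration of equation (\ref{cap}) and the cap-fraction
condition is given below; the parameter values are only illustrative.
\begin{verbatim}
import math

def A_cap(B=1e11, Mdot=1e-8, R=10.0, xi=1.0):
    """Polar cap area (cm^2) from eq. (cap); B in G, Mdot in Msun/yr, R in km."""
    return (1e11 / xi) * (B / 1e11)**(-4.0 / 7.0) \
         * (Mdot / 1e-8)**(2.0 / 7.0) * (R / 10.0)**(9.0 / 7.0)

A = A_cap()
frac = A / (4.0 * math.pi * (10.0 * 1e5)**2)  # fraction of the stellar surface
print("A_cap = %.2e cm^2, A_cap/4piR^2 = %.3f" % (A, frac))  # ~0.008
\end{verbatim}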
The absence of Type I X-ray bursts
in {\em all\/} X-ray pulsars seems unlikely given the range in magnetic
field strengths and accretion rates. Bildsten (1995) suggested that,
even when the burning is thermally unstable, a strong magnetic field
will inhibit the rapid lateral convective motion needed for the
combustion front to ignite the whole star in a few seconds (Fryxell \&
Woosley 1982). The field strength required to halt the convective
($\sim 10^6 \,{\rm cm}\,{\rm s}^{-1}$) propagation of burning fronts is not
known. Convection is potentially stabilized when $B^2 > 8\pi P$
(Gough \& Tayler 1966), which requires $B\gtrsim 7\times
10^{11}\,{\rm G}$ in the helium burning region.
\footnote{
Even lower fields might slow the burning fronts, as the sub-sonic
velocities ($v_c\sim 10^6 \,{\rm cm}\,{\rm s}^{-1}$) implied by efficient
convection can only push around fields of strength $B^2<8\pi \rho
v_c^2$, or $B<10^9 \,{\rm G}$ at the helium ignition depth (Bildsten
1995). Observations will most likely tell us the outcome for magnetic
fields in the intriguing regime $\rho v_c^2 \ll B^2/8\pi \ll P\:
(10^9\,{\rm G} < B < 7\times 10^{11} \,{\rm G} )$.}
Most inferred dipolar field strengths
for accreting X-ray pulsars easily satisfy this constraint. However,
GRO~J1744-28 does not and hence is an especially intriguing candidate
for showing Type I X-ray bursts.
If the lateral convective motion is
inhibited (or if there is too little fuel for convection to occur
[Bildsten 1993]) then the burning front propagates at the slower speed
set by heat transport (electron conduction and/or radiative transport,
depending on the depth)
\begin{equation}
v_{\rm slow}\approx 80 + 200
\left(\frac{y_{\rm He}-y_q}{\ee{4.5}{7} \,{\rm g}\,{\rm cm}^{-2}}\right)
\,{\rm cm}\,{\rm s}^{-1},
\end{equation}
where $y_{\rm He}$
is the local helium column density (in $\,{\rm g}\,{\rm cm}^{-2}$) and
$y_q=\ee{1.35}{8} \,{\rm g}\,{\rm cm}^{-2}$ is the minimum column density needed
for a pure helium burning front to propagate (Bildsten 1995). This
relation is for a pure helium atmosphere and is a reasonable lower
limit to the mixed hydrogen/helium burning case. For the typical
$y_{\rm He}\approx \ee{2}{8} \,{\rm g}\,{\rm cm}^{-2}$ where the instability sets
in, $v_{\rm slow}\approx 400 \,{\rm cm}\,{\rm s}^{-1}$, so that the burning front
crosses a $10^5 \,{\rm cm}$ polar cap in $t_{\rm cross}\approx 4 \,{\rm
min}$. In this case, the thermonuclear instability would appear as a
flare of a few minutes duration (potentially time symmetric) with a
luminosity set by the amount of accumulated fuel and the time to burn
all of it, $L_{\rm flare}\approx 0.05 L_{\rm accr}(t_{\rm accr}/t_{\rm
cross})$ (Bildsten 1995).
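(For the quoted numbers this is easily verified: with
$y_{\rm He}=\ee{2}{8}$ and $y_q=\ee{1.35}{8}\,{\rm g}\,{\rm cm}^{-2}$,
\begin{equation*}
v_{\rm slow}\approx 80+200\left({6.5\times 10^{7}\over 4.5\times 10^{7}}\right)
\,{\rm cm}\,{\rm s}^{-1}\approx 370\,{\rm cm}\,{\rm s}^{-1},
\end{equation*}
so that $t_{\rm cross}\approx 10^5\,{\rm cm}/v_{\rm slow}\approx 4\textrm{--}5$
minutes, as stated.)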
\subsection{
The Spreading of Accreted Fuel
}
The local accretion rate, $\dot m$, onto the polar cap would greatly
exceed $\dot m_{\rm sph}$ if accretion onto GRO~J1744-28 is via a
polar cap of the size given by equation (\ref{cap}) (this is not clear
given the small pulse fraction and nearly sinusoidal pulses seen by
Finger et al.\ [1996a]). Because the fate of the accreted material
depends on $\dot m$, knowing the depth at which accreted matter flows
laterally over the surface and reduces $\dot m$ to $\dot{m}_{\rm sph}$
is crucial. The magnetic Reynolds number of the flow is $4\pi \sigma
\ell v/c^2$, where $\ell$ is the length over which the magnetic field
varies, $\sigma$ is the electrical conductivity, and $v=\dot{m}/\rho$
is the downward flow velocity. Using the pressure scale height, $h =
P/\rho g$, as an estimate for $\ell$, we find that the magnetic
Reynolds number is $ \sim 100$ in the helium burning region ($P\approx
10^{22}\,{\rm erg}\,{\rm cm}^{-3}$, $T\approx\ee{5}{8}\,{\rm K}$). As a result, the
magnetic field is frozen into the matter at the ignition depth.
For a small polar cap, we find the pressure where the magnetic field
can no longer hold up the accretion mound by balancing the transverse
pressure gradient (given by the lateral extent of the polar cap,
$\ell_{\rm cap}\sim \sqrt{A_{\rm cap}} <R$) with a distorted field.
Following Hameury et al.\ (1983), we presume spreading occurs at the
depth where the field is distorted by a large angle. To illustrate
this problem, consider an azimuthally symmetrical and poloidal
magnetic field $\vec{B}=(B_\varpi,0,B_z)$ in cylindrical coordinates
$(\varpi,\phi,z)$. The characteristic length in the $z$-direction is the
pressure scale height $h=P/\rho g$ and the characteristic length in
the $\varpi$-direction is $\ell_{\rm cap}$. The accretion flow distorts
the field from $\vec{B} = (0,0,B)$ to the perturbed configuration
$\vec{B} = (B_\varpi,0,B-\delta B_z)$. Equating estimates of $\vec{J}$
obtained from ${\rm curl}\,\vec{B} = 4\pi\vec{J}/c$ and the $\varpi$-component
of $-\vec{\nabla} P + \rho\vec{g} + \vec{J}\mbox{\boldmath $\times$}\vec{B}/c = \vec{0}$, and
using ${\rm div}\,\vec{B} = 0$ to obtain a relation between $\delta B_z$ and
$B_\varpi$, we have
\begin{equation} \label{eq:fd}
\frac{B_\varpi}{B} \sim \frac{4\pi h P}{\ell_{\rm cap}B{}^2}.
\end{equation}
For a fully ionized H/He mixture with an ideal gas equation of state,
the pressure where $B_\varpi\sim B $ is
\begin{equation}\label{eq:spread}
P_{\rm spread} \sim 10^{22} \,{\rm erg}\,{\rm cm}^{-3}
\left(\frac{A_{\rm cap}}{10^{11}\,{\rm cm}^2}\right)^{1/2}
\left(\frac{B}{10^{10}\,{\rm G}}\right)^2
\left(\frac{\ee{2}{8}\,{\rm K}}{T}\right).
\end{equation}
This pressure defines a boundary ($y_s$ in column depth) where the
matter starts to spread laterally. We will presume that above $y_s$
the local accretion rate is $\dot{M}/A_{\rm cap}$ and that below
$y_s$, the local accretion rate is the spherical value $\dot{m}_{\rm
sph}$.
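A short numerical sketch of equation (\ref{eq:spread}) makes the two regimes
explicit; the conversion to column depth via $y_s\simeq P_{\rm spread}/g$ and
the adopted surface gravity are our illustrative simplifications.
\begin{verbatim}
def P_spread(A_cap=1e11, B=1e10, T=2e8):
    """Spreading pressure (erg cm^-3) from eq. (spread)."""
    return 1e22 * (A_cap / 1e11)**0.5 * (B / 1e10)**2 * (2e8 / T)

g = 2e14  # surface gravity (cm s^-2), illustrative
for B in (1e10, 3e10, 1e11):
    ys = P_spread(B=B) / g  # column depth of the spreading boundary
    print("B = %.0e G: y_s ~ %.1e g/cm^2" % (B, ys))
\end{verbatim}
For $B=10^{10}\,{\rm G}$ this gives $y_s\sim\ee{5}{7}\,{\rm g}\,{\rm cm}^{-2}<y_{\rm ign}$
(spreading before ignition), while for $B=3\times 10^{10}\,{\rm G}$ one finds
$y_s\sim\ee{4.5}{8}\,{\rm g}\,{\rm cm}^{-2}>y_{\rm ign}$, consistent with the
dividing field strength $B\sim\ee{(2\mbox{--}4)}{10}\,{\rm G}$ discussed below.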
\subsection{ Global Behavior of the Nuclear Burning for GRO~J1744-28 }
Having shown that the nuclear burning is unstable when $\dot m< \dot
m_{\rm crit}\approx 3\times 10^4 \,{\rm g} \,{\rm cm}^{-2} \,{\rm s}^{-1}$, we now
estimate $\dot m$ at the ignition depth in terms of the dipole
magnetic moment $\mu$ and global accretion rate $\dot M$ (we are
assuming for simplicity that no higher-order multipole moments are
present). We show in Figure \ref{fig:parspace} the cases $\xi=1.0$
and $\xi=0.5$. Within this parameter space, the first requirement is
that the magnetospheric radius be less than the co-rotation radius
(eq.~[\ref{eq:rm<rco}] with $R$ set to $10\,{\rm km}$) and is indicated by
the unhatched area. The second relation, indicated by the dot-dashed
line in Figure \ref{fig:parspace}, is $y_s = y_{\rm ign}$. Below this
line, the accretion flow spreads before ignition, thereby lowering the
local accretion rate to $\dot{m}_{\rm sph}$. Depending on the dipolar
field strength there are two cases.
\begin{itemize}
\item For $B\lesssim \ee{(2\mbox{--}4)}{10}\,{\rm G}$, the accreted matter
spreads before igniting regardless of the polar cap area. Thus, the
relevant local accretion rate (indicated by the vertical dashed line)
is $\dot{m}_{\rm sph}$ and the burning is unstable when the flux
constraint (eq.~[\ref{eq:fmin}]) is reached. This case is denoted by
region I in Figure 3.
\item For $B\gtrsim \ee{(2\mbox{--}4)}{10}\,{\rm G}$, the matter ignites prior
to spreading, so that the nature of the thermonuclear burning depends
on the polar cap size. This case is denoted by region II in Figure
3. The most constraining polar cap area is given by equation (\ref{cap})
and is indicated by the slanted dotted line in the right-hand plot.
For GRO~J1744-28, this scenario is only relevant if $\xi\sim 0.5$, as
otherwise the propeller effect will halt accretion before
$\dot{m}<\dot{m}_{\rm crit}$ on the polar cap. For $\xi=1.0$, nuclear
burning in region II is unstable only if the polar cap area is larger
than the estimate of equation (\ref{cap}).
\end{itemize}
Our ignorance of the polar cap size prohibits us from saying at what
accretion rate the $B\gtrsim\ee{(2\mbox{--}4)}{10} \,{\rm G}$ case
becomes unstable (the triangular regions denoted II). If we choose a
fixed polar cap area of 10\% of the stellar area, then the $\dot M$
needed for an instability decreases by a factor of ten, as does the
maximum flux required to see an instability. The polar cap of Arons and
Lea (1980) is always larger than 10\% of the stellar area and implies
that the burning becomes unstable when $L< \ee{3}{37} \,{\rm erg}\,{\rm s}^{-1}$.
The appearance of unstable burning will constrain the polar cap size,
as a necessary condition for stable burning is $A_{\rm
cap}<\dot{M}/\dot{m}_{\rm crit}$.
\section{
Summary and Observational Outlook
}
The continuous monitoring of GRO~J1744-28 by the {\em RXTE\/} provides
an important opportunity to learn about both accretion and
thermonuclear instabilities on a weakly magnetized neutron star. We
have shown that the bursts observed during the peak of the outburst
are most likely not of thermonuclear origin, as even the minimum local
accretion rate on the neutron star is too high for unstable burning of
hydrogen-rich material. This statement is no longer true as the
accretion rate, $\dot M$, decreases.
The full understanding of the thermonuclear burning depends on many
properties of both the neutron star and the binary. In \S2, we used
the observed torque to estimate the minimum accretion rate and dipole
field, which led to the comparison with SMC X-1, another bursting
X-ray pulsar. We speculated in \S3.2 that the present outburst might
be the result of a thermal instability in the disk, comparable to what
occurs in dwarf novae. We also discussed the optical companion and
pointed out that a careful velocity measurement of the companion will
constrain the neutron star mass.
We have shown that, if $B\lesssim\ee{(2\mbox{--}4)}{10} \,{\rm G}$, then the
nuclear burning becomes unstable when the intrinsic source luminosity
is $L\lesssim \ee{7}{37} \,{\rm erg}\,{\rm s}^{-1}$. For higher magnetic fields,
the matter stays confined at the polar cap before ignition, in which
case the stability of the nuclear burning depends strongly on the
polar cap size. As the outburst fades and $\dot M$ decreases, the
burning might become unstable, especially if the polar cap is larger
than the conventional estimate (equation [\ref{cap}]), for which the
thermonuclear burning is always stable when $\xi=1$. Hence, for larger
fields, the global accretion rate when the first signatures of
thermonuclear instability appear will constrain the polar cap size.
As discussed in \S5, the character of the instability strongly depends
on the burning front's propagation speed. If the instability is too
weak to convect (or if the field is strong enough, $B\gtrsim 7\times
10^{11} \,{\rm G}$, to inhibit the lateral convective heat transport) a
possible burning signature is flares with durations of a few minutes
to an hour. The duration of the flare is set by the slow burning speed
(\S5) and the size of the fuel-rich region. These flares rise on a
timescale comparable to their duration and are not necessarily
asymmetrical in time, as Type I X-ray bursts are. For a polar cap
radius of $10^5\,{\rm cm}$ (10\% of the stellar radius) and a typical
ignition depth of $\ee{2}{8}\,{\rm g}\,{\rm cm}^{-2}$, the burning front crosses
the polar cap in four minutes. If convection occurs, then we expect
to see Type I X-ray bursts as in other Low Mass X-Ray Binaries.
When bursts or flares first appear, they will have recurrence times of
order an hour if our $\dot m_{\rm crit}$ determination is accurate. It
is possible that the recurrence times will be longer than this
(especially for low metallicity), in which case detection will be more
difficult. The ability of the PCA to position bursts to within $0.2$
degrees has already eliminated GRO~J1744-28 as the source of two Type
I X-ray bursts seen in the field (Corbet \& Jahoda 1996; Jahoda et
al.\ 1996) and should eventually provide an unambiguous localization
of a Type I burst from GRO~J1744-28.
We thank Jon Arons, Mark Finger, Keith Jahoda, Chris McKee, Ed
Morgan, Tom Prince and Bob Rutledge for many discussions about the
nature of this source. We especially thank Bob Rutledge (MIT) for
creating and maintaining a Web site about this X-ray binary. Our work
was supported by NASA via grants NAG 5-2819 and NAGW-4517 and by the
California Space Institute (CS-24-95). L. B. was also supported by
the Alfred P. Sloan Foundation.
\section{Introduction}
Traditional SLAM (Simultaneous Localization and Mapping) systems pay great attention to geometric information. Based on the solid foundation of multi-view geometry, many excellent studies have been carried out. However, problems arise from the non-geometric modules in SLAM systems. To track the location of the camera, researchers usually perform pixel-level matching operations in the tracking thread and optimize the poses of a small number of frames in local mapping. No doubt errors resulting from drift in pose estimation and map evaluation keep accumulating.
Meanwhile, deep learning, a data-driven technique, has brought about rapid development in numerous computer vision tasks such as classification and matching. Such achievements suggest that deep learning may be one of the best choices for solving problems related to data association. Therefore, more and more researchers believe that pixel-level or higher-level associations between images, the bottleneck of SLAM systems mentioned above, can also be handled with the help of neural networks.
Deep learning has proved its value in SLAM systems. Many outstanding studies have employed it to replace some non-geometric modules in traditional SLAM systems \cite{kottas2013efficient,kong2015tightly,zbontar2016stereo,luo2016efficient,feng2017efficient}. These approaches enhance the overall SLAM system by improving only part of a typical pipeline, such as stereo matching or relocalization. Some researchers also attempt to use higher-level features obtained through deep learning models as a supplement to SLAM \cite{salas2013slam++,reid2014towards,atanasov2014semantic,bowman2017probabilistic,gay2017probabilistic}. These higher-level features are more likely to capture semantic content at the object level and improve the capability of visual scene understanding. Moreover, end-to-end learning models have also been proposed \cite{zhu2017target,gupta2017cognitive}. These methods outperform traditional SLAM algorithms under specific circumstances and demonstrate the potential of deep learning in SLAM.
However, such combinations of deep learning and SLAM have significant shortcomings. Most deep learning methods rely heavily on the data used for training, which means that they cannot adapt well to unknown environments. For example, we cannot know in advance whether the room we want to explore is equipped with chairs and desks, and we cannot guarantee that a semantic prior on desks will help on such an occasion. What is more, most deep-learning-enhanced SLAM systems are designed to highlight the advantages of deep learning techniques and abandon the strong points of SLAM. As a result, they may sacrifice efficiency, an essential requirement of SLAM algorithms, for accuracy.
Last but not least, some DL-based SLAM techniques take traditional SLAM systems as their underlying framework \cite{zbontar2016stereo,luo2016efficient,feng2017efficient,detone2017superpoint} and make a great many changes to support deep learning strategies. Too many replacements may lead to the loss of useful properties of the SLAM pipeline and also make it hard for researchers to perform further comparisons with existing studies, let alone migrate these techniques to other SLAM systems. As a result, DL-based SLAM is not yet mature enough to outperform traditional SLAM systems.
Therefore, we make our effort to put forward a simple, portable and efficient SLAM system. Our basic idea is to improve the robustness of local feature descriptors through deep learning to ensure the accuracy of data association between frames.
In this paper, we propose a novel approach that uses learned local feature descriptors as a substitute for traditional hand-crafted descriptors. Our method has advantages in portability and convenience, as deep feature descriptors can directly replace traditional ones. The replacement is highly practical for all SLAM systems and even for other geometric computer vision tasks such as Structure-from-Motion, camera calibration and so on. The learned local feature descriptors deliver better performance than hand-crafted ones in an actual SLAM system \cite{mur2017orb} and achieve remarkable improvements in accuracy. Since we adopt a shallow neural network to obtain the local feature descriptor, the feature extraction module does not consume much time on a GPU, and the system can operate in almost real time.
\section{Related Work}
\textbf{Deep Learning enhanced SLAM.} Deep learning is considered an excellent solution to SLAM problems due to its superb performance in data association tasks. Some recent studies directly substitute an end-to-end network for the traditional SLAM system, estimating ego-motion from monocular video \cite{zhou2017unsupervised,mahjourian2018unsupervised,li2018undeepvo} or completing visual navigation for robots entirely through neural networks \cite{zhu2017target,gupta2017cognitive}. Such works can hardly catch up with traditional methods in accuracy on test datasets. Moreover, since deep learning systems rely heavily on training data, end-to-end systems fail from time to time when faced with new environments and situations. That is to say, the model can hardly predict correct results when there is a big difference between the training scenes and the actual scenes.
To tackle such problems, some researchers focus on replacing only parts of traditional SLAM systems while keeping the traditional pipeline unchanged \cite{garg2016unsupervised,yang2018deep}\cite{kendall2015posenet,wu2017delving,vo2017revisiting}. Such attempts are still in an embryonic stage and do not achieve better results than traditional methods. One possible explanation for their limited improvement is that they also rely too much on priors learned from training data, especially when it comes to predicting depth from monocular images. Thus, they are still subject to the same limitations as end-to-end methods. We believe that an experience-based system is not the best choice for geometric problems.
Other efforts are made to add auxiliary modules rather than replace existing geometric modules. Semantic mapping and fusion \cite{reid2014towards,semanticfusion} make use of semantic segmentation; they take in poses provided by the underlying SLAM system and output optimized 3D models. Such additions are not involved in the optimization of the original SLAM system and cannot directly improve the pose estimation modules. Some other researchers separate key points belonging to different objects and process them differently \cite{engel2018direct}. These constraints show outstanding performance, especially when the environment is dynamic, but they still avoid making changes to the basic system. To couple higher-level information more tightly with SLAM pipelines, detection SLAM and semantic SLAM \cite{salas2013slam++} jointly optimize semantic information and geometric constraints. Early studies operate the semantic and geometric modules separately and merge the results afterward \cite{civera2011towards,pillai2015monocular}. \cite{atanasov2014semantic} incorporates semantic observations into the geometric optimization via a Bayes filter. Focusing on the overall SLAM pipeline, \cite{bowman2017probabilistic,gay2017probabilistic} formulate semantic SLAM as a probabilistic model. These approaches extract object-level information and add semantic features to the constraints of bundle adjustment. However, there are still no convincing loss functions for semantic modules, and no breakthrough improvements so far. What is worse, since semantic SLAM adds considerable extra supervision to the traditional SLAM system, the number of variables to be optimized inevitably increases, which is a great challenge for computational capability and speed.
A simple but effective method is to directly improve the module that limits the performance of traditional SLAM, i.e., the matching between frames. Some works compute similarity confidences of local features \cite{zbontar2016stereo,luo2016efficient,feng2017efficient}, which makes it impossible to use traditional matching strategies such as Euclidean distance, cosine distance and so on. SuperPoint \cite{detone2017superpoint} trains an end-to-end network to extract both local feature detectors and descriptors from raw images in one forward computation. However, the efficiency of SuperPoint remains unverified, as results are reported only on synthetic and virtual datasets and the method has not been integrated into a real SLAM system for evaluation.
\textbf{Local feature descriptor.} In parallel with the long history of SLAM, considerable effort has been devoted to local features. Building on classical hand-crafted local features like SIFT \cite{ng2003sift}, SURF \cite{bay2006surf} and ORB \cite{rublee2011orb}, early combinations of low-level machine learning and local feature descriptors produced PCA-SIFT \cite{ke2004pca}, ASD \cite{wang2014affine}, BOLD \cite{balntas2015bold}, Binboost \cite{trzcinski2013boosting}, RFD \cite{fan2014receptive}, RMGD \cite{gao2015local}, GRIEF \cite{krajnik2016griefras}, etc. Some of these attempts are dedicated to dimensionality reduction and utilize various methods to map high-dimensional descriptors to a low-dimensional space; they thus lose a great amount of information from the raw image. Others make use of binary features. Some of them enhance a traditional feature for specific environments to fit special requirements \cite{krajnik2016griefras} and therefore lack portability. Most of these studies put forward a new kind of feature without further tests or applications.
Thanks to the booming of deep learning, researchers have gone further: end-to-end networks consisting of multiple independent components \cite{yi2016lift,detone2017superpoint,ono2018lf,noh2017largescale} can not only produce local feature descriptors in one forward computation but also extract local feature detectors.
Focusing only on descriptors, most researchers adopt multi-branch CNN-based architectures such as Siamese and triplet networks. Multi-branch networks were first proposed in 1994 to verify whether handwritten signatures were consistent \cite{bromley1994signature}. Experiments related to similarity measurement further confirm the superiority of this multi-branch structure; as a result, Siamese and triplet networks turn out to be the main architectures employed for local feature descriptor learning. MatchNet \cite{han2015matchnet} and DeepCompare \cite{zagoruyko2015learning} are typical Siamese networks: each branch consists of a feature network and a metric network that determines the similarity between two descriptors, so the final output is a similarity confidence. Together with a metric learning layer, \cite{kumar2016learning} uses a triplet structure and achieves better performance. These achievements reveal the potential of triplet neural networks. However, such models prove unsuitable for traditional nearest neighbor search. Therefore, studies that directly output local feature descriptors were derived.
Early research \cite{simo2015discriminative} uses only a Siamese network and designs a novel sampling strategy. L2Net \cite{tian2017l2} creatively utilizes a central-surround structure and a progressive sampling strategy to improve performance; these structures and training strategies can also be extended to triplets. \cite{mishchuk2017working} adopts the structure presented by L2Net and employs a strict hardest-negative mining strategy that selects the closest negative example in the batch. \cite{he2018local} also uses the same structure but formulates feature matching as nearest neighbor retrieval, and thus directly optimizes a ranking-based retrieval performance metric. It is worth mentioning that \cite{balntas2016learning} trains a shallow triplet network with a random sampling strategy yet performs better than some deep structures like DeepDesc and DeepCompare, which is an essential reference for our work. Similar to TFeat, some researchers focus on the design of a single branch. DeepCD \cite{yang2017deepcd} proposes a new network layer, termed the data-dependent modulation layer, to enhance the complementarity of local feature descriptors. Considering that geometric repeatability is not the only factor that influences learned local features, AffNet \cite{via2017repeatability} proposes a novel loss function and training procedure to estimate the affine shape of patches; it trains the local feature descriptor network with affine invariance to improve the performance of the deep descriptor.
\section{System Overview}
In our DF-SLAM system, learned local feature descriptors are introduced to replace ORB, SIFT and other hand-crafted features. We adopt the traditional and popular SLAM pipeline as our foundation and evaluate the efficiency and effectiveness of the improved deep-feature-based SLAM system. The whole system incorporates three threads that run in parallel: tracking, local mapping and loop closing. Local feature descriptors are extracted as soon as a new frame is captured, before the tracking thread.
\subsection{System Framework}
\begin{figure}[h]
\centering
\fbox{
\includegraphics[width=0.95\linewidth]{pipeline.eps}
}
\caption{System framework.}
\label{img}
\end{figure}
The framework of our system is shown in Fig.~1. We derive the tracking thread from visual odometry algorithms. Tracking is in charge of constructing data associations between adjacent frames using visual feature matching. Afterward, it initializes frames with the help of these data associations and estimates the camera pose using epipolar geometric constraints. It also decides whether a new keyframe is needed. If tracking is lost, global relocalization is performed based on the same features. Local mapping is run regularly to optimize camera poses and map points; it receives information constructed by the tracking thread and reconstructs a partial 3D map. If a loop is detected, the loop closing thread takes its turn to optimize the whole graph and close the loop. Frames with high matching scores are selected as candidate loop closing frames, which are used to complete loop closing and global optimization. None of these modules accepts raw images as input, which reduces memory consumption; only sparse visual features and inter-frame associations are recorded to support pose estimation, relocalization, loop detection, pose optimization and so on. Therefore, we regard the local feature as the cornerstone of our entire system.
As shown in Fig.~2, the first step is to extract points of interest. We utilize the TFeat network to describe the region around each key point and generate a normalized 128-D float descriptor. Different from hand-crafted features, we do not need a Gaussian blur before feature extraction but directly take patches of the raw image as input. The extracted features are then stored in every frame and passed to the tracking, mapping and loop closing threads.
We adopt the method used in ORB-SLAM to perform relocalization based on DBoW. This method measures the similarity between two frames according to the similarity between their features. As the deep feature descriptor is a float vector, the Euclidean distance is used to compute correspondences. Apparently, the relocalization and loop closing modules rely heavily on the local feature descriptors.
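As a concrete illustration, a brute-force Euclidean matcher with a
nearest-neighbor ratio test can be written as follows (the ratio threshold
here is a placeholder, not the value used in our system):
\begin{verbatim}
import numpy as np

def match_descriptors(des1, des2, ratio=0.8):
    """Return (i, j) index pairs that pass the NN ratio test.

    des1: (N, 128), des2: (M, 128) L2-normalized float descriptors.
    """
    # Pairwise Euclidean distances between the two descriptor sets.
    d = np.linalg.norm(des1[:, None, :] - des2[None, :, :], axis=-1)
    nn = np.argsort(d, axis=1)[:, :2]      # two nearest neighbors per query
    matches = []
    for i, (j1, j2) in enumerate(nn):
        if d[i, j1] < ratio * d[i, j2]:    # Lowe-style ratio test
            matches.append((i, j1))
    return matches
\end{verbatim}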
To speed up the system, we also introduce a visual vocabulary. Visual vocabularies are employed in numerous computer vision applications: a large set of descriptors is extracted from training images offline, and a vocabulary structured as a tree is created, in which descriptors are divided and merged according to their characteristics. During the matching step, a new descriptor can then search along the tree for its class much more quickly while maintaining accuracy, which is ideal for practical tasks with real-time requirements. We train our vocabulary, based on DBoW, using the feature descriptors extracted by our DF method. We can therefore assign a word vector and a feature vector to each frame and calculate frame similarities more easily.
\begin{figure}[h]
\centering
\fbox{
\includegraphics[width=0.9\linewidth]{image_based_pipeline.eps}
}
\caption{Overview of feature-based modules.}
\label{img}
\end{figure}
\subsection{Feature Design}
\begin{figure*}[ht]
\centering
\fbox{
\includegraphics[width=0.8\linewidth]{tfeat1.eps}
}
\caption{The architecture of TFeat.}
\label{img}
\end{figure*}
Many excellent studies have indicated the effectiveness of CNN-based neural networks for local feature descriptor design. However, one must strike the right balance between efficiency and accuracy: although performance improves as the number of convolutional layers increases, time consumption prevents us from adopting a deep and precise network. Instead, we make use of a shallow but efficient network to complete the task.
The architecture adopts the triplet network proposed in TFeat \cite{balntas2016learning}. There are only two convolutional layers, each followed by a Tanh non-linearity, in each branch. Max pooling is added after the first convolutional layer to reduce the number of parameters and further speed up the network. As the last layer of the network, a fully connected layer outputs a 128-D descriptor that is L2-normalized to unit length.
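A PyTorch sketch of a single branch is shown below. The kernel sizes (7 and 6
on $32\times 32$ patches) follow the public TFeat reference implementation and
should be treated as assumptions here; see \cite{balntas2016learning} for the
exact configuration.
\begin{verbatim}
import torch
import torch.nn as nn
import torch.nn.functional as F

class TFeatBranch(nn.Module):
    """One triplet branch: 32x32 grayscale patch -> 128-D unit descriptor."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=7), nn.Tanh(),   # 32x32 -> 26x26
            nn.MaxPool2d(2),                              # 26x26 -> 13x13
            nn.Conv2d(32, 64, kernel_size=6), nn.Tanh(),  # 13x13 -> 8x8
        )
        self.fc = nn.Linear(64 * 8 * 8, 128)

    def forward(self, x):
        x = self.features(x).flatten(1)
        x = torch.tanh(self.fc(x))
        return F.normalize(x, p=2, dim=1)  # L2-normalize to unit length

desc = TFeatBranch()(torch.randn(4, 1, 32, 32))  # -> (4, 128)
\end{verbatim}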
\cite{balntas2016learning} forms triplets for training in a simple way: it randomly chooses a positive pair of patches that originate from the same label and a patch sampled from a different label. This training strategy is rather naive and can hardly improve the performance of the model. Fortunately, the hard negative mining strategy proposed in HardNet \cite{mishchuk2017working} has proved useful in experiments. We therefore combine the hard negative mining strategy with the TFeat architecture to make improvements\footnote{The combination is mentioned in HardNet and AffNet.}.
The sampling strategy selects the closest non-matching patch in a batch via the L2 pairwise distance matrix\footnote{The strategy is utilized in HardNet.}. The first step is to generate a batch of matched local patches, such that there is exactly one matching patch for each anchor in the batch. To evaluate the similarity of patches, we denote the distance matrix by $D = \{d_{ij}\}$, where each element represents the distance between the $i$th anchor patch descriptor and the $j$th positive patch descriptor.
Next, the hardest negative distance for the $i$th pair is calculated as
\[
d_n = \min\Big(\min_{j\neq i} d(a_i,p_j),\ \min_{k\neq i} d(a_k,p_i)\Big),
\]
where the first term is the distance from the anchor $a_i$ to its nearest non-matching positive and the second is the distance from the positive $p_i$ to its nearest non-matching anchor.
The loss function is then formulated as
\[
Loss = \frac{1}{N}\sum_{i=1}^{N}\max\big(0,\,1+d(a_i,p_i)-d_n\big),
\]
where $a_i$ is the anchor descriptor, $p_i$ is the positive descriptor, and the margin is set to 1.
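In code, the in-batch hardest-negative loss can be sketched as follows (the
large constant masks the matching pairs on the diagonal):
\begin{verbatim}
import torch

def hardest_negative_loss(a, p, margin=1.0):
    """a, p: (N, 128) L2-normalized descriptors; row i of p matches row i of a."""
    D = torch.cdist(a, p)                  # pairwise distance matrix d_ij
    pos = D.diag()                         # d(a_i, p_i)
    mask = torch.eye(len(a), device=a.device) * 1e6
    d_n = torch.minimum((D + mask).min(dim=1).values,   # closest p_j to a_i
                        (D + mask).min(dim=0).values)   # closest a_k to p_i
    return torch.relu(margin + pos - d_n).mean()
\end{verbatim}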
\section{Experiments}
We perform several experiments to evaluate the efficiency and accuracy of our system and provide quantitative results. To give an intuitive comparison, we choose the open-source ORB-SLAM2 library as our baseline and test on public datasets. All experiments are performed on a computer with an Intel Core i5-4590 CPU at 3.30GHz $\times$ 4 and a GeForce GTX TITAN X/PCIe/SSE2 processor.
\subsection{Preprocess}
Two of the most involved preparations are creating the dataset for model training and constructing our visual vocabulary.
Most existing patch-based datasets use the DoG detector to extract points of interest. However, the local features used in most SLAM systems are extracted by a FAST detector and are evenly distributed across the image. To fit the requirements of SLAM systems, we build the patch dataset for training in the same way as ORB-SLAM to ensure the effectiveness of the network. We extract patches from the HPatches images containing 116 scenes \cite{balntas2017hpatches}. The patch generation approach is identical to HPatches except for the local feature detection.
After the descriptor model is trained, we start another training procedure for the visual vocabulary. We train our bag of words on the COCO dataset and choose $10^6$ as the number of leaves in the vocabulary tree. Since our descriptor is a normalized float vector, the leaf nodes are also normalized.
\subsection{System Performance}
We evaluate the performance of our system on two different datasets to show how well it adapts to different circumstances. Since we never train our model on these validation sets, the experiments also reveal the generalization ability of our system.
Note that the original ORB-SLAM2 system has many parameters, including the knn test ratio in feature matching, the number of features, the camera frame rate and others. To ensure fairness, we use the same set of parameters for all sequences and datasets. This also illustrates how robust and portable our system is.
\subsubsection*{A. EuRoC Dataset}
\begin{table}[h]
\centering
\linespread{1}
\begin{tabular}{cccc}
\hline
Dataset & ORB-SLAM2 & DF-SLAM & Improvement \\
\hline
MH\_01 & 0.036 & 0.037 & 1.67\%\\
\hline
MH\_02 & 0.048 & 0.043 & 10.2\%\\
\hline
MH\_03 & 0.044 & 0.046 & -4.9\%\\
\hline
MH\_04 & 0.112 & 0.063 & 43.9\%\\
\hline
MH\_05 & 0.061 &0.042 & 30.7\%\\
\hline
V1\_01 & 0.087 &0.086 & 0.6\%\\
\hline
V1\_02 & 0.065 &0.064 & 1.2\%\\
\hline
V1\_03 & 0.078 &0.065 & 15.5\%\\
\hline
V2\_01 & 0.062 &0.058 & 7.3\%\\
\hline
V2\_02 & 0.057 &0.058 & -1.6\%\\
\hline
V2\_03 & x &x & x\\
\hline
\newline
\end{tabular}
\caption{Comparison of RMSE between ORB-SLAM2 and DF-SLAM on the EuRoC dataset with loop closure enabled.}
\end{table}
\begin{figure*}
\centering
\subfigure[DF-SLAM]{
\fbox{
\includegraphics[width=0.45\linewidth]{mh04.eps}
}
}
\subfigure[ORB-SLAM2]{
\fbox{
\includegraphics[width=0.45\linewidth]{mh04orb.eps}
}
}
\caption{An example of the MH\_04 difficult sequence in the EuRoC dataset: (a) DF-SLAM, (b) ORB-SLAM2.}
\label{img}
\end{figure*}
\begin{table}[h]
\centering
\linespread{1}
\begin{tabular}{cccc}
\hline
Dataset & ORB-SLAM2 & DF-SLAM & Improvement \\
\hline
MH\_01 & 0.038 & 0.036 & 4.3\%\\
\hline
MH\_02 & 0.047 & 0.050 & -8.3\%\\
\hline
MH\_03 & 0.039 & 0.043 & -10.6\%\\
\hline
MH\_04 & 0.147 & 0.060 & 58.9\%\\
\hline
MH\_05 & 0.059 &0.044 & 25.3\%\\
\hline
V1\_01 & 0.087 &0.086 & 1.5\%\\
\hline
V1\_02 & 0.097 &0.069 & 28.7\%\\
\hline
V1\_03 & 0.189 &0.114 & 39.8\%\\
\hline
V2\_01 & 0.071 &0.068 & 2.6\%\\
\hline
V2\_02 & 0.114 &0.102 & 10.1\%\\
\hline
V2\_03 & x &x & x\\
\hline
\newline
\end{tabular}
\caption{Comparison of RMSE between ORB-SLAM2 and DF-SLAM on the EuRoC dataset without loop closure.}
\end{table}
We evaluate the improved system on the public EuRoC dataset, which consists of 11 sequences varying in scene complexity and sensor speed. The difficult sequences, with intense lighting, motion blur and low-texture areas, are challenging for visual SLAM systems. As the ground-truth trajectory is provided in EuRoC, we use the root-mean-square error (RMSE) to represent accuracy and stability. As mentioned above, we only change the threshold for feature matching and keep everything else the same as in the original ORB-SLAM2 system, including the number of extracted features, the keyframe insertion policy, the ratio for the knn test during the bag-of-words search and so on. We also use the same pair of thresholds for each sequence.
We run our system on each sequence ten times and record both the mean RMSE for each sequence and the variance over these runs. As illustrated in Tables 1 and 2, our method outperforms ORB-SLAM2 on the MH sequences and performs no worse on the V sequences. Noting that the MH sequences lack loops and rely heavily on the quality of features, while the V sequences always trigger global pose optimization, the advantage of our method is easily seen.
Moreover, considering the variance of the runs, we find that our system is quite stable regardless of the situation: while the performance of ORB-SLAM2 varies from run to run, ours remains steady in every test.
We hold that the ability to travel a long way without much drift is of practical importance: one can never be sure that the environment to be reconstructed is small enough and contains as many loops as needed to optimize the map. To further verify the performance of our system, we disable the global bundle adjustment module (the loop closing thread) and repeat the tests. We find that our method outperforms ORB-SLAM2 on all V sequences, which proves that DF-SLAM is indeed more stable and accurate, especially when the camera travels a long way without loops for global optimization.
\subsubsection*{B. TUM Dataset}
We further demonstrate our robustness and accuracy on the TUM dataset, another dataset popular among SLAM researchers. The TUM dataset consists of several indoor object-reconstruction sequences. Since most of the sequences we use for evaluation were captured by hand-held cameras, they contain severe jitter from time to time. Such sequences are therefore excellent for testing the robustness of our system.
We still use the same pair of features as in the EuRoC dataset and keep the other numerical parameters the same as ORB-SLAM2. We are happy to find that on the TUM dataset, where other SLAM systems lose their trajectory frequently, our system works well all the time. We take the fr1/desk sequence as an example in Fig.~5, where ORB-SLAM2 lost track at the same place seven times in ten tests while DF-SLAM covers the whole sequence easily. Similar to EuRoC, we find that DF-SLAM achieves much better results than ORB-SLAM2 on sequences that do not contain any apparent loops, and performs no worse than ORB-SLAM2 when there is no harsh noise or shake.
\begin{figure}
\centering
\subfigure[Track Lost]{
\begin{minipage}[b]{0.5\textwidth}
\fbox{
\includegraphics[width=3.25in]{lost.eps}
}
\end{minipage}
}
\subfigure[Full Trajectory]{
\begin{minipage}[b]{0.5\textwidth}
\fbox{
\includegraphics[width=3.25in]{tracked.eps}
}
\end{minipage}
}
\caption{Lost and tracked trajectories on the fr1/desk sequence. The camera moves from right to left and then turns around to the right side.}
\label{img}
\end{figure}
\begin{table}
\centering
\small
\linespread{1}
\begin{tabular}{ccccc}
\hline
Dataset & ORB-SLAM2 & DF-SLAM & Improvement & Tracked \\
\hline
fr1/desk & 0.025 & 0.015 & 36.9\% & 3/10\\
\hline
fr1/desk2 & 0.028 & 0.021 & 24.5\% & 7/10\\
\hline
fr1/room & 0.058 & 0.041 & 28.9\% & 10/10\\
\hline
fr2/desk & 0.0089 & 0.0097 & -9.8\% & 10/10\\
\hline
fr2/xyz & 0.0038 &0.0030 & 19.5\% & 10/10\\
\hline
fr3/office & 0.011 &0.011 & -0.7\% & 10/10\\
\hline
fr3/nst & 0.022 &0.012 & 45.4\% & 10/10\\
\hline
\newline
\end{tabular}
\caption{Comparison between ORB-SLAM2 and DF-SLAM on the TUM dataset. ``Tracked'' is the number of runs without tracking loss out of ten tests (ORB-SLAM2/DF-SLAM).}
\end{table}
\subsection{Runtime Evaluation}
We measure the runtime of the deep feature extraction on a GeForce GTX TITAN X/PCIe/SSE2. A single forward pass of the model takes $7\times10^{-5}$ seconds per patch using the PyTorch C++ API with CUDA support; the feature extraction time for one image is 0.09 seconds (1200 key points). Together with tracking, mapping and loop closing running in parallel, our system operates at 10 to 15 fps. We find that, since our feature is much more robust and accurate, we can run the whole system with a smaller number of features without losing track. Therefore, there is still much room left to speed up the entire system and move toward real time.
\subsection{Local Feature Descriptor}
\begin{figure}[h]
\centering
\subfigure[The matching result on HPathes dataset.]{
\begin{minipage}[b]{0.5\textwidth}
\fbox{
\includegraphics[width=3in]{image_matching.eps}
}
\end{minipage}
}
\subfigure[The retrieval result on HPathes dataset.]{
\begin{minipage}[b]{0.5\textwidth}
\fbox{
\includegraphics[width=3in]{patch_retrieval.eps}
}
\end{minipage}
}
\subfigure[The verification result on HPatches dataset.]{
\begin{minipage}[b]{0.5\textwidth}
\fbox{
\includegraphics[width=3in]{patch_verification.eps}
}
\end{minipage}
}
\caption{Results on the HPatches dataset. TFeat stands for the original TFeat network with the simple training strategy. HardTFeat\_HD uses the hard negative mining strategy and is trained on the original HPatches dataset. HardTFeat\_HF is the model we trained using FAST-based HPatches.}
\label{img}
\end{figure}
In addition, we separately evaluate the performance of the local feature
descriptor used in DF-SLAM.
We use an evenly distributed FAST detector to build the training dataset.
All training is done using PyTorch and a stochastic gradient descent solver
with a learning rate of 0.01, momentum of 0.9 and weight decay of 0.0001.
We also use typical data augmentation techniques, such as random rotation
and cropping, to improve the robustness of our network.
We train our deep feature using different training strategies on the HPatches training set and test on the testing set also provided by HPatches. We choose ORB and SIFT, two of the most popular descriptors, for comparison. The learned features outperform the traditional ones on every task. In particular, HardTFeat\_HD shows a clear advantage over TFeat on the matching task, which demonstrates the superiority of the strict hard negative mining strategy we use. HardTFeat\_HD and HardTFeat\_HF are trained on different datasets but show similar performance on both matching and retrieval tasks.
\section{Conclusion}
We propose the DF-SLAM system, which combines robust learned features with traditional SLAM techniques. DF-SLAM makes full use of the advantages of deep learning and geometric information and demonstrates outstanding improvements in efficiency and stability in numerous experiments. It works stably and accurately even in challenging scenes. Our idea of using deep features to provide better data association is a promising direction for further research. The results confirm our premise that enhancing SLAM systems with small deep learning modules leads to encouraging results.
In future work, we will concentrate on the stability of DF-SLAM to handle difficult localization and mapping problems under extreme conditions. The speed of the deep-learning-enhanced SLAM system is also under consideration. Moreover, we aim to design a robust local feature detector that matches the descriptors used in our system. Online learning is an attractive option for increasing the adaptability of our system, and we also plan to make use of global features to improve global bundle adjustment and establish a complete framework for DL-enhanced SLAM systems. We believe such a combination can resolve many of the non-geometric problems we face and promote the development of SLAM techniques.
{\small
\bibliographystyle{ieee}
\section*{Supplementary Information}
Below we detail the procedure used to extract energy levels from the
correlation functions. Afterwards, the statistical and systematic
uncertainties in the binding energy of the $\mathcal{D}_{6b}$ dibaryon
determined from these estimates are quantified.
\vspace*{0.05in}
\noindent{\bf{Energy extraction}:} \,\,
The Euclidean two-point correlation functions in \eqn{Eq:Domega2} at large source-sink separations $\tau$
are fitted with single exponentials of the form
\begin{equation}
\label{Eq:eff_split}
\langle C (\tau) \rangle = W_{0}e^{-E_0\tau},
\end{equation}
using correlated $\chi^2$ and maximum likelihood estimators to extract $E_0$ and $W_0$.
In Figure \ref{fig:maxl} we present such a result showing the projections of posterior probability distributions
of the parameters $E_0$ and $W$ demonstrating the reliability of the fits for the example of $\mathcal{D}_{6b}$ correlation
functions in the finest ensemble.
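A minimal sketch of the correlated fit is given below; the covariance
estimation and the choice of fit window are schematic, and the actual
analysis additionally uses the maximum likelihood estimator described above.
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

def correlated_fit(C, tmin, tmax):
    """C: (Ncfg, Nt) correlator measurements; returns (E0, W0)."""
    t = np.arange(tmin, tmax)
    y = C[:, tmin:tmax].mean(axis=0)
    cov = np.cov(C[:, tmin:tmax], rowvar=False) / len(C)  # cov of the mean
    cov_inv = np.linalg.inv(cov)

    def chi2(par):
        W0, E0 = par
        r = y - W0 * np.exp(-E0 * t)
        return r @ cov_inv @ r

    E0_guess = np.log(y[0] / y[1])                  # effective-mass guess
    p0 = [y[0] * np.exp(E0_guess * tmin), E0_guess]
    res = minimize(chi2, p0, method="Nelder-Mead")
    return res.x[1], res.x[0]
\end{verbatim}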
\bef[h]
\centering
\includegraphics[height=7.5cm,width=8cm]{fig7.pdf}
\caption{\label{fig:maxl}
The multivariate distribution of fitted parameters $E_0$ and $W$ (bottom left) and their respective univariate distributions
(Top left) and (Bottom right).}
\eef{fg:maxl}
In order to quantify the uncertainties arising from the choice of fitting window ($\tau_{min}, \tau_{max}$),
we proceed as follows. We first choose $\tau_{max}$ as large as possible while retaining a good signal-to-noise ratio. Then $\tau_{min}$ is varied over a range to assess the stability of the $E_0$ estimate, and
a $\tau_{min}$ value is chosen where a clear plateau is observed. A conservative estimate accounting for the
uncertainty in this choice is obtained via a correlated average over neighboring $\tau_{min}$ values within the plateau.
In Figure \ref{fig:tmin-plot} (main text) and Figure \ref{fig:tmin-plot-all},
we present the $\tau_{min}$ dependence for all the fits along with the 1$\sigma$ statistical errors for the chosen
fit window (blue bands), and the final estimate considering the uncertainty from the chosen fitting window (magenta bands). In both figures, we present the
estimates for the non-interacting two-baryons on the left and for the dibaryons $\mathcal{D}_{6b}$ on the right.
These estimates are then utilized to arrive at the energy differences in \eqn{eq:dE} and \tbn{tb:delE}.
\bef[h]
\centering
\hspace*{-0.06in}\includegraphics[scale=0.53]{fig8.pdf}
\caption{\label{fig:tmin-plot-all} Fit results for the ground state masses for different fit-windows corresponding to various choices of minimum time ($\tau_{min}$).}
\eef{}
It should be mentioned here that we do not use the ratio method for extracting the energy difference. As is evident from Figure~\ref{fig:eff-mass}, due to the large energy difference, the ground states of the single baryon ($\Omega_{bbb}$) and the dibaryon ($\mathcal{D}_{6b}$) reach their respective saturation at two very different times. Taking the ratio of these correlators at a given time may therefore lead to a precise but inaccurate result.
\vspace*{0.05in}
\noindent{\bf{Error Analysis:}} The main source of error in a lattice QCD calculation involving multiple hadrons is the rapid decrease
of the signal-to-noise ratio in the correlation functions \cite{BEANE20111}. In heavy hadrons, this is
somewhat mitigated by the presence of heavy quarks. Since all the valence quarks in this calculation are of
bottom flavor and no chiral dynamics is involved, a relatively better signal-to-noise ratio than for other dibaryons is expected.
Nevertheless, various systematics need to be addressed, particularly discretization errors, to arrive at a
reliable estimate of the binding energy of the $\mathcal{D}_{6b}$ dibaryon. We discuss the
relevant systematics of our calculation below.
\noindent{\it{Excited states and statistical errors}:} Coulomb gauge fixed wall sources are utilized
for the quark fields, which yields early ground state saturation with highly suppressed excited-state contamination. We consider fitting windows beyond 2 fm; for a compact state like $\mathcal{D}_{6b}$, this is sufficiently large to ensure saturation of the ground state signal. We also average the correlation functions over multiple
source time-slices to improve the statistical precision. In addition, we follow the procedure outlined in the previous section to include fit-window uncertainties and arrive at the final energies and energy differences presented in the main text. We find these uncertainties amount to 9--17\% on the finer to coarser ensembles used.
\noindent{\it{Continuum extrapolation}:}
We employ a set of lattice QCD ensembles in which gauge fields are Symanzik-improved at $\mathcal{O}(\alpha_sa^2)$
and include the effect of $u$, $d$, $s$ and $c$ quark vacuum polarization generated with the highly improved
staggered quark action \cite{Bazavov:2012xda}. Quark propagators are generated with NRQCD action with improvement
coefficients up to $\mathcal{O}(\alpha_sv^4)$. The lattice spacing dependence of the energy differences in \tbn{tb:delE}
could be nontrivial. Following the approach of Ref.~\cite{Green:2021qol}, we account for this by parameterizing $k\cot\delta_0$,
which enters the scattering analysis in \eqn{luscher}, with different forms, and perform fits with different sets of
energy levels determined from the simulation. Choosing the linear parameterization
$k\cot\delta_0 = -1/a_0^{[0]} - a/a_0^{[1]}$ that best describes the entire data, we find the total uncertainties arising
from statistics, fitting window and continuum extrapolation to be $\sim$18\% of the continuum-extrapolated binding energy.
We find that choosing other forms of continuum extrapolation for the scattering length $-1/a_0$ leads to a change
of at most 8 MeV in the binding energy, which we quantify as the uncertainty arising from the discretization error.
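Schematically, the linear extrapolation and the leading-order binding energy
can be obtained as follows. This sketch uses the leading-order pole condition
$\kappa\simeq 1/a_0$ with $B=\kappa^2/m_{\Omega_{bbb}}$ (reduced mass
$m_{\Omega_{bbb}}/2$); the numerical inputs are placeholders, not our measured
values.
\begin{verbatim}
import numpy as np

# Placeholder inputs: lattice spacings (fm) and -1/a0 (fm^-1) per ensemble.
a_lat  = np.array([0.1207, 0.0888, 0.0582])
inv_a0 = np.array([-0.8, -0.9, -1.0])

c1, c0 = np.polyfit(a_lat, inv_a0, 1)    # -1/a0 = c0 + c1 * a
kappa = abs(c0)                          # |1/a0| at a = 0 (fm^-1)
hbarc, m_Omega = 0.19733, 14.37          # GeV fm; illustrative Omega_bbb mass
B = (kappa * hbarc)**2 / m_Omega * 1e3   # B = kappa^2/(2 mu), mu = m/2 (MeV)
print("continuum -1/a0 = %.3f fm^-1, B ~ %.1f MeV" % (c0, B))
\end{verbatim}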
\noindent{\it Scale setting}: Scale setting through the $r_1$ parameter \cite{Bazavov:2012xda} and through the Wilson flow were
found to be consistent for these lattice ensembles \cite{Bazavov:2012xda}. Systematics from the scale setting
are further reduced in the estimation of energy differences (Eq. \ref{eq:dE}), and as in Refs.
\cite{Mathur:2018epb,Junnarkar:2019equ} we find them to be at most about 3 MeV.
\noindent{\it Quark mass tuning}: We tune the bottom quark mass employing the Fermilab method for heavy quarks
\cite{ElKhadra:1996mp}. Here, we equate the spin-averaged $\overline{1S}$ bottomonium {\it kinetic mass} extracted on the lattice, ${1\over 4} [3 M_{\Upsilon} + M_{\eta_b}]_{kin}$, with its physical value. We perform this tuning at the central value of the chosen scale as well as at its error values. We calculate
$E_{\mathcal{D}_{6b}}$ for each of the tuned masses and include the variation as the error due to quark mass tuning; we find it to be less than 2 MeV.
With the above-mentioned lattice setup we find that the hyperfine splitting of the $1S$ bottomonia, a benchmark observable
for assessing the quality of lattice calculations with bottom quarks, is quite consistent with its experimental value, as demonstrated in Figure \ref{fig:hfs}. The continuum value (green star) is obtained by taking the average of the estimates from all ensembles, and the error (green band) is estimated as a weighted average with respect to the lattice spacings. Continuum extrapolations linear and quadratic in the lattice spacing are also
shown, by the orange and blue stars respectively, with bands of the same color for their 1-$\sigma$ errors. Together with
the other possible systematics discussed here, we estimate its value to be 62.6(3)(5) MeV.
\bef[h]
\centering
\hspace*{-0.1in}\includegraphics[scale=0.5]{fig9.pdf}
\caption{\label{fig:hfs} Hyperfine-splitting of the $1S$ bottomonia.}
\eef{}
\noindent{\it Electromagnetism}:
The dibaryon investigated here carries two units of electric charge and can have substantial Coulomb repulsion. We model each $\Omega_{bbb}^{-}$ as a compact particle with a finite charge radius ($r_d$) and an
exponential charge distribution, as in Ref. \cite{Lyu:2021qsh}. This leads to the Coulomb potential of
Eq. (6) of Ref. \cite{Lyu:2021qsh}, but with a total electric charge of $-2$. We then solve the non-relativistic Schr\"odinger equation with this potential, varying $r_d$ between $0.01-0.5$ fm, a range covering the {\it rms} radius of the ground-state wave function of the parameterized strong potential. We find the resulting change in the binding energy
to be $5-10$ MeV within this range of $r_d$. For heavy baryons, the possible systematics due to
electromagnetic corrections were found to be about 3 MeV \cite{Borsanyi:2014jba}. Keeping that in mind as the
source of electromagnetic effects besides the Coulomb repulsion, we take a conservative estimate of
8 MeV for the electromagnetic correction to the binding energy (adding the average Coulomb repulsion and the above-mentioned 3 MeV in quadrature).
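The size of such a Coulomb shift can be gauged with a simple radial Schr\"odinger solve. The sketch below is illustrative only: the Gaussian well is a stand-in for the parameterized strong potential, the damped point-Coulomb form is a crude stand-in for the folded potential of Eq. (6) of Ref. \cite{Lyu:2021qsh}, and the reduced mass assumes an $\Omega_{bbb}$ mass of about 14.37 GeV.
\begin{verbatim}
import numpy as np
from scipy.linalg import eigvalsh_tridiagonal

hbarc, alpha = 197.327, 1.0 / 137.036   # MeV fm; fine-structure constant
mu = 14371.0 / 2.0          # assumed reduced mass of two Omega_bbb (MeV)
kin = hbarc**2 / (2.0 * mu)             # kinetic prefactor (MeV fm^2)

def ground_state_energy(r_d, coulomb=True, N=4000, R=6.0):
    r = np.linspace(R / N, R, N)        # radial grid, avoiding r = 0
    dr = r[1] - r[0]
    V = -60.0 * np.exp(-(r / 0.6) ** 2)  # stand-in strong potential
    if coulomb:                          # charges (-1)*(-1) = +1
        V = V + alpha * hbarc / r * (1.0 - np.exp(-r / r_d))
    diag = 2.0 * kin / dr**2 + V
    off = np.full(N - 1, -kin / dr**2)
    return eigvalsh_tridiagonal(diag, off, select='i',
                                select_range=(0, 0))[0]

for r_d in (0.01, 0.1, 0.5):            # charge radii scanned above (fm)
    shift = ground_state_energy(r_d) - ground_state_energy(r_d, False)
    print(f"r_d = {r_d} fm: Coulomb shift = {shift:.1f} MeV")
\end{verbatim}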
No chiral extrapolation is necessary for $\mathcal{D}_{6b}$. For heavier dibaryons the unphysical sea
quark mass effects are expected to be at the percent level~\cite{McNeile:2012qf, Dowdall:2012ab, Chakraborty:2014aca},
and for $\mathcal{D}_{6b}$ in particular they should be negligibly small. In Table \ref{error-table} we summarize the error budget, where the above-mentioned systematics are added in quadrature.
\bet[h]
\centering
\begin{tabular}{l|c }
\hline
\hline
Source & Error (MeV)\\
\hline
Statistical + Fit-window + & \multirow{2}{*}{\large$\left(^{+16}_{-12}\right)$}\\
Continuum extrapolation\\\hline
\hline
Discretization & 8\\
Scale setting& 3 \\
$m_b$ tuning & 2 \\
Electromagnetism & 8\\
\hline
Total systematics & 12 \\
\hline
\hline
\end{tabular}
\caption{\label{error-table}Error budget in the calculation of the binding energy $\Delta E_{\mathcal{D}_{6b}}$.}
\end{table}
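For reference, the quadrature combination of the individual systematics in the table can be reproduced with a few lines of Python:
\begin{verbatim}
import numpy as np

syst = {"discretization": 8.0, "scale setting": 3.0,
        "m_b tuning": 2.0, "electromagnetism": 8.0}   # MeV
total = np.sqrt(sum(v**2 for v in syst.values()))
print(f"total systematics = {total:.0f} MeV")          # -> 12 MeV
\end{verbatim}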
\end{document}
Since mesh-based algorithms suffer from the curse of dimensionality, we turn to neural network based techniques for solving high-dimensional PDEs.
In particular, we first convert the initial/boundary value problem (IBVP) of the QPME into a variational formulation and then take a neural network as an ansatz for the solution. The objective functional is taken as the loss function, and the extrema are obtained by optimizing the loss with stochastic gradient descent (SGD) or its variants.
In this section, we focus on the first step of this procedure, i.e., the IBVP and its variational reformulations.
\subsection{Initial / boundary value problem}
Consider the QPME on a hyperrectangle
$$\partial_t u= \frac{1}{2} \Delta u^2, \quad (t,\mathbf{x} )\in Q$$
where $Q =[0,T]\times \Omega$ and $ \Omega = \prod_{i=1}^{d}[-a_i,a_i]$. We consider the QPME with
the homogeneous Dirichlet boundary condition
\begin{equation}\label{BC}
\textbf{Dirichlet B.C.}\quad u(t,\mathbf{x})|_{\Sigma_T} = 0
\end{equation}
where $\Sigma_T: = [0,T] \times\partial \Omega$. We also impose the initial condition to the PDE as
\begin{equation}\label{IC}
\textbf{I.C.}\quad u(0,\mathbf{x}) = u_0(\mathbf{x})\quad \mathbf{x}\in \Omega.
\end{equation}
\subsection{Strong formulation}\label{sec:PINN}
One immediate optimization formulation is to use the strong form of the PDE by minimizing the squared PDE residual
\begin{equation}\label{PINN}
\mathcal{L}_{\text{PDE}} (u) = \int_Q \left( \partial _t u -\frac{1}{2} \Delta u^2\right)^2.
\end{equation}
If both the I.C.{} and B.C.{} are strictly enforced as hard constraints, the optimization problem can then be formulated as
\begin{equation}\label{PINNN_strong}
\min_{u\in V_0} \mathcal{L}_{\text{PDE}} (u)
\end{equation}
where $V_0: = \{ f : f|_{\Sigma_T} = 0 ,\ f(0,\mathbf{x}) = u_0(\mathbf{x})\}$.
Alternatively, both the I.C.{} and B.C.{} can be treated as soft constraints enforced by penalization: we may define
\begin{equation}\label{bc_weak}
\mathcal{L}_{B}(u) = \int_{\Sigma_T} u ^2
\end{equation}
for homogeneous Dirichlet boundary condition, and
\begin{equation}\label{weak_initial}
\mathcal{L}_{I}(u) := \int_{\Omega} \left(u \left(0,\mathbf{x}\right) - u_{0}\left(\mathbf{x}\right)\right)^2
\end{equation}
for the initial condition.
The optimization problem \eqref{PINNN_strong} can then be relaxed to
\begin{equation}\label{PINN_full}
\min_{u\in V} \mathcal{L}_{\text{PINN}}(u)
\end{equation}
for some function space $V$, where
\begin{equation} \mathcal{L}_{\text{PINN}}(u): = \kappa \mathcal{L}_{\text{PDE}}(u) + \mu\mathcal{L}_{B}(u) +
\nu\mathcal{L}_{I}(u)
\end{equation}
is a weighted sum of the PDE residual, the boundary-condition error, and the initial-condition error, with respective weights $\kappa, \mu, \nu$.
We use the subscript PINN for this loss function, as such a formulation was popularized by the PINN method \cite{raissi2019physics} in recent years, while the idea dates back to the early days of using neural network ansätze for PDE solutions \cite{lagaris1998artificial}.
So far, the PDE residual and the mismatches in the initial and boundary conditions of $u$ are all measured in an $L^2$ sense. We can also define an analogous $L^1$ optimization problem \eqref{PINN_full} with:
\begin{equation}\label{PINN_L1}
\begin{split}
&\mathcal{L}_{\text{PDE}} (u) = \int_Q \left|\partial _t u -\frac{1}{2} \Delta u^2\right|,
\\
&
\mathcal{L}_{B}(u) = \int_{\Sigma_T} |u|,\\
&\mathcal{L}_{I}(u) := \int_{\Omega} |u \left(0,\mathbf{x}\right) - u_{0}\left(\mathbf{x}\right)|.
\end{split}
\end{equation}
We denote the objective in $L^1$ by $\mathcal{L}_{\text{PINN}-L^1}$ and that in $L^2$ by $\mathcal{L}_{\text{PINN}-L^2}$.
While using $L^2$ to measure the PDE residual and I.C./B.C. mismatch is a standard practice in PINN, the use of $L^1$ is inspired by the following stability analysis.
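For concreteness, a minimal TensorFlow sketch of the empirical loss for either choice of norm is given below. It is illustrative only: the names (\texttt{model}, \texttt{kappa}, \texttt{nu}) are ours, the hard-constraint construction for the boundary condition and the volume factors of the Monte Carlo integrals are omitted, and uniformly sampled collocation points are assumed.
\begin{verbatim}
import tensorflow as tf

def pinn_loss(model, t, x, x0, u0_vals, kappa=1.0, nu=1e3, p=2):
    # t: (N,1), x: (N,d) interior samples; x0: (M,d), u0_vals: (M,1).
    with tf.GradientTape(persistent=True) as g2:
        g2.watch(x)
        with tf.GradientTape(persistent=True) as g1:
            g1.watch(t)
            g1.watch(x)
            u = model(tf.concat([t, x], axis=1))       # (N,1)
            usq = tf.square(u)
        u_t = g1.gradient(u, t)                        # (N,1)
        grad_usq = g1.gradient(usq, x)                 # (N,d)
    # Laplacian of u^2 as the trace of its Hessian.
    lap_usq = tf.linalg.trace(g2.batch_jacobian(grad_usq, x))[:, None]
    res = u_t - 0.5 * lap_usq                          # PDE residual
    ic = model(tf.concat([tf.zeros_like(u0_vals), x0], axis=1)) - u0_vals
    pw = tf.abs if p == 1 else tf.square               # L^1 or L^2
    return kappa * tf.reduce_mean(pw(res)) + nu * tf.reduce_mean(pw(ic))
\end{verbatim}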
\section{Numerical example: Barenblatt Solution }\label{sec:nuemrical}
To test the numerical schemes, we will use a series of special solutions, known as
Barenblatt Solutions. They are given by \begin{equation}
U_m(t, \mathbf{x}; C) := t^{-\alpha}\left(\left(C - \frac{\beta(m- 1)}{2}\frac{|\mathbf{x}|^2}{t^{2\beta} }\right)^+\right)^{\frac{1}{m-1}}
\end{equation}
where $\alpha := \frac{d}{d(m-1)+2}, \beta := \frac{\alpha}{d}$, $(s)^+ := \max(s,0)$ and $C > 0$ is an arbitrary constant. This solution takes a Dirac mass as initial data: $u(t,\cdot) \to M\delta$ as $t\to 0$, where $M$ is a function of the constant $C$ (and depends on $m$ and $d$).
In the particular case $m = 2$, the Barenblatt Solution to \eqref{QPME} reduces to
\begin{equation}\label{barenblatt}
U_2(t, \mathbf{x}; C) := t^{-\frac{d}{d+2}}\left(C - \frac{1}{2(d+2)}\frac{|\mathbf{x}|^2}{t^{\frac{2}{d+2}} }\right)^+.
\end{equation}
The free boundary $\partial \mathcal{P}_u$ of \eqref{barenblatt} in this case can then be characterized by the equation
$$ |\mathbf{x}| = r_t$$
with $r_t:= \sqrt{2C(2+d)}\ t^{\frac{1}{d+2}}$. We also note that the equation is scale-invariant: for any solution $u$,
\begin{equation*}
u_{\lambda}(t, \mathbf{x}) := \lambda^{\alpha} u(\lambda t, \lambda^{\beta} \mathbf{x})
\end{equation*}
is again a solution (with $\alpha,\beta$ as above). The shifted Barenblatt Solution is a (strong/weak/very weak) solution of the PME, which is unique subject to the Dirac initial condition.
Since a $\delta$ function cannot be imposed numerically as an initial condition, we specifically consider the following IBVP:
\begin{equation}\label{numerical_baren}
\begin{split}
&\partial_t u = \frac{1}{2}\Delta u^2\quad (t,\mathbf{x})\in Q = [0,1]\times\Omega,\\
&u(0,\mathbf{x}) = \left(1- \frac{1}{2(2+d)} |\mathbf{x}|^2
\right)^+.
\end{split}
\end{equation}
Notice the initial condition is essentially the Barenblatt Solution \eqref{barenblatt} evaluated at $t=1$ when $C =1$.
The exact solution to \eqref{numerical_baren} is therefore the Barenblatt Solution \eqref{barenblatt} with the time shifted:
\begin{equation}\label{exact}
U_2(t, \mathbf{x}) := \left(t+1\right)^{-\frac{d}{2+d}}\left(1 - \frac{1}{2(d+2)}\frac{|\mathbf{x}|^2}{ (t +1)^{\frac{2}{d+2}} }\right)^+.
\end{equation}
We further let $\Omega = [-a,a]^d$, where $a$ is the smallest integer greater than the radius of the free boundary of $U_2(t,\mathbf{x})$ at the terminal time $T=1$:
$$a := \text{ceil}(r_T)$$
where $r_T = (2+d)^{\frac{1}{2}}2^{\frac{4+d}{2d+4}}$, to ensure the computational domain is large enough to contain the entire free boundary for $t\in [0,1]$.
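For reference, the exact solution \eqref{exact} and the box half-width $a$ can be evaluated with a few lines of NumPy:
\begin{verbatim}
import numpy as np

def barenblatt_u2(t, x, d):
    # Shifted Barenblatt solution (C = 1); x has shape (..., d).
    s = (t + 1.0) ** (2.0 / (d + 2))
    r2 = np.sum(np.asarray(x) ** 2, axis=-1)
    return (t + 1.0) ** (-d / (d + 2.0)) * np.maximum(
        1.0 - r2 / (2.0 * (d + 2) * s), 0.0)

d = 15
r_T = np.sqrt(d + 2.0) * 2.0 ** ((4.0 + d) / (2.0 * d + 4.0))
a = int(np.ceil(r_T))   # -> 7, i.e. Omega = [-7, 7]^15 in this case
\end{verbatim}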
We take this example to test the effectiveness of the proposed formulations by comparing the approximate solution with the exact one \eqref{exact}. The performance of each formulation is further analyzed to show the pros and cons.
\subsection{Numerical settings}\label{sec:num_setting}
In particular, we would like to solve the aforementioned QPME with the three formulations using neural network ansätze.
We specifically take $\mathcal{NN}_u(\cdot,\cdot;\theta_u)$, $\mathcal{NN}_{\phi}(\cdot,\cdot;\theta_{\phi})$, $\mathcal{NN}_{q}(\cdot,\cdot;\theta_{q})$ and $\mathcal{NN}_{\sigma}(\cdot,\cdot;\theta_{\sigma})$ to be fully connected neural networks with two hidden layers and $\text{softplus}(\cdot)$ as their activation function. The corresponding solution ansatz can then be developed following Section \ref{sec:nn}. Notice that when computing the derivatives of the solution ansatz, the chain rule applies, and the derivatives of $f_{dc}(\mathbf{x})$ need to be computed when a homogeneous Dirichlet B.C. is imposed.
To evaluate the empirical losses, as discussed in Section \ref{empirical_loss}, we take randomly sampled data to approximate the integrals over $Q$ or over $\Omega$. When the boundary condition is softly imposed, we take additional randomly sampled data over $\Sigma_T$ to evaluate $\mathcal{L}_B$. Since the data are sampled on the fly, a new set of data is drawn at each training step; the total number of training samples is therefore $n= \text{batch size}\times \text{training steps}$.
Moreover, for high-dimensional cases, to make sure the sampled points can capture the features of the solutions, we use the following weighted sampling scheme. Specifically, we first decompose the region $\Omega = [-a,a]^d$ into $\Omega = V_0\cup V_1\cup V_2$ where $$V_{0}: =\{\mathbf{x} \in \Omega\ |\ |\mathbf{x}|\leq r_0 \},\quad V_{1}: =\{\mathbf{x} \in \Omega\ |\ r_0<|\mathbf{x}|\leq r_T \},\quad V_{2}: =\{\mathbf{x} \in \Omega\ |\ |\mathbf{x}|> r_T \}.$$
The radii of these regions are determined by the radius of the free boundary of the Barenblatt solution \eqref{exact} at $t=0$ and at $t=T=1$, respectively:
$$r_0 : = \sqrt{2(2+d)},\qquad r_T = (2+d)^{\frac{1}{2}}2^{\frac{4+d}{2d+4}}.$$
We then take weights $\theta_0, \theta_1$ and
the $\tilde{\mathbf{X}}_j$'s will be uniformly sampled within $V_0$ with a probability $\theta_0$, within $V_1$ with a probability $\theta_1$ and within $\Omega$ with a probability $\theta_2:=1-\theta_0-\theta_1$ (see Figure \ref{fig:train_data} for an illustration of the sampled training data).
The probability of a sample landing in each region can thus be computed as
$$P_{V_0} = \theta_0+\theta_2 \frac{|V_0|}{|\Omega|}\quad P_{V_1} = \theta_1+\theta_2 \frac{|V_1|}{|\Omega|} \quad P_{V_2}=\theta_2 \frac{|V_2|}{|\Omega|}.$$
The density function to this mixture distribution can be written as the piecewise constant function
$$f(\mathbf{x}) = \theta_0 f_0(\mathbf{x})+ \theta_1 f_1(\mathbf{x})+\theta_2 f_2(\mathbf{x})$$
where
\begin{equation*}
\begin{split}
f_0(\mathbf{x}) = \frac{1}{|V_0|}\mathbf{1}_{V_0}(\mathbf{x}), \quad
f_1(\mathbf{x}) = \frac{1}{|V_1|}\mathbf{1}_{V_1}(\mathbf{x}), \quad
&f_2(\mathbf{x}) =\frac{1}{|\Omega|}
\end{split}
\end{equation*}
are defined over the entire $\Omega$, with $\mathbf{1}_{V}(\mathbf{x})$ being an indicator function of region $V$.
When sampling from $V_0$ and $V_1$, to ensure the data are uniformly sampled in these high-dimensional ball regions, we specifically adopt the following algorithm, as in \cite{marsaglia1972choosing} (a code sketch follows the list):
\begin{enumerate}
\item Generating random points uniformly on the $(d-1)$-unit sphere
\begin{enumerate}
\item Generate a $d$-dimensional vector $\bm{x} = (x_1, x_2,\cdots, x_d)$ such that $x_i \sim N(0,1)$ for $i= 1,2,\cdots, d$.
\item Then $\tilde{\bm{x}} := \frac{\bm{x}}{||\bm{x}||_2}$ is a uniformly sampled point from the $(d-1)$-unit sphere.
\end{enumerate}
\item Generate a point uniformly at random {\it{within}} the $d$-ball
\begin{enumerate}
\item Let $u$ be a number generated uniformly at random from the interval $[0, 1]$, then $u^{\frac{1}{d}}\tilde{\bm{x}}$ is a point randomly sampled within the unit ball.
\item Further, $r_0 u^{1/d}\tilde{\bm{x}} $ is a random point in $V_0$ and $\left((r_T-r_0) u^{1/d}+r_0\right)\tilde{\bm{x}} $ is a random point in $V_1$.
\end{enumerate}
\end{enumerate}
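A NumPy sketch of these two steps is given below. Note that the affine radius map in step 2(b) is exactly uniform only for the full ball $V_0$; for the shell $V_1$, exact uniformity follows from inverting the radial CDF, as done here.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def sample_ball(n, d, r):
    # Uniform points in the d-ball of radius r (Marsaglia, 1972).
    x = rng.standard_normal((n, d))
    x /= np.linalg.norm(x, axis=1, keepdims=True)  # uniform on sphere
    u = rng.uniform(size=(n, 1))
    return r * u ** (1.0 / d) * x

def sample_shell(n, d, r_in, r_out):
    # Uniform points in the shell r_in <= |x| <= r_out, via the
    # inverse radial CDF r = (r_in^d + u (r_out^d - r_in^d))^(1/d).
    x = rng.standard_normal((n, d))
    x /= np.linalg.norm(x, axis=1, keepdims=True)
    u = rng.uniform(size=(n, 1))
    r = (r_in**d + u * (r_out**d - r_in**d)) ** (1.0 / d)
    return r * x
\end{verbatim}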
\begin{figure}[h!]
\centering
\begin{subfigure}[t]{0.45\textwidth}
\includegraphics[width = 1.2\textwidth]{PINN_formulation/Barenblatt/L2/3d/train_data_distribution_3D.png}
\caption{\textbf{3d:} $\theta_0 = 0.3, \theta_1 = 0.3$, $\Omega = [-4,4]^3$.}
\end{subfigure}
\hfill
\begin{subfigure}[t]{0.45\textwidth}
\includegraphics[width = 1.2\textwidth]{PINN_formulation/Barenblatt/L2/50d/train_data_distribution_3D.png}
\caption{\textbf{50d:} $\theta_0 = 0.3, \theta_1 = 0.2$, $\Omega = [-11,11]^{50}$ }
\end{subfigure}
\caption{3D projection (first three coordinates) of samples of $\{\tilde{\mathbf{X}}_j\}_{j=1}^{10^6}$ in $\Omega$. Red : $\tilde{\mathbf{X}}_j \in V_0$, green: $\tilde{\mathbf{X}}_j \in V_1$ and blue: $\tilde{\mathbf{X}}_j \in V_2$.}
\label{fig:train_data}
\end{figure}
To avoid changing the values of the integrals appearing in the loss functions, a piecewise constant factor should multiply the empirical loss to correct the approximation of the integrals resulting from nonuniformly distributed training data. For the PINN formulation, the empirical loss \eqref{empirical_PINN} can then be rewritten as
\begin{equation}
\mathcal{L}_{\text{PINN}}^{n} = \frac{\kappa}{n}\sum_{j=1}^{n} c(\tilde{\mathbf{X}}_j)\left( \partial _t u(T_j,\tilde{\mathbf{X}}_j) -\frac{1}{2} \Delta u^2(T_j,\tilde{\mathbf{X}}_j)\right)^2 +\frac{\nu}{n}\sum_{j=1}^{n} c(\tilde{\mathbf{X}}_j)\left(u \left(0,\tilde{\mathbf{X}}_j\right) - u_{0}\left(\tilde{\mathbf{X}}_j\right)\right)^2
\end{equation}
with the correction term
$$c(\mathbf{x}) = \displaystyle \sum_{i=0}^{2}\frac{|V_i|}{|\Omega| P_{V_i}}\mathbf{1}_{V_i}(\mathbf{x}).$$
Here, $\tilde{\mathbf{X}}_j$'s are random points in $\Omega$ sampled subject to the aforementioned density function $f(\mathbf{x})$, while $T_j$ will be uniformly sampled from $[0,1]$. The empirical losses for $\phi$ formulation and $q-\sigma$ formulation can also be formulated in a similar fashion.
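To make the correction concrete, the region volumes, the sampling probabilities $P_{V_i}$, and the per-region values of $c(\mathbf{x})$ can be computed as follows (using the samplers sketched above; the values of $d$, $a$, $\theta_0$, $\theta_1$ are those of the 15-dimensional run):
\begin{verbatim}
from math import gamma, pi

def ball_vol(r, d):
    # Volume of the d-ball of radius r.
    return pi ** (d / 2.0) * r**d / gamma(d / 2.0 + 1.0)

d, a = 15, 7
theta0, theta1 = 0.3, 0.3
theta2 = 1.0 - theta0 - theta1
r0 = (2.0 * (2 + d)) ** 0.5
rT = (2 + d) ** 0.5 * 2.0 ** ((4.0 + d) / (2.0 * d + 4.0))

omega = (2.0 * a) ** d
vols = [ball_vol(r0, d),                       # |V0|
        ball_vol(rT, d) - ball_vol(r0, d),     # |V1|
        omega - ball_vol(rT, d)]               # |V2|
probs = [theta0 + theta2 * vols[0] / omega,
         theta1 + theta2 * vols[1] / omega,
         theta2 * vols[2] / omega]
c = [v / (omega * p) for v, p in zip(vols, probs)]
print(c)   # c is tiny on V0 and V1 in high dimension (see below)
\end{verbatim}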
The data are randomly sampled on the fly in batches of $1000$ per training step, with a new batch drawn at every step. Essentially, this sampling scheme guarantees samples in all three regions, which can improve the representativity of the training data and thus lead to faster convergence of the training procedure.
For the high-dimensional cases, the initial conditions and PDEs are in fact imposed without the correction factor $c(\mathbf{x})$; i.e., with the efficient sampling scheme, we essentially minimize modified initial/PDE conditions. Taking such terms measured in $L^2$ as examples, the following loss terms are minimized:
\begin{equation}
\begin{split}
&\mathcal{L}_{I}(u) = \int_{\Omega} \left(u(0,\mathbf{x})-u_0\left(\mathbf{x}\right)\right)^2 f(\mathbf{x})\ d\mathbf{x}, \\
&\mathcal{L}_{\text{PDE}} (u) = \int_Q \left( \partial _t u -\frac{1}{2} \Delta u^2\right)^2 f(\mathbf{x})\ d\mathbf{x} dt,\\
&\mathcal{L}_{\partial_t \sigma, \Delta q} = \int_Q ( \partial_t \sigma + \Delta q )^2 f(\mathbf{x})\ d\mathbf{x} dt.
\end{split}
\end{equation}
Since $f(\mathbf{x})$ is merely a positive piecewise constant function, this modification keeps the minimizers of these terms unchanged, meaning the desired initial condition and PDE are still imposed, under mild assumptions on the regularity of the solution ansatz.
The correction constants are not used here because, in the high-dimensional cases, $$c(\mathbf{x})\ll 1\% \quad\forall \mathbf{x} \in V_0\cup V_1 ,$$
so samples within these regions would make an extremely small contribution to the SGD updates of the trainable parameters.
We also note that while this is admissible for imposing the initial condition and the PDE, we must apply $c(\mathbf{x})$ to the inf terms, namely $\mathcal{L}_{\phi}(\phi)$ and $\mathcal{L}_{q,\sigma}(q,\sigma)$, since otherwise the sampling scheme would change the optimization target.
However, the choices of $\theta_0$ and $\theta_1$ remain arbitrary. While numerical examples show that certain choices can lead to faster convergence, there are no clear principles for making optimal choices.
A similar situation arises when choosing the values of $\nu, \kappa,\gamma$ to balance the terms in the losses. While these hyper-parameters can in theory be any positive numbers, their choice heavily influences the training procedure. Some choices appear to help the weighted loss converge faster than others, but without a principled justification. The values of these hyper-parameters used in the results reported in this paper are therefore the outcome of trial and error.
The losses are then optimized by tuning the trainable parameters of the neural networks. We take the optimizer that implements the Adam algorithm \cite{kingma2014adam} to train the models. The complete algorithm to establish the loss function and to train the neural networks is implemented using the TensorFlow library~\cite{abadi2016tensorflow}.
Once the training is finished, to evaluate the quality of the approximate solution obtained with the trained neural networks, we further quantify its generalization error. In particular, we define the relative errors on a solution slice $u(t,x,y,c,\cdots,c)$ at time $t$ for some fixed constant $c\in[-a,a]$, denoting $u(t,x,y,c,\cdots,c)$ by $u(t)$ for simplicity:
\begin{equation}
\begin{split}
& L^1\textbf{-Relative Error}\quad \frac{||u_{NN}(t)-u(t)||_{1}}{||u(t)||_{1}},\\
& L^2\textbf{-Relative Error}\quad \frac{||u_{NN}(t)-u(t)||_{2}}{||u(t)||_2},\\
& H^1\textbf{-Relative Error}\quad \frac{||u_{NN}(t)-u(t)||_{H^1}}{||u(t)||_{H^1}}.\\
\end{split}
\end{equation}
where $u_{NN}$ stands for the neural network based solutions.
These norms are numerically approximated over a $100\times 100$ evenly spaced mesh on $[-a,a]^2$, letting $\{x_i,y_j\}_{i=1,j=1}^{100}$ be the mesh-grid points:
\begin{equation*}
\begin{split}
&||f(x,y)||_1 \approx \frac{(2a)^2}{10^4}\sum_{i,j}|f(x_i,y_j)|,\\
&||f(x,y)||_2 \approx \sqrt{\frac{(2a)^2}{10^4}\sum_{i,j}|f(x_i,y_j)|^2},\\
&||f(x,y)||_{H^1} \approx \sqrt{\frac{(2a)^2}{10^4}\sum_{i,j}(|f(x_i,y_j)|^2 + |\nabla f(x_i,y_j)|^2)}.\\
\end{split}
\end{equation*}
The numerical relative errors can then be computed with predicted values of the neural network solutions evaluated at the mesh-grid points.
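For instance, the relative errors can be computed as follows, with the gradients approximated here by finite differences (in practice they can also be taken from automatic differentiation); the constant quadrature weight cancels in each ratio:
\begin{verbatim}
import numpy as np

def relative_errors(u_pred, u_exact, a):
    # (100, 100) arrays on the evenly spaced mesh over [-a, a]^2.
    h = 2.0 * a / 99.0                   # mesh spacing
    diff = u_pred - u_exact
    rel_l1 = np.abs(diff).sum() / np.abs(u_exact).sum()
    rel_l2 = np.sqrt((diff**2).sum() / (u_exact**2).sum())
    gdx, gdy = np.gradient(diff, h)
    gex, gey = np.gradient(u_exact, h)
    rel_h1 = np.sqrt((diff**2 + gdx**2 + gdy**2).sum()
                     / (u_exact**2 + gex**2 + gey**2).sum())
    return rel_l1, rel_l2, rel_h1
\end{verbatim}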
\subsection{PINN formulation}
In this section, we consider the case where the QPME \eqref{numerical_baren} is solved via the PINN formulation \eqref{PINN_full} in both the $L^1$ and $L^2$ norms. Specifically, the homogeneous Dirichlet boundary condition is imposed as a hard constraint and the initial condition is imposed as a soft constraint, following \eqref{PINN_with_condition}. The specific algorithmic settings are presented in Table \ref{tab:PINN_error}, along with the relative errors computed for the trained solution slice $u(0.5,x,y,1.0,\cdots,1.0; \theta_u^*)$ at time $t= 0.5$ compared with the exact solution \eqref{exact}.
From Table \ref{tab:PINN_error}, Figure \ref{fig:PINN_l2} and Figure \ref{fig:PINN_l1}, one can observe that the PINN formulation can indeed provide numerical solutions that closely approximate the exact ones even in high dimensions.
Not only is the neural network able to accurately approximate the function itself, but also its derivative. This is essentially a result of the successful imposition of the PDE: as can be observed from Figure \ref{fig:PDE_l2} and Figure \ref{fig:PDE_l1}, the learned $\partial_t u$ coincides with $\frac{1}{2} \Delta u^2$, which confirms that the PDE has been successfully learned. The initial condition is also softly enforced through the term $\mathcal{L}_{I}$ as training proceeds (see Figure \ref{fig:init_l2} and Figure \ref{fig:init_l1} for an illustration).
The training loss history of PINN is further presented in Figure \ref{fig:training_history_PINN} term by term. A training convergence can be observed from these plots, which further suggests a convergence to the exact solution ensured by \eqref{convergence_L1} and \eqref{convergence_L2}.
In addition, one can observe that learning the solution to the QPME does not require the number of trainable parameters of the solution ansatz to scale exponentially, which, in contrast to mesh-based solvers, is advantageous for high-dimensional problems.
\begin{table}[htbp]
\hspace*{-1cm}
\centering
\begin{tabular}{|c|l|c|c|c|c|c|c|c|c|c|}
\toprule
\textbf{Dimension}& & \multicolumn{1}{r|}{\textbf{1}} & \multicolumn{1}{r|}{\textbf{2}} & \multicolumn{1}{r|}{\textbf{3}} & \multicolumn{1}{r|}{\textbf{4}} & \multicolumn{1}{r|}{\textbf{5}} & \multicolumn{1}{r|}{\textbf{10}} & \multicolumn{1}{r|}{\textbf{15}} & \multicolumn{1}{r|}{\textbf{20}} & \multicolumn{1}{r|}{\textbf{50}} \\
\midrule
\multirow{3}[6]{3cm}{\textbf{Relative Error(\%)} for $L^2$-\textbf{PINN}} & \boldmath{}\textbf{$L^2$}\unboldmath{} & \multicolumn{1}{r|}{\textbf{0.21}} & \multicolumn{1}{r|}{\textbf{0.65}} & \multicolumn{1}{r|}{\textbf{0.61}} & \multicolumn{1}{r|}{\textbf{0.55}} & \multicolumn{1}{r|}{\textbf{1.1}} & \multicolumn{1}{r|}{\textbf{0.86}} & \multicolumn{1}{r|}{\textbf{1.72}} & \multicolumn{1}{r|}{\textbf{5.50}} & \multicolumn{1}{r|}{\textbf{16.03}} \\
\cmidrule{2-11} & \boldmath{}\textbf{$L^1$}\unboldmath{} & \multicolumn{1}{r|}{\textbf{0.14}} & \multicolumn{1}{r|}{\textbf{0.4}} & \multicolumn{1}{r|}{\textbf{0.39}} & \multicolumn{1}{r|}{\textbf{0.43}} & \multicolumn{1}{r|}{\textbf{0.94}} & \multicolumn{1}{r|}{\textbf{0.71}} & \multicolumn{1}{r|}{\textbf{1.64}} & \multicolumn{1}{r|}{\textbf{5.12}} & \multicolumn{1}{r|}{\textbf{15.26}} \\
\cmidrule{2-11} & \boldmath{}\textbf{$H^1$}\unboldmath{} & \multicolumn{1}{r|}{\textbf{4.28}} & \multicolumn{1}{r|}{\textbf{8.25}} & \multicolumn{1}{r|}{\textbf{7.93}} & \multicolumn{1}{r|}{\textbf{7.45}} & \multicolumn{1}{r|}{\textbf{8.64}} & \multicolumn{1}{r|}{\textbf{8.09}} & \multicolumn{1}{r|}{\textbf{9.88}} & \multicolumn{1}{r|}{\textbf{10.87}} & \multicolumn{1}{r|}{\textbf{28.5}} \\
\midrule
\multirow{3}[6]{3cm}{\textbf{Relative Error(\%) for $L^1$-PINN}} & \boldmath{}\textbf{$L^2$}\unboldmath{} & \multicolumn{1}{r|}{\textbf{0.45}} & \multicolumn{1}{r|}{\textbf{0.78}} & \multicolumn{1}{r|}{\textbf{0.95}} & \multicolumn{1}{r|}{\textbf{1.27}} & \multicolumn{1}{r|}{\textbf{2.07}} & \multicolumn{1}{r|}{\textbf{4.86}} & \multicolumn{1}{r|}{\textbf{10.46}} & \multicolumn{1}{r|}{\textbf{ 9.73}} & \multicolumn{1}{r|}{\textbf{10.76}} \\
\cmidrule{2-11} & \boldmath{}\textbf{$L^1$}\unboldmath{} & \multicolumn{1}{r|}{\textbf{0.23}} & \multicolumn{1}{r|}{\textbf{0.5}} & \multicolumn{1}{r|}{\textbf{0.69}} & \multicolumn{1}{r|}{\textbf{1.1}} & \multicolumn{1}{r|}{\textbf{1.91}} & \multicolumn{1}{r|}{\textbf{4.65}} & \multicolumn{1}{r|}{\textbf{8.91}} & \multicolumn{1}{r|}{\textbf{8.37}} & \multicolumn{1}{r|}{\textbf{10.47}} \\
\cmidrule{2-11} & \boldmath{}\textbf{$H^1$}\unboldmath{} & \multicolumn{1}{r|}{\textbf{5.11}} & \multicolumn{1}{r|}{\textbf{8.98}} & \multicolumn{1}{r|}{\textbf{9.38}} & \multicolumn{1}{r|}{\textbf{9.51}} & \multicolumn{1}{r|}{\textbf{10.48}} & \multicolumn{1}{r|}{\textbf{16.57}} & \multicolumn{1}{r|}{\textbf{16.45}} & \multicolumn{1}{r|}{\textbf{13.13}} & \multicolumn{1}{r|}{\textbf{21.55}} \\
\midrule
\multirow{2}[4]{*}{Formulation Weight} & $\nu$ & \multicolumn{5}{c|}{$10^3$} & \multicolumn{4}{c|}{1} \\
\cmidrule{2-11}
& $\kappa$ & \multicolumn{5}{c|}{1} & \multicolumn{4}{c|}{$10^3$} \\
\midrule
\multirow{2}[4]{*}{NN Architectire} & \# trainable & \multicolumn{1}{r|}{41001} & \multicolumn{1}{r|}{41201} & \multicolumn{1}{r|}{41401} & \multicolumn{1}{r|}{41601} & \multicolumn{1}{r|}{41801} & \multicolumn{1}{r|}{42801} & \multicolumn{1}{r|}{43801} & \multicolumn{1}{r|}{169601} & \multicolumn{1}{r|}{181601} \\
\cmidrule{2-11} & Width/Depth & \multicolumn{7}{c|}{200/2} & \multicolumn{2}{c|}{400/2} \\
\midrule
\multirow{2}[4]{*}{Data Sampling} & $\theta_0$ & \multicolumn{9}{c|}{0.3} \\
\cmidrule{2-11} & $\theta_1$ & \multicolumn{7}{c|}{0.3} & \multicolumn{2}{c|}{0.2} \\
\midrule
\multirow{2}[4]{*}{Training} & Steps & \multicolumn{7}{c|}{$10^5$} & \multicolumn{2}{c|}{$2\times10^5$} \\
\cmidrule{2-11} & Learning Rate & \multicolumn{9}{c|}{$10^{-3}$} \\
\bottomrule
\end{tabular}%
\caption{\textbf{PINN formulation \eqref{PINN}: (hard Dirichlet B.C.+soft I.C.)} Relative error comparison for various dimensions.}
\label{tab:PINN_error}%
\end{table}%
\begin{figure}[htbp]
\centering
\begin{subfigure}[t]{0.3\textwidth}
\centering
\includegraphics[width=1.2\textwidth]{PINN_formulation/Barenblatt/L2/15d/Barenblatt_Solution.png}
\caption{Barenblatt reference solution }
\end{subfigure}
\hfill
\begin{subfigure}[t]{0.3\textwidth}
\centering
\includegraphics[width=1.2\textwidth]{PINN_formulation/Barenblatt/L2/15d/Predicted_Solution.png}
\caption{Learned solution slice}
\end{subfigure}
\hfill
\begin{subfigure}[t]{0.3\textwidth}
\centering
\includegraphics[width=1.2\textwidth]{PINN_formulation/Barenblatt/L2/15d/Prediction_Error.png}
\caption{Learned solution error}
\end{subfigure}
\hfill
\begin{subfigure}[t]{0.3\textwidth}
\centering
\includegraphics[width=1.2\textwidth]{PINN_formulation/Barenblatt/L2/15d/Gradient_of_Barenblatt_Solution.png}
\caption{Barenblatt reference solution gradient}
\end{subfigure}
\hfill
\begin{subfigure}[t]{0.3\textwidth}
\centering
\includegraphics[width=1.2\textwidth]{PINN_formulation/Barenblatt/L2/15d/Gradient_of_Predicted_Solution.png}
\caption{Learned solution gradient}
\end{subfigure}
\hfill
\begin{subfigure}[t]{0.3\textwidth}
\centering
\includegraphics[width=1.2\textwidth]{PINN_formulation/Barenblatt/L2/15d/Prediction_Error_of_Gradient.png}
\caption{Learned solution gradient error}
\end{subfigure}
\caption{\textbf{15D, $L^2-$ PINN formulation \eqref{PINN_full}} Predicted solution slice $u(0.5,x,y,1.0,\cdots, 1.0)$ for $\mathbf{x}\in \Omega = [-7,7]^{15}$, $t= 0.5$. }
\label{fig:PINN_l2}
\end{figure}
\begin{figure}
\centering
\begin{subfigure}[t]{0.47\textwidth}
\centering
\includegraphics[width = 1.2\textwidth]{PINN_formulation/Barenblatt/L2/15d/prediction_u_t_t_0.5.png}
\caption{Learned $u_t$}
\end{subfigure}
\hfill
\begin{subfigure}[t]{0.47\textwidth}
\centering
\includegraphics[width = 1.2\textwidth]{PINN_formulation/Barenblatt/L2/15d/prediction_usq_xx_t_0.5.png}
\caption{Learned $\displaystyle \frac{1}{2}\Delta u^2$}
\end{subfigure}
\caption{\textbf{15D, $L^2-$ PINN formulation \eqref{PINN_full}}, predicted partial derivatives.}
\label{fig:PDE_l2}
\end{figure}
\begin{figure}
\centering
\begin{subfigure}[t]{0.47\textwidth}
\centering
\includegraphics[width = 1.2\textwidth]{PINN_formulation/Barenblatt/L2/15d/Reference_u_0_t_0.5.png}
\caption{Exact $u_0$}
\end{subfigure}
\hfill
\begin{subfigure}[t]{0.47\textwidth}
\centering
\includegraphics[width = 1.2\textwidth]{PINN_formulation/Barenblatt/L2/15d/prediction_u_0_t_0.5.png}
\caption{Learned initial value}
\end{subfigure}
\caption{\textbf{15D, $L^2-$ PINN formulation \eqref{PINN_full}}, predicted initial value $u(0,x,y,1.0,\cdots,1.0)$ for $\mathbf{x} \in
\Omega =[-7,7]^{15}$.}
\label{fig:init_l2}
\end{figure}
\begin{figure}[htbp]
\centering
\begin{subfigure}[t]{0.3\textwidth}
\centering
\includegraphics[width=1.2\textwidth]{PINN_formulation/Barenblatt/L1/15d/Barenblatt_Solution.png}
\caption{Barenblatt reference solution }
\end{subfigure}
\hfill
\begin{subfigure}[t]{0.3\textwidth}
\centering
\includegraphics[width=1.2\textwidth]{PINN_formulation/Barenblatt/L1/15d/Predicted_Solution.png}
\caption{Learned solution slice}
\end{subfigure}
\hfill
\begin{subfigure}[t]{0.3\textwidth}
\centering
\includegraphics[width=1.2\textwidth]{PINN_formulation/Barenblatt/L1/15d/Prediction_Error.png}
\caption{Learned solution error}
\end{subfigure}
\hfill
\begin{subfigure}[t]{0.3\textwidth}
\centering
\includegraphics[width=1.2\textwidth]{PINN_formulation/Barenblatt/L1/15d/Gradient_of_Barenblatt_Solution.png}
\caption{Barenblatt reference solution gradient}
\end{subfigure}
\hfill
\begin{subfigure}[t]{0.3\textwidth}
\centering
\includegraphics[width=1.2\textwidth]{PINN_formulation/Barenblatt/L1/15d/Gradient_of_Predicted_Solution.png}
\caption{Learned solution gradient}
\end{subfigure}
\hfill
\begin{subfigure}[t]{0.3\textwidth}
\centering
\includegraphics[width=1.2\textwidth]{PINN_formulation/Barenblatt/L1/15d/Prediction_Error_of_Gradient.png}
\caption{Learned solution gradient error}
\end{subfigure}
\caption{\textbf{15D, $L^1-$ PINN formulation \eqref{PINN_full}:} Predicted solution slice $u(0.5,x,y,1.0,\cdots, 1.0)$ for $\mathbf{x}\in \Omega = [-7,7]^{15}$, $t= 0.5$. }
\label{fig:PINN_l1}
\end{figure}
\begin{figure}
\centering
\begin{subfigure}[t]{0.47\textwidth}
\centering
\includegraphics[width = 1.2\textwidth]{PINN_formulation/Barenblatt/L1/15d/prediction_u_t_t_0.5.png}
\caption{Learned $u_t$}
\end{subfigure}
\hfill
\begin{subfigure}[t]{0.47\textwidth}
\centering
\includegraphics[width = 1.2\textwidth]{PINN_formulation/Barenblatt/L1/15d/prediction_usq_xx_t_0.5.png}
\caption{Learned $\displaystyle \frac{1}{2}\Delta u^2$}
\end{subfigure}
\caption{\textbf{15D, $L^1-$ PINN formulation \eqref{PINN_full}:} predicted partial derivatives.}
\label{fig:PDE_l1}
\end{figure}
\begin{figure}
\centering
\begin{subfigure}[t]{0.47\textwidth}
\centering
\includegraphics[width = 1.2\textwidth]{PINN_formulation/Barenblatt/L1/15d/Reference_u_0_t_0.5.png}
\caption{Exact $u_0$}
\end{subfigure}
\hfill
\begin{subfigure}[t]{0.47\textwidth}
\centering
\includegraphics[width = 1.2\textwidth]{PINN_formulation/Barenblatt/L1/15d/prediction_u_0_t_0.5.png}
\caption{Learned initial value}
\end{subfigure}
\caption{\textbf{15D, $L^1-$ PINN formulation \eqref{PINN_full}}, predicted initial value $u(0,x,y,1.0,\cdots,1.0)$ for $\mathbf{x} \in
\Omega =[-7,7]^{15}$.}
\label{fig:init_l1}
\end{figure}
\begin{figure}
\centering
\begin{subfigure}[t]{0.45\textwidth}
\centering
\includegraphics[width = 1.2\textwidth]{PINN_formulation/Barenblatt/L2/15d/PINN_15d_l2.png}
\caption{$L^2-$PINN}
\end{subfigure}
\hfill
\begin{subfigure}[t]{0.45\textwidth}
\centering
\includegraphics[width = 1.2\textwidth]{PINN_formulation/Barenblatt/L1/15d/PINN_15d_l1.png}
\caption{$L^1-$PINN}
\end{subfigure}
\caption{\textbf{15D:} Training loss history by term.}
\label{fig:training_history_PINN}
\end{figure}
However, from Table \ref{tab:PINN_error}, one can observe that the generalization errors of the neural network solutions in high dimensions are larger than in low dimensions. This could be a result of using a larger neural network to approximate a more complicated solution in the high-dimensional cases. Numerical experiments show that neural networks of width $200$ are no longer sufficient to approximate solutions to the QPME in dimensions larger than $15$, so a larger network was adopted. Such a network naturally requires more training and data to converge. Since the number of training steps (and hence data) was not scaled up in proportion to the number of trainable parameters, this could have contributed to a larger approximation error. Moreover, whether proportionally more training steps and data would bring a significant improvement in accuracy is questionable, as the optimization over $\theta_{u}$ is highly nonconvex, which means one has to accept significant and unavoidable uncertainty about optimization success with SGD or its variants.
\FloatBarrier
\subsection{\texorpdfstring{$\phi$}{} formulation}
In this section, we consider the $\phi$ formulation \eqref{full_phi} for solving the QPME \eqref{numerical_baren} in both the $L^1$ and $L^2$ norms. The homogeneous Dirichlet boundary condition is enforced as a hard constraint following \eqref{phi_condition}. The initial condition is enforced softly through the term $\mathcal{L}_{I}$, as in the PINN formulation. The specific algorithmic settings are presented in Table \ref{tab:phi_error}, along with the relative errors computed for the trained solution slice $u_{\phi}(0.5,x,y,1.0,\cdots,1.0; \theta_{\phi}^*)$ at time $t= 0.5$ compared with the exact solution \eqref{exact}.
From Table \ref{tab:phi_error}, Figure \ref{fig:phi_l2} and Figure \ref{fig:phi_l1}, one can observe that the $\phi$ formulation can indeed provide numerical solutions that closely approximate the exact ones up to dimension $20$.
Not only is the neural network able to accurately approximate the function itself, but also its derivatives. The mismatch is mainly concentrated near the region where the solution is not smooth (the free boundary).
The predicted minimizer $\phi$ of \eqref{full_phi} is shown in Figure \ref{fig:phi_pred}.
Theoretically speaking, compared to PINN, the $\phi$ formulation is advantageous in that it applies to a wider range of QPMEs, whose solutions may be less regular or smooth.
However, for the case tested here, we encountered more challenges in the training process, especially in the high-dimensional cases, than with PINN.
One observation is that the generalization error of the testing solution slice grows as the dimension increases. This can be attributed to the nature of the exact solution $U_2$: its nonzero region accounts for only a tiny portion of $\Omega$ ($\ll 1$\textperthousand) when $d$ is large.
That is to say, the zero function is already a rather good approximation of $U_2$ in both the $L^1(\Omega)$ and $L^2 (\Omega)$ sense. The training can thus easily be trapped in the local minimum $u_{\phi} =0$, which is attained by the neural network $\phi = 0$. In addition, the reported generalization error measures the error of a solution slice projected onto a two-dimensional space, instead of over all of $\Omega$, to ease computation and visualization, which may not be a comprehensive measurement of the error. Moreover, the selected slice is one whose values are dominated by nonzero ones, which may also make it an unrepresentative sample of the entire solution for quantifying the relative error.
The reason the PINN formulation seems to suffer less from these effects is probably the efficient sampling: since the correction term $c(\mathbf{x})$ is \emph{not} applied to any term in the PINN loss functional, a very large weight is effectively put on the region where the solution is nonzero when evaluating $\mathcal{L}_{\text{PINN}}$, which could have helped the solution ansatz escape from the local minimum. By the nature of the $\phi$ formulation, however, $c(\mathbf{x})$ cannot be omitted; otherwise the target functional would be changed entirely.
Furthermore, while we are able to identify the desired solutions in many cases, one cannot theoretically guarantee meaningful solutions to the QPME from training the $\phi$ formulation. In fact, neither the condition $1-\Delta \phi\geq 0$ nor $u_{\phi}\geq 0$ is enforced in this formulation. These conditions can only be used after training to carry out a solution selection, or as criteria for early truncation of training.
Artificial choices of other algorithmic ingredients, such as batch sizes, learning rates, $\theta_0$ and $\theta_1$, also inevitably influence the optimization process given limited computational resources.
\begin{table}[htbp]
\centering
\hspace*{-1.8cm}
\begin{tabular}{|c|l|c|c|c|c|c|c|c|c|r|}
\toprule
\textbf{Dimension} & & \multicolumn{1}{r|}{\textbf{1}} & \multicolumn{1}{r|}{\textbf{2}} & \multicolumn{1}{r|}{\textbf{3}} & \multicolumn{1}{r|}{\textbf{4}} & \multicolumn{1}{r|}{\textbf{5}} & \multicolumn{1}{r|}{\textbf{10}} & \multicolumn{1}{r|}{\textbf{15}} & \multicolumn{1}{r|}{\textbf{20}} & \textbf{50} \\
\midrule
\multirow{3}[6]{3.5cm}{\textbf{Relative Errors(\%) for $L^2-\phi$ Formulation}} & \boldmath{}\textbf{$L^2$}\unboldmath{} & \multicolumn{1}{r|}{\textbf{3.58}} & \multicolumn{1}{r|}{\textbf{4.95}} & \multicolumn{1}{r|}{\textbf{4.41}} & \multicolumn{1}{r|}{\textbf{9.77}} & \multicolumn{1}{r|}{\textbf{5.77}} & \multicolumn{1}{r|}{\textbf{3.82}} & \multicolumn{1}{r|}{\textbf{8.29}} & \multicolumn{1}{r|}{\textbf{14.45}} & \textbf{54.26} \\
\cmidrule{2-11} & \boldmath{}\textbf{$L^1$}\unboldmath{} & \multicolumn{1}{r|}{\textbf{3.23}} & \multicolumn{1}{r|}{\textbf{5.87}} & \multicolumn{1}{r|}{\textbf{4.57}} & \multicolumn{1}{r|}{\textbf{9.98}} & \multicolumn{1}{r|}{\textbf{6.22}} & \multicolumn{1}{r|}{\textbf{3.97}} & \multicolumn{1}{r|}{\textbf{8.99}} & \multicolumn{1}{r|}{\textbf{15.56}} & \textbf{77.65} \\
\cmidrule{2-11} & \boldmath{}\textbf{$H^1$}\unboldmath{} & \multicolumn{1}{r|}{\textbf{16.44}} & \multicolumn{1}{r|}{\textbf{18.84}} & \multicolumn{1}{r|}{\textbf{20.18}} & \multicolumn{1}{r|}{\textbf{27.29}} & \multicolumn{1}{r|}{\textbf{16.97}} & \multicolumn{1}{r|}{\textbf{14.98}} & \multicolumn{1}{r|}{\textbf{19.21}} & \multicolumn{1}{r|}{\textbf{24.15}} & \textbf{71} \\
\midrule
\multirow{3}[6]{3.5cm}{\textbf{Relative Errors(\%) for $L^1-\phi$ Formulation}} & \boldmath{}\textbf{$L^2$}\unboldmath{} & \multicolumn{1}{r|}{\textbf{2.32}} & \multicolumn{1}{r|}{\textbf{5.01}} & \multicolumn{1}{r|}{\textbf{5.06}} & \multicolumn{1}{r|}{\textbf{8.8}} & \multicolumn{1}{r|}{\textbf{4.87}} & \multicolumn{1}{r|}{\textbf{3.62}} & \multicolumn{1}{r|}{\textbf{9.25}} & \multicolumn{1}{r|}{\textbf{26.67}} & \textbf{52.24} \\
\cmidrule{2-11} & \boldmath{}\textbf{$L^1$}\unboldmath{} & \multicolumn{1}{r|}{\textbf{2.11}} & \multicolumn{1}{r|}{\textbf{6.16}} & \multicolumn{1}{r|}{\textbf{5.84}} & \multicolumn{1}{r|}{\textbf{9.21}} & \multicolumn{1}{r|}{\textbf{4.84}} & \multicolumn{1}{r|}{\textbf{3.52}} & \multicolumn{1}{r|}{\textbf{9.49}} & \multicolumn{1}{r|}{\textbf{28.05}} & \textbf{85.13} \\
\cmidrule{2-11} & \boldmath{}\textbf{$H^1$}\unboldmath{} & \multicolumn{1}{r|}{\textbf{14.52}} & \multicolumn{1}{r|}{\textbf{25.25}} & \multicolumn{1}{r|}{\textbf{18.23}} & \multicolumn{1}{r|}{\textbf{25.73}} & \multicolumn{1}{r|}{\textbf{16.82}} & \multicolumn{1}{r|}{\textbf{13.73}} & \multicolumn{1}{r|}{\textbf{20.3}} & \multicolumn{1}{r|}{\textbf{44.74}} & \textbf{67.49} \\
\midrule
\multirow{2}[4]{*}{Formulation Weights} & $\nu$ & \multicolumn{4}{c|}{$10^3$} & \multicolumn{5}{c|}{1} \\
\cmidrule{2-11} & $\kappa$ & \multicolumn{4}{c|}{1} & \multicolumn{2}{c|}{$10^3$} &$10^4$ & \multicolumn{1}{r|}{$10^5$}& $10^3$ \\
\midrule
\multirow{2}[4]{*}{NN Architecture} & Width/Depth & \multicolumn{7}{c|}{200/2} & \multicolumn{2}{c|}{400/2} \\
\cmidrule{2-11} & \# trainable & \multicolumn{1}{r|}{41001} & \multicolumn{1}{r|}{41201} & \multicolumn{1}{r|}{41401} & \multicolumn{1}{r|}{41601} & \multicolumn{1}{r|}{41801} & \multicolumn{1}{r|}{42801} & \multicolumn{1}{r|}{43801} & \multicolumn{1}{r|}{169601} & 181601 \\
\midrule
\multirow{2}[4]{*}{Data Sampling} & $\theta_0$ & \multicolumn{7}{c|}{0.3} & \multicolumn{1}{r|}{0.2} & 0.4 \\
\cmidrule{2-11} & $\theta_1$ & \multicolumn{8}{c|}{0.3} & 0.4 \\
\midrule
\multirow{2}[4]{*}{Training} & Steps & \multicolumn{7}{c|}{$10^5$} &
\multicolumn{1}{r|}{$6\times 10^{5}$} & $2\times{10^{5}}$ \\
\cmidrule{2-11} & Learning Rate & \multicolumn{8}{c|}{$10^{-3}$} & $10^{-4}$\\
\bottomrule
\end{tabular}%
\caption{\textbf{$\phi$ formulation \eqref{phiformulation} (hard Dirichlet B.C.+soft I.C.): } Relative error comparison for various dimensions.}
\label{tab:phi_error}%
\end{table}%
\begin{figure}[htbp]
\centering
\begin{subfigure}[t]{0.3\textwidth}
\centering
\includegraphics[width=1.2\textwidth]{Phi_formulation/L2/Barenblatt_Solution.png}
\caption{Barenblatt reference solution }
\end{subfigure}
\hfill
\begin{subfigure}[t]{0.3\textwidth}
\centering
\includegraphics[width=1.2\textwidth]{Phi_formulation/L2/Predicted_Solution.png}
\caption{Learned solution slice}
\end{subfigure}
\hfill
\begin{subfigure}[t]{0.3\textwidth}
\centering
\includegraphics[width=1.2\textwidth]{Phi_formulation/L2/Prediction_Error.png}
\caption{Learned solution error}
\end{subfigure}
\hfill
\begin{subfigure}[t]{0.3\textwidth}
\centering
\includegraphics[width=1.2\textwidth]{Phi_formulation/L2/Gradient_of_Barenblatt_Solution.png}
\caption{Barenblatt reference solution gradient}
\end{subfigure}
\hfill
\begin{subfigure}[t]{0.3\textwidth}
\centering
\includegraphics[width=1.2\textwidth]{Phi_formulation/L2/Gradient_of_Predicted_Solution.png}
\caption{Learned solution gradient}
\end{subfigure}
\hfill
\begin{subfigure}[t]{0.3\textwidth}
\centering
\includegraphics[width=1.2\textwidth]{Phi_formulation/L2/Prediction_Error_of_Gradient.png}
\caption{Learned solution gradient error}
\end{subfigure}
\caption{\textbf{15D, $L^2- \phi$ formulation \eqref{phiformulation}:} Predicted solution slice $u(0.5,x,y,1.0,\cdots, 1.0)$ for $\mathbf{x}\in \Omega = [-7,7]^{15}$, $t= 0.5$. }
\label{fig:phi_l2}
\end{figure}
\pagebreak
\begin{figure}[htbp]
\centering
\begin{subfigure}[t]{0.3\textwidth}
\centering
\includegraphics[width=1.2\textwidth]{Phi_formulation/L1/Barenblatt_Solution.png}
\caption{Barenblatt reference solution}
\end{subfigure}
\hfill
\begin{subfigure}[t]{0.3\textwidth}
\centering
\includegraphics[width=1.2\textwidth]{Phi_formulation/L1/Predicted_Solution.png}
\caption{Learned solution slice}
\end{subfigure}
\hfill
\begin{subfigure}[t]{0.3\textwidth}
\centering
\includegraphics[width=1.2\textwidth]{Phi_formulation/L1/Prediction_Error.png}
\caption{Learned solution error}
\end{subfigure}
\hfill
\begin{subfigure}[t]{0.3\textwidth}
\centering
\includegraphics[width=1.2\textwidth]{Phi_formulation/L1/Gradient_of_Barenblatt_Solution.png}
\caption{Barenblatt reference solution gradient}
\end{subfigure}
\hfill
\begin{subfigure}[t]{0.3\textwidth}
\centering
\includegraphics[width=1.2\textwidth]{Phi_formulation/L1/Gradient_of_Predicted_Solution.png}
\caption{Learned solution gradient}
\end{subfigure}
\hfill
\begin{subfigure}[t]{0.3\textwidth}
\centering
\includegraphics[width=1.2\textwidth]{Phi_formulation/L1/Prediction_Error_of_Gradient.png}
\caption{Learned solution gradient error}
\end{subfigure}
\caption{\textbf{15D, $L^1-\phi$ formulation \eqref{phiformulation}:} Predicted solution slice $u_{\phi}(0.5,x,y,1.0,\cdots, 1.0)$ with for $\mathbf{x}\in \Omega = [-7,7]^{15}$, $t= 0.5$. }
\label{fig:phi_l1}
\end{figure}
\begin{figure}
\centering
\begin{subfigure}[t]{0.45\textwidth}
\includegraphics[width = 1.2\textwidth]{Phi_formulation/L2/prediction_phi_t_0.5.png}
\caption{Learned through $L^2$-$\phi$ formulation}
\end{subfigure}
\hfill
\begin{subfigure}[t]{0.45\textwidth}
\includegraphics[width = 1.2\textwidth]{Phi_formulation/L1/prediction_phi_t_0.5.png}
\caption{Learned through $L^1$-$\phi$ formulation}
\end{subfigure}
\caption{\textbf{15D: }Predicted $\phi(0.5, x,y,1.0,\cdots,1.0; \theta_{\phi}^*)$.}
\label{fig:phi_pred}
\end{figure}
We further observe that the optimized value of $\mathcal{L}_{\phi}(u_{\phi}(t,\mathbf{x};\theta_{\phi}))$ indeed converges to $-\int_{Q}U_2^2$ as training proceeds, with $u_{\phi}$ the parameterized solution ansatz as stated in \eqref{u_phi}.
This observation confirms the theoretical result \eqref{form_equivalency} derived in \cite{brenier2020examples}.
In Figure \ref{fig:phi_usq_comp}, we use the batch of training data at each training step to evaluate empirically the value of $-\int_Q U_2^2$ for the exact solution $U_2(t,\mathbf{x})$ defined in \eqref{exact}, and compare it with the empirical loss $\mathcal{L}_{\phi}(u_{\phi})$ based on the neural network solution $u_{\phi}$ at that step. As one can observe, the difference between the two values gradually decreases as training continues, which verifies the effectiveness of training under this formulation.
\begin{figure}[htbp]
\centering
\begin{subfigure}[t]{0.45\textwidth}
\includegraphics[width =1.2\textwidth]{Phi_formulation/5d_usq_comp.png}
\caption{\textbf{5D}}
\end{subfigure}
\hfill
\begin{subfigure}[t]{0.45\textwidth}
\includegraphics[width =1.2\textwidth]{Phi_formulation/10d_usq_comp.png}
\caption{\textbf{10D}}
\end{subfigure}
\caption{Empirical $\int_Q U_2^2(t,\mathbf{x}) +\mathcal{L}_{\phi}\big(u_{\phi}(t,\mathbf{x};\theta_{\phi} )\big)$ as training proceeds. }
\label{fig:phi_usq_comp}
\end{figure}
\FloatBarrier
\subsection{\texorpdfstring{$q$}{}-\texorpdfstring{$\sigma$}{} formulation}
Since the $q$-$\sigma$ formulation \eqref{qsigma_full} is developed from the $\phi$ formulation, its training also suffers from the challenges met in training the $\phi$ formulation; i.e., the training can easily be trapped in a local minimum, $u_{q,\sigma} = 0$, in the high-dimensional cases. Additionally, the partial derivatives of $\phi$ are separated into two independent functions $q$ and $\sigma$, whose correlation is only enforced softly through the loss term $\mathcal{L}_{\partial_t \sigma, \Delta q}$, which poses further challenges for the optimization of the target functional. An additional hyper-parameter $\gamma$ is also introduced to adjust the weight of $\mathcal{L}_{\partial_t \sigma, \Delta q}$, whose optimal choice is again unclear. For these reasons, only results for dimensions $1$ to $10$ are reported, as no reasonable results for higher dimensions were obtained within the scope of the experiments carried out.
Specifically, the homogeneous Dirichlet boundary condition is imposed as a hard constraint following \eqref{q_with_conditions}, and the condition on $\sigma$ is imposed via \eqref{sigma_condition}.
The condition \eqref{sigma_selection} was not strictly imposed, for training reasons. The initial condition is then softly enforced through the term $\mathcal{L}_{I}$ as mentioned earlier. The specific algorithmic settings are presented in Table \ref{tab:qsigma_error}, along with the relative errors computed for the trained solution slice $u_{q,\sigma}(0.5,x,y,1.0,\cdots,1.0; \theta_{q}, \theta_{\sigma})$ at time $t= 0.5$ compared with the exact solution \eqref{exact}. Comparisons of the predicted solutions with the exact solution are presented in Figure \ref{fig:qsigma_l2} and Figure \ref{fig:qsigma_l1}. In addition, the predicted functions $q$ and $\sigma$ are depicted in Figure \ref{fig:qsigma_PDE_l2} and Figure \ref{fig:qsigma_PDE_l1}. These figures also show the predicted $-\Delta q$ and $\partial_t \sigma$ to verify that the condition
$$\Delta q + \partial_t \sigma = 0$$
is satisfied. Finally, Figure \ref{fig:qsigma_usq_comp} demonstrates that the computed value of $\mathcal{L}_{q,\sigma}$ converges to $-\int_{Q}U_2^2$ as training proceeds.
This observation confirms once again the theoretical result \eqref{form_equivalency} derived in \cite{brenier2020examples}.
Here, the batch of training data at each training step is used to evaluate empirically the value of $-\int_Q U_2^2$ for the exact solution $U_2(t,\mathbf{x})$ defined in \eqref{exact}. This value is then compared with the empirical loss $\mathcal{L}_{q,\sigma}(u_{q,\sigma})$ at that step. The difference between these values gradually decreases as training continues (Figure \ref{fig:qsigma_usq_comp}).
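For reference, a minimal TensorFlow sketch of the penalized objective is given below; it mirrors \eqref{strong_terminal}, with the constraint $\partial_t\sigma+\Delta q=0$ enforced softly by the $\gamma$-weighted penalty, while the I.C./B.C. terms, the constraint constructions, and the sampling corrections are omitted.
\begin{verbatim}
import tensorflow as tf

def q_sigma_loss(model_q, model_sigma, t, x, u0_vals,
                 gamma=1e3, eps=1e-8):
    # Monte Carlo estimate, up to the |Q| volume factor; a sketch only.
    with tf.GradientTape(persistent=True) as g2:
        g2.watch(x)
        with tf.GradientTape(persistent=True) as g1:
            g1.watch(t)
            g1.watch(x)
            z = tf.concat([t, x], axis=1)
            q, sigma = model_q(z), model_sigma(z)
        grad_q = g1.gradient(q, x)                 # (N, d)
        sigma_t = g1.gradient(sigma, t)            # (N, 1)
    lap_q = tf.linalg.trace(g2.batch_jacobian(grad_q, x))[:, None]
    obj = -tf.square(q) / (sigma + eps) + 2.0 * u0_vals * q
    pen = tf.square(sigma_t + lap_q)               # soft constraint
    return -tf.reduce_mean(obj) + gamma * tf.reduce_mean(pen)
\end{verbatim}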
\begin{table}[htbp]
\centering
\begin{tabular}{|c|l|c|c|c|c|c|c|}
\toprule
\textbf{Dimension} & & \multicolumn{1}{r|}{\textbf{1}} & \multicolumn{1}{r|}{\textbf{2}} & \multicolumn{1}{r|}{\textbf{3}} & \multicolumn{1}{r|}{\textbf{4}} & \multicolumn{1}{r|}{\textbf{5}} & \multicolumn{1}{r|}{\textbf{10}} \\
\midrule
\multirow{2}[4]{4cm}{\textbf{Relative Errors(\%) for $L^2-q-\sigma$ Formulation}} & \boldmath{}\textbf{$L^2$}\unboldmath{} & \multicolumn{1}{r|}{\textbf{1.95}} & \multicolumn{1}{r|}{\textbf{3.2}} & \multicolumn{1}{r|}{\textbf{3.88}} & \multicolumn{1}{r|}{\textbf{3.97}} & \multicolumn{1}{r|}{\textbf{4.77}} & \multicolumn{1}{r|}{\textbf{4.03}} \\
\cmidrule{2-8} & \boldmath{}\textbf{$L^1$}\unboldmath{} & \multicolumn{1}{r|}{\textbf{1.64}} & \multicolumn{1}{r|}{\textbf{3.11}} & \multicolumn{1}{r|}{\textbf{3.5}} & \multicolumn{1}{r|}{\textbf{4.02}} & \multicolumn{1}{r|}{\textbf{5.11}} & \multicolumn{1}{r|}{\textbf{4.14}} \\
\midrule
\multirow{2}[4]{4cm}{\textbf{Relative Errors(\%) for $L^1-q-\sigma$ Formulation}} & \boldmath{}\textbf{$L^2$}\unboldmath{} & \multicolumn{1}{r|}{\textbf{2.06}} & \multicolumn{1}{r|}{\textbf{2.96}} & \multicolumn{1}{r|}{\textbf{3.83}} & \multicolumn{1}{r|}{\textbf{3.94}} & \multicolumn{1}{r|}{\textbf{5.63}} & \multicolumn{1}{r|}{\textbf{4.28}} \\
\cmidrule{2-8} & \boldmath{}\textbf{$L^1$}\unboldmath{} & \multicolumn{1}{r|}{\textbf{1.72}} & \multicolumn{1}{r|}{\textbf{2.79}} & \multicolumn{1}{r|}{\textbf{3.28}} & \multicolumn{1}{r|}{\textbf{3.72}} & \multicolumn{1}{r|}{\textbf{6.41}} & \multicolumn{1}{r|}{\textbf{4.59}} \\
\midrule
\multirow{3}[6]{*}{Formulation Weights } & $\nu$ & \multicolumn{6}{c|}{1} \\
\cmidrule{2-8} & $\kappa$ & \multicolumn{6}{c|}{$10^3$} \\
\cmidrule{2-8} & $\gamma$ & \multicolumn{5}{c|}{$10^3$} & \multicolumn{1}{r|}{1} \\
\midrule
\multirow{2}[4]{*}{NN Architecture} & Width/Depth & \multicolumn{6}{c|}{200/2} \\
\cmidrule{2-8} & \# trainable & \multicolumn{1}{r|}{82002} & \multicolumn{1}{r|}{82402} & \multicolumn{1}{r|}{82802} & \multicolumn{1}{r|}{83202} & \multicolumn{1}{r|}{83602} & \multicolumn{1}{r|}{85602} \\
\midrule
\multirow{2}[4]{*}{Data Sampling} & $\theta_0$ & \multicolumn{6}{c|}{0.3} \\
\cmidrule{2-8} & $\theta_1$ & \multicolumn{6}{c|}{0.3} \\
\midrule
\multirow{2}[4]{*}{Training} & Steps & \multicolumn{6}{c|}{$10^5$} \\
\cmidrule{2-8} & Learning Rate & \multicolumn{6}{c|}{$10^{-3}$} \\
\bottomrule
\end{tabular}%
\caption{\textbf{$q-\sigma$ formulation \eqref{qsigma_form} (hard Dirichlet B.C.+soft I.C.): } Relative error comparison for various dimensions.}
\label{tab:qsigma_error}%
\end{table}%
\begin{figure}[htbp]
\centering
\begin{subfigure}[t]{0.3\textwidth}
\centering
\includegraphics[width=1.2\textwidth]{qsigma/L2/reference_u_t_0.5.png}
\caption{Barenblatt reference solution }
\end{subfigure}
\hfill
\begin{subfigure}[t]{0.3\textwidth}
\centering
\includegraphics[width=1.2\textwidth]{qsigma/L2/prediction_u_t_0.5.png}
\caption{Learned solution slice}
\end{subfigure}
\hfill
\begin{subfigure}[t]{0.3\textwidth}
\centering
\includegraphics[width=1.2\textwidth]{qsigma/L2/u_prediction_error_t_0.5.png}
\caption{Learned solution error}
\end{subfigure}
\caption{\textbf{10D, $L^2-q-\sigma$ formulation \eqref{qsigma_full}:} Predicted solution slice $u(0.5,x,y,1.0,\cdots, 1.0)$ for $\mathbf{x}\in \Omega = [-6,6]^{10}$, $t= 0.5$. }
\label{fig:qsigma_l2}
\end{figure}
\begin{figure}
\centering
\begin{subfigure}[t]{0.47\textwidth}
\includegraphics[width = 1.2\textwidth]{qsigma/L2/prediction_q_t_0.5.png}
\caption{Learned $q$}
\end{subfigure}
\hfill
\begin{subfigure}[t]{0.47\textwidth}
\centering
\includegraphics[width = 1.2\textwidth]{qsigma/L2/prediction_sigma_t_0.5.png}
\caption{Learned $\displaystyle \sigma$}
\end{subfigure}
\begin{subfigure}[t]{0.47\textwidth}
\centering
\includegraphics[width = 1.2\textwidth]{qsigma/L2/prediction_q_xx_t_0.5.png}
\caption{Learned $-\Delta q$}
\end{subfigure}
\hfill
\begin{subfigure}[t]{0.47\textwidth}
\centering
\includegraphics[width = 1.2\textwidth]{qsigma/L2/prediction_sigma_t_t_0.5.png}
\caption{Learned $\displaystyle \partial_t \sigma$}
\end{subfigure}
\caption{\textbf{10D, $L^2-q-\sigma$ formulation \eqref{qsigma_full}:} predicted $q$, $\sigma$ and their partial derivatives.}
\label{fig:qsigma_PDE_l2}
\end{figure}
\begin{figure}[htbp]
\centering
\begin{subfigure}[t]{0.3\textwidth}
\centering
\includegraphics[width=1.2\textwidth]{qsigma/L1/reference_u_t_0.5.png}
\caption{Barenblatt reference solution }
\end{subfigure}
\hfill
\begin{subfigure}[t]{0.3\textwidth}
\centering
\includegraphics[width=1.2\textwidth]{qsigma/L1/prediction_u_t_0.5.png}
\caption{Learned solution slice}
\end{subfigure}
\hfill
\begin{subfigure}[t]{0.3\textwidth}
\centering
\includegraphics[width=1.2\textwidth]{qsigma/L1/u_prediction_error_t_0.5.png}
\caption{Learned solution error}
\end{subfigure}
\caption{\textbf{10D, $L^1-q-\sigma$ formulation \eqref{qsigma_full}:} Predicted solution slice $u(0.5,x,y,1.0,\cdots, 1.0)$ for $\mathbf{x}\in \Omega = [-6,6]^{10}$, $t= 0.5$. }
\label{fig:qsigma_l1}
\end{figure}
\begin{figure}
\centering
\begin{subfigure}[t]{0.47\textwidth}
\centering
\includegraphics[width = 1.2\textwidth]{qsigma/L1/prediction_q_t_0.5.png}
\caption{Learned $q$}
\end{subfigure}
\hfill
\begin{subfigure}[t]{0.47\textwidth}
\centering
\includegraphics[width = 1.2\textwidth]{qsigma/L1/prediction_sigma_t_0.5.png}
\caption{Learned $\displaystyle \sigma$}
\end{subfigure}
\begin{subfigure}[t]{0.47\textwidth}
\centering
\includegraphics[width = 1.2\textwidth]{qsigma/L1/prediction_q_xx_t_0.5.png}
\caption{Learned $-\Delta q$}
\end{subfigure}
\hfill
\begin{subfigure}[t]{0.47\textwidth}
\centering
\includegraphics[width = 1.2\textwidth]{qsigma/L1/prediction_sigma_t_t_0.5.png}
\caption{Learned $\displaystyle \partial_t \sigma$}
\end{subfigure}
\caption{\textbf{10D, $L^1-q-\sigma$ formulation \eqref{qsigma_full}:} predicted $q,\sigma$ and their partial derivatives.}
\label{fig:qsigma_PDE_l1}
\end{figure}
\begin{figure}
\centering
\begin{subfigure}[t]{0.45\textwidth}
\includegraphics[width =1.2\textwidth]{qsigma/5d_usq_comp.png}
\caption{\textbf{5D}}
\end{subfigure}
\hfill
\begin{subfigure}[t]{0.45\textwidth}
\includegraphics[width =1.2\textwidth]{qsigma/10d_usq_comp.png}
\caption{\textbf{10D}}
\end{subfigure}
\caption{Empirical $\int_Q U_2^2(t,\mathbf{x}) +\mathcal{L}_{q,\sigma}\big(u_{q,\sigma}(t,\mathbf{x};\theta_{q},\theta_{\sigma} )\big)$ v.s. training steps. }
\label{fig:qsigma_usq_comp}
\end{figure}
\subsection{Relaxed concave optimization problem}
Besides the strong formulation, in this section we derive and consider a series of optimization problems which can also be used to solve the QPME; they correspond to various weak formulations of the PDE.
We start by considering the {very weak solutions} to the QPME \eqref{QPME}, i.e. $u \in L^{1}(Q)$ satisfying
\begin{equation}
\int_{Q } -2\partial_t\psi u -\Delta \psi u^2 +2u_0 \partial_t \psi =0\quad
\end{equation}
for all test functions $\psi \in C^{2,1}(\bar{Q})$ which vanish on $\Sigma_T$ and at $t= T$. Essentially, a very weak solution is an integrable distributional solution. Unlike for strong solutions, no derivative of the solution is used in defining such solutions, so very weak solutions have much lower regularity requirements.
We also remark that while we focus on very weak solutions in this paper, there are different ways of defining generalized solutions of the QPME. A weak solution, for example, is defined to be a function $u$ with $u^2\in L_{loc}^1(0,T;\ W_{loc}^{1,1})$ which satisfies
$$\int_Q -2\partial_t\psi u +\nabla (u^2)\cdot\nabla \psi + 2u_0\partial_t \psi = 0$$
for all such test functions $\psi$. Clearly every weak solution is a very weak solution by definition; weak solutions require higher regularity of the solution.
The following theorem gives a characterization for very weak solutions to QPME \cite{brenier2020examples}.
\begin{theorem}[\cite{brenier2020examples}]\label{thm}
Any \textbf{very weak solution} $u$ to QPME can be recovered as
\begin{equation}\label{u_phi_thm}
u = \frac{\partial_t \phi^*}{1-\Delta \phi^*}
\end{equation}
where
\begin{equation}\label{phi_formulation}
\phi^* = {\argmax_{\phi\in B} J(u_0)} = \argmax_{\phi\in B} \int_Q \frac{-(\partial_t \phi)^2}{ 1-\Delta \phi} + 2u_0 \partial_t \phi
\end{equation}
with $B:= \{\phi \ |\ \phi(T,\mathbf{x}) =0 , \ 1-\Delta \phi\geq 0\}$. In addition, any solution $\phi^*$ satisfies $1- \Delta \phi^* \geq (\frac{t}{T})^{\frac{d}{d+2}}$.
\end{theorem}
While we will not repeat the proof here, let us mention that the proof starts with minimizing the Lyapunov (``entropy'') functional
\begin{equation}\label{Ljapunov}
\int_{Q} u^2(t,\mathbf{x})\ d\mathbf{x}\ dt
\end{equation}
among the very weak solutions $u$ of QPME. It can then be proved that the following formulations are equivalent, where
\begin{itemize}
\item $A: = \{u\in L^2(Q) \text{ is a very weak non-negative solution associated with } u_0\in L^2 (\Omega)\} $,
\item $ B: = \{\phi \mid \phi(T,\mathbf{x}) =0,\ 1-\Delta \phi\geq 0
\}$.
\end{itemize}
\begin{enumerate}
\item Original form
\begin{equation}
\begin{split}
&I(u_0) = \inf_{u \in A}\sup_{\phi\in B} \int_{Q } \left(u^2 -2\partial_t\phi u -\Delta \phi u^2 +2u_0 \partial_t \phi
\right)
\end{split}
\end{equation}
\item Flipping $\sup, \inf$
\begin{equation}\label{relaxed-form}
\begin{split}
&J(u_0) = \sup_{\phi\in B} \inf_{u\in A} \int_{Q } \left(u^2 -2\partial_t\phi u -\Delta \phi u^2 +2u_0 \partial_t \phi
\right)
\end{split}
\end{equation}
\item Point-wise minimization of \eqref{relaxed-form}.\\
\begin{equation}\label{phiformulation}
\begin{split}
&\tilde{J}(u_0) =\sup_{\phi\in B}\ \int_{Q} \left(\frac{- (\partial_t \phi)^2}{1-\Delta \phi} + 2u_0\partial_t \phi
\right)
\end{split}
\end{equation}
\item Let $q = \partial_t \phi$, $\sigma = 1-\Delta \phi$ in \eqref{phiformulation}
\begin{equation}\label{strong_terminal}
\begin{split}
&\hat{J} (u_0) = \sup_{q, \sigma} \int_Q \left(
\frac{-q^2}{\sigma} + 2 u_0 q
\right)\\
&\sigma \geq 0,\quad \sigma(T, \cdot) = 1,\quad \partial_t \sigma+ \Delta q =0.
\end{split}
\end{equation}
\end{enumerate}
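To see where the recovery formula \eqref{u_phi_thm} comes from, note that for fixed $\phi\in B$ the integrand of \eqref{relaxed-form} is a convex quadratic in $u$,
\begin{equation*}
(1-\Delta \phi)\, u^2 - 2\partial_t \phi\, u + 2u_0 \partial_t \phi,
\end{equation*}
which is minimized pointwise at $u = \frac{\partial_t \phi}{1-\Delta \phi}$ with minimum value $\frac{-(\partial_t \phi)^2}{1-\Delta \phi} + 2u_0\partial_t \phi$; this is exactly the integrand of \eqref{phiformulation}.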
More specifically, it is proved that
\begin{equation}\label{form_equivalency}
\int_{Q} u^2(t,\mathbf{x}) \ d\mathbf{x}\ dt = I(u_0) = J(u_0) = \tilde{J}(u_0) = \hat{J}(u_0).
\end{equation}
Theorem \ref{thm} shows that we can indirectly obtain very weak solutions to QPME by solving \eqref{phi_formulation}. We first obtain $\phi^{*}$, then obtain candidates for the very weak solution with \eqref{u_phi_thm}.
We can therefore consider the following loss function
\begin{equation}\label{phi_form}
\begin{split}
\textbf{$\boldsymbol{\phi}$ formulation}\quad \mathcal{L}_{\phi}(\phi) &:= -\int_{Q} \left(\frac{- (\partial_t \phi)^2}{1-\Delta \phi} + 2u_0\partial_t \phi
\right).
\end{split}
\end{equation}
It is not hard to see that if a smooth $\phi^*$ is a minimizer, then as long as the recovered solution satisfies the homogeneous boundary condition and the initial condition, $\frac{\partial_t \phi^*}{1-\Delta \phi^*}$ must be a solution to QPME. Moreover, in the case where $u\geq 0$, it has been proved that the solution to the QPME subject to $u_0\geq 0$ is unique.
However, it is worth noting that such a minimizer $\phi^*$ is not necessarily unique. Thus \eqref{phi_form} can be used to identify the unique solution to QPME, as long as the initial/boundary conditions are imposed, but more than one minimizer $\phi^*$ could theoretically exist \cite{brenier2020examples}.
Moreover, since \eqref{strong_terminal} is equivalent to \eqref{phiformulation}, one can also recover a candidate of very weak solution to QPME with $q^*$ and $\sigma^*$ by
\begin{equation}
u_{q,\sigma}^*: = \frac{q^*}{\sigma^*}
\end{equation}
with $q^*$ and $\sigma^*$ being maximizers of \eqref{strong_terminal}. Thus, we may also consider the loss function
\begin{equation}\label{qsigma_form}
\textbf{$\mathbf{q}-\boldsymbol{\sigma}$ formulation}\quad \mathcal{L}_{q,\sigma}(q,\sigma) = -\int_Q \left(\frac{-q^2}{\sigma} + 2 u_0 q\right).
\end{equation}
Similar to the discussion in Section \ref{sec:PINN}, we can also relax the initial/boundary conditions on the recovered solution $u$ obtained from the $\phi$ formulation and the $q,\sigma$ formulation to penalizations, adding the terms $\mathcal{L}_{B}$ and $\mathcal{L}_{I}$ as defined earlier, evaluated at the recovered solution $u$:
\begin{equation}\label{full_phi}
\begin{split}
\mathcal{L}_{\phi-\text{NN}}(\phi): = \kappa \mathcal{L}_{\phi}(\phi) + \mu\mathcal{L}_{B}(u) +
\nu\mathcal{L}_{I}(u),
\end{split}
\end{equation}
and
\begin{equation}\label{partial_q_sigma}
\mathcal{L}_{q,\sigma-\text{NN}}(q,\sigma): = \kappa \mathcal{L}_{q,\sigma}(q,\sigma) + \mu\mathcal{L}_{B}(u) +
\nu\mathcal{L}_{I}(u).
\end{equation}
However, it is worth pointing out that it is very difficult to impose the initial condition as a hard constraint on the solution ansatz for either formulation when the optimization problem is solved with neural networks, since only the intermediate functions $\phi,q,\sigma$ are parametrized. The boundary conditions, on the other hand, can be explicitly imposed by modifying the solution ansatz.
For the $q-\sigma$ formulation in particular, we should also note that the consistency between $q$ and $\sigma$,
\begin{equation}\label{qsigma_PDE}
\partial_t \sigma + \Delta q = 0
\end{equation}
needs to be imposed since they are essentially derivatives of the same function $\phi$. This condition can be imposed by minimizing the residual of equation \eqref{qsigma_PDE}
\begin{equation}
\mathcal{L}_{\partial_t \sigma, \Delta q} = \int_Q ( \partial_t \sigma + \Delta q )^2
\end{equation}
or in $L^1$ sense,
\begin{equation}\label{q_sigma_corelation}
\mathcal{L}_{\partial_t \sigma, \Delta q} = \int_Q | \partial_t \sigma + \Delta q |.
\end{equation}
Thus, $\mathcal{L}_{q,\sigma -\text{NN}}$ can be further modified as
\begin{equation}\label{qsigma_full}
\mathcal{L}_{q,\sigma-\text{NN}} = \kappa\mathcal{L}_{q,\sigma}(q,\sigma) + \mu\mathcal{L}_{B}(u) +
\nu\mathcal{L}_{I}(u) +
\gamma \mathcal{L}_{\partial_t \sigma, \Delta q}.
\end{equation}
Let us remark that the condition \eqref{qsigma_PDE} can also be imposed weakly following the Dirichlet principle; $\mathcal{L}_{\partial_t \sigma, \Delta q}$ can then be replaced by
\begin{equation}
\mathcal{L}_{\partial_t \sigma, \nabla q} :=\int_{[0,T]}\left(\frac{1}{2} \int_{\Omega} |\nabla q|^2\ dx + \frac{\lambda}{2} \left(\int_{\Omega} q\ dx\right)^2 + \int_{\Omega} \partial_t\sigma q\ dx \right)\ dt .
\end{equation}
While related results using $\mathcal{L}_{\partial_t \sigma, \nabla q}$ will not be presented in this paper, we remark that this formulation completely bypasses taking second-order derivatives of $q$, which means a weaker smoothness requirement on the neural network ansatz. However, adding such a term would make the optimization more complicated, especially in the high-dimensional cases. Since this term cannot be interpreted as a pointwise condition like \eqref{q_sigma_corelation}, it cannot benefit much from an efficient sampling scheme (see Section \ref{sec:num_setting}).
In addition, the introduction of the extra hyper-parameter $\lambda$ further increases the difficulty of parameter tuning. In our own experience, the training of the $q,\sigma$ formulation with the term $\mathcal{L}_{\partial_t \sigma, \nabla q}$ seems extremely challenging if not impossible, and the corresponding results are therefore not presented.
\section{Introduction}
Solving high-dimensional PDEs is a long-standing challenge in scientific computing. Standard mesh-based methods, such as the Finite Element Method (FEM) and the Finite Difference Method (FDM), suffer from the curse of dimensionality: in order to sustain the accuracy of the approximate solution to a high-dimensional PDE, an approximation space with exponentially large size must be used. The number of degrees of freedom associated with the approximation space is often proportional to the number of elements in the mesh, which usually scales exponentially in the dimension to achieve a suitable discretization of the domain. Therefore, for high-dimensional problems, mesh-based methods are impractical. Alternatively, semilinear parabolic PDEs can also be solved point-wise based on stochastic representations of the solutions using Monte Carlo algorithms \cite{henry2012counterparty,warin2018nesting,henry2019branching,warin2017variations}, but such approaches only apply to specific types of PDEs.
To circumvent the challenges of solving general high-dimensional (nonlinear) PDEs, many attempts have been made. One natural idea is to restrict to a solution ansatz, for example, using the tensor train (TT) format to approximate the solutions of high-dimensional PDEs \cite{richter2021solving,dektor2021rank,dektor2020dynamically,dolgov2021tensor,eigel2019non,boelens2018parallel}. While such methods are quite successful if the solution can be well represented by a tensor train, the representability is not guaranteed. Another natural and promising candidate for a PDE solution ansatz is the artificial neural network, thanks to the rich expressiveness of neural networks in parametrizing high-dimensional functions \cite{barron1993universal}. Theoretical results are also available to justify the approximability of PDE solutions by neural networks without the curse of dimensionality, e.g., \cite{jentzen2018proof,lu2021priori}.
Many recent works have been devoted to various approaches of using neural networks to solve high-dimensional PDEs. Typically, such methods first identify a functional optimization problem corresponding to the PDE
$$\underbrace{u}_{\text{PDE solution}} = \underbrace{\argmin_f\mathcal{C}(f)}_{\text{PDE-inspired optimization problem}}.$$
Then one could take a neural network as an ansatz for the minimizer $u$ and optimize its parameters using stochastic gradient type approaches.
One well known method of this kind is the physics informed neural network (PINN) \cite{raissi2019physics}, which takes the loss function to be directly the PDE residual.
One drawback of PINN is that, to compute the loss function, derivatives or higher-order derivatives of the neural network need to be computed throughout the training process. However, the generality and simplicity of this framework still make it an easy choice when it comes to high-dimensional PDEs.
Other neural network based methods to solve PDEs include the Deep Ritz Method \cite{yu2018deep} and the Deep BSDE method \cite{han2018solving,han2017deep} for semilinear parabolic PDEs, which utilize different formulations to turn high-dimensional PDE problems into optimization problems over the neural network parameters.
More recent work \cite{zang2020weak} adopts a weak variational formulation for linear equations to solve such PDEs using neural networks; however, it is generically unclear how to extend such techniques to nonlinear ones.
In this paper, we make an attempt along the direction of utilizing weak formulations for solving nonlinear PDEs in high dimension. In particular, we consider the high-dimensional quadratic porous medium equation (QPME) (Section \ref{QPME_intro}). Several variational formulations of the QPME are proposed to solve the PDE (Section \ref{sec:formulation}); they allow solutions in a very weak sense and can be sought with deep learning techniques.
In addition, these formulations are further compared with the PINN formulation. In Section \ref{sec:nn}, a more detailed treatment of the neural networks for solving the optimization problems is presented. Numerical results are then provided to verify and compare the effectiveness of the proposed methods in Section \ref{sec:nuemrical} and Section \ref{sec:nuemrical_waiting}.
\section{Preliminaries}\label{QPME_intro}
We consider the porous medium equation (PME)
$$\partial_t u = \frac{1}{m}\Delta u^m,\quad (t,\mathbf{x}) \in Q.$$
PME is a degenerate parabolic equation. It is only parabolic where $u>0$.
When $m$ is taken to be $2$, the quadratic porous medium equation (QPME or Boussinesq's equation) reads
\begin{equation}\label{QPME}
\begin{split}
\partial_t u = \frac{1}{2}\Delta u^2 = \text{div}\left(u \nabla u \right) = \frac{1}{2} u\Delta u + \frac{1}{2}|\nabla u|^2 ,\quad (t,\mathbf{x}) \in Q,
\end{split}
\end{equation}
where $Q := [0,T]\times \Omega$ and $\Omega$ stands for a bounded domain in $\mathbb{R}^d$. $\Delta= \Delta_x$ represents the Laplace operator acting on the space variables.
The equation is mainly used to describe processes involving fluid flow, heat transfer, or diffusion. In particular, it can be used to study the flow of an isentropic gas through a porous medium \cite{muskat1938flow,leibenzon1930motion}. In this case, $u(t,\mathbf{x})\in \mathbb{R}$ is a scalar function denoting the density of the gas ($u^{2}$ is roughly the pressure, and $u\nabla u$ stands for the flux). Physical constraints may apply such that $u> 0$. The exponent $2$ here relates to the thermodynamic character of the gas of interest, namely a linear dependency in the expansion in terms of the pressure.
A main feature of the PME is its ``finite propagation'' speed, in contrast with the infinite propagation speed of usual diffusion equations. Essentially, the free boundary that separates the region where the solution is positive (i.e. where ``there is gas'', according to the standard interpretation of $u$ as a gas density) from the ``empty region'' where $u = 0$ moves as time passes:
$$
\Gamma = \partial \mathcal{P}_u \cap Q
$$
where $\mathcal{P}_u := \{(t,\mathbf{x})\in Q\ |\ u(t,\mathbf{x})>0 \}$ denotes the set where $u$ is positive.
$\Gamma$ is also sometimes referred to as the moving boundary, the propagation front, or the interface. While a rather comprehensive theoretical analysis of this PDE is provided in \cite{vazquez2007porous}, exact solutions for general initial/boundary conditions can usually not be obtained. Numerical schemes thus must be applied to obtain approximate solutions. Most previous numerical studies of the PME focus on dealing with the moving free boundaries of the solutions. Adaptive moving mesh schemes were proposed and coupled with mesh-based methods such as the finite element method (FEM) to obtain accurate yet efficient numerical solutions to the PME \cite{ngo2017study}. However, such methods cannot be used to solve the high-dimensional QPME due to the curse of dimensionality. The only exception, as far as we know, is \cite{shukla1996use}, in which supervised learning was conducted to learn the correspondence between the physical parameters in the PDE and the one-dimensional solutions at certain evaluation points $x$; the learning of the global solution is not considered by the authors.
While the PME is mainly used in modeling for low physical dimensions ($d = 2$ or $3$), in this work we use it as a prototypical degenerate nonlinear equation in high dimensions to test numerical PDE solvers based on neural networks. High-dimensional diffusion might be used for certain machine learning tasks such as the analysis of high-dimensional point cloud data, which we leave for future investigations.
\section{Conclusion}
In this paper, we explored different variational forms for solving the high-dimensional QPME with neural networks. Specifically, three formulations were considered. For the PINN formulation, the solution is directly parametrized and the PDE residual is minimized. A theoretical analysis is carried out to show that the convergence of the PINN loss guarantees convergence to the exact solution in the $L^1$ sense. Moreover, this analysis also suggests the use of the $L^1$ norm to quantify the residual and approximation error.
In addition, inspired by the work \cite{brenier2020examples}, a $\phi$ formulation and a $q-\sigma$ formulation are further presented and used to solve the QPME in a very weak sense. Theoretically, these formulations can identify solutions with less regularity. All formulations are then tested with the Barenblatt solution in low and high dimensions. Experiments have shown that the $\phi$ formulation and the $q,\sigma$ formulation can provide approximate solutions with a similar level of accuracy compared with PINN in low-dimensional cases, but the optimization aspect continues to pose challenges in high-dimensional cases. A two-dimensional example of the QPME that exhibits the waiting-time phenomenon is also presented to show the capability of deep learning based methods in identifying such solution features.
Other aspects of the discussion toward solving the QPME with deep learning include the hard and soft imposition of certain conditions on the solutions in all formulations, such as initial conditions and boundary conditions. Additionally, an efficient sampling scheme is proposed aiming at a faster convergence toward the desired solution, especially in high-dimensional cases. These treatments can in principle also be applied in other scenarios where PDE solutions are parametrized with neural networks.
While such efforts all contribute to a more efficient implementation of solving high-dimensional QPMEs, we must admit that training success is overwhelmed by the large number of hyper-parameters. Moreover, for practical applications, neural networks are trained using stochastic gradient descent type schemes, which means one must accept a significant and unavoidable uncertainty about optimization success.
An efficient strategy for making choices of hyper-parameters could potentially be an interesting direction for future investigations.
More broadly, whether a similar variational form could be derived for general $m$ of porous medium equation is also an open question.
\section{Numerical example: waiting-time phenomena}\label{sec:nuemrical_waiting}
In this section, we further consider the following IBVP
\begin{equation}\label{numerical_waiting}
\begin{split}
&\partial_t u = \frac{1}{2}\Delta u^2\quad (t,\mathbf{x})\in Q = [0,1]\times\Omega,\\
&u(0,\mathbf{x}) = u_0(\mathbf{x}) =\begin{cases}
\cos(|\mathbf{x}|), & |\mathbf{x}|\leq \frac{\pi}{2} \\
0, & \text{elsewhere}
\end{cases}\\
&u(t,\mathbf{x})|_{\partial \Omega} =0.
\end{split}
\end{equation}
where $\Omega = [-4,4]^d$.
The general exact solution to \eqref{numerical_waiting} can hardly be derived in closed form. When $d=2$, the reference solution is instead taken to be the numerical solution obtained with a moving mesh finite element method following \cite{wells2004moving}. In particular, the mesh is advanced forward in time using a forward Euler time-stepping scheme. These mesh-based results are then compared with the ones obtained with the deep learning framework under various formulations.
For higher dimensions, mesh-based solvers in general suffer from the curse of dimensionality, and the moving mesh method would also be more challenging to design. Therefore, for comparison purposes, we only present the results for $d=2$, while noting that the higher-dimensional cases can also be handled by the neural network based algorithms.
We also note that the solution to the PME with this type of initial condition exhibits a waiting-time phenomenon \cite{ngo2017study}. In fact, the velocity of the free boundary of the QPME is given by Darcy's law \cite{shmarev2005interfaces}, i.e.
\begin{equation}\label{darcy}
\Gamma'(t) = \lim_{\mathbf{x}\to \Gamma(t)^-}\nabla (\frac{u^2}{2}),
\end{equation}
where the limit is taken from the interior of the support.
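For the initial condition in \eqref{numerical_waiting}, this limit can be evaluated explicitly at $t=0$: inside the support,
\begin{equation*}
\nabla \Big(\frac{u_0^2}{2}\Big) = \nabla\Big( \frac{\cos^2 |\mathbf{x}|}{2}\Big) = -\cos(|\mathbf{x}|)\sin(|\mathbf{x}|)\,\frac{\mathbf{x}}{|\mathbf{x}|},
\end{equation*}
which tends to $\mathbf{0}$ as $|\mathbf{x}|\to \frac{\pi}{2}^{-}$.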
Thus, the free boundary of the solution to \eqref{numerical_waiting} should not move until a finite amount of time has elapsed, since initially $\Gamma'(0) = \nabla (\frac{u^2_0}{2})$ vanishes at the free boundary $\Gamma(0): |\mathbf{x}| = \frac{\pi}{2}$. This waiting phenomenon can be observed from the solutions obtained with the finite element method, as shown in Figure \ref{fig: PINN_waiting}, where the dashed vertical lines indicate the exact initial location of the free boundary. In Figure \ref{waiting:PINN_a}, a series of snapshots of the solution for $t\in[0,0.1]$ is plotted, while in Figure \ref{waiting:PINN_b}, solution snapshots for a broader range of time are plotted. As one may observe, the free boundary of the solution barely moves during the entire time interval $t\in [0,0.1]$ and only starts to move by the time $t= 0.2$. This phenomenon can also be accurately captured by a solution obtained following the PINN formulation \eqref{PINN_full}. Specifically, the solution obtained with the PINN formulation is presented in comparison with the ones obtained with the moving mesh FEM in Figure \ref{fig: PINN_waiting}; the solution slices essentially overlap one another. In parallel, problem \eqref{numerical_waiting} is also solved with the $\phi$ formulation \eqref{phi_form}. The comparison of the resulting solution with the FEM solution is presented in Figure \ref{fig: phi_waiting}, which verifies the effectiveness of the neural network based solutions using the $\phi$ formulation. In these numerical tests, the choices of the hyper-parameters are taken to be the same as in Table \ref{tab:PINN_error} and Table \ref{tab:phi_error}, respectively.
\begin{figure}[htbp]
\centering
\begin{subfigure}[b]{\textwidth}
\includegraphics[width = \textwidth]{PINN_formulation/waiting/u_comparison_different_time_0.1.png}
\caption{Snapshot solution slices for $t\in [0,0.1]$}
\label{waiting:PINN_a}
\end{subfigure}
\begin{subfigure}[b]{\textwidth}
\includegraphics[width = \textwidth]{PINN_formulation/waiting/u_comparison_different_time_1.0.png}
\caption{Snapshot solution slices for $t\in [0,1.0]$}
\label{waiting:PINN_b}
\end{subfigure}
\caption{\textbf{2d, $L^2-$ PINN formulation, waiting-time phenomena \eqref{numerical_waiting}:} snapshots of solution cross-section $u(t,x,0)$ at $y=0$. Green: reference solutions obtained with moving mesh FEM ($DOF= 901$); red: predicted solutions obtained with PINN formulation; blue: initial condition.}
\label{fig: PINN_waiting}
\end{figure}
\begin{figure}[htbp]
\centering
\begin{subfigure}[b]{\textwidth}
\includegraphics[width = \textwidth]{Phi_formulation/waiting/u_comparison_different_time_0.1.png}
\caption{Snapshot solution slices for $t\in [0,0.1]$}
\label{waiting:phi_a}
\end{subfigure}
\begin{subfigure}[b]{\textwidth}
\includegraphics[width = \textwidth]{Phi_formulation/waiting/u_comparison_different_time.png}
\caption{Snapshot solution slices for $t\in [0,1.0]$}
\label{waiting:phi_b}
\end{subfigure}
\caption{\textbf{2d, $L^2-\phi$ formulation, waiting-time phenomena \eqref{numerical_waiting}:} snapshots of solution cross-section $u(t,x,0)$ at $y=0$. Green: reference solutions obtained with moving mesh FEM ($DOF= 901$); red: predicted solutions obtained with $\phi$ formulation; blue: initial condition.}
\label{fig: phi_waiting}
\end{figure}
\section{Solving high dimensional QPME with neural network ansatz}\label{sec:nn}
Neural networks are a class of functions that have a certain layered structure; for example, the feed-forward fully connected neural network is defined as
\begin{equation}\label{FFNN}
\mathcal{NN}(\mathbf{x}; \theta) := W_n g(\cdots g(W_2 g(W_1\mathbf{x}+ b_1 )+b_2)\cdots) +b_n.
\end{equation}
In this case, each layer of the network is a composition of a linear transformation and a nonlinear function $g$ acting component-wise. Here, $\theta := [W_1, W_2,\cdots, W_n, b_1, b_2,\cdots, b_n]$ denotes the trainable parameters.
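For concreteness, a network of the form \eqref{FFNN} can be implemented in a few lines of PyTorch. The following is a minimal sketch; the depth, width, and activation shown are placeholder choices rather than the settings used in our experiments.
\begin{verbatim}
import torch
import torch.nn as nn

class FeedForwardNN(nn.Module):
    """Feed-forward fully connected network NN(t, x; theta) of the
    form (FFNN), mapping (t, x) in R^{1+d} to a scalar output."""
    def __init__(self, d, width=64, depth=4, activation=nn.Tanh):
        super().__init__()
        layers = [nn.Linear(1 + d, width), activation()]
        for _ in range(depth - 1):
            layers += [nn.Linear(width, width), activation()]
        layers.append(nn.Linear(width, 1))
        self.net = nn.Sequential(*layers)

    def forward(self, t, x):
        # concatenate time and space coordinates as the network input
        return self.net(torch.cat([t, x], dim=-1))
\end{verbatim}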
The idea of neural network based numerical solvers for PDEs is to utilize such a neural network $\mathcal{NN}$ to approximate the function of interest, say $u$. This is usually achieved by solving an optimization problem
\begin{equation}
u = \argmin_f \mathcal{C}(f),
\end{equation}
where $\mathcal{C}$ is some suitable objective function. Then one could take a neural network as an ansatz and minimize $\mathcal{C}$ by tuning its parameters $\theta$ to get an approximate solution $\mathcal{NN}\left(\cdot; {\theta^*}\right)$ where
\begin{equation}
\theta^* = \argmin_\theta \mathcal{C}\left(\mathcal{NN}\left(\cdot; {\theta}\right)\right).
\end{equation}
The process of optimization is also referred to as ``training'', using the terminology from machine learning. The objective function $\mathcal{C}$ is often referred to as the loss function.
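Schematically, the training amounts to a standard stochastic-gradient loop. A minimal sketch follows, assuming the Adam optimizer and a callable \texttt{loss\_fn} that returns a Monte Carlo estimate of $\mathcal{C}$; both are illustrative choices.
\begin{verbatim}
import torch

def train(loss_fn, model, n_steps=10000, lr=1e-3):
    """Minimize an empirical loss over the trainable parameters theta."""
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    for step in range(n_steps):
        optimizer.zero_grad()
        loss = loss_fn(model)   # Monte Carlo estimate of C(NN(.; theta))
        loss.backward()         # gradient with respect to theta
        optimizer.step()
    return model
\end{verbatim}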
In Section \ref{sec:formulation}, we have derived a few loss functions which can be used to solve the QPME. In this section, we provide further details on how to solve the aforementioned optimization problems with neural networks, especially on how initial and boundary conditions can be imposed on the neural network as a solution ansatz. In particular, the following conditions are generally considered for a solution ansatz to the QPME:
1) the initial condition, 2) the boundary condition, 3) the physical constraint, i.e., $u\geq 0$. In addition, one could consider imposing conditions like $1 - \Delta \phi \geq (\frac{t}{T})^{\frac{d}{d+2}}$ to narrow down the search space, since by Theorem \ref{thm} it is satisfied by the true solution.
We will slightly modify existing neural network structures as needed to satisfy the constraints.
In this paper, we take the architecture of the neural networks to be feed-forward fully-connected as defined in \eqref{FFNN}, while other architectures could also be considered.
\subsection{PINN formulation}
To solve the QPME with the PINN formulation \eqref{PINN}, we first notice that the argmin of \eqref{PINN_full} is exactly a solution to the QPME. The solution itself can thus be directly parametrized with a neural network. In particular, to further impose the aforementioned conditions on the solution ansatz, we start with a neural network $\mathcal{NN}_{u}(t,\mathbf{x};\theta_u)$ with both time $t$ and spatial coordinates $\mathbf{x}$ as its inputs, and denote the collection of trainable parameters by $\theta_u$. Moreover, since we need to compute the PDE residual, which
includes the computation of second-order derivatives of the solution ansatz, $\mathcal{NN}_{u}(t,\mathbf{x};\theta_u)$ must be at least twice differentiable. We thus require the activation function $g$ to be smooth, such as the $\tanh$ or $\text{softplus}$ function.
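The residual itself can be assembled with automatic differentiation. The sketch below is illustrative rather than our exact implementation; it computes $\partial_t u - \frac{1}{2}\Delta u^2$ at a batch of sampled points. Note that the Laplacian requires one extra backward pass per spatial dimension, a nonnegligible cost when $d$ is large.
\begin{verbatim}
import torch

def pinn_residual(u_fn, t, x):
    """Residual of the QPME, r = du/dt - 0.5 * Laplacian(u^2),
    for batches t of shape (n, 1) and x of shape (n, d)."""
    t = t.requires_grad_(True)
    x = x.requires_grad_(True)
    u = u_fn(t, x)                                    # shape (n, 1)
    u_t = torch.autograd.grad(u.sum(), t, create_graph=True)[0]
    v = 0.5 * u**2
    v_x = torch.autograd.grad(v.sum(), x, create_graph=True)[0]
    lap = 0.0
    for i in range(x.shape[-1]):  # one backward pass per dimension
        lap = lap + torch.autograd.grad(v_x[:, i].sum(), x,
                                        create_graph=True)[0][:, i:i+1]
    return u_t - lap
\end{verbatim}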
To impose the \textbf{initial condition \eqref{IC} as a hard constraint}, we can parametrize the solution $u(t,\mathbf{x})$ as:
$$u(t,\mathbf{x};\theta_u) = u_0(\mathbf{x}) +t \mathcal{NN}_u(t,\mathbf{x};\theta_u).$$
However, in this case, the physical constraint ($u \geq 0$) cannot easily be imposed explicitly. The positivity of the solution can only be reached through minimizing PDE residual.
In the case where the \textbf{initial condition is imposed softly}, the term $\mathcal{L}_I$ defined as in \eqref{weak_initial} will be added as a part of the loss $\mathcal{L}_{\text{PINN}}$ and minimized through training. Meanwhile, the physical constraint of solution can be imposed by parametrization:
$$ u(t,\mathbf{x}; \theta_u) = \text{softplus}\left( \mathcal{NN}_u \left(t,\mathbf{x};\theta_u\right)\right)$$
where the softplus function is given by
$$\text{softplus}(x) = \ln(1+e^x)$$
which guarantees that the solution ansatz is positive.
As for the boundary condition,
the \textbf{homogeneous Dirichlet boundary condition \eqref{BC}} can be imposed as a hard constraint. We take advantage of the function
\begin{equation}\label{f_dc}
f_{dc}(\mathbf{x}) := \prod_{i=1}^{d} \frac{(a_i -x_i)(a_i+x_i)}{a_i^2}
\end{equation}
so that $f_{dc}(\mathbf{x}) = 0$ for any $\mathbf{x}\in\partial \Omega$. Moreover, we notice that $f_{dc}(\mathbf{0}) = 1 $ and $0\leq f_{dc}(\mathbf{x})\leq 1$ for all $\mathbf{x}\in \Omega$.
The solution ansatz $u(t,\mathbf{x})$ can then be further modified by multiplying by $f_{dc}$ to satisfy the boundary condition:
\begin{equation}
\begin{split}
\textbf{hard I.C. + hard B.C.}\quad u(t,\mathbf{x};\theta_u) = u_0(\mathbf{x})+tf_{dc}(x)\ \mathcal{NN}_u \left(t,\mathbf{x};\theta_u\right)\\
\end{split}
\end{equation}
\begin{equation}\label{PINN_with_condition}
\textbf{soft I.C. + hard B.C.}\quad u(t,\mathbf{x};\theta_u) = f_{dc}(\mathbf{x})\ \text{softplus} \left(\mathcal{NN}_u \left(t,\mathbf{x};\theta_u\right)\right)
\end{equation}
assuming the homogeneous Dirichlet B.C. is satisfied by $u_0$.
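A sketch of these two ansatz wrappers, reusing the network class above, could read as follows; the helper names are ours, and \texttt{u0} is assumed to be a callable evaluating the initial condition.
\begin{verbatim}
import torch

def f_dc(x, a):
    """Boundary factor (f_dc) on Omega = prod_i [-a_i, a_i]:
    vanishes on the boundary and equals 1 at the origin."""
    return torch.prod((a - x) * (a + x) / a**2, dim=-1, keepdim=True)

def u_hard_ic(net, u0, a, t, x):
    """Hard I.C. + hard B.C.: u = u0(x) + t * f_dc(x) * NN(t, x)."""
    return u0(x) + t * f_dc(x, a) * net(t, x)

def u_soft_ic(net, a, t, x):
    """Soft I.C. + hard B.C.: u = f_dc(x) * softplus(NN(t, x))."""
    return f_dc(x, a) * torch.nn.functional.softplus(net(t, x))
\end{verbatim}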
The benefit of the PINN formulation is that the convergence of the training loss $\mathcal{L}_{\text{PINN}}$ guarantees an accurate solution, which is justified in \eqref{convergence_L1} and \eqref{convergence_L2}.
On the other hand, the PINN formulation also has its own limitation in that it only allows strong solutions. Solutions with less regularity cannot be identified with this formulation.
\subsection{\texorpdfstring{$\phi$}{} formulation}
To solve QPME following the $\phi$ formulation, we need to parametrize $\phi(t,\mathbf{x})$ in \eqref{phi_form} instead of the solution $u$ directly.
When computing $\mathcal{L}_{\phi}$ as in \eqref{phi_form}, we also need the ansatz of $\phi(t,\mathbf{x})$ to be at least second-order differentiable. Note that this is a much weaker assumption on the solution ansatz of $u$ compared with the PINN. In particular, no assumption is needed on the smoothness of $u$ directly. We simply take a neural network $\mathcal{NN}_{\phi} (t,\mathbf{x};\theta_{\phi})$ with smooth activation function as its solution ansatz.
We then note that the minimizers $\phi^*$ to \eqref{phi_form} must also satisfy certain conditions in order to obtain reasonable solutions.
As suggested by Theorem \ref{thm}, we would like to require $\phi$ to vanish at $t = T$. We thus let
\begin{equation}
\phi(t,\mathbf{x};\theta_{\phi}) = (T-t)\mathcal{NN}_{\phi}(t,\mathbf{x};\theta_{\phi}).
\end{equation}
For the recovered solution $u_{\phi}$
\begin{equation}\label{u_phi}
u_{\phi}:= \frac{\partial_t \phi }{1-\Delta \phi},
\end{equation}
unlike in the PINN formulation, the solution to the QPME is not directly parametrized; thus it is not easy to impose the initial condition as a hard constraint. Instead, we enforce the constraint softly, relying on the penalty term $\mathcal{L}_{I}$.
The homogeneous Dirichlet boundary condition, on the other hand, can be softly enforced with the term $\mathcal{L}_B$ or enforced as a hard constraint by modifying the neural network. Essentially, we only need
$$\partial_t \phi|_{\partial \Omega} = 0,$$
which can be achieved using the ansatz
\begin{equation}\label{phi_condition}
\textbf{soft I.C. + hard B.C.} \quad \phi (t,\mathbf{x};\theta_{\phi}) = (T-t) f_{dc}(\mathbf{x}) \mathcal{NN}_{\phi} (t,\mathbf{x};\theta_{\phi}),
\end{equation}
where $f_{dc}(\mathbf{x})$ is defined as in \eqref{f_dc}.
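In code, the ansatz \eqref{phi_condition} and the recovery \eqref{u_phi} may be sketched as follows; this is again illustrative, with \texttt{f\_dc} the boundary factor from the earlier sketch.
\begin{verbatim}
import torch

def phi_ansatz(net, a, T, t, x):
    """Ansatz (phi_condition): phi = (T - t) * f_dc(x) * NN(t, x)."""
    return (T - t) * f_dc(x, a) * net(t, x)

def u_from_phi(phi_fn, t, x):
    """Recovered solution u_phi = dphi/dt / (1 - Laplacian(phi))."""
    t = t.requires_grad_(True)
    x = x.requires_grad_(True)
    phi = phi_fn(t, x)
    phi_t = torch.autograd.grad(phi.sum(), t, create_graph=True)[0]
    phi_x = torch.autograd.grad(phi.sum(), x, create_graph=True)[0]
    lap = 0.0
    for i in range(x.shape[-1]):
        lap = lap + torch.autograd.grad(phi_x[:, i].sum(), x,
                                        create_graph=True)[0][:, i:i+1]
    return phi_t / (1.0 - lap)
\end{verbatim}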
Additionally, while the recovered solution is desired to satisfy $u_{\phi}\geq 0$, this condition cannot be easily imposed by simply modifying the solution ansatz.
Compared with the PINN formulation, the $\phi$ formulation allows solutions in a very weak sense; it can potentially find solutions with less regularity, as the smoothness requirement is not directly applied to $u_{\phi}$. However, as described above, a few conditions on $\phi$ cannot be easily enforced. In addition to the positivity of $u_{\phi}$, a condition like
\begin{equation}\label{growing_condition}
1- \Delta \phi \geq (\frac{t}{T})^{\frac{d}{d+2}},
\end{equation}
is difficult to enforce as a hard constraint as well. While \eqref{growing_condition} is preferable, as it can narrow down the search space for $\phi$ (since we know the PDE solution satisfies it), it is not necessary. However, the fact that $u_{\phi}$ is not confined to be a positive function can potentially cause the training of $\mathcal{L}_{\phi-\text{NN}}$ to converge to unphysical solutions.
\subsection{\texorpdfstring{$q$}{}-\texorpdfstring{$\sigma$}{} formulation}
The $q-\sigma$ formulation \eqref{qsigma_form} is derived from the $\phi$ formulation, and thus also inherits a few conditions from $\phi$.
We first notice that no computations of derivatives are needed when computing $\mathcal{L}_{q,\sigma}$. However, when computing $\mathcal{L}_{\partial_t\sigma, \Delta q}$, first-order derivatives of $\sigma$ and second-order derivatives of $q$ are required. We thus start by parametrizing $q(t,\mathbf{x})$ and $\sigma(t,\mathbf{x})$ with neural networks $\mathcal{NN}_q(t,\mathbf{x};\theta_{q})$ and $\mathcal{NN}_{\sigma}(t,\mathbf{x};\theta_{\sigma})$, which should be at least twice and once differentiable, respectively.
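The consistency residual \eqref{qsigma_PDE} can be evaluated with the same automatic-differentiation pattern as before; a minimal sketch, with \texttt{q\_fn} and \texttt{sigma\_fn} denoting the two parametrized functions, is:
\begin{verbatim}
import torch

def consistency_residual(q_fn, sigma_fn, t, x):
    """Residual of (qsigma_PDE): d(sigma)/dt + Laplacian(q)."""
    t = t.requires_grad_(True)
    x = x.requires_grad_(True)
    sigma_t = torch.autograd.grad(sigma_fn(t, x).sum(), t,
                                  create_graph=True)[0]
    q_x = torch.autograd.grad(q_fn(t, x).sum(), x, create_graph=True)[0]
    lap_q = 0.0
    for i in range(x.shape[-1]):
        lap_q = lap_q + torch.autograd.grad(q_x[:, i].sum(), x,
                                            create_graph=True)[0][:, i:i+1]
    return sigma_t + lap_q
\end{verbatim}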
To ensure the positivity of $\sigma$,
as suggested in \eqref{strong_terminal}, we further parametrize $\sigma$ by
$$\sigma (t,\mathbf{x} ;\theta_{\sigma})= \text{softplus}\left( \mathcal{NN}_{\sigma}\left(t,\mathbf{x} ;\theta_{\sigma}\right)\right).$$
To guarantee that
$$\sigma(T,\cdot) = 1,$$
we modify the above and let
\begin{equation}\label{sigma_condition}
\sigma(t,\mathbf{x};\theta_{\sigma}) = \text{softplus}\big(\ln\left(e-1\right)+\left(T- t\right) \mathcal{NN}_{\sigma}\left(t,\mathbf{x};\theta_{\sigma}\right) \big).
\end{equation}
Alternatively, if we also impose the condition (to narrow down the search space)
\begin{equation}\label{sigma_selection}
\sigma \geq (\frac{t}{T})^{\frac{d}{d+2}},
\end{equation}
one can also parametrize $\sigma$ as
\begin{equation}
\sigma (t,\mathbf{x};\theta_{\sigma}) = \left(\frac{t}{T}\right)^{\frac{d}{d+2}}+(T- t)\, \text{softplus}\left(\mathcal{NN}\left(t,\mathbf{x};\theta_{\sigma}\right) \right).
\end{equation}
For the recovered solution
$$u_{q,\sigma} = \frac{q}{\sigma},$$ as in the $\phi$ formulation, the initial condition can only be softly imposed. The homogeneous Dirichlet boundary condition can be enforced as a hard constraint, as long as
$$q|_{\partial \Omega} = 0.$$ We thus let
\begin{equation}
q(t,\mathbf{x};\theta_{q}) = f_{dc}(\mathbf{x})\mathcal{NN}_{q}(t,\mathbf{x};\theta_{q}).
\end{equation}
Moreover, to ensure $u_{q,\sigma} \geq 0$, we further let
\begin{equation}\label{q_with_conditions}
\textbf{soft I.C. + hard B.C. }\quad q(t,\mathbf{x};\theta_{q}) = f_{dc}(\mathbf{x})\text{softplus}\left(\mathcal{NN}_{q}\left(t,\mathbf{x};\theta_{q}\right)\right).
\end{equation}
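These parametrizations translate directly into code; the sketch below is illustrative and reuses \texttt{f\_dc} from the earlier sketch, implementing \eqref{sigma_condition} and \eqref{q_with_conditions}.
\begin{verbatim}
import math
import torch
import torch.nn.functional as F

def sigma_ansatz(net_sigma, T, t, x):
    """Ansatz (sigma_condition): positive, with sigma(T, .) = 1
    since softplus(log(e - 1)) = 1."""
    return F.softplus(math.log(math.e - 1.0) + (T - t) * net_sigma(t, x))

def q_ansatz(net_q, a, t, x):
    """Ansatz (q_with_conditions): nonnegative q vanishing on the
    boundary of Omega."""
    return f_dc(x, a) * F.softplus(net_q(t, x))

def u_from_q_sigma(net_q, net_sigma, a, T, t, x):
    """Recovered solution u = q / sigma."""
    return q_ansatz(net_q, a, t, x) / sigma_ansatz(net_sigma, T, t, x)
\end{verbatim}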
Similar to the $\phi$ formulation, the $q-\sigma$ formulation allows solutions with less regularity. However, two neural networks are needed to parametrize a solution to the QPME, which could potentially be more challenging to train.
\subsection{Empirical loss and training data sampling}\label{empirical_loss}
\newcommand{\mathrm{P}}{\mathrm{P}}
\newcommand{\mathcal{L}_{\text{PINN}}}{\mathcal{L}_{\text{PINN}}}
\newcommand{\mathbf{X}}{\mathbf{X}}
To solve the QPME with the aforementioned formulations \eqref{PINN_full}, \eqref{full_phi} and \eqref{qsigma_full}, we need to compute high-dimensional integrals of the neural network or its derivatives to evaluate the loss functions.
In practice, Monte Carlo methods are usually used to approximate those high-dimensional integrals. The approximate solutions are then obtained by minimizing the surrogate empirical loss functions. Taking the PINN formulation as an example, let $\mathrm{P}_{\Omega}$ be the uniform probability distribution over the spatial domain $\Omega$ and let $\{\mathbf{X}_j\}_{j=1}^{n}$ be an i.i.d. sequence of random variables distributed according to $\mathrm{P}_{\Omega}$. In parallel, let $\mathrm{P}_{[0,T]}$ be the uniform probability distribution over the time interval $[0,T]$ and let $\{T_j\}_{j=1}^{n}$ be an i.i.d. sequence of random variables distributed according to $\mathrm{P}_{[0,T]}$. Define the empirical loss $\mathcal{L}_{\text{PINN}}^{n}$ by setting
\begin{equation}\label{empirical_PINN}
\mathcal{L}_{\text{PINN}}^{n} = \frac{ \kappa}{n}\sum_{j=1}^{n}\left( \partial _t u(T_j,\mathbf{X}_j) -\frac{1}{2} \Delta u(T_j,\mathbf{X}_j)\right)^2 + \frac{\nu}{n} \sum_{j=1}^{n}\left(u \left(0,\mathbf{X}_j\right) - u_{0}\left(\mathbf{X}_j\right)\right)^2
\end{equation}
for the case where only the I.C. is imposed softly and the loss-measuring norm is taken to be $L^2$. Notice that all terms are scaled by $\frac{1}{|\Omega|}$, which does not change the minimizer of the problem but can effectively avoid numerical blowup in evaluating the loss during training. Similarly, $\mathcal{L}_{B}$ can also be approximated with points uniformly sampled from $\partial \Omega$ when needed. We refer to such sampled data as training data.
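A Monte Carlo estimate such as \eqref{empirical_PINN} is straightforward to assemble. The following sketch is illustrative, with \texttt{Omega\_a} the vector of half-widths $a_i$ of $\Omega$ and \texttt{pinn\_residual} from the earlier sketch; it draws fresh uniform samples and returns the empirical loss.
\begin{verbatim}
import torch

def empirical_pinn_loss(u_fn, u0, Omega_a, T, n, kappa, nu):
    """Monte Carlo estimate of (empirical_PINN): L^2 residual term
    plus a soft initial-condition penalty."""
    d = Omega_a.shape[0]
    x = (2 * torch.rand(n, d) - 1) * Omega_a  # X_j ~ Uniform(Omega)
    t = torch.rand(n, 1) * T                  # T_j ~ Uniform([0, T])
    res = pinn_residual(u_fn, t, x)
    ic = u_fn(torch.zeros(n, 1), x) - u0(x)
    return kappa * (res**2).mean() + nu * (ic**2).mean()
\end{verbatim}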
However, a uniform sampling of the $\mathbf{X}_j$'s sometimes does not meet the need of our computation, especially in the case where the dimension $d$ is very large. Notice that one essential feature of solutions to the QPME is the free boundary that separates the positive part of the solution from the zeros. In particular, in the case where the solution is a Barenblatt solution, the nonzero values of the solution concentrate near the origin. Ideally, one would like to sample points in both the non-zero and the zero region, to capture the local features of the solution. However, with a fixed budget of training samples, it could happen that all randomly sampled data points reside in the zero region, which is apparently problematic. In fact, this can become a serious issue for high-dimensional problems. For example, when $d =20$, the probability of sampling the nonzero region of a Barenblatt solution \eqref{barenblatt} at $t=2$ within $[-7,7]^{20}$ can be computed as the ratio of the volume $V_{\text{nonzero}}$ of the $d$-ball with radius $(22)^{1/2}\, 2^{6/11}$ (the non-zero region) to the volume of the hypercube. It can then be computed that
$$ \mathrm{P}_{\text{nonzero}} = \frac{V_{\text{nonzero}}}{14^{20}}\approx 1.57\times 10^{-8},$$
which means the non-zero region can rarely, if ever, be sampled.
Therefore, we need an effective sampling scheme which puts larger weight on the non-zero region, so that we can accurately approximate the loss function. Ideally, one could hope for an adaptive importance sampling scheme which provides training samples from a distribution adapted to the current state of the parametrized solution and its derivatives throughout the training process. However, especially for high-dimensional problems, such a sampling scheme is challenging and computationally expensive to implement. Therefore, a hand-crafted sampling scheme is used instead, which is explained in detail in Section \ref{sec:nuemrical} for the specific numerical examples.
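To illustrate the general idea (the exact scheme is specified in Section \ref{sec:nuemrical}), one can draw a fixed fraction of the samples from a ball around the origin that covers the support of the solution, and the rest uniformly from $\Omega$; if an unbiased estimate of the integral is required, importance weights must correct for the resulting non-uniform density. The names and parameters below are ours.
\begin{verbatim}
import torch

def mixed_sampler(n, d, a, r, p_local=0.5):
    """Illustrative mixture sampler: a fraction p_local of the points
    is drawn uniformly from the ball of radius r around the origin,
    the rest uniformly from Omega = [-a, a]^d."""
    n_local = int(p_local * n)
    dirs = torch.randn(n_local, d)
    dirs = dirs / dirs.norm(dim=-1, keepdim=True)
    # radius ~ r * U^(1/d) gives points uniform in the ball
    radii = r * torch.rand(n_local, 1) ** (1.0 / d)
    x_local = radii * dirs
    x_global = (2 * torch.rand(n - n_local, d) - 1) * a
    return torch.cat([x_local, x_global], dim=0)
\end{verbatim}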
\section{Introduction}
Solving high-dimensional PDEs is a long-standing challenge in scientific computing. Standard mesh-based methods, such as Finite Element Method (FEM), Finite Difference Method (FDM) would suffer from the curse of dimensionality, i.e., in order to sustain the accuracy of the approximate solution to high-dimensional PDEs, an approximation space with exponentially large size must be used. The number of degrees of freedom associated to the approximation space is often proportional to the number of elements in the mesh which usually scales exponentially in the dimension to achieve an suitable discretization of the domain. Therefore, for high-dimensional problems, the mesh-based methods are impractical. Alternatively, semilinear parabolic PDEs can also be solved point-wisely based on stochastic representation of the solutions using Monte Carlo algorithm \cite{henry2012counterparty,warin2018nesting,henry2019branching,warin2017variations}, but such approaches only apply to specific type of PDEs.
To circumvent the challenges for solving general high dimensional (nonlinear) PDEs, many attempts have been made. One natural idea is to restrict to a solution ansatz. For example, using the tensor train (TT) format to approximate the solutions of high-dimensional PDEs \cite{richter2021solving,dektor2021rank,dektor2020dynamically,dolgov2021tensor,eigel2019non,boelens2018parallel}. While such methods are quite successful if the solution can be well represented by the tensor train, the representability is not guaranteed. Another natural and promising candidate for PDE solution ansatz is the artificial neural networks. Thanks to the rich expressiveness of the neural networks to parametrize high dimensional functions \cite{barron1993universal}. Theoretical results are also available to justify the approximability of PDE solutions by neural networks without curse of dimensionality, e.g., \cite{jentzen2018proof,lu2021priori}.
Many recent works have been devoted for various approaches of using neural networks to solve high dimensional PDEs. Typically, such methods first identify functional optimization problems corresponding to the PDEs
$$\underbrace{u}_{\text{PDE solution}} = \underbrace{\argmin_f\mathcal{C}(f)}_{\text{PDE-inspired optimization problem}}.$$
Then one could take neural network as an ansatz of the minimizer $u$ and minimize the parameters using stochastic gradient type approaches.
One well known method of this kind is the physics informed neural network (PINN) \cite{raissi2019physics}, which takes the loss function to be directly the PDE residual.
One drawback of PINN is that, to compute the loss function, the derivatives or high-order derivatives of the neural network need to be computed throughout the training process. However, the generality and simplicity of this framework still make it an easy choice when it comes to the high-dimensional PDEs.
Other neural network based methods to solve PDEs include the Deep Ritz Methods \cite{yu2018deep}, the Deep BSDE method \cite{han2018solving,han2017deep} for semilinear parabolic PDEs, utilizing different formulations to turn the high dimensional PDE problems into an optimization problem on the neural network parameters.
More recent work \cite{zang2020weak} adopts weak variational formulation for linear equations to solve such PDEs using neural networks. While it is generically unclear how to extend such techniques to nonlinear ones.
In this paper, we make an attempt along the direction of utilizing weak formulation for solving nonlinear PDEs in high dimension. In particular, we consider high-dimensional quadratic porous medium equation (QPME) (Section \ref{QPME_intro}). Several variational formulations of the QPME is proposed to solve such PDE (Section \ref{sec:formulation}) which both allow solutions in a very weak sense and can be sought with deep learning techniques.
In addition, these formulations are further compared with the PINN formulation. In Section \ref{sec:nn}, more detailed treatment of the neural network for solving the optimization problem is presented. Numerical results are then provided to verify and compare the effectiveness of the proposed methods in Section \ref{sec:nuemrical} and Section \ref{sec:nuemrical_waiting}.
\section{Preliminaries}\label{QPME_intro}
We consider the porous medium equation (PME)
$$\partial_t u = \frac{1}{m}\Delta u^m,\quad (t,\mathbf{x}) \in Q.$$
PME is a degenerate parabolic equation. It is only parabolic where $u>0$.
When $m$ is taken to be $2$, the quadratic porous medium equation (QPME or Boussinesq's equation) reads
\begin{equation}\label{QPME}
\begin{split}
\partial_t u = \frac{1}{2}\Delta u^2 = \text{div}\left(u \nabla u \right) = \frac{1}{2} u\Delta u + \frac{1}{2}|\nabla u|^2 ,\quad (t,\mathbf{x}) \in Q,
\end{split}
\end{equation}
where $Q := [0,T]\times \Omega$ and $\Omega$ stands for a bounded domain in $\mathbb{R}^d$. $\Delta= \Delta_x$ represents the Laplace operator acting on the space variables.
The equation is mainly used to describe process involving fluid flow, heat transfer or diffusion. Particularly, it can be used to study the flow of an isentropic gas through a porous medium \cite{muskat1938flow,leibenzon1930motion}. In this case, $u(t,\mathbf{x})\in \mathbb{R}$ is a scalar function denoting the density of the gas ($u^{2}$ is roughly the pressure, and $u\nabla u$ stands for the flux). Physical constraints may apply such that $u> 0$. Power $2$ here relates to the thermal dynamics character expansion in terms of the pressure of the gas of interest (linear dependency).
A main feature of PME is the ``finite propagation'' speed, in contrast with the infinite propagation of usual diffusion equations. Essentially, the free boundary that separates the regions where the solution is positive (i.e. where ``there is gas", according to the standard interpretation of $u$ as a gas density), from the ``empty region" where $u = 0$ moves as time passes.
$$
\Gamma = \partial \mathcal{P}_u \cap Q
$$
where $\mathcal{P}_u := \{(t,\mathbf{x})\in Q\ |\ u(t,\mathbf{x})>0 \}$ denotes the set where $u$ is positive.
$\Gamma$ is also sometimes referred as the moving boundary, propagation fronts, or the interface. While a rather comprehensive theoretical analysis of this PDEs is provided in \cite{vazquez2007porous}, exact solutions to general initial/boundary conditions can usually not be obtained. Numerical schemes thus must be applied to obtain approximate solutions. Most previous studies of PME from a numerical aspect put their focus on dealing with the moving free boundaries of the solutions. The adaptive moving mesh schemes were proposed and coupled with mesh-based methods such as finite element method (FEM) to obtain accurate yet efficient numerical solutions to PME \cite{ngo2017study}. However, such methods can not be used to solve high-dimensional QPME due to the curse of dimensionality. The only exception as far as we know is \cite{shukla1996use}, in which the supervised learning was conducted to learn the correspondence between the physical parameters in the PDE and the one-dimensional solutions at certain evaluation $x$, the learning of the global solution is not considered by the authors.
While PME is mainly used in modellings for low physical dimensions, $d = 2$ or $3$. In this work, we use it as a prototypical degenerate nonlinear equation in high dimensions to test numerical PDE solvers based on the neural networks. The high dimensional diffusion might be used for certain machine learning tasks such as analysis of high dimensional point cloud data, which we will leave for future investigations.
\section{Conclusion}
In this paper, we explored different variational forms in solving high-dimensional QPME with neural networks. In specific, three formulations were considered. For the PINN formulation, the solution is directly parametrized and the PDE residual is minimized. A theoretical analysis is carried out to show that the convergence of the PINN loss guarantees a convergence to the exact solution in $L^1$ sense. Moreover, this analysis also suggests the use of the $L^1$ norm to quantify the residual and approximation error.
In addition, inspired by the work \cite{brenier2020examples}, a $\phi$ formulation and a $q-\sigma$ formulation is further presented and used to solve the QPME in a very weak sense. Theoretically, these formulation can identify solutions with less regularity. All formulations are then tested with the Barrenbaltt solution in low and high dimensions. Experiments have shown that $\phi$ formulation and $q,\sigma$ formulation can provide approximate solutions with a similar level of accuracy compared with PINN in low-dimensional cases but the optimization aspect continues to pose challenges in high-dimensional cases. A two-dimensional example of QPME that exhibits waiting phenomena is also presented to show the capability of deep learning based methods in identifying solution features as such.
Other aspects of the discussion toward solving QPME with deep learning includes the hard and soft imposition of certain conditions of the solutions in all formulations such as initial conditions and boundary conditions. Additionally, an efficient sampling scheme is proposed aiming at a faster convergence towards the solution desired especially in high-dimensional cases. These treatments in principal can also be applied in other scenarios where the PDE solutions are parametrize with neural networks.
While such efforts can all contributes to more efficient implementation of solving high-dimensional QPMEs, we must admit that the training success is overwhelmed by the large number of hyper-parameters. Moreover, for practical applications,
neural network training using stochastic gradient descent type schemes, which means one must accept a significant and unavoidable uncertainty about optimization success.
An efficient strategy on making choices of hyper-parameters could potentially be an interesting direction for future investigations.
More broadly, whether a similar variational form could be derived for general $m$ of porous medium equation is also an open question.
\section{Numerical example: waiting-time phenomena }\label{sec:nuemrical_waiting}
In this section, we further consider the following IBVP
\begin{equation}\label{numerical_waiting}
\begin{split}
&\partial_t u = \frac{1}{2}\Delta u^2\quad (t,\mathbf{x})\in Q = [0,1]\times\Omega,\\
&u(0,\mathbf{x}) = u_0(\mathbf{x}) =\begin{cases}
\cos(|\mathbf{x}|), & |\mathbf{x}|\leq \frac{\pi}{2} \\
0, & \text{elsewhere}
\end{cases}\\
&u(t,\mathbf{x})|_{\partial \Omega} =0.
\end{split}
\end{equation}
where $\Omega = [-4,4]^d$.
The general exact solution to \eqref{numerical_waiting} can hardly be derived. When $d=2$, the reference solution can be taken to be the numerical solutions obtained with a moving mesh finite element method following \cite{wells2004moving} instead. In particular, the mesh is advanced forward in time using a forward Euler time-stepping scheme. These mesh-based results are then compared with the ones obtained following a deep learning framework under various formulations.
For higher dimensions, the mesh-based solver in general will suffer from curse of dimensionality. The moving mesh method would also be more challenging to design. Therefore, for comparison reasons, we only present the results for $d=2$ while noticing the higher dimensional cases can also be handled by the neural network based algorithms.
We also note that the solution to PME of this type of initial condition exhibits a waiting-time phenomenon \cite{ngo2017study}. In fact, the velocity of the free boundary of QPME is given by Darcy's law \cite{shmarev2005interfaces}, i.e.
\begin{equation}\label{darcy}
\Gamma'(t) = \lim_{\mathbf{x}\to \Gamma(t)^-}\nabla (\frac{u^2}{2}),
\end{equation}
where the limit is taken from the interior of the support.
Thus, as one may compute, the free boundary of solution to
\eqref{numerical_waiting} should not move until a finite amount of time has elapsed as initially $\Gamma'(0) = \nabla (\frac{u^2_0}{2})$ vanishes at the free boundary $\Gamma(0): |\mathbf{x}| = \frac{\pi}{2}$. This phenomena of waiting can be observed form solutions obtained with Finite Element Method as shown in Figure \ref{fig: PINN_waiting}, where the dashed vertical lines indicate the exact initial location of the free boundary. In Figure \ref{waiting:PINN_a}, a series of snapshots of the solution for $t\in[0,0.1]$ is plotted, while in \ref{waiting:PINN_b}, the solution snapshots for a broader range of time is plotted. As one may observe, the free boundary of the solution barely moves in the entire time of $t\in [0,0.1]$ and only start to change by the time of $t= 0.2$. This phenomena can also be accurately captured by a solution obtained following the PINN formulation \eqref{PINN_full}. In specific, the solution obtained with the PINN formulation is presented in comparison with the ones obtained with the moving mesh FEM in Figure \ref{fig: PINN_waiting}. The solution slices essentially overlap one another. Parallelly, problem \eqref{numerical_waiting} is also solved with the $\phi$ formulation \eqref{phi_form}. The comparison of the resulted solution with a FEM solution is then presented as in Figure \eqref{fig: phi_waiting}, which verifies the effectiveness of the neural network based solutions using $\phi$ formulation. In these numerical tests, the choices of the hyper-parameters are taken to be the same as in Table \ref{tab:PINN_error} and Table \ref{tab:phi_error} respectively.
\begin{figure}[htbp]
\centering
\begin{subfigure}[b]{\textwidth}
\includegraphics[width = \textwidth]{PINN_formulation/waiting/u_comparison_different_time_0.1.png}
\caption{Snapshot solution slices for $t\in [0,0.1]$}
\label{waiting:PINN_a}
\end{subfigure}
\begin{subfigure}[b]{\textwidth}
\includegraphics[width = \textwidth]{PINN_formulation/waiting/u_comparison_different_time_1.0.png}
\caption{Snapshot solution slices for $t\in [0,1.0]$}
\label{waiting:PINN_b}
\end{subfigure}
\caption{\textbf{2d, $L^2-$ PINN formulation, waiting-time phenomena \eqref{numerical_waiting}:} snapshots of solution cross-section $u(t,x,0)$ at $y=0$. Green: reference solutions obtained with moving mesh FEM ($DOF= 901$); red: predicted solutions obtained with PINN formulation; blue: initial condition.}
\label{fig: PINN_waiting}
\end{figure}
\begin{figure}[htbp]
\centering
\begin{subfigure}[b]{\textwidth}
\includegraphics[width = \textwidth]{Phi_formulation/waiting/u_comparison_different_time_0.1.png}
\caption{$t\in [0,0.1]$}
\label{waiting:phi_a}
\end{subfigure}
\begin{subfigure}[b]{\textwidth}
\includegraphics[width = \textwidth]{Phi_formulation/waiting/u_comparison_different_time.png}
\caption{Snapshot solution slices for $t\in [0,1.0]$}
\label{waiting:phi_b}
\end{subfigure}
\caption{\textbf{2d, $L^2-\phi$ formulation, waiting-time phenomena \eqref{numerical_waiting}:} snapshots of solution cross-section $u(t,x,0)$ at $y=0$. Green: reference solutions obtained with moving mesh FEM ($DOF= 901$); red: predicted solutions obtained with $\phi$ formulation; blue: initial condition.}
\label{fig: phi_waiting}
\end{figure}
\subsection{Relaxed concave optimization problem}
Besides the strong formulation, in this section, we derive and consider a series of optimization problems which can also be used to solve the QPME; they correspond to various weak formulation of the PDE.
We start by considering the {very weak solutions} to the QPME \eqref{QPME}, i.e. $u \in L^{1}(Q)$ satisfying
\begin{equation}
\int_{Q } -2\partial_t\psi u -\Delta \psi u^2 +2u_0 \partial_t \psi =0\quad
\end{equation}
for all test functions $\psi \in C^{2,1}(\bar{Q})$ which vanishes on $\Sigma_T$ and for $t= T$. Essentially, a very weak solution is an integrable distribution solution. Unlike strong solutions, no derivative of the solution is used in defining such solutions; so very weak solutions have much lower regularity requirements.
We also remark that while we focus on very weak solutions in this paper, there are different ways of defining generalized solutions for QPME. A weak solution, for example, is defined to be a function $u$ such that $u^2\in L_{loc}^1(0,T;\ W_{loc}^{1,1})$ which satisfies
$$\int_Q -2\partial_t\psi u +\nabla (u^2)\cdot\nabla \psi + 2u_0\partial_t \psi = 0$$
It is clear that all weak solutions are very weak solutions by definition; weak solutions require higher regularity of the solutions.
The following theorem gives a characterization for very weak solutions to QPME \cite{brenier2020examples}.
\begin{theorem}[\cite{brenier2020examples}]\label{thm}
Any \textbf{very weak solution} $u$ to QPME can be recovered as
\begin{equation}\label{u_phi_thm}
u = \frac{\partial_t \phi^*}{1-\Delta \phi^*}
\end{equation}
where
\begin{equation}\label{phi_formulation}
\phi^* = {\argmax_{\phi\in B} J(u_0)} = \argmax_{\phi\in B} \int_Q \frac{-(\partial_t \phi)^2}{ 1-\Delta \phi} + 2u_0 \partial_t \phi
\end{equation}
with $B:= \{\phi \ |\ \phi(T,\mathbf{x}) =0 , \ 1-\Delta \phi\geq 0\}$. In addition, any solution $\phi^*$ satisfies $1- \Delta \phi^* \geq (\frac{t}{T})^{\frac{d}{d+2}}$.
\end{theorem}
While we will not repeat the proof here, let us mention that the proof starts with minimizing the Lyapunov (``entropy") functional among the very weak solutions $u$ of QPME
\begin{equation}\label{Ljapunov}
\int_{Q} u^2(t,\mathbf{x}),
\end{equation}
it can then be proved that the following formulations are equivalent letting
\begin{itemize}
\item $A: = \{u\in L^2(Q) \text{ is a very weak non-negative solution associated with } u_0\in L^2 (\Omega)\} $,
\item $ B: = \{\phi \mid \phi(T,\mathbf{x}) =0,\ 1-\Delta \phi\geq 0
\}$.
\end{itemize}
\begin{enumerate}
\item Original form
\begin{equation}
\begin{split}
&I(u_0) = \inf_{u \in A}\sup_{\phi\in B} \int_{Q } \left(u^2 -2\partial_t\phi u -\Delta \phi u^2 +2u_0 \partial_t \phi
\right)
\end{split}
\end{equation}
\item Flipping $\sup, \inf$
\begin{equation}\label{relaxed-form}
\begin{split}
&J(u_0) = \sup_{\phi\in B} \inf_{u\in A} \int_{Q } \left(u^2 -2\partial_t\phi u -\Delta \phi u^2 +2u_0 \partial_t \phi
\right)
\end{split}
\end{equation}
\item Point-wise minimization of \eqref{relaxed-form}.\\
\begin{equation}\label{phiformulation}
\begin{split}
&\tilde{J}(u_0) =\sup_{\phi\in B}\ \int_{Q} \left(\frac{- (\partial_t \phi)^2}{1-\Delta \phi} + 2u_0\partial_t \phi
\right)
\end{split}
\end{equation}
\item Let $q = \partial_t \phi$, $\sigma = 1-\Delta \phi$ in \eqref{phiformulation}
\begin{equation}\label{strong_terminal}
\begin{split}
&\hat{J} (u_0) = \sup_{q, \sigma} \int_Q \left(
\frac{-q^2}{\sigma} + 2 u_0 q
\right)\\
&\sigma \geq 0,\quad \sigma(T, \cdot) = 1,\quad \partial_t \sigma+ \Delta \phi =0.
\end{split}
\end{equation}
\end{enumerate}
More specifically, it is proved that
\begin{equation}\label{form_equivalency}
\int_{Q} u^2(t,\mathbf{x}) \ d\mathbf{x}\ dt = I(u_0) = J(u_0) = \tilde{J}(u_0) = \hat{J}(u_0).
\end{equation}
Theorem \ref{thm} shows that we can indirectly obtain very weak solutions to QPME by solving \eqref{phi_formulation}. We first obtain $\phi^{*}$, then obtain candidates for the very weak solution with \eqref{u_phi_thm}.
We can therefore consider the following loss function
\begin{equation}\label{phi_form}
\begin{split}
\textbf{$\boldsymbol{\phi}$ formulation}\quad \mathcal{L}_{\phi}(\phi) &:= -\int_{Q} \left(\frac{- (\partial_t \phi)^2}{1-\Delta \phi} + 2u_0\partial_t \phi
\right).
\end{split}
\end{equation}
It is not hard to see that if a smooth $\phi^*$ is a minimizer then, as long as the recovered solution satisfies the homogeneous boundary condition and the initial condition, $\frac{\partial_t \phi^*}{1-\Delta \phi^*}$ must be a solution to QPME. Moreover, in the case where $u\geq 0$, it has been proved that the solution to the QPME subject to $u_0\geq 0$ is unique.
However, it is worth noting that such a minimizer $\phi^*$ is not necessarily unique. Thus \eqref{phi_form} can be used to identify the unique solution to QPME once the initial/boundary conditions are imposed, even though more than one minimizer $\phi^*$ could exist in theory \cite{brenier2020examples}.
Moreover, since \eqref{strong_terminal} is equivalent to \eqref{phiformulation}, one can also recover a candidate very weak solution to QPME by
\begin{equation}
u_{q,\sigma}^*: = \frac{q^*}{\sigma^*}
\end{equation}
with $q^*$ and $\sigma^*$ being the maximizer of \eqref{strong_terminal}. Thus, we may also consider the loss function
\begin{equation}\label{qsigma_form}
\textbf{$\mathbf{q}-\boldsymbol{\sigma}$ formulation}\quad \mathcal{L}_{q,\sigma}(q,\sigma) = -\int_Q \left(\frac{-q^2}{\sigma} + 2 u_0 q\right).
\end{equation}
Similar to the discussion in Section \ref{sec:PINN}, we can also relax the initial/boundary conditions of the recovered solution $u$ obtained from the $\phi$ formulation and the $q,\sigma$ formulation to penalizations, adding the terms $\mathcal{L}_{B}$ and $\mathcal{L}_{I}$ as defined earlier:
\begin{equation}\label{full_phi}
\begin{split}
\mathcal{L}_{\phi-\text{NN}}(u): = \kappa \mathcal{L}_{\phi}(u) + \mu\mathcal{L}_{B}(u) +
\nu\mathcal{L}_{I}(u),
\end{split}
\end{equation}
and
\begin{equation}\label{partial_q_sigma}
\mathcal{L}_{q,\sigma-\text{NN}}(u): = \kappa \mathcal{L}_{q,\sigma}(u) + \mu\mathcal{L}_{B}(u) +
\nu\mathcal{L}_{I}(u).
\end{equation}
However, it is worth pointing out that it is very difficult to impose the initial condition as a hard constraint on the solution ansatz for either formulation when the optimization problem is solved with neural networks, since only the intermediate functions $\phi,q,\sigma$ are parametrized. The boundary conditions, on the other hand, can be imposed explicitly by modifying the solution ansatz.
For the $q$-$\sigma$ formulation in particular, we should also note that the consistency condition between $q$ and $\sigma$,
\begin{equation}\label{qsigma_PDE}
\partial_t \sigma + \Delta q = 0
\end{equation}
needs to be imposed since they are essentially derivatives of the same function $\phi$. This condition can be imposed by minimizing the residual of equation \eqref{qsigma_PDE}
\begin{equation}
\mathcal{L}_{\partial_t \sigma, \Delta q} = \int_Q ( \partial_t \sigma + \Delta q )^2
\end{equation}
or in $L^1$ sense,
\begin{equation}\label{q_sigma_corelation}
\mathcal{L}_{\partial_t \sigma, \Delta q} = \int_Q | \partial_t \sigma + \Delta q |.
\end{equation}
Thus, $\mathcal{L}_{q,\sigma -\text{NN}}$ can be further modified as
\begin{equation}\label{qsigma_full}
\mathcal{L}_{q,\sigma-\text{NN}} = \kappa\mathcal{L}_{q,\sigma}(u) + \mu\mathcal{L}_{B}(u) +
\nu\mathcal{L}_{I}(u) +
\gamma \mathcal{L}_{\partial_t \sigma, \Delta q}.
\end{equation}
Let us remark that the condition \eqref{qsigma_PDE} can also be imposed weakly following the Dirichlet principle: $\mathcal{L}_{\partial_t \sigma, \Delta q}$ can then be replaced by
\begin{equation}
\mathcal{L}_{\partial_t \sigma, \nabla q} :=\int_{[0,T]}\left(\frac{1}{2} \int_{\Omega} |\nabla q|^2\ dx + \frac{\lambda}{2} \left(\int_{\Omega} q\ dx\right)^2 + \int_{\Omega} \partial_t\sigma q\ dx \right)\ dt .
\end{equation}
While related results using $\mathcal{L}_{\partial_t \sigma, \nabla q}$ will not be presented in this paper, we remark that this formulation completely bypasses taking second-order derivatives of $q$, which means weaker smoothness requirements on the neural network ansatz. However, adding such a term would make the optimization more complicated, especially in the high-dimensional cases. Since this term cannot be interpreted as a pointwise condition like \eqref{q_sigma_corelation}, it cannot benefit much from an efficient sampling scheme (see Section \ref{sec:num_setting}).
In addition, the introduction of the extra hyper-parameter $\lambda$ further increases the difficulty of parameter tuning. In our experience, training the $q,\sigma$ formulation with the term $\mathcal{L}_{\partial_t \sigma, \nabla q}$ seems extremely challenging, if not impossible, and the corresponding results are therefore not presented.
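Before moving on to the numerical tests, let us sketch how a loss such as \eqref{phi_form} can be evaluated in practice. The following is a minimal TensorFlow sketch of a Monte Carlo estimator of $\mathcal{L}_{\phi}$ (up to the constant factor $|Q|$, which does not affect the minimizer), with $\partial_t\phi$ and $\Delta\phi$ obtained by automatic differentiation; the names \texttt{phi\_net} and \texttt{u0\_fn} and the small constant guarding the division are illustrative assumptions, not the exact implementation used for the experiments below.
\begin{verbatim}
import tensorflow as tf

def phi_loss(phi_net, t, x, u0_fn, eps=1e-6):
    # Monte Carlo estimate of L_phi (up to the factor |Q|).
    # t: (n,1) sampled times, x: (n,d) sampled points, u0_fn: initial datum.
    with tf.GradientTape(persistent=True) as outer:
        outer.watch(x)
        with tf.GradientTape(persistent=True) as inner:
            inner.watch([t, x])
            phi = phi_net(tf.concat([t, x], axis=1))   # (n,1)
        phi_t = inner.gradient(phi, t)                 # d_t phi
        phi_x = inner.gradient(phi, x)                 # grad_x phi
    # Laplacian = trace of the batched Hessian of phi w.r.t. x.
    lap = tf.linalg.trace(outer.batch_jacobian(phi_x, x))[:, None]
    del inner, outer
    # eps guards the division; note 1 - lap >= 0 is NOT enforced here.
    integrand = -phi_t**2 / (1.0 - lap + eps) + 2.0 * u0_fn(x) * phi_t
    return -tf.reduce_mean(integrand)
\end{verbatim}
The $q$-$\sigma$ loss \eqref{qsigma_form} admits an analogous estimator with two networks, where second-order derivatives enter only through the consistency term \eqref{qsigma_PDE}.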
\section{Numerical example: Barenblatt Solution }\label{sec:nuemrical}
To test the numerical schemes, we will use a series of special solutions, known as
Barenblatt Solutions. They are given by \begin{equation}
U_m(t, \mathbf{x}; C) := t^{-\alpha}\left(\left(C - \frac{\beta(m- 1)}{2}\frac{|\mathbf{x}|^2}{t^{2\beta} }\right)^+\right)^{\frac{1}{m-1}}
\end{equation}
where $\alpha := \frac{d}{d(m-1)+2}$, $\beta := \frac{\alpha}{d}$, $(s)^+ := \max(s,0)$, and $C > 0$ is an arbitrary constant. This solution takes a Dirac mass as initial data: $u(t,\mathbf{x}) \to M\delta(\mathbf{x})$ as $t \to 0$ in the sense of distributions, where the mass $M$ is a function of the constant $C$ (depending also on $m$ and $d$).
In the particular case $m = 2$, the Barenblatt Solution to \eqref{QPME} reduces to
\begin{equation}\label{barenblatt}
U_2(t, \mathbf{x}; C) := t^{-\frac{d}{d+2}}\left(C - \frac{1}{2(d+2)}\frac{|\mathbf{x}|^2}{t^{\frac{2}{d+2}} }\right)^+.
\end{equation}
The free boundary $\partial \mathcal{P}_u$ of \eqref{barenblatt} in this case can then be characterized by the equation
$$ \quad|\mathbf{x}| = r_t$$
with $r_t:= \sqrt{2C(2+d)}\ t^{\frac{1}{d+2}}$. We also notice that the equation is invariant under rescaling: if $u$ is a solution, then so is
\begin{equation*}
u_{\lambda}(t, \mathbf{x}) := \lambda^{\alpha} u(\lambda t, \lambda^{\beta} \mathbf{x}).
\end{equation*}
The shifted Barenblatt Solution is simultaneously a strong, weak, and very weak solution of PME, and it is the unique solution subject to the Dirac initial condition.
Since a $\delta$ function cannot be set as the initial condition numerically, we specifically consider the following IBVP:
\begin{equation}\label{numerical_baren}
\begin{split}
&\partial_t u = \frac{1}{2}\Delta u^2\quad (t,\mathbf{x})\in Q = [0,1]\times\Omega,\\
&u(0,\mathbf{x}) = \left(1- \frac{1}{2(2+d)} |\mathbf{x}|^2
\right)^+.
\end{split}
\end{equation}
Notice the initial condition is essentially the Barenblatt Solution \eqref{barenblatt} evaluated at $t=1$ when $C =1$.
The exact solution to \eqref{numerical_baren} is therefore the Barenblatt Solution \eqref{barenblatt} with the time shifted:
\begin{equation}\label{exact}
U_2(t, \mathbf{x}) := \left(t+1\right)^{-\frac{d}{2+d}}\left(1 - \frac{1}{2(d+2)}\frac{|\mathbf{x}|^2}{ (t +1)^{\frac{2}{d+2}} }\right)^+.
\end{equation}
We further let $\Omega = [-a,a]^d$, where $a$ is the smallest integer greater than the radius of the free boundary of $U_2(t,\mathbf{x})$ at the terminal time $T=1$:
$$a := \text{ceil}(r_T),$$
where $r_T = (2+d)^{\frac{1}{2}}2^{\frac{4+d}{2d+4}}$, to ensure that the computational domain is large enough to contain the entire free boundary for $t\in [0,1]$.
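For reference, the exact solution \eqref{exact} and the half-width $a$ are straightforward to code; the following NumPy sketch (with illustrative function names) is what the error computations below conceptually rely on.
\begin{verbatim}
import numpy as np

def barenblatt_shifted(t, x):
    # Exact time-shifted Barenblatt solution; x has shape (..., d).
    d = x.shape[-1]
    s = t + 1.0
    r2 = np.sum(x**2, axis=-1)
    core = 1.0 - r2 / (2.0 * (d + 2) * s**(2.0 / (d + 2)))
    return s**(-d / (d + 2)) * np.maximum(core, 0.0)

def domain_halfwidth(d):
    # a = ceil(r_T) with r_T = sqrt(2+d) * 2^{(4+d)/(2d+4)}.
    r_T = np.sqrt(2.0 + d) * 2.0**((4.0 + d) / (2.0 * d + 4.0))
    return int(np.ceil(r_T))
\end{verbatim}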
We use this example to test the effectiveness of the proposed formulations by comparing the approximate solutions with the exact one \eqref{exact}. The performance of each formulation is further analyzed to show its pros and cons.
\subsection{Numerical settings}\label{sec:num_setting}
In particular, we solve the aforementioned QPME with the three proposed formulations using neural network ansatz.
We specifically take $\mathcal{NN}_u(\cdot,\cdot;\theta_u)$, $\mathcal{NN}_{\phi}(\cdot,\cdot;\theta_{\phi})$, $\mathcal{NN}_{q}(\cdot,\cdot;\theta_{q})$ and $\mathcal{NN}_{\sigma}(\cdot,\cdot;\theta_{\sigma})$ to be fully connected neural networks with two hidden layers and $\text{softplus}(\cdot)$ as their activation function. The corresponding solution ansatz can then be constructed following Section \ref{sec:nn}. Notice that when computing the derivatives of the solution ansatz, the chain rule applies, and the derivatives of $f_{dc}(\mathbf{x})$ need to be computed when a homogeneous Dirichlet B.C. is imposed.
To evaluate the empirical losses, as discussed in Section \ref{empirical_loss}, we take randomly sampled data to approximate the integrals over $Q$ or over $\Omega$. When the boundary condition is imposed softly, we take additional randomly sampled data over $\Sigma_T$ to evaluate $\mathcal{L}_B$. In particular, since the data are sampled on the fly, a new set of data is drawn at each training step, so the total number of training data $n$ is given by $n= \text{batch size}\times \text{training steps}$.
Moreover, for high-dimensional cases, to make sure the sampled points can capture the features of the solutions, we use the following weighted sampling scheme. Specifically, we first decompose the region $\Omega = [-a,a]^d$ into $\Omega = V_0\cup V_1\cup V_2$ where $$V_{0}: =\{\mathbf{x} \in \Omega\ |\ |\mathbf{x}|\leq r_0 \},\quad V_{1}: =\{\mathbf{x} \in \Omega\ |\ r_0<|\mathbf{x}|\leq r_T \},\quad V_{2}: =\{\mathbf{x} \in \Omega\ |\ |\mathbf{x}|> r_T \}.$$
The radii of these regions are determined by the radius of the free boundary of the Barenblatt Solution \eqref{exact} at $t=0$ and at $t=T=1$, respectively:
$$r_0 : = \sqrt{2(2+d)},\qquad r_T = (2+d)^{\frac{1}{2}}2^{\frac{4+d}{2d+4}}.$$
We then take weights $\theta_0, \theta_1$, and the $\tilde{\mathbf{X}}_j$'s are sampled uniformly within $V_0$ with probability $\theta_0$, within $V_1$ with probability $\theta_1$, and within $\Omega$ with probability $\theta_2:=1-\theta_0-\theta_1$ (see Figure \ref{fig:train_data} for an illustration of the sampled training data).
The probability of a sample falling in each region can thus be computed as
$$P_{V_0} = \theta_0+\theta_2 \frac{|V_0|}{|\Omega|},\quad P_{V_1} = \theta_1+\theta_2 \frac{|V_1|}{|\Omega|}, \quad P_{V_2}=\theta_2 \frac{|V_2|}{|\Omega|}.$$
The density function to this mixture distribution can be written as the piecewise constant function
$$f(\mathbf{x}) = \theta_0 f_0(\mathbf{x})+ \theta_1 f_1(\mathbf{x})+\theta_2 f_2(\mathbf{x})$$
where
\begin{equation*}
\begin{split}
f_0(\mathbf{x}) = \frac{1}{|V_0|}\mathbf{1}_{V_0}(\mathbf{x}), \quad
f_1(\mathbf{x}) = \frac{1}{|V_1|}\mathbf{1}_{V_1}(\mathbf{x}), \quad
&f_2(\mathbf{x}) =\frac{1}{|\Omega|}
\end{split}
\end{equation*}
are defined over the entire $\Omega$, with $\mathbf{1}_{V}(\mathbf{x})$ being the indicator function of the region $V$.
When sampling from $V_0$ and $V_1$, to ensure the data are sampled uniformly in these high-dimensional ball regions, we adopt the following algorithm from \cite{marsaglia1972choosing} (a code transcription follows the list):
\begin{enumerate}
\item Generating random points uniformly on the $(d-1)$-unit sphere
\begin{enumerate}
\item Generate a $d$-dimensional vector $\bm{x} = (x_1, x_2,\cdots, x_d)$ with $x_i \sim N(0,1)$ for $i= 1,2,\cdots, d$.
\item Then $\tilde{\bm{x}} := \frac{\bm{x}}{||\bm{x}||_2}$ is a uniformly sampled point from the $(d-1)$-unit sphere.
\end{enumerate}
\item Generate a point uniformly at random {\it{within}} the $d$-ball
\begin{enumerate}
\item Let $u$ be a number generated uniformly at random from the interval $[0, 1]$; then $u^{\frac{1}{d}}\tilde{\bm{x}}$ is a point sampled uniformly at random within the unit ball.
\item Further, $r_0 u^{1/d}\tilde{\bm{x}} $ is a random point in $V_0$ and $\left((r_T-r_0) u^{1/d}+r_0\right)\tilde{\bm{x}} $ is a random point in $V_1$.
\end{enumerate}
\end{enumerate}
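A direct NumPy transcription of this procedure is given below. The radial law for $V_1$ follows the recipe above verbatim; note that a strictly uniform sample on the shell would instead use $r = \left(r_0^d + u\,(r_T^d - r_0^d)\right)^{1/d}$.
\begin{verbatim}
import numpy as np

def sample_sphere(n, d, rng):
    # Uniform points on the (d-1)-unit sphere via normalized Gaussians.
    x = rng.standard_normal((n, d))
    return x / np.linalg.norm(x, axis=1, keepdims=True)

def sample_V0(n, d, r0, rng):
    # r0 * u^{1/d} * x_tilde is uniform in the ball V_0.
    u = rng.uniform(size=(n, 1))
    return r0 * u**(1.0 / d) * sample_sphere(n, d, rng)

def sample_V1(n, d, r0, rT, rng):
    # ((rT - r0) * u^{1/d} + r0) * x_tilde, following the recipe above.
    u = rng.uniform(size=(n, 1))
    return ((rT - r0) * u**(1.0 / d) + r0) * sample_sphere(n, d, rng)
\end{verbatim}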
\begin{figure}[h!]
\centering
\begin{subfigure}[t]{0.45\textwidth}
\includegraphics[width = 1.2\textwidth]{PINN_formulation/Barenblatt/L2/3d/train_data_distribution_3D.png}
\caption{\textbf{3d:} $\theta_0 = 0.3, \theta_1 = 0.3$, $\Omega = [-4,4]^3$.}
\end{subfigure}
\hfill
\begin{subfigure}[t]{0.45\textwidth}
\includegraphics[width = 1.2\textwidth]{PINN_formulation/Barenblatt/L2/50d/train_data_distribution_3D.png}
\caption{\textbf{50d:} $\theta_0 = 0.3, \theta_1 = 0.2$, $\Omega = [-11,11]^{50}$.}
\end{subfigure}
\caption{3D projection (first three coordinates) of samples of $\{\tilde{\mathbf{X}}_j\}_{j=1}^{10^6}$ in $\Omega$. Red : $\tilde{\mathbf{X}}_j \in V_0$, green: $\tilde{\mathbf{X}}_j \in V_1$ and blue: $\tilde{\mathbf{X}}_j \in V_2$.}
\label{fig:train_data}
\end{figure}
To avoid changing the values of the integrals evaluated in the loss functions, a piecewise constant factor should multiply the empirical loss to correct the approximation of the integrals resulting from the nonuniformly distributed training data. For the PINN formulation, the empirical loss \eqref{empirical_PINN} can then be rewritten as
\begin{equation}
\mathcal{L}_{\text{PINN}}^{n} = \frac{\kappa}{n}\sum_{j=1}^{n} c(\tilde{\mathbf{X}}_j)\left( \partial _t u(T_j,\tilde{\mathbf{X}}_j) -\frac{1}{2} \Delta u^2(T_j,\tilde{\mathbf{X}}_j)\right)^2 +\frac{\nu}{n}\sum_{j=1}^{n} c(\tilde{\mathbf{X}}_j)\left(u \left(0,\tilde{\mathbf{X}}_j\right) - u_{0}\left(\tilde{\mathbf{X}}_j\right)\right)^2
\end{equation}
with the correction term
$$c(\mathbf{x}) = \displaystyle \sum_{i=0}^{2}\frac{|V_i|}{|\Omega| P_{V_i}}\mathbf{1}_{V_i}(\mathbf{x}).$$
Here, the $\tilde{\mathbf{X}}_j$'s are random points in $\Omega$ sampled from the aforementioned density $f(\mathbf{x})$, while the $T_j$'s are sampled uniformly from $[0,1]$. The empirical losses for the $\phi$ formulation and the $q$-$\sigma$ formulation can be formulated in a similar fashion.
The data are randomly sampled on the fly and batched into groups of $1000$ for each training step, with a new batch drawn at every step. Essentially, this sampling scheme guarantees data samples in all three regions, which can improve the representativeness of the training data and thus lead to faster convergence of the training procedure.
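Concretely, one batch of training points and the corresponding correction factors $c(\tilde{\mathbf{X}}_j)$ can be produced as in the following sketch, which reuses \texttt{sample\_V0} and \texttt{sample\_V1} from the previous sketch and computes the $d$-ball volumes in log space via the Gamma function:
\begin{verbatim}
import numpy as np
from scipy.special import gammaln

def ball_volume(r, d):
    # |B_r| = pi^{d/2} r^d / Gamma(d/2 + 1), computed in log space.
    return np.exp(0.5*d*np.log(np.pi) + d*np.log(r) - gammaln(0.5*d + 1.0))

def sample_batch(n, d, a, r0, rT, theta0, theta1, rng):
    # Draw the mixture component, then sample within V_0, V_1 or Omega.
    comp = rng.choice(3, size=n, p=[theta0, theta1, 1.0 - theta0 - theta1])
    X = rng.uniform(-a, a, size=(n, d))
    X[comp == 0] = sample_V0((comp == 0).sum(), d, r0, rng)
    X[comp == 1] = sample_V1((comp == 1).sum(), d, r0, rT, rng)
    T = rng.uniform(0.0, 1.0, size=(n, 1))
    return T, X

def correction(X, a, r0, rT, theta0, theta1):
    # Piecewise constant factor c(x) = |V_i| / (|Omega| P_{V_i}) on V_i.
    d = X.shape[1]
    omega = (2.0 * a)**d
    vols = np.array([ball_volume(r0, d),
                     ball_volume(rT, d) - ball_volume(r0, d),
                     omega - ball_volume(rT, d)])
    theta2 = 1.0 - theta0 - theta1
    P = np.array([theta0, theta1, 0.0]) + theta2 * vols / omega
    r = np.linalg.norm(X, axis=1)
    region = np.where(r <= r0, 0, np.where(r <= rT, 1, 2))
    return (vols / (omega * P))[region]
\end{verbatim}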
For high-dimensional cases, the initial conditions and PDEs are in fact imposed without the correction factor $c(\mathbf{x})$; i.e., with the efficient sampling scheme, we are essentially minimizing modified initial/PDE conditions. Taking such terms measured in $L^2$ as examples, the following loss terms are minimized:
\begin{equation}
\begin{split}
&\mathcal{L}_{I}(u) = \int_{\Omega} \left(u(0,\mathbf{x})-u_0\left(\mathbf{x}\right)\right)^2 f(\mathbf{x})\ d\mathbf{x}, \\
&\mathcal{L}_{\text{PDE}} (u) = \int_Q \left( \partial _t u -\frac{1}{2} \Delta u^2\right)^2 f(\mathbf{x})\ d\mathbf{x} dt,\\
&\mathcal{L}_{\partial_t \sigma, \Delta q} = \int_Q ( \partial_t \sigma + \Delta q )^2 f(\mathbf{x})\ d\mathbf{x} dt.
\end{split}
\end{equation}
Since $f(\mathbf{x})$ is merely a positive piecewise constant function, this modification keeps the minimizers of these terms unchanged, meaning the desired initial condition and PDE are still imposed under mild assumptions on the regularity of the solution ansatz.
The reason the correction constants are not used is that, in high-dimensional cases, $$c(\mathbf{x})\ll 1\% \quad\forall \mathbf{x} \in V_0\cup V_1 ,$$
which means samples within these regions would make an extremely small contribution to updating the trainable parameters with SGD.
We also notice that while this is possible when imposing the initial condition and PDEs, we must apply $c(\mathbf{x})$ to the inf terms, namely $\mathcal{L}_{\phi}(\phi)$ and $\mathcal{L}_{q,\sigma}(q,\sigma)$, as the sampling scheme would otherwise change the optimization target entirely.
However, the choices of $\theta_0$ and $\theta_1$ remain arbitrary. While numerical examples show that certain choices lead to faster convergence, there is no clear principle to follow for making optimal choices.
A similar situation arises when choosing the values of $\nu, \kappa,\gamma$ to balance the terms in the losses. While theoretically these hyper-parameters can be any positive numbers, their choice can heavily influence the training procedure. Some choices seem to help the weighted loss converge faster than others, but there is no justified reason for any particular choice. Therefore, the hyper-parameters used for the results reported in this paper were chosen by trial and error.
The losses are then optimized by tuning the trainable parameters of the neural networks. We use the Adam algorithm \cite{kingma2014adam} to train the models. The complete algorithm for establishing the loss functions and training the neural networks is implemented using the TensorFlow library~\cite{abadi2016tensorflow}.
Once the training is finished, to evaluate the quality of the approximate solution obtained with the trained neural networks, we further quantify its generalization error. In particular, we define the relative errors on a solution slice $u(t,x,y,c,\cdots,c)$ at time $t$ for some fixed constant $c\in[-a,a]$, denoting $u(t,x,y,c,\cdots,c)$ by $u(t)$ for simplicity:
\begin{equation}
\begin{split}
& L^1\textbf{-Relative Error}\quad \frac{||u_{NN}(t)-u(t)||_{1}}{||u(t)||_{1}},\\
& L^2\textbf{-Relative Error}\quad \frac{||u_{NN}(t)-u(t)||_{2}}{||u(t)||_2},\\
& H^1\textbf{-Relative Error}\quad \frac{||u_{NN}(t)-u(t)||_{H^1}}{||u(t)||_{H^1}}.\\
\end{split}
\end{equation}
where $u_{NN}$ stands for the neural-network-based solution.
These norms can be further approximated numerically over a $100\times 100$ evenly spaced mesh on $[-a,a]^2$: letting $\{x_i,y_j\}_{i=1,j=1}^{100}$ be the mesh-grid points,
\begin{equation*}
\begin{split}
&||f(x,y)||_1 \approx \frac{(2a)^2}{10^4}\sum_{i,j}|f(x_i,y_j)|,\\
&||f(x,y)||_2 \approx \sqrt{\frac{(2a)^2}{10^4}\sum_{i,j}|f(x_i,y_j)|^2},\\
&||f(x,y)||_{H^1} \approx \sqrt{\frac{(2a)^2}{10^4}\sum_{i,j}(|f(x_i,y_j)|^2 + |\nabla f(x_i,y_j)|^2)}.\\
\end{split}
\end{equation*}
The numerical relative errors can then be computed with predicted values of the neural network solutions evaluated at the mesh-grid points.
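The following sketch mirrors this error computation on the $100\times100$ mesh; the finite-difference gradient used for the $H^1$ norm is our own assumption (the text does not specify how $\nabla f$ is evaluated), and the common quadrature weight $(2a)^2/10^4$ cancels in each ratio.
\begin{verbatim}
import numpy as np

def relative_errors(u_nn, u_ref, a, m=100):
    # u_nn, u_ref: (m, m) values on the evenly spaced mesh over [-a, a]^2.
    h = 2.0 * a / (m - 1)                  # mesh spacing
    diff = u_nn - u_ref
    l1 = np.sum(np.abs(diff)) / np.sum(np.abs(u_ref))
    l2 = np.sqrt(np.sum(diff**2) / np.sum(u_ref**2))
    gdx, gdy = np.gradient(diff, h)        # assumed finite differences
    grx, gry = np.gradient(u_ref, h)
    h1 = np.sqrt(np.sum(diff**2 + gdx**2 + gdy**2)
                 / np.sum(u_ref**2 + grx**2 + gry**2))
    return l1, l2, h1
\end{verbatim}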
\subsection{PINN formulation}
In this section, we consider the case where the QPME \eqref{numerical_baren} is solved following the PINN formulation \eqref{PINN_full} in both the $L^1$ and $L^2$ norms. Specifically, the homogeneous Dirichlet boundary condition is imposed as a hard constraint and the initial condition as a soft constraint following \eqref{PINN_with_condition}. The specific algorithmic settings are presented in Table \ref{tab:PINN_error}, along with the relative errors computed for the trained solution slice $u(0.5,x,y,1.0,\cdots,1.0; \theta_u^*)$ at time $t= 0.5$, compared with the exact solution \eqref{exact}.
From Table \ref{tab:PINN_error}, Figure \ref{fig:PINN_l2} and Figure \ref{fig:PINN_l1}, one can observe that the PINN formulation can indeed provide numerical solutions that closely approximate the exact ones even in high dimensions.
Not only is the neural network able to accurately approximate the function itself, but also its derivative. This is essentially a result of the successful imposition of the PDE. As can be observed from Figures \ref{fig:PDE_l2} and \ref{fig:PDE_l1}, the learned $\partial_t u$ coincides with $\frac{1}{2} \Delta u^2$, which confirms that the PDE has been successfully learned. The initial condition can also be softly enforced through the term $\mathcal{L}_{I}$ as training proceeds (see Figures \ref{fig:init_l2} and \ref{fig:init_l1} for an illustration).
The training loss history of PINN is further presented term by term in Figure \ref{fig:training_history_PINN}. Convergence of the training can be observed from these plots, which further suggests convergence to the exact solution as ensured by \eqref{convergence_L1} and \eqref{convergence_L2}.
In addition, one can observe that learning the solution to QPME does not require the number of trainable parameters of the solution ansatz to scale exponentially with the dimension, which, in contrast to mesh-based solvers, is advantageous for high-dimensional problems.
\begin{table}[htbp]
\hspace*{-1cm}
\centering
\begin{tabular}{|c|l|c|c|c|c|c|c|c|c|c|}
\toprule
\textbf{Dimension}& & \multicolumn{1}{r|}{\textbf{1}} & \multicolumn{1}{r|}{\textbf{2}} & \multicolumn{1}{r|}{\textbf{3}} & \multicolumn{1}{r|}{\textbf{4}} & \multicolumn{1}{r|}{\textbf{5}} & \multicolumn{1}{r|}{\textbf{10}} & \multicolumn{1}{r|}{\textbf{15}} & \multicolumn{1}{r|}{\textbf{20}} & \multicolumn{1}{r|}{\textbf{50}} \\
\midrule
\multirow{3}[6]{3cm}{\textbf{Relative Error(\%)} for $L^2$-\textbf{PINN}} & \boldmath{}\textbf{$L^2$}\unboldmath{} & \multicolumn{1}{r|}{\textbf{0.21}} & \multicolumn{1}{r|}{\textbf{0.65}} & \multicolumn{1}{r|}{\textbf{0.61}} & \multicolumn{1}{r|}{\textbf{0.55}} & \multicolumn{1}{r|}{\textbf{1.1}} & \multicolumn{1}{r|}{\textbf{0.86}} & \multicolumn{1}{r|}{\textbf{1.72}} & \multicolumn{1}{r|}{\textbf{5.50}} & \multicolumn{1}{r|}{\textbf{16.03}} \\
\cmidrule{2-11} & \boldmath{}\textbf{$L^1$}\unboldmath{} & \multicolumn{1}{r|}{\textbf{0.14}} & \multicolumn{1}{r|}{\textbf{0.4}} & \multicolumn{1}{r|}{\textbf{0.39}} & \multicolumn{1}{r|}{\textbf{0.43}} & \multicolumn{1}{r|}{\textbf{0.94}} & \multicolumn{1}{r|}{\textbf{0.71}} & \multicolumn{1}{r|}{\textbf{1.64}} & \multicolumn{1}{r|}{\textbf{5.12}} & \multicolumn{1}{r|}{\textbf{15.26}} \\
\cmidrule{2-11} & \boldmath{}\textbf{$H^1$}\unboldmath{} & \multicolumn{1}{r|}{\textbf{4.28}} & \multicolumn{1}{r|}{\textbf{8.25}} & \multicolumn{1}{r|}{\textbf{7.93}} & \multicolumn{1}{r|}{\textbf{7.45}} & \multicolumn{1}{r|}{\textbf{8.64}} & \multicolumn{1}{r|}{\textbf{8.09}} & \multicolumn{1}{r|}{\textbf{9.88}} & \multicolumn{1}{r|}{\textbf{10.87}} & \multicolumn{1}{r|}{\textbf{28.5}} \\
\midrule
\multirow{3}[6]{3cm}{\textbf{Relative Error(\%) for $L^1$-PINN}} & \boldmath{}\textbf{$L^2$}\unboldmath{} & \multicolumn{1}{r|}{\textbf{0.45}} & \multicolumn{1}{r|}{\textbf{0.78}} & \multicolumn{1}{r|}{\textbf{0.95}} & \multicolumn{1}{r|}{\textbf{1.27}} & \multicolumn{1}{r|}{\textbf{2.07}} & \multicolumn{1}{r|}{\textbf{4.86}} & \multicolumn{1}{r|}{\textbf{10.46}} & \multicolumn{1}{r|}{\textbf{ 9.73}} & \multicolumn{1}{r|}{\textbf{10.76}} \\
\cmidrule{2-11} & \boldmath{}\textbf{$L^1$}\unboldmath{} & \multicolumn{1}{r|}{\textbf{0.23}} & \multicolumn{1}{r|}{\textbf{0.5}} & \multicolumn{1}{r|}{\textbf{0.69}} & \multicolumn{1}{r|}{\textbf{1.1}} & \multicolumn{1}{r|}{\textbf{1.91}} & \multicolumn{1}{r|}{\textbf{4.65}} & \multicolumn{1}{r|}{\textbf{8.91}} & \multicolumn{1}{r|}{\textbf{8.37}} & \multicolumn{1}{r|}{\textbf{10.47}} \\
\cmidrule{2-11} & \boldmath{}\textbf{$H^1$}\unboldmath{} & \multicolumn{1}{r|}{\textbf{5.11}} & \multicolumn{1}{r|}{\textbf{8.98}} & \multicolumn{1}{r|}{\textbf{9.38}} & \multicolumn{1}{r|}{\textbf{9.51}} & \multicolumn{1}{r|}{\textbf{10.48}} & \multicolumn{1}{r|}{\textbf{16.57}} & \multicolumn{1}{r|}{\textbf{16.45}} & \multicolumn{1}{r|}{\textbf{13.13}} & \multicolumn{1}{r|}{\textbf{21.55}} \\
\midrule
\multirow{2}[4]{*}{Formulation Weight} & $\nu$ & \multicolumn{5}{c|}{$10^3$} & \multicolumn{4}{c|}{1} \\
\cmidrule{2-11}
& $\kappa$ & \multicolumn{5}{c|}{1} & \multicolumn{4}{c|}{$10^3$} \\
\midrule
\multirow{2}[4]{*}{NN Architecture} & \# trainable & \multicolumn{1}{r|}{41001} & \multicolumn{1}{r|}{41201} & \multicolumn{1}{r|}{41401} & \multicolumn{1}{r|}{41601} & \multicolumn{1}{r|}{41801} & \multicolumn{1}{r|}{42801} & \multicolumn{1}{r|}{43801} & \multicolumn{1}{r|}{169601} & \multicolumn{1}{r|}{181601} \\
\cmidrule{2-11} & Width/Depth & \multicolumn{7}{c|}{200/2} & \multicolumn{2}{c|}{400/2} \\
\midrule
\multirow{2}[4]{*}{Data Sampling} & $\theta_0$ & \multicolumn{9}{c|}{0.3} \\
\cmidrule{2-11} & $\theta_1$ & \multicolumn{7}{c|}{0.3} & \multicolumn{2}{c|}{0.2} \\
\midrule
\multirow{2}[4]{*}{Training} & Steps & \multicolumn{7}{c|}{$10^5$} & \multicolumn{2}{c|}{$2\times10^5$} \\
\cmidrule{2-11} & Learning Rate & \multicolumn{9}{c|}{$10^{-3}$} \\
\bottomrule
\end{tabular}%
\caption{\textbf{PINN formulation \eqref{PINN}: (hard Dirichlet B.C.+soft I.C.)} Relative error comparison for various dimensions.}
\label{tab:PINN_error}%
\end{table}%
\begin{figure}[htbp]
\centering
\begin{subfigure}[t]{0.3\textwidth}
\centering
\includegraphics[width=1.2\textwidth]{PINN_formulation/Barenblatt/L2/15d/Barenblatt_Solution.png}
\caption{Barenblatt reference solution }
\end{subfigure}
\hfill
\begin{subfigure}[t]{0.3\textwidth}
\centering
\includegraphics[width=1.2\textwidth]{PINN_formulation/Barenblatt/L2/15d/Predicted_Solution.png}
\caption{Learned solution slice}
\end{subfigure}
\hfill
\begin{subfigure}[t]{0.3\textwidth}
\centering
\includegraphics[width=1.2\textwidth]{PINN_formulation/Barenblatt/L2/15d/Prediction_Error.png}
\caption{Learned solution error}
\end{subfigure}
\hfill
\begin{subfigure}[t]{0.3\textwidth}
\centering
\includegraphics[width=1.2\textwidth]{PINN_formulation/Barenblatt/L2/15d/Gradient_of_Barenblatt_Solution.png}
\caption{Barenblatt reference solution gradient}
\end{subfigure}
\hfill
\begin{subfigure}[t]{0.3\textwidth}
\centering
\includegraphics[width=1.2\textwidth]{PINN_formulation/Barenblatt/L2/15d/Gradient_of_Predicted_Solution.png}
\caption{Learned solution gradient}
\end{subfigure}
\hfill
\begin{subfigure}[t]{0.3\textwidth}
\centering
\includegraphics[width=1.2\textwidth]{PINN_formulation/Barenblatt/L2/15d/Prediction_Error_of_Gradient.png}
\caption{Learned solution gradient error}
\end{subfigure}
\caption{\textbf{15D, $L^2-$ PINN formulation \eqref{PINN_full}} Predicted solution slice $u(0.5,x,y,1.0,\cdots, 1.0)$ for $\mathbf{x}\in \Omega = [-7,7]^{15}$, $t= 0.5$. }
\label{fig:PINN_l2}
\end{figure}
\begin{figure}
\centering
\begin{subfigure}[t]{0.47\textwidth}
\centering
\includegraphics[width = 1.2\textwidth]{PINN_formulation/Barenblatt/L2/15d/prediction_u_t_t_0.5.png}
\caption{Learned $u_t$}
\end{subfigure}
\hfill
\begin{subfigure}[t]{0.47\textwidth}
\centering
\includegraphics[width = 1.2\textwidth]{PINN_formulation/Barenblatt/L2/15d/prediction_usq_xx_t_0.5.png}
\caption{Learned $\displaystyle \frac{1}{2}\Delta u^2$}
\end{subfigure}
\caption{\textbf{15D, $L^2-$ PINN formulation \eqref{PINN_full}}, predicted partial derivatives.}
\label{fig:PDE_l2}
\end{figure}
\begin{figure}
\centering
\begin{subfigure}[t]{0.47\textwidth}
\centering
\includegraphics[width = 1.2\textwidth]{PINN_formulation/Barenblatt/L2/15d/Reference_u_0_t_0.5.png}
\caption{Exact $u_0$}
\end{subfigure}
\hfill
\begin{subfigure}[t]{0.47\textwidth}
\centering
\includegraphics[width = 1.2\textwidth]{PINN_formulation/Barenblatt/L2/15d/prediction_u_0_t_0.5.png}
\caption{Learned initial value}
\end{subfigure}
\caption{\textbf{15D, $L^2-$ PINN formulation \eqref{PINN_full}}, predicted initial value $u(0,x,y,1.0,\cdots,1.0)$ for $\mathbf{x} \in
\Omega =[-7,7]^{15}$.}
\label{fig:init_l2}
\end{figure}
\begin{figure}[htbp]
\centering
\begin{subfigure}[t]{0.3\textwidth}
\centering
\includegraphics[width=1.2\textwidth]{PINN_formulation/Barenblatt/L1/15d/Barenblatt_Solution.png}
\caption{Barenblatt reference solution }
\end{subfigure}
\hfill
\begin{subfigure}[t]{0.3\textwidth}
\centering
\includegraphics[width=1.2\textwidth]{PINN_formulation/Barenblatt/L1/15d/Predicted_Solution.png}
\caption{Learned solution slice}
\end{subfigure}
\hfill
\begin{subfigure}[t]{0.3\textwidth}
\centering
\includegraphics[width=1.2\textwidth]{PINN_formulation/Barenblatt/L1/15d/Prediction_Error.png}
\caption{Learned solution error}
\end{subfigure}
\hfill
\begin{subfigure}[t]{0.3\textwidth}
\centering
\includegraphics[width=1.2\textwidth]{PINN_formulation/Barenblatt/L1/15d/Gradient_of_Barenblatt_Solution.png}
\caption{Barenblatt reference solution gradient}
\end{subfigure}
\hfill
\begin{subfigure}[t]{0.3\textwidth}
\centering
\includegraphics[width=1.2\textwidth]{PINN_formulation/Barenblatt/L1/15d/Gradient_of_Predicted_Solution.png}
\caption{Learned solution gradient}
\end{subfigure}
\hfill
\begin{subfigure}[t]{0.3\textwidth}
\centering
\includegraphics[width=1.2\textwidth]{PINN_formulation/Barenblatt/L1/15d/Prediction_Error_of_Gradient.png}
\caption{Learned solution gradient error}
\end{subfigure}
\caption{\textbf{15D, $L^1-$ PINN formulation \eqref{PINN_full}:} Predicted solution slice $u(0.5,x,y,1.0,\cdots, 1.0)$ for $\mathbf{x}\in \Omega = [-7,7]^{15}$, $t= 0.5$. }
\label{fig:PINN_l1}
\end{figure}
\begin{figure}
\centering
\begin{subfigure}[t]{0.47\textwidth}
\centering
\includegraphics[width = 1.2\textwidth]{PINN_formulation/Barenblatt/L1/15d/prediction_u_t_t_0.5.png}
\caption{Learned $u_t$}
\end{subfigure}
\hfill
\begin{subfigure}[t]{0.47\textwidth}
\centering
\includegraphics[width = 1.2\textwidth]{PINN_formulation/Barenblatt/L1/15d/prediction_usq_xx_t_0.5.png}
\caption{Learned $\displaystyle \frac{1}{2}\Delta u^2$}
\end{subfigure}
\caption{\textbf{15D, $L^1-$ PINN formulation \eqref{PINN_full}:} predicted partial derivatives.}
\label{fig:PDE_l1}
\end{figure}
\begin{figure}
\centering
\begin{subfigure}[t]{0.47\textwidth}
\centering
\includegraphics[width = 1.2\textwidth]{PINN_formulation/Barenblatt/L1/15d/Reference_u_0_t_0.5.png}
\caption{Exact $u_0$}
\end{subfigure}
\hfill
\begin{subfigure}[t]{0.47\textwidth}
\centering
\includegraphics[width = 1.2\textwidth]{PINN_formulation/Barenblatt/L1/15d/prediction_u_0_t_0.5.png}
\caption{Learned initial value}
\end{subfigure}
\caption{\textbf{15D, $L^1-$ PINN formulation \eqref{PINN_full}}, predicted initial value $u(0,x,y,1.0,\cdots,1.0)$ for $\mathbf{x} \in
\Omega =[-7,7]^{15}$.}
\label{fig:init_l1}
\end{figure}
\begin{figure}
\centering
\begin{subfigure}[t]{0.45\textwidth}
\centering
\includegraphics[width = 1.2\textwidth]{PINN_formulation/Barenblatt/L2/15d/PINN_15d_l2.png}
\caption{$L^2-$PINN}
\end{subfigure}
\hfill
\begin{subfigure}[t]{0.45\textwidth}
\centering
\includegraphics[width = 1.2\textwidth]{PINN_formulation/Barenblatt/L1/15d/PINN_15d_l1.png}
\caption{$L^1-$PINN}
\end{subfigure}
\caption{\textbf{15D:} Training loss history by term.}
\label{fig:training_history_PINN}
\end{figure}
However, from Table \ref{tab:PINN_error}, one can observe that the generalization errors of the neural-network-based solutions are larger in high-dimensional cases than in low-dimensional ones. This could be a result of using a larger neural network to approximate a more complicated solution in high dimensions. Numerical experiments show that neural networks of width $200$ are no longer sufficient to approximate solutions to QPME in dimensions larger than $15$, so a larger network was adopted. Such a network naturally requires more training steps and data to converge. Since the number of training steps (and hence the amount of data) was not quadrupled as the number of trainable parameters was, this could have contributed to a larger approximation error. In addition, whether quadrupling the training steps and training data would significantly improve the approximation accuracy is also questionable, as the optimization over $\theta_{u}$ is highly nonconvex, which means one has to accept a significant and unavoidable uncertainty about optimization success with SGD or its variants.
\FloatBarrier
\subsection{\texorpdfstring{$\phi$}{} formulation}
In this section, we consider the $\phi$ formulation \eqref{full_phi} for solving the QPME \eqref{numerical_baren} in both the $L^1$ and $L^2$ norms. The homogeneous Dirichlet boundary condition is enforced as a hard constraint following \eqref{phi_condition}. The initial condition can also be enforced softly with the term $\mathcal{L}_{I}$, similar to the PINN formulation. The specific algorithmic settings are presented in Table \ref{tab:phi_error}, along with the relative errors computed for the trained solution slice $u(0.5,x,y,1.0,\cdots,1.0; \theta_u^*)$ at time $t= 0.5$, compared with the exact solution \eqref{exact}.
From Table \ref{tab:phi_error}, Figure \ref{fig:phi_l2} and Figure \ref{fig:phi_l1}, one can observe that the $\phi$ formulation can indeed provide numerical solutions that closely approximate the exact ones up to dimension $20$.
Not only is the neural network able to accurately approximate the function itself, but also its derivatives. The mismatch is mainly concentrated near the region where the solution is not smooth (the free boundary).
The predicted minimizer $\phi$ to \eqref{full_phi} is also presented as in Figure \ref{fig:phi_pred}.
Theoretically speaking, compared to PINN, the $\phi$ formulation is advantageous as it can be applied to a wider range of QPMEs whose solutions are less regular or smooth.
However, for the case being tested, we do encounter more challenges in the training process, especially in the high-dimensional cases, compared with PINN.
One observation is that the generalization error of the testing solution slice grows as the dimension gets higher. This can be attributed to the nature of the exact solution $U_2$, as its nonzero region accounts for only a tiny portion of $\Omega$ ($\ll 1$\textperthousand) when $d$ is large.
That is to say, the zero function is already a fairly good approximation of $U_2$ in both the $L^1(\Omega)$ and the $L^2 (\Omega)$ sense. The training can thus easily be trapped in the local minimum $u_{\phi} =0$, which can be reached by the neural network $\phi = 0$. In addition, the reported generalization error measures the error of a solution slice projected onto a two-dimensional space instead of over all of $\Omega$, to ease computation and visualization, which may not measure the error comprehensively. Moreover, the selected slice is one whose values are dominated by nonzero ones, which can also be an unrepresentative sample of the entire solution when quantifying the relative error.
The reason the PINN formulation seems to suffer less from this effect is probably the efficient sampling. Since the correction term $c(\mathbf{x})$ is \emph{not} applied to any term in the PINN loss functional, a very large weight is effectively placed on the region where the solution is nonzero when evaluating $\mathcal{L}_{\text{PINN}}$, which could have helped the solution ansatz escape from the local minimum. By the nature of the $\phi$ formulation, however, $c(\mathbf{x})$ cannot be omitted; otherwise, the target functional would be changed entirely.
Furthermore, while we are able to identify the desired solutions in many cases, theoretically one cannot guarantee meaningful solutions to QPME from training the $\phi$ formulation. In fact, neither the condition $1-\Delta \phi\geq 0$ nor $u_{\phi}\geq 0$ is enforced in this formulation. These conditions can only be used after training to carry out a solution selection, or as criteria for early truncation of the training.
Artificial choices of other algorithmic ingredients such as batch sizes, learning rates, $\theta_0$ and $\theta_1$ will also inevitably influence the optimization process given limited computational resources.
\begin{table}[htbp]
\centering
\hspace*{-1.8cm}
\begin{tabular}{|c|l|c|c|c|c|c|c|c|c|r|}
\toprule
\textbf{Dimension} & & \multicolumn{1}{r|}{\textbf{1}} & \multicolumn{1}{r|}{\textbf{2}} & \multicolumn{1}{r|}{\textbf{3}} & \multicolumn{1}{r|}{\textbf{4}} & \multicolumn{1}{r|}{\textbf{5}} & \multicolumn{1}{r|}{\textbf{10}} & \multicolumn{1}{r|}{\textbf{15}} & \multicolumn{1}{r|}{\textbf{20}} & \textbf{50} \\
\midrule
\multirow{3}[6]{3.5cm}{\textbf{Relative Errors(\%) for $L^2-\phi$ Formulation}} & \boldmath{}\textbf{$L^2$}\unboldmath{} & \multicolumn{1}{r|}{\textbf{3.58}} & \multicolumn{1}{r|}{\textbf{4.95}} & \multicolumn{1}{r|}{\textbf{4.41}} & \multicolumn{1}{r|}{\textbf{9.77}} & \multicolumn{1}{r|}{\textbf{5.77}} & \multicolumn{1}{r|}{\textbf{3.82}} & \multicolumn{1}{r|}{\textbf{8.29}} & \multicolumn{1}{r|}{\textbf{14.45}} & \textbf{54.26} \\
\cmidrule{2-11} & \boldmath{}\textbf{$L^1$}\unboldmath{} & \multicolumn{1}{r|}{\textbf{3.23}} & \multicolumn{1}{r|}{\textbf{5.87}} & \multicolumn{1}{r|}{\textbf{4.57}} & \multicolumn{1}{r|}{\textbf{9.98}} & \multicolumn{1}{r|}{\textbf{6.22}} & \multicolumn{1}{r|}{\textbf{3.97}} & \multicolumn{1}{r|}{\textbf{8.99}} & \multicolumn{1}{r|}{\textbf{15.56}} & \textbf{77.65} \\
\cmidrule{2-11} & \boldmath{}\textbf{$H^1$}\unboldmath{} & \multicolumn{1}{r|}{\textbf{16.44}} & \multicolumn{1}{r|}{\textbf{18.84}} & \multicolumn{1}{r|}{\textbf{20.18}} & \multicolumn{1}{r|}{\textbf{27.29}} & \multicolumn{1}{r|}{\textbf{16.97}} & \multicolumn{1}{r|}{\textbf{14.98}} & \multicolumn{1}{r|}{\textbf{19.21}} & \multicolumn{1}{r|}{\textbf{24.15}} & \textbf{71} \\
\midrule
\multirow{3}[6]{3.5cm}{\textbf{Relative Errors(\%) for $L^1-\phi$ Formulation}} & \boldmath{}\textbf{$L^2$}\unboldmath{} & \multicolumn{1}{r|}{\textbf{2.32}} & \multicolumn{1}{r|}{\textbf{5.01}} & \multicolumn{1}{r|}{\textbf{5.06}} & \multicolumn{1}{r|}{\textbf{8.8}} & \multicolumn{1}{r|}{\textbf{4.87}} & \multicolumn{1}{r|}{\textbf{3.62}} & \multicolumn{1}{r|}{\textbf{9.25}} & \multicolumn{1}{r|}{\textbf{26.67}} & \textbf{52.24} \\
\cmidrule{2-11} & \boldmath{}\textbf{$L^1$}\unboldmath{} & \multicolumn{1}{r|}{\textbf{2.11}} & \multicolumn{1}{r|}{\textbf{6.16}} & \multicolumn{1}{r|}{\textbf{5.84}} & \multicolumn{1}{r|}{\textbf{9.21}} & \multicolumn{1}{r|}{\textbf{4.84}} & \multicolumn{1}{r|}{\textbf{3.52}} & \multicolumn{1}{r|}{\textbf{9.49}} & \multicolumn{1}{r|}{\textbf{28.05}} & \textbf{85.13} \\
\cmidrule{2-11} & \boldmath{}\textbf{$H^1$}\unboldmath{} & \multicolumn{1}{r|}{\textbf{14.52}} & \multicolumn{1}{r|}{\textbf{25.25}} & \multicolumn{1}{r|}{\textbf{18.23}} & \multicolumn{1}{r|}{\textbf{25.73}} & \multicolumn{1}{r|}{\textbf{16.82}} & \multicolumn{1}{r|}{\textbf{13.73}} & \multicolumn{1}{r|}{\textbf{20.3}} & \multicolumn{1}{r|}{\textbf{44.74}} & \textbf{67.49} \\
\midrule
\multirow{2}[4]{*}{Formulation Weights} & $\nu$ & \multicolumn{4}{c|}{$10^3$} & \multicolumn{5}{c|}{1} \\
\cmidrule{2-11} & $\kappa$ & \multicolumn{4}{c|}{1} & \multicolumn{2}{c|}{$10^3$} &$10^4$ & \multicolumn{1}{r|}{$10^5$}& $10^3$ \\
\midrule
\multirow{2}[4]{*}{NN Architecture} & Width/Depth & \multicolumn{7}{c|}{200/2} & \multicolumn{2}{c|}{400/2} \\
\cmidrule{2-11} & \# trainable & \multicolumn{1}{r|}{41001} & \multicolumn{1}{r|}{41201} & \multicolumn{1}{r|}{41401} & \multicolumn{1}{r|}{41601} & \multicolumn{1}{r|}{41801} & \multicolumn{1}{r|}{42801} & \multicolumn{1}{r|}{43801} & \multicolumn{1}{r|}{169601} & 181601 \\
\midrule
\multirow{2}[4]{*}{Data Sampling} & $\theta_0$ & \multicolumn{7}{c|}{0.3} & \multicolumn{1}{r|}{0.2} & 0.4 \\
\cmidrule{2-11} & $\theta_1$ & \multicolumn{8}{c|}{0.3} & 0.4 \\
\midrule
\multirow{2}[4]{*}{Training} & Steps & \multicolumn{7}{c|}{$10^5$} &
\multicolumn{1}{r|}{$6\times 10^{5}$} & $2\times{10^{5}}$ \\
\cmidrule{2-11} & Learning Rate & \multicolumn{8}{c|}{$10^{-3}$} & $10^{-4}$\\
\bottomrule
\end{tabular}%
\caption{\textbf{$\phi$ formulation \eqref{phiformulation} (hard Dirichlet B.C.+soft I.C.): } Relative error comparison for various dimensions.}
\label{tab:phi_error}%
\end{table}%
\begin{figure}[htbp]
\centering
\begin{subfigure}[t]{0.3\textwidth}
\centering
\includegraphics[width=1.2\textwidth]{Phi_formulation/L2/Barenblatt_Solution.png}
\caption{Barenblatt reference solution }
\end{subfigure}
\hfill
\begin{subfigure}[t]{0.3\textwidth}
\centering
\includegraphics[width=1.2\textwidth]{Phi_formulation/L2/Predicted_Solution.png}
\caption{Learned solution slice}
\end{subfigure}
\hfill
\begin{subfigure}[t]{0.3\textwidth}
\centering
\includegraphics[width=1.2\textwidth]{Phi_formulation/L2/Prediction_Error.png}
\caption{Learned solution error}
\end{subfigure}
\hfill
\begin{subfigure}[t]{0.3\textwidth}
\centering
\includegraphics[width=1.2\textwidth]{Phi_formulation/L2/Gradient_of_Barenblatt_Solution.png}
\caption{Barenblatt reference solution gradient}
\end{subfigure}
\hfill
\begin{subfigure}[t]{0.3\textwidth}
\centering
\includegraphics[width=1.2\textwidth]{Phi_formulation/L2/Gradient_of_Predicted_Solution.png}
\caption{Learned solution gradient}
\end{subfigure}
\hfill
\begin{subfigure}[t]{0.3\textwidth}
\centering
\includegraphics[width=1.2\textwidth]{Phi_formulation/L2/Prediction_Error_of_Gradient.png}
\caption{Learned solution gradient error}
\end{subfigure}
\caption{\textbf{15D, $L^2- \phi$ formulation \eqref{phiformulation}:} Predicted solution slice $u(0.5,x,y,1.0,\cdots, 1.0)$ for $\mathbf{x}\in \Omega = [-7,7]^{15}$, $t= 0.5$. }
\label{fig:phi_l2}
\end{figure}
\pagebreak
\begin{figure}[htbp]
\centering
\begin{subfigure}[t]{0.3\textwidth}
\centering
\includegraphics[width=1.2\textwidth]{Phi_formulation/L1/Barenblatt_Solution.png}
\caption{Barenblatt reference solution}
\end{subfigure}
\hfill
\begin{subfigure}[t]{0.3\textwidth}
\centering
\includegraphics[width=1.2\textwidth]{Phi_formulation/L1/Predicted_Solution.png}
\caption{Learned solution slice}
\end{subfigure}
\hfill
\begin{subfigure}[t]{0.3\textwidth}
\centering
\includegraphics[width=1.2\textwidth]{Phi_formulation/L1/Prediction_Error.png}
\caption{Learned solution error}
\end{subfigure}
\hfill
\begin{subfigure}[t]{0.3\textwidth}
\centering
\includegraphics[width=1.2\textwidth]{Phi_formulation/L1/Gradient_of_Barenblatt_Solution.png}
\caption{Barenblatt reference solution gradient}
\end{subfigure}
\hfill
\begin{subfigure}[t]{0.3\textwidth}
\centering
\includegraphics[width=1.2\textwidth]{Phi_formulation/L1/Gradient_of_Predicted_Solution.png}
\caption{Learned solution gradient}
\end{subfigure}
\hfill
\begin{subfigure}[t]{0.3\textwidth}
\centering
\includegraphics[width=1.2\textwidth]{Phi_formulation/L1/Prediction_Error_of_Gradient.png}
\caption{Learned solution gradient error}
\end{subfigure}
\caption{\textbf{15D, $L^1-\phi$ formulation \eqref{phiformulation}:} Predicted solution slice $u_{\phi}(0.5,x,y,1.0,\cdots, 1.0)$ for $\mathbf{x}\in \Omega = [-7,7]^{15}$, $t= 0.5$. }
\label{fig:phi_l1}
\end{figure}
\begin{figure}
\centering
\begin{subfigure}[t]{0.45\textwidth}
\includegraphics[width = 1.2\textwidth]{Phi_formulation/L2/prediction_phi_t_0.5.png}
\caption{Learned through the $L^2$-$\phi$ formulation}
\end{subfigure}
\hfill
\begin{subfigure}[t]{0.45\textwidth}
\includegraphics[width = 1.2\textwidth]{Phi_formulation/L1/prediction_phi_t_0.5.png}
\caption{Learned through the $L^1$-$\phi$ formulation}
\end{subfigure}
\caption{\textbf{15D: }Predicted $\phi(0.5, x,y,1.0,\cdots,1.0; \theta_{\phi}^*)$.}
\label{fig:phi_pred}
\end{figure}
We further observe that the optimization of $\mathcal{L}_{\phi}(u_{\phi}(t,\mathbf{x};\theta_{\phi}))$ indeed converges to $-\int_{Q}U_2^2$ as training proceeds, with $u_{\phi}$ being the parametrized solution ansatz as stated in \eqref{u_phi}.
This observation in fact confirms the theoretical result \eqref{form_equivalency} derived in \cite{brenier2020examples}.
In Figure \ref{fig:phi_usq_comp}, we use the batch of training data at each training step to empirically evaluate the value of $-\int_Q U_2^2$ for the exact solution $U_2(t,\mathbf{x})$ defined in \eqref{exact}, and compare it with the empirical loss $\mathcal{L}_{\phi}(u_{\phi})$ based on the neural network solution $u_{\phi}$ at that step. As one can observe, the difference between the two values gradually shrinks as the training continues, which verifies the training effectiveness of this formulation.
\begin{figure}[htbp]
\centering
\begin{subfigure}[t]{0.45\textwidth}
\includegraphics[width =1.2\textwidth]{Phi_formulation/5d_usq_comp.png}
\caption{\textbf{5D}}
\end{subfigure}
\hfill
\begin{subfigure}[t]{0.45\textwidth}
\includegraphics[width =1.2\textwidth]{Phi_formulation/10d_usq_comp.png}
\caption{\textbf{10D}}
\end{subfigure}
\caption{Empirical $\int_Q U_2^2(t,\mathbf{x}) +\mathcal{L}_{\phi}\big(u_{\phi}(t,\mathbf{x};\theta_{\phi} )\big)$ as training proceeds. }
\label{fig:phi_usq_comp}
\end{figure}
\FloatBarrier
\subsection{\texorpdfstring{$q$}{}-\texorpdfstring{$\sigma$}{} formulation}
Since the $q$-$\sigma$ formulation \eqref{qsigma_full} is developed based on the $\phi$ formulation, its training also suffers from the challenges met in training the $\phi$ formulation; i.e., the training can easily be trapped in a local minimum, $u_{q,\sigma} = 0$, in high-dimensional cases. Additionally, the partial derivatives of $\phi$ are separated into two independent functions $q$ and $\sigma$, whose correlation is only enforced softly through the loss term $\mathcal{L}_{\partial_t \sigma, \Delta q}$, which poses more challenges to the optimization of the target functional. An additional hyper-parameter $\gamma$ is also introduced to adjust the weight of $\mathcal{L}_{\partial_t \sigma, \Delta q}$, whose optimal choice is again obscure. For such reasons, only results for dimensions $1$ to $10$ are reported, as no reasonable results for higher dimensions were obtained within the scope of the experiments carried out.
Specifically, the homogeneous Dirichlet boundary condition is imposed as a hard constraint following \eqref{q_with_conditions} and the condition for $\sigma$ is imposed with \eqref{sigma_condition}.
The condition \eqref{sigma_selection} was not strongly imposed, for training reasons. The initial condition is then softly enforced with the term $\mathcal{L}_{I}$ as mentioned earlier. The specific algorithmic settings are presented in Table \ref{tab:qsigma_error}, along with the relative errors computed for the trained solution slice $u_{q,\sigma}(0.5,x,y,1.0,\cdots,1.0; \theta_{q}, \theta_{\sigma})$ at time $t= 0.5$, compared with the exact solution \eqref{exact}. The comparison of predicted solutions with the exact solution is presented in Figures \ref{fig:qsigma_l2} and \ref{fig:qsigma_l1}. In addition, the predicted functions $q$ and $\sigma$ are depicted in Figures \ref{fig:qsigma_PDE_l2} and \ref{fig:qsigma_PDE_l1}. These figures further show the predicted $-\Delta q$ and $\partial_t \sigma$ to verify that the condition
$$\Delta q + \partial_t \sigma = 0$$
is satisfied. Finally, Figure \ref{fig:qsigma_usq_comp} demonstrates that the computed value of $\mathcal{L}_{q,\sigma}$ converges to $-\int_{Q}U_2^2$ as training proceeds.
This observation once again confirms the theoretical result \eqref{form_equivalency} derived in \cite{brenier2020examples}.
Here, the batch of training data at each training step is used to empirically evaluate the value of $-\int_Q U_2^2$ for the exact solution $U_2(t,\mathbf{x})$ defined in \eqref{exact}. This value is then compared with the empirical loss $\mathcal{L}_{q,\sigma}(u_{q,\sigma})$ at that step. The difference between these values gradually shrinks as the training continues (Figure \ref{fig:qsigma_usq_comp}).
\begin{table}[htbp]
\centering
\begin{tabular}{|c|l|c|c|c|c|c|c|}
\toprule
\textbf{Dimension} & & \multicolumn{1}{r|}{\textbf{1}} & \multicolumn{1}{r|}{\textbf{2}} & \multicolumn{1}{r|}{\textbf{3}} & \multicolumn{1}{r|}{\textbf{4}} & \multicolumn{1}{r|}{\textbf{5}} & \multicolumn{1}{r|}{\textbf{10}} \\
\midrule
\multirow{2}[4]{4cm}{\textbf{Relative Errors(\%) for $L^2-q-\sigma$ Formulation}} & \boldmath{}\textbf{$L^2$}\unboldmath{} & \multicolumn{1}{r|}{\textbf{1.95}} & \multicolumn{1}{r|}{\textbf{3.2}} & \multicolumn{1}{r|}{\textbf{3.88}} & \multicolumn{1}{r|}{\textbf{3.97}} & \multicolumn{1}{r|}{\textbf{4.77}} & \multicolumn{1}{r|}{\textbf{4.03}} \\
\cmidrule{2-8} & \boldmath{}\textbf{$L^1$}\unboldmath{} & \multicolumn{1}{r|}{\textbf{1.64}} & \multicolumn{1}{r|}{\textbf{3.11}} & \multicolumn{1}{r|}{\textbf{3.5}} & \multicolumn{1}{r|}{\textbf{4.02}} & \multicolumn{1}{r|}{\textbf{5.11}} & \multicolumn{1}{r|}{\textbf{4.14}} \\
\midrule
\multirow{2}[4]{4cm}{\textbf{Relative Errors(\%) for $L^1-q-\sigma$ Formulation}} & \boldmath{}\textbf{$L^2$}\unboldmath{} & \multicolumn{1}{r|}{\textbf{2.06}} & \multicolumn{1}{r|}{\textbf{2.96}} & \multicolumn{1}{r|}{\textbf{3.83}} & \multicolumn{1}{r|}{\textbf{3.94}} & \multicolumn{1}{r|}{\textbf{5.63}} & \multicolumn{1}{r|}{\textbf{4.28}} \\
\cmidrule{2-8} & \boldmath{}\textbf{$L^1$}\unboldmath{} & \multicolumn{1}{r|}{\textbf{1.72}} & \multicolumn{1}{r|}{\textbf{2.79}} & \multicolumn{1}{r|}{\textbf{3.28}} & \multicolumn{1}{r|}{\textbf{3.72}} & \multicolumn{1}{r|}{\textbf{6.41}} & \multicolumn{1}{r|}{\textbf{4.59}} \\
\midrule
\multirow{3}[6]{*}{Formulation Weights } & $\nu$ & \multicolumn{6}{c|}{1} \\
\cmidrule{2-8} & $\kappa$ & \multicolumn{6}{c|}{$10^3$} \\
\cmidrule{2-8} & $\gamma$ & \multicolumn{5}{c|}{$10^3$} & \multicolumn{1}{r|}{1} \\
\midrule
\multirow{2}[4]{*}{NN Architecture} & Width/Depth & \multicolumn{6}{c|}{200/2} \\
\cmidrule{2-8} & \# trainable & \multicolumn{1}{r|}{82002} & \multicolumn{1}{r|}{82402} & \multicolumn{1}{r|}{82802} & \multicolumn{1}{r|}{83202} & \multicolumn{1}{r|}{83602} & \multicolumn{1}{r|}{85602} \\
\midrule
\multirow{2}[4]{*}{Data Sampling} & $\theta_0$ & \multicolumn{6}{c|}{0.3} \\
\cmidrule{2-8} & $\theta_1$ & \multicolumn{6}{c|}{0.3} \\
\midrule
\multirow{2}[4]{*}{Training} & Steps & \multicolumn{6}{c|}{$10^5$} \\
\cmidrule{2-8} & Learning Rate & \multicolumn{6}{c|}{$10^{-3}$} \\
\bottomrule
\end{tabular}%
\caption{\textbf{$q-\sigma$ formulation \eqref{qsigma_form} (hard Dirichlet B.C.+soft I.C.): } Relative error comparison for various dimensions.}
\label{tab:qsigma_error}%
\end{table}%
\begin{figure}[htbp]
\centering
\begin{subfigure}[t]{0.3\textwidth}
\centering
\includegraphics[width=1.2\textwidth]{qsigma/L2/reference_u_t_0.5.png}
\caption{Barenblatt reference solution }
\end{subfigure}
\hfill
\begin{subfigure}[t]{0.3\textwidth}
\centering
\includegraphics[width=1.2\textwidth]{qsigma/L2/prediction_u_t_0.5.png}
\caption{Learned solution slice}
\end{subfigure}
\hfill
\begin{subfigure}[t]{0.3\textwidth}
\centering
\includegraphics[width=1.2\textwidth]{qsigma/L2/u_prediction_error_t_0.5.png}
\caption{Learned solution error}
\end{subfigure}
\caption{\textbf{10D, $L^2-q-\sigma$ formulation \eqref{qsigma_full}:} Predicted solution slice $u(0.5,x,y,1.0,\cdots, 1.0)$ for $\mathbf{x}\in \Omega = [-6,6]^{10}$, $t= 0.5$. }
\label{fig:qsigma_l2}
\end{figure}
\begin{figure}
\centering
\begin{subfigure}[t]{0.47\textwidth}
\includegraphics[width = 1.2\textwidth]{qsigma/L2/prediction_q_t_0.5.png}
\caption{Learned $q$}
\end{subfigure}
\hfill
\begin{subfigure}[t]{0.47\textwidth}
\centering
\includegraphics[width = 1.2\textwidth]{qsigma/L2/prediction_sigma_t_0.5.png}
\caption{Learned $\displaystyle \sigma$}
\end{subfigure}
\begin{subfigure}[t]{0.47\textwidth}
\centering
\includegraphics[width = 1.2\textwidth]{qsigma/L2/prediction_q_xx_t_0.5.png}
\caption{Learned $-\Delta q$}
\end{subfigure}
\hfill
\begin{subfigure}[t]{0.47\textwidth}
\centering
\includegraphics[width = 1.2\textwidth]{qsigma/L2/prediction_sigma_t_t_0.5.png}
\caption{Learned $\displaystyle \partial_t \sigma$}
\end{subfigure}
\caption{\textbf{10D, $L^2-q-\sigma$ formulation \eqref{qsigma_full}:} predicted $q$, $\sigma$ and their partial derivatives.}
\label{fig:qsigma_PDE_l2}
\end{figure}
\begin{figure}[htbp]
\centering
\begin{subfigure}[t]{0.3\textwidth}
\centering
\includegraphics[width=1.2\textwidth]{qsigma/L1/reference_u_t_0.5.png}
\caption{Barenblatt reference solution }
\end{subfigure}
\hfill
\begin{subfigure}[t]{0.3\textwidth}
\centering
\includegraphics[width=1.2\textwidth]{qsigma/L1/prediction_u_t_0.5.png}
\caption{Learned solution slice}
\end{subfigure}
\hfill
\begin{subfigure}[t]{0.3\textwidth}
\centering
\includegraphics[width=1.2\textwidth]{qsigma/L1/u_prediction_error_t_0.5.png}
\caption{Learned solution error}
\end{subfigure}
\caption{\textbf{10D, $L^1-q-\sigma$ formulation \eqref{qsigma_full}:} Predicted solution slice $u(0.5,x,y,1.0,\cdots, 1.0)$ for $\mathbf{x}\in \Omega = [-6,6]^{10}$, $t= 0.5$. }
\label{fig:qsigma_l1}
\end{figure}
\begin{figure}
\centering
\begin{subfigure}[t]{0.47\textwidth}
\centering
\includegraphics[width = 1.2\textwidth]{qsigma/L1/prediction_q_t_0.5.png}
\caption{Learned $q$}
\end{subfigure}
\hfill
\begin{subfigure}[t]{0.47\textwidth}
\centering
\includegraphics[width = 1.2\textwidth]{qsigma/L1/prediction_sigma_t_0.5.png}
\caption{Learned $\displaystyle \sigma$}
\end{subfigure}
\begin{subfigure}[t]{0.47\textwidth}
\centering
\includegraphics[width = 1.2\textwidth]{qsigma/L1/prediction_q_xx_t_0.5.png}
\caption{Learned $-\Delta q$}
\end{subfigure}
\hfill
\begin{subfigure}[t]{0.47\textwidth}
\centering
\includegraphics[width = 1.2\textwidth]{qsigma/L1/prediction_sigma_t_t_0.5.png}
\caption{Learned $\displaystyle \partial_t \sigma$}
\end{subfigure}
\caption{\textbf{10D, $L^1-q-\sigma$ formulation \eqref{qsigma_full}:} predicted $q,\sigma$ and their partial derivatives.}
\label{fig:qsigma_PDE_l1}
\end{figure}
\begin{figure}
\centering
\begin{subfigure}[t]{0.45\textwidth}
\includegraphics[width =1.2\textwidth]{qsigma/5d_usq_comp.png}
\caption{\textbf{5D}}
\end{subfigure}
\hfill
\begin{subfigure}[t]{0.45\textwidth}
\includegraphics[width =1.2\textwidth]{qsigma/10d_usq_comp.png}
\caption{\textbf{10D}}
\end{subfigure}
\caption{Empirical $\int_Q U_2^2(t,\mathbf{x}) +\mathcal{L}_{q,\sigma}\big(u_{q,\sigma}(t,\mathbf{x};\theta_{q},\theta_{\sigma} )\big)$ vs. training steps. }
\label{fig:qsigma_usq_comp}
\end{figure}
\section{Variational formulations of QPME}\label{sec:formulation}
Since mesh-based algorithms suffer from the curse of dimensionality, we turn to neural-network-based techniques for solving high-dimensional PDEs.
In particular, we first convert the initial/boundary value problem (IBVP) of QPME into a variational formulation and then take a neural network as the ansatz for the solution. The objective function is then taken as the loss function, and the extrema are obtained by optimizing the loss function with stochastic gradient descent (SGD) or its variants.
In this section, we specifically focus on the first step of this procedure, i.e., the IBVP and the variational reformulation.
\subsection{Initial / boundary value problem}
Consider the QPME on a hyperrectangle
$$\partial_t u= \frac{1}{2} \Delta u^2, \quad (t,\mathbf{x} )\in Q$$
where $Q =[0,T]\times \Omega$ and $\Omega = \prod_{i=1}^{d}[-a_i,a_i]$. We consider the QPME with
the homogeneous Dirichlet boundary condition
\begin{equation}\label{BC}
\textbf{Dirichlet B.C.}\quad u(t,\mathbf{x})|_{\Sigma_T} = 0
\end{equation}
where $\Sigma_T: = [0,T] \times\partial \Omega$. We also impose the initial condition to the PDE as
\begin{equation}\label{IC}
\textbf{I.C.}\quad u(0,\mathbf{x}) = u_0(\mathbf{x})\quad \mathbf{x}\in \Omega.
\end{equation}
\subsection{Strong formulation}\label{sec:PINN}
One immediate optimization formulation is to use the strong form of the PDE by minimizing the squared PDE residual
\begin{equation}\label{PINN}
\mathcal{L}_{\text{PDE}} (u) = \int_Q \left( \partial _t u -\frac{1}{2} \Delta u^2\right)^2.
\end{equation}
If both the I.C.{} and B.C.{} are strictly enforced as hard constraints, the optimization problem can then be formulated as
\begin{equation}\label{PINNN_strong}
\min_{u\in V_0} \mathcal{L}_{\text{PDE}} (u)
\end{equation}
where $V_0: = \{ f : f|_{\Sigma_T} = 0 ,\ f(0,\mathbf{x}) = u_0(\mathbf{x})\}$.
Alternatively, both the I.C.{} and B.C.{} can be treated as soft constraints enforced by penalization: we may define
\begin{equation}\label{bc_weak}
\mathcal{L}_{B}(u) = \int_{\Sigma_T} u ^2
\end{equation}
for homogeneous Dirichlet boundary condition, and
\begin{equation}\label{weak_initial}
\mathcal{L}_{I}(u) := \int_{\Omega} \left(u \left(0,\mathbf{x}\right) - u_{0}\left(\mathbf{x}\right)\right)^2
\end{equation}
for the initial condition.
The optimization problem \eqref{PINNN_strong} can then be relaxed to
\begin{equation}\label{PINN_full}
\min_{u\in V} \mathcal{L}_{\text{PINN}}(u)
\end{equation}
for some function space $V$, where
\begin{equation} \mathcal{L}_{\text{PINN}}(u): = \kappa \mathcal{L}_{\text{PDE}}(u) + \mu\mathcal{L}_{B}(u) +
\nu\mathcal{L}_{I}(u)
\end{equation}
is a weighted sum of the PDE residual, the error in the boundary condition, and the error in the initial condition, with weights $\kappa, \mu,\nu$ for each term.
We use the subscript PINN for the loss function, as this formulation was popularized by the PINN method \cite{raissi2019physics} in recent years, while the idea dates back to the early days of using neural network ansatz for PDE solutions \cite{lagaris1998artificial}.
So far, the PDE residual and the mismatches in the initial and boundary conditions of $u$ are all measured in the $L^2$ sense. We can also define an analogous $L^1$ optimization problem \eqref{PINN_full} with:
\begin{equation}\label{PINN_L1}
\begin{split}
&\mathcal{L}_{\text{PDE}} (u) = \int_Q \left|\partial _t u -\frac{1}{2} \Delta u^2\right|,
\\
&
\mathcal{L}_{B}(u) = \int_{\Sigma_T} |u|,\\
&\mathcal{L}_{I}(u;u_0) := \int_{\Omega} |u \left(0,\mathbf{x}\right) - u_{0}\left(\mathbf{x}\right)|.
\end{split}
\end{equation}
We refer to the target function in $L^1$ as $\mathcal{L}_{\text{PINN}-L^1}$ and to that in $L^2$ as $\mathcal{L}_{\text{PINN}-L^2}$.
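As an illustration, a Monte Carlo estimator of the PDE residual term in either norm can be written in a few lines of TensorFlow; the name \texttt{u\_net} and the exponent argument \texttt{p} are illustrative, and the initial/boundary terms are estimated analogously from samples on $\{0\}\times\Omega$ and $\Sigma_T$.
\begin{verbatim}
import tensorflow as tf

def pde_residual_loss(u_net, t, x, p=2):
    # Monte Carlo estimate of L_PDE with the residual measured in L^p.
    with tf.GradientTape(persistent=True) as outer:
        outer.watch(x)
        with tf.GradientTape(persistent=True) as inner:
            inner.watch([t, x])
            u = u_net(tf.concat([t, x], axis=1))   # (n,1)
            half_usq = 0.5 * u**2
        u_t = inner.gradient(u, t)                 # d_t u
        flux = inner.gradient(half_usq, x)         # grad_x (u^2 / 2)
    # Delta(u^2)/2 = trace of the batched Hessian of u^2/2.
    lap = tf.linalg.trace(outer.batch_jacobian(flux, x))[:, None]
    del inner, outer
    return tf.reduce_mean(tf.abs(u_t - lap)**p)
\end{verbatim}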
While using $L^2$ to measure the PDE residual and I.C./B.C. mismatch is a standard practice in PINN, the use of $L^1$ is inspired by the following stability analysis.
\section{Solving high dimensional QPME with neural network ansatz}\label{sec:nn}
Neural networks are a class of functions with a certain layered structure; for example, the feed-forward fully connected neural network is defined to be
\begin{equation}\label{FFNN}
\mathcal{NN}(\mathbf{x}; \theta) := W_n g(\cdots g(W_2 g(W_1\mathbf{x}+ b_1 )+b_2)\cdots) +b_n.
\end{equation}
In this case, each layer of the network is a composition of a linear transformation and a nonlinear function $g$ acting component-wise. Here, $\theta := [W_1, W_2,\cdots, W_n, b_1, b_2,\cdots, b_n]$ are the trainable parameters.
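As a concrete illustration, a minimal PyTorch sketch of the feed-forward network \eqref{FFNN} with a smooth activation might look as follows; the width and depth below are placeholder choices, not values used in our experiments.
\begin{verbatim}
import torch
import torch.nn as nn

class FFNN(nn.Module):
    """Feed-forward net of Eq. (FFNN): affine maps with tanh in between."""
    def __init__(self, dim_in, width=64, depth=4, dim_out=1):
        super().__init__()
        layers, d = [], dim_in   # dim_in = 1 + d for inputs (t, x)
        for _ in range(depth - 1):
            layers += [nn.Linear(d, width), nn.Tanh()]
            d = width
        layers.append(nn.Linear(d, dim_out))  # last layer W_n(.) + b_n
        self.net = nn.Sequential(*layers)

    def forward(self, t, x):
        # the input is the concatenated (t, x) in R^{1+d}
        return self.net(torch.cat([t, x], dim=-1))
\end{verbatim}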
The idea of neural-network-based numerical solvers for PDEs is to utilize such a neural network $\mathcal{NN}$ to approximate the function of interest, say $u$. This is usually achieved by solving an optimization problem
\begin{equation}
u = \argmin_f \mathcal{C}(f),
\end{equation}
where $\mathcal{C}$ is some suitable objective function. Then one could take a neural network as an ansatz and minimize $\mathcal{C}$ by tuning its parameters $\theta$ to get an approximate solution $\mathcal{NN}\left(\cdot; {\theta^*}\right)$ where
\begin{equation}
\theta^* = \argmin_\theta \mathcal{C}\left(\mathcal{NN}\left(\cdot; {\theta}\right)\right).
\end{equation}
The process of optimization is also referred to as ``training,'' using the terminology from machine learning. The objective function $\mathcal{C}$ is often referred to as the loss function.
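Schematically, the training stage is a standard stochastic-gradient loop; in the sketch below, \texttt{loss\_fn} is a placeholder for a Monte Carlo estimator of any objective $\mathcal{C}$ derived in this paper, and Adam is one common SGD variant.
\begin{verbatim}
import torch

def train(model, loss_fn, n_steps=10000, lr=1e-3):
    # minimize theta -> C(NN(.; theta)) over the trainable parameters
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(n_steps):
        opt.zero_grad()
        loss = loss_fn(model)  # stochastic estimate of the objective
        loss.backward()        # gradients w.r.t. theta by backpropagation
        opt.step()
    return model
\end{verbatim}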
In Section \ref{sec:formulation}, we have derived a few loss functions which can be used to solve the QPME. In this section, we provide further details on how to solve the aforementioned optimization problems with neural networks, especially on how initial and boundary conditions can be imposed on the neural network solution ansatz. In particular, the following conditions are generally imposed on a solution ansatz for the QPME:
1) the initial condition, 2) the boundary condition, 3) the physical constraint, i.e., $u\geq 0$. In addition, one could consider imposing conditions like $1 - \Delta \phi \geq (\frac{t}{T})^{\frac{d}{d+2}}$ to narrow down the search space, as we know by Theorem \ref{thm} that it is satisfied by the true solution.
We slightly modify existing neural network structures as needed to satisfy the constraints.
In this paper, we take the architecture of the neural networks to be feed-forward fully-connected as defined in \eqref{FFNN}, while other architectures could also be considered.
\subsection{PINN formulation}
To solve the QPME with the PINN formulation \eqref{PINN}, we first notice that the minimizer of \eqref{PINN_full} is exactly a solution to the QPME. The solution itself can then be directly parametrized with a neural network. In particular, to further impose the aforementioned conditions on the solution ansatz, we start with a neural network $\mathcal{NN}_{u}(t,\mathbf{x};\theta_u)$ with both time $t$ and spatial coordinates $\mathbf{x}$ as its inputs and denote the collection of trainable parameters as $\theta_u$. Moreover, since we need to compute the PDE residual, which
includes second-order derivatives of the solution ansatz, $\mathcal{NN}_{u}(t,\mathbf{x};\theta_u)$ must be at least twice differentiable. We thus require the activation function $g$ to be smooth, such as the $\tanh$ or $\text{softplus}$ function.
To impose the \textbf{initial condition \eqref{IC} as a hard constraint}, we can parametrize the solution $u(t,\mathbf{x})$ as:
$$u(t,\mathbf{x};\theta_u) = u_0(\mathbf{x}) +t \mathcal{NN}_u(t,\mathbf{x};\theta_u).$$
However, in this case, the physical constraint ($u \geq 0$) cannot easily be imposed explicitly. The positivity of the solution can only be reached through minimizing PDE residual.
In the case where the \textbf{initial condition is imposed softly}, the term $\mathcal{L}_I$ defined as in \eqref{weak_initial} will be added as a part of the loss $\mathcal{L}_{\text{PINN}}$ and minimized through training. Meanwhile, the physical constraint of solution can be imposed by parametrization:
$$ u(t,\mathbf{x}; \theta_u) = \text{softplus}\left( \mathcal{NN}_u \left(t,\mathbf{x};\theta_u\right)\right)$$
where the softplus function is given by
$$\text{softplus}(x) = \ln(1+e^x)$$
which guarantees the solution ansatz to be positive.
As for the boundary condition,
the \textbf{homogeneous Dirichlet boundary condition \eqref{BC}} can be imposed as a hard constraint. We take advantage of the function
\begin{equation}\label{f_dc}
f_{dc}(\mathbf{x}) := \prod_{i=1}^{d} \frac{(a_i -x_i)(a_i+x_i)}{a_i^2}
\end{equation}
so that $f_{dc}(\mathbf{x}) = 0$ for any $\mathbf{x}\in\partial \Omega$. Moreover, we notice that $f_{dc}(\mathbf{0}) = 1 $ and $0\leq f_{dc}(\mathbf{x})\leq 1$ for all $\mathbf{x}\in \Omega$.
The solution ansatz $u(t,\mathbf{x})$ can then be further modified by multiplying $f_{dc}$ to satisfy the boundary condition:
\begin{equation}
\begin{split}
\textbf{hard I.C. + hard B.C.}\quad u(t,\mathbf{x};\theta_u) = u_0(\mathbf{x})+t f_{dc}(\mathbf{x})\ \mathcal{NN}_u \left(t,\mathbf{x};\theta_u\right)\\
\end{split}
\end{equation}
\begin{equation}\label{PINN_with_condition}
\textbf{soft I.C. + hard B.C.}\quad u(t,\mathbf{x};\theta_u) = f_{dc}(\mathbf{x})\ \text{softplus} \left(\mathcal{NN}_u \left(t,\mathbf{x};\theta_u\right)\right)
\end{equation}
assuming the homogeneous Dirichlet B.C. is satisfied by $u_0$.
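In implementation terms, these hard constraints amount to wrapping the raw network output; a minimal sketch (where \texttt{u0} is the initial datum and \texttt{a\_i} the vector of box half-widths, both assumed given) could read:
\begin{verbatim}
import torch
import torch.nn.functional as F

def f_dc(x, a_i):
    # Eq. (f_dc): zero on the box boundary, equal to 1 at the origin
    return torch.prod((a_i - x) * (a_i + x) / a_i**2, dim=-1, keepdim=True)

def u_hard_ic_bc(net, u0, a_i, t, x):
    # hard I.C. + hard B.C.: u = u0(x) + t f_dc(x) NN(t, x)
    return u0(x) + t * f_dc(x, a_i) * net(t, x)

def u_soft_ic_bc(net, a_i, t, x):
    # soft I.C. + hard B.C., Eq. (PINN_with_condition); softplus keeps u > 0
    return f_dc(x, a_i) * F.softplus(net(t, x))
\end{verbatim}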
The benefit of the PINN formulation is that convergence of the training loss $\mathcal{L}_{\text{PINN}}$ guarantees an accurate solution, as justified in \eqref{convergence_L1} and \eqref{convergence_L2}.
On the other hand, the PINN formulation also has its own limitation in that it only allows strong solutions. Solutions with less regularity cannot be identified with this formulation.
\subsection{\texorpdfstring{$\phi$}{} formulation}
To solve QPME following the $\phi$ formulation, we need to parametrize $\phi(t,\mathbf{x})$ in \eqref{phi_form} instead of the solution $u$ directly.
When computing $\mathcal{L}_{\phi}$ as in \eqref{phi_form}, we also need the ansatz of $\phi(t,\mathbf{x})$ to be at least second-order differentiable. Note that this is a much weaker assumption on the solution ansatz of $u$ compared with the PINN. In particular, no assumption is needed on the smoothness of $u$ directly. We simply take a neural network $\mathcal{NN}_{\phi} (t,\mathbf{x};\theta_{\phi})$ with smooth activation function as its solution ansatz.
We then note that the minimizers $\phi^*$ to \eqref{phi_form} must also satisfy certain conditions in order to obtain reasonable solutions.
As suggested by Theorem \ref{thm}, we would like to require $\phi$ to vanish at $t = T$. We thus let
\begin{equation}
\phi(t,\mathbf{x};\theta_{\phi}) = (T-t)\mathcal{NN}_{\phi}(t,\mathbf{x};\theta_{\phi}).
\end{equation}
For the recovered solution $u_{\phi}$
\begin{equation}\label{u_phi}
u_{\phi}:= \frac{\partial_t \phi }{1-\Delta \phi},
\end{equation}
unlike the PINN formulation, the solution to the QPME is not directly parametrized; thus it is not easy to impose the initial condition as a hard constraint. Instead, we enforce the constraint softly relying on the penalty term $\mathcal{L}_{I}$.
The homogeneous Dirichlet boundary condition, on the other hand, can be softly enforced with the term $\mathcal{L}_B$ or enforced as a hard constraint by modifying the neural network. Essentially, we only need
$$\partial_t \phi|_{\partial \Omega} = 0,$$
which can be achieved using the ansatz
\begin{equation}\label{phi_condition}
\textbf{soft I.C. + hard B.C.} \quad \phi (t,\mathbf{x};\theta_{\phi}) = (T-t) f_{dc}(\mathbf{x}) \mathcal{NN}_{\phi} (t,\mathbf{x};\theta_{\phi}),
\end{equation}
where $f_{dc}(\mathbf{x})$ is defined as in \eqref{f_dc}.
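For completeness, a hedged sketch of how the solution $u_{\phi}$ of \eqref{u_phi} can be recovered from a parametrized $\phi$ by automatic differentiation (assuming \texttt{phi\_fn} maps a batch of $(t,\mathbf{x})$ to a column of values):
\begin{verbatim}
import torch

def u_from_phi(phi_fn, t, x):
    # u_phi = (d phi / dt) / (1 - Laplacian phi), Eq. (u_phi)
    t = t.clone().requires_grad_(True)
    x = x.clone().requires_grad_(True)
    phi = phi_fn(t, x)
    dphi_dt = torch.autograd.grad(phi.sum(), t, create_graph=True)[0]
    grad_x = torch.autograd.grad(phi.sum(), x, create_graph=True)[0]
    lap = sum(torch.autograd.grad(grad_x[:, i].sum(), x,
                                  create_graph=True)[0][:, i:i + 1]
              for i in range(x.shape[1]))
    return dphi_dt / (1.0 - lap)
\end{verbatim}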
Additionally, while the recovered solution $u_{\phi}\geq 0$ is desired, this condition cannot be easily imposed by simply modifying the solution ansatz.
Compared with the PINN formulation, the $\phi$-formulation allows solutions in a very weak sense; it can potentially find solutions with less regularity, since the smoothness requirement is not directly applied to $u_{\phi}$. However, as described above, a few conditions on $\phi$ cannot be easily enforced. In addition to the positivity of $u_{\phi}$, conditions like
\begin{equation}\label{growing_condition}
1- \Delta \phi \geq \left(\frac{t}{T}\right)^{\frac{d}{d+2}},
\end{equation}
are difficult to enforce as hard constraints. While \eqref{growing_condition} is preferable as it can narrow down the search function space for $\phi$ (since we know the PDE solution satisfies it), it is not necessary. However, the fact that $u_{\phi}$ is not confined to be a positive function can potentially cause the training of $\mathcal{L}_{\phi-NN}$ to converge to unphysical solutions.
\subsection{\texorpdfstring{$q$}{}-\texorpdfstring{$\sigma$}{} formulation}
The $q$-$\sigma$ formulation \eqref{qsigma_form} is derived from the $\phi$ formulation, and thus also inherits a few conditions from $\phi$.
We first notice that no derivatives are needed when computing $\mathcal{L}_{q,\sigma}$. However, when computing $\mathcal{L}_{\Delta q,\partial_t\sigma}$, second-order derivatives of $q$ and first-order derivatives of $\sigma$ are required. We thus start by parametrizing $q(t,\mathbf{x})$ and $\sigma(t,\mathbf{x})$ with neural networks $\mathcal{NN}_q(t,\mathbf{x};\theta_{q})$ and $\mathcal{NN}_{\sigma}(t,\mathbf{x};\theta_{\sigma})$, which should be at least second- and first-order differentiable, respectively.
To ensure the positivity of $\sigma$,
as suggested in \eqref{strong_terminal}, we further parametrize $\sigma$ by
$$\sigma (t,\mathbf{x} ;\theta_{\sigma})= \text{softplus}\left( \mathcal{NN}_{\sigma}\left(t,\mathbf{x} ;\theta_{\sigma}\right)\right).$$
To guarantee that
$$\sigma(T,\cdot) = 1,$$
we modify the above and let
\begin{equation}\label{sigma_condition}
\sigma(t,\mathbf{x};\theta_{\sigma}) = \text{softplus}\big(\ln\left(e-1\right)+\left(T- t\right) \mathcal{NN}_{\sigma}\left(t,\mathbf{x};\theta_{\sigma}\right) \big).
\end{equation}
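Indeed, at $t=T$ the argument of the softplus reduces to $\ln(e-1)$, and
$$\text{softplus}\left(\ln(e-1)\right) = \ln\left(1+e^{\ln(e-1)}\right)=\ln e = 1,$$
so the terminal condition holds regardless of the network output.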
Alternatively, if we also impose the condition (to narrow down the search space)
\begin{equation}\label{sigma_selection}
\sigma \geq (\frac{t}{T})^{\frac{d}{d+2}},
\end{equation}
one can also parametrize $\sigma$ as
\begin{equation}
\sigma (t,\mathbf{x};\theta_{\sigma}) = \left(\frac{t}{T}\right)^{\frac{d}{d+2}}+(T- t)\, \text{softplus}\left(\mathcal{NN}\left(t,\mathbf{x};\theta_{\sigma}\right) \right).
\end{equation}
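One can check that this alternative ansatz also fulfills $\sigma(T,\cdot)=1$ and \eqref{sigma_selection}: the softplus term is nonnegative and carries the prefactor $(T-t)$, which vanishes at $t=T$, while the first term equals one there.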
For the recovered solution
$$u_{q,\sigma} = \frac{q}{\sigma},$$ as in the $\phi$ formulation, the initial condition can only be softly imposed. The homogeneous Dirichlet boundary condition can be enforced as a hard constraint, as long as
$$q|_{\partial \Omega} = 0.$$ We thus let
\begin{equation}
q(t,\mathbf{x};\theta_{q}) = f_{dc}(\mathbf{x})\mathcal{NN}_{q}(t,\mathbf{x};\theta_{q}).
\end{equation}
Moreover, to ensure $u_{q,\sigma} \geq 0$, we further let
\begin{equation}\label{q_with_conditions}
\textbf{soft I.C. + hard B.C. }\quad q(t,\mathbf{x};\theta_{q}) = f_{dc}(\mathbf{x})\text{softplus}\left(\mathcal{NN}_{q}\left(t,\mathbf{x};\theta_{q}\right)\right).
\end{equation}
Similar to the $\phi$ formulation, the $q$-$\sigma$ formulation allows solutions with less regularity. However, two neural networks are needed to parametrize a solution to the QPME, which could potentially be more challenging to train.
\subsection{Empirical loss and training data sampling}\label{empirical_loss}
\newcommand{\mathrm{P}}{\mathrm{P}}
\newcommand{\mathcal{L}_{\text{PINN}}}{\mathcal{L}_{\text{PINN}}}
\newcommand{\mathbf{X}}{\mathbf{X}}
To solve the QPME with the aforementioned formulations \eqref{PINN_full}, \eqref{full_phi}, and \eqref{qsigma_full}, we need to compute high-dimensional integrals of the neural network or its derivatives to evaluate the loss functions.
In practice, Monte Carlo methods are usually used to approximate those high-dimensional integrals. The approximate solutions are then obtained by minimizing the surrogate empirical loss functions. Take the PINN formulation as an example: let $\mathrm{P}_{\Omega}$ be the uniform probability distribution over the spatial domain $\Omega$ and let $\{\mathbf{X}_j\}_{j=1}^{n}$ be an i.i.d. sequence of random variables distributed according to $\mathrm{P}_{\Omega}$. In parallel, let $\mathrm{P}_{[0,T]}$ be the uniform probability distribution over the time interval $[0,T]$ and let $\{T_j\}_{j=1}^{n}$ be an i.i.d. sequence of random variables distributed according to $\mathrm{P}_{[0,T]}$. Define the empirical loss $\mathcal{L}_{\text{PINN}}^{n}$ by setting
\begin{equation}\label{empirical_PINN}
\mathcal{L}_{\text{PINN}}^{n} = \frac{ \kappa}{n}\sum_{j=1}^{n}\left( \partial _t u(T_j,\mathbf{X}_j) -\frac{1}{2} \Delta u^2(T_j,\mathbf{X}_j)\right)^2 + \frac{\nu}{n} \sum_{j=1}^{n}\left(u \left(0,\mathbf{X}_j\right) - u_{0}\left(\mathbf{X}_j\right)\right)^2
\end{equation}
for the case where only the I.C. is imposed softly and the loss is measured in $L^2$. Notice that all terms are scaled by $\frac{1}{|\Omega|}$, which does not change the minimizer of the problem but can effectively avoid numerical blowup in evaluating the loss during training. Similarly, $\mathcal{L}_{B}$ can be approximated with points uniformly sampled from $\partial \Omega$ when needed. We refer to such sampled data as training data.
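To make the estimator concrete, here is a minimal sketch of \eqref{empirical_PINN} in PyTorch, with \texttt{u} one of the parametrizations of the previous section and $\kappa=\nu=1$ as placeholder weights:
\begin{verbatim}
import torch

def empirical_pinn_loss(u, u0, T, a_i, n, kappa=1.0, nu=1.0):
    d = a_i.shape[0]
    t = (T * torch.rand(n, 1)).requires_grad_(True)              # T_j ~ U[0,T]
    x = ((2 * torch.rand(n, d) - 1) * a_i).requires_grad_(True)  # X_j ~ U(Omega)
    uu = u(t, x)
    du_dt = torch.autograd.grad(uu.sum(), t, create_graph=True)[0]
    g = torch.autograd.grad((uu**2).sum(), x, create_graph=True)[0]
    lap_u2 = sum(torch.autograd.grad(g[:, i].sum(), x,
                                     create_graph=True)[0][:, i:i + 1]
                 for i in range(d))
    res = du_dt - 0.5 * lap_u2            # PDE residual at (T_j, X_j)
    ic = u(torch.zeros(n, 1), x) - u0(x)  # initial-condition mismatch
    return kappa * (res**2).mean() + nu * (ic**2).mean()
\end{verbatim}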
However, a uniform sampling of the $\mathbf{X}_j$'s sometimes does not meet the needs of our computation, especially when the dimension $d$ is very large. Notice that one essential feature of solutions to the QPME is that they have a free boundary separating the positive part of the solution from the zero region. In particular, when the solution is a Barenblatt solution, its nonzero values concentrate near the origin. Ideally, one would like to sample points in both the nonzero and the zero regions to capture the local features of the solution. However, with a fixed budget of training samples, it could happen that all randomly sampled data points reside in the zero region, which is apparently problematic. In fact, this can become a serious issue for high-dimensional problems. For example, when $d =20$, the probability of sampling the nonzero region of a Barenblatt solution \eqref{barenblatt} at $t=2$ within $[-7,7]^{20}$ is the ratio of the volume $V_{\text{nonzero}}$ of the $d$-ball of radius $(22)^{1/2}\, 2^{6/11}$ (the nonzero region) to the volume of the hypercube. It can then be computed that
$$ \mathrm{P}_{\text{nonzero}} = \frac{V_{\text{nonzero}}}{14^{20}}\approx 1.57\times 10^{-8},$$
which means the nonzero region will rarely, if ever, be sampled.
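This estimate is easy to reproduce from the standard $d$-ball volume formula $V_d(r)=\pi^{d/2}r^{d}/\Gamma(d/2+1)$:
\begin{verbatim}
from math import pi, gamma

d = 20
r = 22**0.5 * 2**(6 / 11)       # radius of the support at t = 2
V_nonzero = pi**(d / 2) * r**d / gamma(d / 2 + 1)
print(V_nonzero / 14**d)        # ~1.57e-08, matching the text
\end{verbatim}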
Therefore, we need an effective sampling scheme that puts larger weight on the nonzero region, so that we can accurately approximate the loss function. Ideally, one could hope for an adaptive importance sampling scheme that draws training data from a distribution adapted to the current state of the parametrized solution and its derivatives throughout the training process. However, especially for high-dimensional problems, such a sampling scheme is challenging and computationally expensive to implement. Therefore, a hand-crafted sampling scheme is used instead, which is explained in detail in Section \ref{sec:nuemrical} for the specific numerical examples.
\label{sec:intro}
The observations of the redshifts of the distant Type Ia supernovae (SNe Ia) imply the existence of an unknown repulsive interaction that
accelerates the expansion of the universe in relatively late times~\cite{sne0,sne1,sne2,sne3,sne4}; otherwise, this fact suggests the breakdown of
general relativity on cosmological scales.
To account for these observations, dark energy has become an essential component of cosmology since the late 1990s, in addition to the cold dark matter (CDM)~\cite{Peebles2002}.
Albert Einstein first introduced a cosmological constant $\Lambda$ into general relativity to establish a static universe.
Since the expansion of the universe was discovered and the big-bang cosmology became a paradigm after the discovery of the cosmic microwave background (CMB), the cosmological constant $\Lambda$ was revived and discussed occasionally (see, Ref.~\cite{SWein1989} for a review).
The cosmological constant $\Lambda$ is now included in the standard model of cosmology as the simplest model of dark energy to explain the accelerated expansion, which has also been tested by observing the CMB and baryon acoustic oscillations (BAO) in the context of structure formation theories~\cite{Weinberg}.
The scenario is summarized as the well-known standard $\Lambda$CDM cosmological model~\cite{Planck2018I, Planck2018VI}, where dark energy consists of approximately 70\% of the total energy density of the universe at the present epoch.
Many dark energy models have been proposed as variants of the cosmological constant, and the equation of state (EoS) $\omega=p/\rho$, defined as the ratio of pressure $p$ to energy density $\rho$, is a typical quantity used to characterize the property of the dark energy. Early observations of SNe Ia
constrained $\omega < -1/3$ for dark energy,
and more precise follow-up observations suggest that the dark energy EoS is very close to that of the cosmological constant, $\omega=-1$.
As the energy densities of radiation $\rho_r$ and matter $\rho_m$ decay as $\rho_r\propto a^{-4}$ and $\rho_m\propto a^{-3}$ with the scale factor of the universe $a$, while dark energy seems to behave as an almost constant
and homogeneous background of the universe, its energy density is suggested to
become dominant in the late times of the universe when $a\gtrsim0.5$.
Hence, the property of dark energy is important for the evolution of the universe, especially in the late times and in the future.
The large-scale structure of matter distributions serves as a useful probe for dark energy EoS because the BAO signature is useful as a standard ruler.
Furthermore, the growth of clustering of the matter is affected by dark energy.
On the other hand, these observations also gave rise to another mysterious aspect of dark energy, the famous fine-tuning problem, i.e., the ``cosmological constant problem'' (see Refs.~\cite{Peebles2002,SWein1989}). The problem is why the dark energy density is of the same order as the matter density at the present epoch and
much smaller than the prediction from a naive expectation of modern particle physics theories, while its EoS implies a linkage with the vacuum energy of quantum fields.
These problems may be closely related to the origin and nature of dark energy, which remains to be explored.
Many theoretical models of dark energy have been investigated~\cite{Weinberg,Peebles2002},
in which dynamical models are very interesting~\cite{Tsujikawa:2013fta,Ringeval,
Glavan1,Glavan2,Glavan3,Glavan4,DEquantum,DEquantum2},
because they are related to the field theory associated with the primordial high-energy epoch of the universe and fundamental theories of theoretical physics~\cite{SWein1989,Peebles2002,Linder2020}.
Particularly interesting ones are the dynamical models based
on the quantum fluctuations of ultralight scalar fields~\cite{Ringeval,Glavan1,Glavan2,Glavan3,Glavan4,DEquantum,DEquantum2}, which reveal an interesting connection to the string axiverse scenario~\cite{Witten,Arvanitaki,Obied,Agrawal,Ooguri,Garg2019,Visinelli:2018utg}.
As in the $\Lambda$CDM model and in many models of dark energy,
a basic assumption of their property is spatial isotropy and homogeneity, which follows the cosmological principle. Nevertheless, since the late-time expansion of the universe is dominated by dark energy,
some interesting outcomes may occur to affect cosmological observables if large-scale inhomogeneities of dark energy arise, which could be tested by various cosmological observations~\cite{ABM,TSJ,Jassal2010,Yamauchi2018,Linder2020}.
On the other hand, anomalous features in the CMB anisotropies have been pointed out
by some authors \cite{cmb,fosalba}. Although
the cosmic variance limits the ability for our precise
comparison between theoretical predictions and observations,
there is the possibility that the low CMB multipoles provide us with a clue for physics beyond the standard cosmological model for dark energy~\cite{Gordon2005,Polastri2015}.
The general interpretation for the CMB dipole
anisotropy is our peculiar motion toward a CMB rest frame, related to a dragging towards the Great Attractor in the sky; at least part of the peculiar motions is interpreted as evidence of gravitational binding~\cite{Tully2008, Courtois2013}.
The latest result shows the validity of the interpretation of the CMB dipole by the peculiar motion \cite{Ferreira}.
However, the result does not necessarily mean that all of the CMB dipole anisotropy could be explained by the peculiar motions, and we will present a dark energy model with very large-scale inhomogeneities as a possible solution.
Recently, the Hubble tension problem has also garnered attention due to the precision of the observations.
The present expansion rate $H_0$ locally measured from standard candles, such as SNe Ia~\cite{Riess2011}
and that inferred from the BAO statistics on CMB fluctuations~\cite{Bennett2013, Planck2018VI},
have shown nontrivial deviations from each other. Many attempts have been made to ease or explain this tension,
and among them stands out the possibility that this tension is due to new physics concerning dark energy
beyond the standard $\Lambda$CDM model~\cite{Mortsell2018}.
A recent related investigation in Ref.~\cite{Migkas2020} reported that the variation in luminosity distance $d_L$ appears to exist in different regions of the sky,
potentially suggesting anisotropy in the expansion rate, which also motivated this work.
To shed light on the problems concerning dark energy, the authors investigated a model for dark energy
with large-scale stochastic fluctuations assumed in an open universe associated with a specific inflationary scenario~\cite{scmde1}.
These fluctuations will be translated into large-scale spatial inhomogeneities and time-dependent dark energy EoS in the evolution of the universe.
In the present work, we consider a general dynamical model for dark energy with large-scale spatial inhomogeneities consisting of a scalar field $\phi$ by handling them in the framework of the cosmological perturbation theory. This model may introduce
some observable effects on the anisotropies of the cosmological observations to address the problems concerning the dark energy property mentioned previously.
The remainder of this paper is organized as follows. In Sec.~II, we propose a basic formulation for the model and its cosmological setups. Then, we use the formulation to derive the Einstein equations for the system as well as the equations of motion:
for both the dark energy represented by the dynamical scalar field $\phi$ and the matter component in the late-time universe. In Sec.~III,
we use the analytic approximations to solve for the equations in the limit $a \ll 1$, where $a$ denotes the scale factor of the universe.
This is useful to determine the necessary initial conditions for the numerical solution to the late-time cosmological
evolution of the system.
In Sec.~IV, we consider the possible effects of large-scale dark energy fluctuations on cosmological observations, such as the CMB temperature power spectra and luminosity distance. Sec.~V is devoted to summarizing our results and brief discussions.
The Appendices provide additional explanations for specific technical details.
In Appendix~\ref{appen:matrix}, explicit forms of the matrices used in the definition of the perturbations are presented, and their relations with multipole expansion are discussed.
In Appendix~\ref{appen:fluideq}, we show the consistency of the derived equations with previous works~\cite{scmde1,WHuthesis}, especially for the superhorizon Euler equation of the matter component.
Appendix~\ref{appen:EOSCPL} shows the dark energy EoS and its relation to the
Chevallier-Polarski-Linder (CPL)
parametrization in our model \cite{ChePolar,Linde0}. In Appendix~\ref{appen:ld}, we show that our application of the model to the correction of the luminosity distance is valid and consistent with previous works~\cite{FS1989,sasaki1987}. Finally, in Appendix~\ref{appen:transf}, we present a helpful toolkit for transforming equations between forms with respect to different variables in our model.
\section{Basic Formulation}
\label{sec:basic}
In the present paper,
motivated by a previous model with supercurvature-mode dark energy associated with an open universe scenario~\cite{DEquantum,DEquantum2,Aoki},
we consider the evolution of dark energy with superhorizon large-scale inhomogeneities and its possible imprints on cosmological observations by characterizing the inhomogeneities analytically.
To formulate these inhomogeneities, we start with following the cosmological setup of metric perturbations.
\subsection{Fundamental Setups}
\label{sec:setup}
The characteristic feature of the dark energy model
previously proposed in Refs.~\cite{scmde1,Aoki} is
the spatial inhomogeneity of the dark energy density
on very large scales.
In that scenario, such large-scale spatial inhomogeneities of
dark energy originate from the vacuum fluctuations of the supercurvature modes of a scalar field during open inflation~\cite{Aoki,Yamauchi2011}.
An ultralight scalar field $\phi$ with spatial fluctuations
taking nonlinear amplitude on the supercurvature scales is responsible for
the dark energy in the scenario.
Because the horizon size of our universe is much smaller than
the scales of the inhomogeneities of the dark energy, the
breaking of the cosmological principle is small within the
observable universe, which might enable us to escape from the
observational constraints.
In the present paper, we formulate a phenomenological model of dark energy that slightly breaks the cosmological principle by mimicking
the previous model~\cite{scmde1,Aoki}.
We consider a dark energy model of a scalar field
spatially varying on the superhorizon scales
on the spatially flat background universe, for simplicity, by assuming
\begin{align}
d s^{2}
=a^{2}(\eta)\left[-(1+2 \Psi) d \eta^{2}+(1+2 \Phi) \delta_{i j} d x^{i} d x^{j}\right],
\label{eq:metric}
\end{align}
where $\delta_{i j}$ is the Kronecker delta, $a(\eta)$ is the scale factor of the universe with the
conformal time $\eta$, and
$\Psi$ and $\Phi$ are the metric perturbations that we characterize below.
Here we consider only the large-scale superhorizon-mode perturbations.
In Ref.~\cite{scmde1}, it was discussed that the inhomogeneities induced by superhorizon fluctuations are dominated by the dipole and quadrupole components among all possible contributions. Neglecting higher multipoles, we can explicitly write out the metric perturbations as
\begin{align}
\Psi=\epsilon_1\sum^3_{m=1}\Psi_{1(m)}(\eta)P_i^{(m)}x^{i}+\epsilon_2\sum^5_{m=1}\Psi_{2(m)}(\eta)P^{(m)}_{ij} x^i x^j,
\label{def:Psi}
\\
\Phi=\epsilon_1\sum^3_{m=1}\Phi_{1(m)}(\eta)P_i^{(m)}x^{i}+\epsilon_2\sum^5_{m=1}\Phi_{2(m)}(\eta)P^{(m)}_{ij} x^i x^j,
\label{def:Phi}
\\
\phi=\phi_0(\eta)+\epsilon_1\sum^3_{m=1}\phi_{1(m)}(\eta)P_i^{(m)}x^{i}+\epsilon_2\sum^5_{m=1}\phi_{2(m)}(\eta)P^{(m)}_{ij} x^i x^j,
\label{def:field}
\end{align}
where $P^{(m)}_{i}$ and $P^{(m)}_{ij}$ are the vectors and traceless matrices related to the multipole expansion of the perturbations on the spatial basis, whose expressions are explicitly given in Appendix~\ref{appen:matrix}.
We use $\phi$ to denote the ultralight scalar field we assume as the source of dark energy with large-scale spatial inhomogeneities. Here $\epsilon_1$ and $\epsilon_2$ are introduced to explicitly express
the order of the dipole and quadrupole perturbations;
we set
$\epsilon_1$ and $\epsilon_2$ to unity later.
Considering a standard CDM scenario, we can write the perturbations
for the matter density distribution as
\begin{align}
&\rho=\rho_{0}(\eta)+\epsilon_1\sum^3_{m=1}\rho_{1(m)}(\eta)P_i^{(m)}x^{i}+\epsilon_2\sum^5_{m=1}\rho_{2(m)}(\eta)P^{(m)}_{ij} x^i x^j,
\label{def:den}
\end{align}
and we define the velocity field as
\begin{align}
&u_i\equiv \partial_i \overline V,
\label{def:velo}
\end{align}
with constraints $u_{\mu}u^{\mu}=-1$, where
$\overline V$ is the velocity potential, which is expressed as
\begin{align}
&\overline V=\epsilon_1\sum^3_{m=1}V_{1(m)}(\eta)P_i^{(m)}x^{i}+\epsilon_2\sum^5_{m=1}V_{2(m)}(\eta)P^{(m)}_{ij} x^i x^j.
\end{align}
Here $\Psi_{\ell(m)}$, $\Phi_{\ell(m)}$, $\phi_{\ell(m)}$,
$\rho_{\ell(m)}$, $V_{\ell(m)}$ with $\ell=1,2$ are the
coefficients of the dipole and the quadrupole components,
and $\phi_0$ and $\rho_0$ are the background quantities.
\subsection{Essence of the Equations}
The evolution of the system is described by the Einstein equations
\begin{align}
G^{\mu}{}_{\nu}=8 \pi G\left(T^{(\phi)\,\mu}{}_{\nu}+T^{({\rm M})\,\mu}{}_{\nu}\right),
\label{eins}
\end{align}
with the energy-momentum tensors for the scalar field with mass $m_{\phi}$ and the
matter component,
\begin{align}
&T_{\mu \nu}^{(\phi)}=\partial_{\mu} \phi \partial_{\nu} \phi-
g_{\mu \nu}\left(\frac{1}{2} g^{\alpha \beta} \partial_{\alpha} \phi \partial_{\beta} \phi+\frac{1}{2} {m_{\phi}^2} \phi^{2}\right),
~~~~~~~~~
T^{(\rm M)}_{\mu \nu}=\rho u_{\mu} u_{\nu},
\end{align}
and the equations of motion for the scalar field $\phi$
and the conservation law for the matter component,
\begin{align}
\frac{1}{\sqrt{-g}} \partial_{\mu}\left(\sqrt{-g} g^{\mu \nu} \partial_{\nu} \phi\right)-{m_{\phi}^2} \phi=0,
\label{eomscm}
\end{align}
\begin{align}
\nabla_{\mu} T^{({\rm M})\,\mu}{}_{\nu}=0.
\label{eomdm}
\end{align}
The EoS of the dark energy field $\phi$ is an important quantity characterizing its properties and evolution.
From the standard formulae for the energy density and the pressure
of a scalar field, taking the scalar field potential $V(\phi)={m_{\phi}^2}\phi^2/2$ into account,
we obtain the equation of state $\omega_\phi$ as
\begin{align}
\omega_\phi
\equiv \frac{P_\phi}{\rho_\phi}
\simeq
-{2a^2 V(\phi)-\dot\phi^2 \over 2a^2 V(\phi)+\dot\phi^2}
=-{{m_{\phi}^2} a^2 \phi^2-\dot\phi^2 \over {m_{\phi}^2} a^2\phi^2+\dot\phi^2},
\label{def:eos}
\end{align}
where the dot denotes the differentiation with respect to the conformal time $\eta$.
Here we neglected the contribution from the spatial variations,
which is small in our case.
The EoS depends on the dynamical evolution of $\phi$ and is a concordant generalization to the Chevallier-Polarski-Linder (CPL)
parametrization (see Appendix~\ref{appen:EOSCPL})~\cite{ChePolar,Linde0}.
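For reference, the CPL form parametrizes the EoS linearly in the scale factor $a$ as $\omega(a)=\omega_0+\omega_a(1-a)$, so that $\omega_0$ is the present-day value and $\omega_a=-\left.\mathrm{d}\omega/\mathrm{d}a\right|_{a=1}$.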
The linear expansion of Eqs.~(\ref{def:Psi})--(\ref{def:field}) ensures that Eqs.~(\ref{eins})--(\ref{eomdm}) give equations of the same form
for each multipole component with indices $\ell=1,2$ and $m=1,2,3,4,5$.
Indeed, the components with different $\ell$ indices, for example, $\Psi_{\ell=1}$ and $\Psi_{\ell=2}$,
have different dimensions in powers of length by definition.
Keeping this fact in mind,
for simplicity of notation, we omit the indices $(m)$ in the following,
and use only the lower index $\ell$ to denote the multipole components of these perturbations. Hereafter,
we use the lower index $0$ for the background quantities and $\ell$ for the perturbations on the superhorizon scales.
Using the conformal Hubble parameter $\mathcal{H}=aH(a)=\dot a/a$ instead of Hubble parameter $H(a)$, Eq.~(\ref{eomscm}) yields:
\begin{gather}
\ddot\phi_0+2\mathcal{H}\dot\phi_0+{m_{\phi}^2} a^2\phi_0=0,
\\
\ddot\phi_\ell+2\mathcal{H}\dot\phi_\ell+{m_{\phi}^2} a^2\phi_\ell
+\dot\phi_0(3\dot\Phi_\ell-\dot\Psi_\ell-4\mathcal{H}\Psi_\ell)-2\ddot\phi_0\Psi_\ell =0.
\end{gather}
On the other hand, Eq.~(\ref{eomdm}) leads to
\begin{gather}
3\mathcal{H}\rho_0+\dot\rho_0=0,
\label{eq:density0}
\\
3\mathcal{H}\rho_\ell + \dot\rho_\ell + 3\rho_0\dot\Phi_\ell=0,
\label{eq:density1}
\\
\dot V_\ell-a\Psi_\ell=0.
\label{eq:flu}
\end{gather}
By defining the density perturbation as $\rho_\ell \equiv \rho_0 \delta_\ell$, it is obvious that Eqs.~(\ref{eq:density0}) and (\ref{eq:density1}),
are consistent with those obtained from the continuity equation and with Eq.~(25) in Ref.~\cite{scmde1} in the large-scale limit.
It is worth mentioning that the velocity equation in Eq.~(\ref{eq:flu}) is also consistent with Eq.~(26) in Ref.~\cite{scmde1}, which is obtained from the Euler equation (see Appendix~\ref{appen:fluideq}).
Defining $M_{\rm pl}^{-2}\equiv 8 \pi G$ for brevity, the Einstein equations can be written as
\begin{gather}
-3 \mathcal{H}^2 + M_{\rm pl}^{-2}(\frac{1}{2}{m_{\phi}^2} a^2\phi_0^2+\frac{1}{2}\dot\phi_0^2+a^2\rho_0)=0,
\\
\mathcal{H}^2-2\frac{\ddot a}{a} +M_{\rm pl}^{-2}(\frac{1}{2}{m_{\phi}^2} a^2\phi_0^2-\frac{1}{2}\dot\phi_0^2)=0,
\\
-2(\mathcal{H}\Psi_\ell-\dot\Phi_\ell) +M_{\rm pl}^{-2}(a \rho_0 V_\ell + \dot\phi_0 \phi_\ell)=0,
\\
6 \mathcal{H} (\mathcal{H}\Psi_\ell-\dot\Phi_\ell) + M_{\rm pl}^{-2} \left(a^2\rho_\ell+{m_{\phi}^2} a^2 \phi_0 \phi_\ell
-\dot\phi_0(\dot\phi_0\Psi_\ell-\dot\phi_\ell) \right)=0,
\\
(2\frac{\ddot a}{a}-\mathcal{H}^2)\Psi_\ell +\mathcal{H}\dot\Psi_\ell -2\mathcal{H}\dot\Phi_\ell -\ddot\Phi_\ell
+\frac{M_{\rm pl}^{-2}}{2} \left({m_{\phi}^2} a^2\phi_0\phi_\ell+ \dot\phi_0(\dot\phi_0\Psi_\ell-\dot\phi_\ell)\right)=0.
\end{gather}
We can classify these equations by the order of the perturbations, dividing them into the background equations that read
\begin{gather}
\dot\rho_0+3\mathcal{H}\rho_0=0,
\label{eq:01}
\\
\ddot\phi_0+2\mathcal{H}\dot\phi_0+{m_{\phi}^2} a^2\phi_0=0,
\label{eq:02}
\\
-3 \mathcal{H}^2 + M_{\rm pl}^{-2}(\frac{1}{2}{m_{\phi}^2} a^2\phi_0^2+\frac{1}{2}\dot\phi_0^2+a^2\rho_0)=0,
\label{eq:03}
\\
\mathcal{H}^2-2\frac{\ddot a}{a} +M_{\rm pl}^{-2}(\frac{1}{2}{m_{\phi}^2} a^2\phi_0^2-\frac{1}{2}\dot\phi_0^2)=0,
\label{eq:04}
\end{gather}
and first-order perturbative equations relying on the background as follows
\begin{gather}
\dot\rho_\ell + 3\mathcal{H}\rho_\ell +3\rho_0\dot\Phi_\ell=0,
\label{eq:11}
\\
\ddot\phi_\ell+2\mathcal{H}\dot\phi_\ell+{m_{\phi}^2} a^2\phi_\ell
+\dot\phi_0(3\dot\Phi_\ell-\dot\Psi_\ell-4\mathcal{H}\Psi_\ell)-2\ddot\phi_0\Psi_\ell =0,
\label{eq:12}
\\
\dot V_\ell-a\Psi_\ell=0,
\label{eq:13}
\\
-2(\mathcal{H}\Psi_\ell-\dot\Phi_\ell) +M_{\rm pl}^{-2}(a \rho_0 V_\ell + \dot\phi_0 \phi_\ell)=0,
\label{eq:14}
\\
6 \mathcal{H} (\mathcal{H}\Psi_\ell-\dot\Phi_\ell) + M_{\rm pl}^{-2} \left(a^2\rho_\ell+{m_{\phi}^2} a^2 \phi_0 \phi_\ell
-\dot\phi_0(\dot\phi_0\Psi_\ell-\dot\phi_\ell) \right)=0,
\label{eq:15}
\\
(2\frac{\ddot a}{a}-\mathcal{H}^2)\Psi_\ell +\mathcal{H}\dot\Psi_\ell -2\mathcal{H}\dot\Phi_\ell -\ddot\Phi_\ell
+\frac{M_{\rm pl}^{-2}}{2} \left({m_{\phi}^2} a^2\phi_0\phi_\ell+ \dot\phi_0(\dot\phi_0\Psi_\ell-\dot\phi_\ell)\right)=0.
\label{eq:16}
\end{gather}
After solving for the background, we can obtain the evolution
of the large-scale perturbations originating from the fluctuations of
the dark energy field $\phi$.
\section{Analytic Approximations and Numerical Solutions}
\label{sec:numer}
In this section, we consider solving the evolution equations
obtained in the previous section both for the background and the perturbations.
Because we are interested in the late-time evolution after the last scattering ($a_d\sim1/1100$), we first find
analytic approximations of the solutions in
the matter-dominated epoch, which are useful as
initial conditions for the numerical evaluation when
$a_d \lesssim a \ll 1$.
\begin{figure}[b]
\includegraphics[width=0.7\linewidth]{fig-phi0LCDM.pdf}
\caption{An example of the evolution of the background solutions $\tilde{\phi}_0(a)$ as a function of the scale factor $a$ with the different sets of parameters for $\widetilde{r}$ and
$\widetilde{m}$ presented in the figure, which mimic $\Lambda$CDM universes
with $\Omega_m=0.3$ using Eq.~(\ref{eq:lcdmparameter}).
Notice that each model has different initial values for $\widetilde\phi_0$.
We
observe from the figure that the lighter field $\phi$ is more ``frozen'' in its evolutionary history because $\widetilde{m}$ is normalized by the Hubble constant in Eq.~(\ref{def:ndm}). Here, the curve with $\widetilde{m}=1/20$ and $\widetilde{r}=280$ is most similar to
the cosmological constant model among the curves.}
\label{fig:phi0LCDM}
\end{figure}
\subsection{The background evolution}
\label{sec:numer_background}
First, we must solve the background evolution of our system in Eqs.~(\ref{eq:01})--(\ref{eq:04}) before considering the perturbations.
Cosmological observational constraints imply that
models close to the $\Lambda$CDM model are favored. Moreover, we must use the
observed value of the Hubble parameter at the present epoch to determine the dark energy density of the field $\phi$.
Using the analytic approximations, we may infer the initial conditions of the background for the numerical solution of the background evolution.
To parametrize the equations, we introduce the cosmic time $t$ by $dt=ad\eta$.
Defining tilde dimensionless quantities as
\begin{align}
\tilde{t} &\equiv H_0 t,
\label{def:ndt}
\\
\widetilde{\phi}_0 &\equiv {\phi_0 / \overline \phi_0 } ,
\label{def:ndphi}
\\
\widetilde r &\equiv \frac{1}{6}\left({\overline \phi_0 / M_{\rm pl}}\right)^2 ,
\label{def:ndr}
\\
\widetilde{m} &\equiv {m_{\phi}}/H_0 ,
\label{def:ndm}
\\
\widetilde{H} &\equiv H/H_0,
\label{def:ndh}
\end{align}
we can obtain dimensionless ordinary differential equations using $\tilde{t}$ as independent variable as
\begin{align}
\widetilde r \widetilde{m}^2\widetilde{\phi}_0^2(\tilde{t})+\widetilde r \left(\frac{\mathop{}\!\mathrm{d}\widetilde{\phi}_0}{\mathop{}\!\mathrm{d}\tilde{t}} \right)^2+\Omega_m a^{-3}
&=\left(\frac{1}{a}\frac{\mathop{}\!\mathrm{d} a}{\mathop{}\!\mathrm{d}\tilde{t}}\right)^2,
\label{eq:tnd1}
\\
\frac{\mathop{}\!\mathrm{d}^2\widetilde{\phi}_0}{\mathop{}\!\mathrm{d}\tilde{t}^2} + 3 \frac{1}{a} \frac{\mathop{}\!\mathrm{d} a}{\mathop{}\!\mathrm{d}\tilde{t}}
\frac{\mathop{}\!\mathrm{d}\widetilde{\phi}_0}{\mathop{}\!\mathrm{d}\tilde{t}}+\widetilde{m}^2 \widetilde{\phi}_0
&=0,
\label{eq:tnd2}
\end{align}
where $H_0$ is the Hubble constant, and
$\overline\phi_0$ is a constant related to the initial value of $\phi_0$.
If we use the scale factor $a$ instead of $t$, and use a prime $'$ to denote the derivative with respect to the scale factor $a$, then the equations correspond to
\begin{align}
& \left(1-\widetilde{r} a^2 \widetilde{\phi}_0'^2\right)\widetilde{H}^2=\widetilde{r}\widetilde{m}^2\widetilde{\phi}_0^2+\Omega_m a^{-3},
\label{eq:ha1}
\\
&
a^2 \widetilde{H}^2 \widetilde{\phi}_0''+ \left( 4 a \widetilde{H}^2
+ a^2 \widetilde{H}\widetilde{H}' \right) \widetilde{\phi}_0'
+ \widetilde{m}^2\widetilde{\phi}_0=0.
\label{eq:phia2}
\end{align}
Following Eq.~(\ref{eq:ha1}) we may also write out the dimensionless expansion rate as
\begin{align}
\widetilde{H}(a)=\sqrt{\widetilde{r}\widetilde{m}^2\widetilde{\phi}_0^2+\Omega_m a^{-3} \over 1-\widetilde{r} a^2 \widetilde{\phi}_0'^2}.
\label{eq:ha2}
\end{align}
We leave further details of the procedure for solving these background equations to Appendix~\ref{appen:background}.
It is worth noting that, according to the definitions in Eqs.~(\ref{def:ndt})--(\ref{def:ndm}), there are two degrees of freedom in
the parameters $\widetilde{m}$ and $\widetilde{r}$, which specify the mass and energy scale of the dark energy field $\phi$, respectively. The unknown component in our model, the dark energy $\phi$, can be fundamentally characterized by two parameters: one is the shape of its potential $V(\phi)={m_{\phi}^2}\phi^2/2$, and the other is its initial value
in our universe, while the properties of the other components (e.g., matter) are considered known under the standard cosmological model.
In order to fix the dark energy density today, we have the constraint from the present Hubble rate
by the definitions
\begin{align}
a(\tilde{t}_0)= a(H_0t_0) &\equiv 1,
\\
H(\tilde{t}_0) = H(H_0{t_0}) &\equiv H_0,
\end{align}
where $t_0$ is the proper cosmic time for the present epoch.
Inserting these into Eq.~(\ref{eq:tnd1}) gives
\begin{align}
1-\Omega_m= \widetilde{r} \widetilde{m}^2 \left(\widetilde{\phi}_0\Big|_{\tilde{t}
=\tilde{t}_0}\right)^2+\widetilde{r}\left(\frac{\mathop{}\!\mathrm{d} \widetilde{\phi}_0}{\mathop{}\!\mathrm{d} \tilde{t}}\bigg|_{\tilde{t}=\tilde{t}_0}\right)^2.
\label{eq:constr}
\end{align}
Eq.~(\ref{eq:constr}) is the necessary condition for specifying the dark energy density observed today when solving the background equations.
Together with Eqs.~(\ref{eq:tnd1}) and (\ref{eq:tnd2}), the system is now prepared for numerical evaluation to obtain the evolution of $a(\tilde{t})$ and $\widetilde{\phi}_0(\tilde{t})$.
As we are mainly interested in the late-time evolution here,
we can set the initial value of the independent variable $\tilde{t}$ or $a$ manually to a typical value, for example, $a_i = a_d \approx 1/1100$ at the photon decoupling at the last scattering,
by use of Eq.~(\ref{eq:ini_a}). These solutions determine the background evolution that we rely on to solve the perturbation equations.
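As an illustration of this procedure, a minimal SciPy sketch could integrate Eqs.~(\ref{eq:tnd1})--(\ref{eq:tnd2}) from the matter-era initial data and tune the amplitude $C_1$ by bisection until the constraint (\ref{eq:constr}) is satisfied; the parameter values, bracket, and tolerances below are placeholder choices.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

Om, m, r = 0.3, 0.05, 280.0                # assumed parameters, cf. Fig. 1

def rhs(t, y):                             # y = (a, phi0, dphi0/dt)
    a, phi, dphi = y
    H = np.sqrt(r * m**2 * phi**2 + r * dphi**2 + Om / a**3)  # Eq. (tnd1)
    return [a * H, dphi, -3.0 * H * dphi - m**2 * phi]        # Eq. (tnd2)

def de_density_today(C1, ti=1e-4):
    ai = (2.25 * Om)**(1 / 3) * ti**(2 / 3)   # matter-era a(t)
    hit = lambda t, y: y[0] - 1.0             # stop when a = 1
    hit.terminal = True
    sol = solve_ivp(rhs, [ti, 5.0], [ai, C1, 0.0],
                    events=hit, rtol=1e-10, atol=1e-12)
    _, phi0, dphi0 = sol.y[:, -1]
    return r * m**2 * phi0**2 + r * dphi0**2  # lhs of Eq. (constr)

C1 = brentq(lambda c: de_density_today(c) - (1 - Om), 0.5, 5.0)
\end{verbatim}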
It is worth mentioning that Eq.~(\ref{eq:constr}) also provides a particular baseline for choosing the parameters $\widetilde{m}$ and $\widetilde{r}$ from the parameter space:
the choice of parameters approximating the $\Lambda$CDM model is
\begin{align}
\widetilde{r} \widetilde{m}^2 \simeq 1-\Omega_m,
\label{eq:lcdmparameter}
\end{align}
concerning which more details can be found in Appendix~\ref{appen:background} (see also Eq.~(\ref{eq:phi0-discuss2})).
However, this condition on the parameter choice is not mandatory for solving the system.
We can now solve for $\widetilde{\phi}_0(a)$ numerically under two degrees of freedom for the choice of
parameters $\widetilde{m}$ and $\widetilde{r}$. Examples of the solutions under the conditions that allow the recovery of the models close to the $\Lambda$CDM universe are presented in Figs.~1--2.
To investigate the impact of parameter choices on the background solutions more specifically, we also chose other sets of parameters.
Table~\ref{tab:para} provides the parameter sets adopted in the present paper.
We present some typical figures showing how parameters $\widetilde{r}$, $\widetilde{m}$, and $\Omega_m$ can affect the evolution of the background solution and
the equation of state as a function of $a$ in Figs.~1--2 and Figs.~7--11 in Appendix~\ref{appen:background}.
We now discuss the behaviors of the background solutions under different parameters. Fig.~\ref{fig:phi0LCDM} shows the impact of the
parameter choice on the behavior of the solution for $\tilde{\phi}_0(a)$ in the cases following Eq.~(\ref{eq:lcdmparameter}), where models close to the $\Lambda$CDM cosmologies are expected.
Fig.~\ref{fig:phi0r} shows
how the parameters $\widetilde{r}$ and $\widetilde{m}$ affect the behaviors of $\tilde{\phi}_0$, while Fig.~\ref{fig:phi0omegam} shows the dependence on $\Omega_m$.
The behaviors of the $\widetilde{\phi}_0$ curves in these figures can be understood as follows.
From Eqs.~(\ref{eq:tnd1}) and (\ref{eq:constr}) we can see that the parameter $\widetilde{r}$ can actually be
absorbed into the amplitude of $\widetilde{\phi}_0$ as a rescaling factor, namely
\begin{align}
\widetilde{m}^2(\sqrt{\widetilde{r}}\widetilde{\phi}_0)^2+\left(\frac{\mathop{}\!\mathrm{d}(\sqrt{\widetilde{r}}\widetilde{\phi}_0)}{\mathop{}\!\mathrm{d}\tilde{t}} \right)^2
=\left(\frac{1}{a}\frac{\mathop{}\!\mathrm{d} a}{\mathop{}\!\mathrm{d}\tilde{t}}\right)^2-\Omega_m a^{-3},
\label{eq:phi0-discuss1}
\end{align}
with
\begin{align}
1-\Omega_m=\widetilde{m}^2(\sqrt{\widetilde{r}}\widetilde{\phi}_0\Big|_{a=1})^2+(\sqrt{\widetilde{r}}\widetilde{\phi}_0'\Big|_{a=1})^2.
\label{eq:phi0-discuss2}
\end{align}
These two equations facilitate understanding of
why changing $\widetilde{r}$ with the other parameters fixed only rescales the value of $\widetilde{\phi}_0$ without causing a nontrivial difference in the characteristic behaviors of the curves in Fig.~\ref{fig:phi0r}.
Moreover, when we evaluate $\widetilde{\phi}_0$ by choosing the condition in Eq.~(\ref{eq:lcdmparameter}), close to the $\Lambda$CDM model, as a baseline for the natural choices of the parameters,
\begin{align}
\frac{\mathop{}\!\mathrm{d}(\sqrt{\widetilde{r}}\widetilde{\phi}_0)}{\mathop{}\!\mathrm{d}\tilde{t}}\ll1
\qquad {\rm or} \qquad
\sqrt{\widetilde{r}}\widetilde{\phi}_0'\ll 1
\label{eq:phi0-discuss3}
\end{align}
always holds.
Hence, it follows from Eq.~(\ref{eq:phi0-discuss1}) that
\begin{align}
(\sqrt{\widetilde{r}}\widetilde{m}\widetilde{\phi}_0)^2
\simeq\left(\frac{1}{a}\frac{\mathop{}\!\mathrm{d} a}{\mathop{}\!\mathrm{d}\tilde{t}}\right)^2-\Omega_m a^{-3}.
\label{eq:phi0-discuss4}
\end{align}
By arguments similar to those for $\widetilde{r}$, we understand that, to some extent, $\widetilde{m}$ also works as a rescaling factor for the background $\widetilde{\phi}_0$, which explains the behavior of $\widetilde{\phi}_0$ in Fig.~\ref{fig:phi0r}. At the same time, the appearance of $\Omega_m$ on the right-hand side of Eq.~(\ref{eq:phi0-discuss4}) explains the dependence of the background solution $\widetilde{\phi}_0$ on $\Omega_m$ in Fig.~\ref{fig:phi0omegam}.
Now, let us discuss the parameter dependence of the dark energy EoS $\omega_\phi(a)$, as shown in
Fig.~\ref{fig:wLCDM} and Fig.~\ref{fig:womegam}. We may conclude that the background dark energy EoS $\omega_{\phi}(a)$ is almost
independent of $\widetilde{r}$; in contrast, $\widetilde{m}$ is the main influencing factor. There is also a slight dependence on the cosmological parameter $\Omega_m$, as shown in Fig.~\ref{fig:womegam}.
These behaviors can be understood using Eqs.~(\ref{eq:eoscpl2})--(\ref{eq:eosparam3}) in Appendix~\ref{appen:EOSCPL} as an analogy to the CPL parametrization. Generally, $\omega_{\phi}(a)\simeq-1+2\left(1-(a\widetilde{m}^2\tilde{\phi}_0^2)/(\Omega_m\tilde{\phi}_0'^2)\right)$ holds for almost all models; hence, $\widetilde{r}$ does not have an impact on $\omega_\phi$ at the background level, while $\widetilde{m}$ and $\Omega_m$ do affect the dark energy EoS $\omega_\phi$.
\begin{figure}[t]
\includegraphics[width=0.7\linewidth]{fig-wLCDM.pdf}
\caption{Evolution of the dark energy EoS $\omega_\phi(a)$ with the different sets of the
parameters chosen in Fig.~\ref{fig:phi0LCDM}.
From Eq.~(\ref{eq:constr}) and Eq.~(\ref{eq:eosparam2}), it is straightforward to see that $\widetilde{r}$ does not affect the EoS of $\widetilde{\phi}_0$. The figure shows the influence of $\widetilde{m}$ on the EoS of $\widetilde{\phi}_0$ with fixed $\Omega_m=0.3$.
}
\label{fig:wLCDM}
\end{figure}
Fig.~\ref{fig:Homegam} shows a slight dependence on $\Omega_m$ for the expansion rate $\widetilde{H}(a)$ as a function of the scale factor for $0.5<a<1$,
while Fig.~\ref{fig:Hmfuture} shows a possible impact on the future expansion rate from the mass parameter $\widetilde{m}$.
To explain these behaviors for $\widetilde{H}(a)$, let us consider the analytic approximation of $\widetilde{H}(a)$ starting from Eq.~(\ref{eq:ha2}).
For models close to the $\Lambda$CDM model, where $\widetilde{\phi}_0\simeq {\rm const.}$, $\widetilde{\phi}_0'\simeq0$, and Eq.~(\ref{eq:lcdmparameter}), i.e., $\widetilde{r}\widetilde{m}^2 \simeq 1-\Omega_m$, holds, we have
\begin{align}
\widetilde{H}(a) \simeq \sqrt{(1-\Omega_m)\widetilde{\phi}_0^2+\Omega_m a^{-3}},
\label{Hubbleequation}
\end{align}
which is almost the same as the Hubble equation for the standard $\Lambda$CDM parametrization. Hence, it is obvious that $\Omega_m$ is the dominant parameter for the background expansion history when $0<a<1$.
\begin{figure}[t]
\begin{minipage}{0.55\hsize}
\begin{center}
\includegraphics[width=\linewidth]{figdeltal-approx.pdf}
\end{center}
\hspace{.5cm}
\end{minipage}
\begin{minipage}{0.55\hsize}
\begin{center}
\includegraphics[width=\linewidth]{figphil-approx.pdf}
\end{center}
\vspace{-0.cm}
\end{minipage}
\caption{Comparison of the evolution of $\delta_\ell$ and $\widetilde \phi_\ell$ between the analytic approximation (dashed curve) by Eqs.~(\ref{eq:ini_deltal}) and (\ref{eq:ini_phil}), and the exact numerical solutions (solid curves). Here, we adopt $\widetilde{r}=70$ and $\widetilde{m}=1/10$ for $\delta_\ell$, and
$\widetilde{r}=280$ and $\widetilde{m}=1/20$ for $\tilde{\phi}_\ell$ as examples. We checked the validity of the analytic approximations for other values of $\widetilde{m}$ and $\widetilde{r}$ adopted in Table I.
The deviation between the analytic approximation and the numerical solution starting around $a\gtrsim 0.5$ arises from the emerging domination of dark energy, which invalidates the analytic approximation derived under the assumption of matter domination.
}
\label{fig:numer-analytic}
\end{figure}
\begin{figure}[t]
\includegraphics[width=0.7\linewidth]{fig-philm.pdf}
\caption{Numerical solutions for the perturbation for $\tilde{\phi}_\ell(a)$, with the different values of parameter $\widetilde{m}$, where $\Omega_m=0.3$ and $\widetilde{r}=6.3$ are fixed.
}
\label{fig:philm}
\end{figure}
\subsection{Equations governing 1st order perturbations}
In the previous subsection we solved for the background; on the basis of the background solutions, we now consider the numerical solution of the first-order perturbation equations, Eqs.~(\ref{eq:11})--(\ref{eq:16}).
We define the perturbation to dark matter density as
\begin{align}
\rho_\ell\equiv\rho_0\delta_\ell,
\end{align}
together with the following quantity associated with the velocity as
\begin{align}
\widetilde{V_{\ell}}\equiv H_0 V_{\ell}.
\end{align}
Then we can utilize the Friedmann equation in Eqs.~(\ref{eq:11})--(\ref{eq:16}) to eliminate quantities such as $\rho_0$ and $\rho_\ell$, and
use $\delta_\ell$ to characterize the first-order matter perturbations, with
\begin{align}
\rho_0(a)=3H_0^2\Omega_m a^{-3} M_{\rm pl}^2.
\label{eq:rho0}
\end{align}
The dimensionless differential equations as functions of $\tilde{t}$ then read
\begin{gather}
{\partial {\delta}_\ell \over \partial \tilde t}+3{\partial{\Phi}_\ell \over \partial \tilde{t}}=0,
\label{eq:1st1}
\\
{\partial^2\widetilde{\phi}_\ell \over \partial \tilde{t}^2}
+ {3 \over a}{\partial a\over \partial \tilde{t}}{\partial \widetilde{\phi}_\ell \over \partial \tilde{t}}+\widetilde{m}^2\widetilde{\phi}_\ell
-2\Psi_\ell {\partial^2\widetilde{\phi}_0 \over \partial \tilde{t}^2}
-{6 \Psi_\ell \over a}{\partial a\over \partial \tilde{t}}{\partial \widetilde{\phi}_0 \over \partial \tilde{t}}
+{\partial \over \partial \tilde{t}}(3\Phi_\ell-\Psi_\ell)
{\partial \widetilde{\phi}_0 \over \partial \tilde{t}}=0,
\label{eq:1st3}
\\
{\partial\widetilde{V}_\ell \over \partial \tilde{t}}-\Psi_\ell=0,
\label{eq:1st2}
\\
-{2\over a}{\partial a \over \partial \tilde{t}}\Psi_\ell
+2{\partial \Phi_\ell \over \partial \tilde{t}}+ 3 \widetilde{V}_\ell \Omega_m a^{-3}
+6\widetilde{r} \widetilde{\phi}_\ell{\partial\widetilde{\phi}_0 \over \partial\tilde{t}}=0,
\\
6 ({1 \over a}{\partial a\over \partial \tilde{t}})^2 \Psi_\ell -6({1 \over a}{\partial a\over \partial \tilde{t}}){\partial{\Psi}_\ell \over \partial \tilde{t}}+3\Omega_m a^{-3}\delta_\ell
+6\widetilde{r}\left(
\widetilde{m}^2\widetilde{\phi}_0\widetilde{\phi}_\ell+{\partial\widetilde{\phi}_0 \over \partial\tilde{t}}{\partial\widetilde{\phi}_\ell \over \partial\tilde{t}}-\Psi_\ell ({\partial\widetilde{\phi}_0 \over \partial\tilde{t}})^2
\right)=0,
\\
\left(({1 \over a}{\partial a\over \partial \tilde{t}})^2+{2 \over a}{\partial^2 a\over \partial \tilde{t}^2}\right)\Psi_\ell
+{1 \over a}{\partial a\over \partial \tilde{t}}{\partial \over \partial \tilde{t}}\left(\Psi_\ell-3\Phi_\ell\right)
-{\partial^2\Phi_\ell \over \partial \tilde{t}^2}+3\widetilde{r}
\left(\widetilde{m}^2\widetilde{\phi}_0\widetilde{\phi}_\ell-{\partial\widetilde{\phi}_0 \over \partial\tilde{t}}{\partial\widetilde{\phi}_\ell \over \partial\tilde{t}}+\Psi_\ell({\partial\widetilde{\phi}_0 \over \partial\tilde{t}})^2\right)=0.
\label{eq:1st6}
\end{gather}
Notice that from Eq.~(\ref{eq:1st1})
\begin{align}
\delta_\ell+3{\Phi}_\ell={\rm constant},
\label{eq:inidelta}
\end{align}
where the constant is presumed to be zero, as we assume that the superhorizon
perturbations of the scalar field are isocurvature perturbations.
Then, we assume the initial values
\begin{align}
\delta_\ell(0)=\Phi_\ell(0)=0.
\label{eq:inidelta2}
\end{align}
As in the case of the super-curvature-mode dark energy \cite{scmde1},
we adopt the general condition that the anisotropic stress is negligible, which reads
\begin{align}
{\Phi}_\ell+{\Psi}_\ell\simeq0,
\end{align}
so that we can eliminate ${\Phi}_\ell$ and ${\Psi}_\ell$ in favor of $\delta_\ell$, and express $\Psi_\ell$ through $\partial \widetilde{V}_\ell/ \partial \tilde{t}$ using Eq.~(\ref{eq:1st2}).
Finally, we are left with two equations for $\delta_\ell$ and $\tilde{\phi}_\ell$ to solve, whose explicit forms are lengthy but straightforward,
hence we omit them here.
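For concreteness, the reduced system can be integrated with a standard ODE solver once its right-hand side is assembled. The following is a minimal Python sketch with our own, hypothetical naming; the right-hand side \texttt{rhs} is only a schematic placeholder (Hubble friction plus the mass term) standing in for the omitted explicit forms, and the initial data anticipate the small-$a$ solutions derived below.
\begin{verbatim}
# Minimal sketch: integrate a coupled system for (delta_l, phi_l) in the
# dimensionless time t; rhs is a schematic placeholder for the omitted
# reduced forms of Eqs. (1st1)-(1st6).
import numpy as np
from scipy.integrate import solve_ivp

Om, r, m = 0.3, 6.3, 1.0 / 3.0             # Omega_m, r-tilde, m-tilde

def rhs(t, y):
    d, dd, p, dp = y                       # (delta_l, delta_l', phi_l, phi_l')
    H = 2.0 / (3.0 * t)                    # matter-era (da/dt)/a in t units
    return [dd, -3 * H * dd,               # placeholder friction terms only
            dp, -3 * H * dp - m**2 * p]

t0 = 1e-4                                  # deep in matter domination
y0 = [-(27 / 22) * m**2 * r * t0**2,       # Eq. (ini_deltal) with F = D = 1
      -(27 / 11) * m**2 * r * t0,          # its t-derivative
      1 - m**2 * t0**2 / 6,                # Eq. (ini_phil)
      -m**2 * t0 / 3]                      # its t-derivative
sol = solve_ivp(rhs, (t0, 1.0), y0, rtol=1e-8, atol=1e-12)
print(sol.y[0, -1], sol.y[2, -1])          # delta_l and phi_l at t = 1
\end{verbatim}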
We would like to note that our analysis is based on the conformal Newtonian (longitudinal) gauge,
which is widely used in various analyses of cosmological perturbations.
It is known that the conformal Newtonian gauge
leaves no residual gauge freedom except for the long-wavelength mode of $k = 0$.
In the long-wavelength limit, the effect of the inhomogeneities of our
dark energy model is that of isocurvature perturbations. We consider the gauge freedom to be
fixed for the dipole and quadrupole modes with small but nonzero $k$; however, the possibility of
contamination by the gauge modes with $k=0$ should be kept in mind.
To set the initial conditions, we again solve the equations analytically in the limit
$a\ll1$.
First, recalling the definitions in Eqs.~(\ref{def:field}) and (\ref{def:ndphi}), we can generalize the dimensionless quantities as
\begin{align}
\phi
&\equiv
\overline{\phi}_0(\widetilde{\phi}_0+\epsilon_1\widetilde{\phi}_1\sum_m P_i^{(m)}x^{i}+\epsilon_2\widetilde{\phi}_2\sum_m P^{(m)}_{ij} x^i x^j).
\end{align}
In the limit $a\ll 1$ ($t \rightarrow 0, \widetilde t \rightarrow 0$),
we may assume the power law form for the perturbations
\begin{align}
\delta_\ell &\equiv A_1\tilde{t}^{\alpha},
\label{assump:delta}
\\
\widetilde{\phi}_\ell &\equiv {\cal D}+D_1\tilde{t}^{\gamma}.
\label{assump:phil}
\end{align}
Furthermore, Eq.~(\ref{eq:iniphi}) gives
\begin{align}
\widetilde{\phi}_0(\tilde{t})= C_1 \frac{\sin(\widetilde{m}\tilde{t})}{\widetilde{m}\tilde{t}}\approx
C_1 (1-\frac{\widetilde{m}^2\tilde{t}^2}{6})
\equiv F(1-\frac{\widetilde{m}^2\tilde{t}^2}{6}).
\label{assump:phi}
\end{align}
For given $\widetilde{m}$ and $\widetilde{r}$, we solved for the background and fixed the value of $C_1$ (equivalently $F$) in Sec.~\ref{sec:numer_background}, so
we may take $F$ as a known quantity here.
For the scale factor $a$, recall the background analytic approximation of Eq.~(\ref{eq:ini_a}),
\begin{gather}
a= \left(\frac{9}{4}\Omega_m \right)^{\frac{1}{3}} \tilde{t}^{\frac{2}{3}}\equiv B \tilde{t}^{\frac{2}{3}} ,
~~~ \tilde{t} = \left({\frac{a}{B}}\right)^{3 \over 2}.
\nonumber
\end{gather}
Inserting the ansatz of Eqs.~(\ref{assump:delta})--(\ref{assump:phi}) into Eqs.~(\ref{eq:1st3})--(\ref{eq:1st6})
yields equations in $a$ (or $\tilde{t}$) relating the unknown coefficients $\alpha$, $\gamma$, $A_1$, ${\cal D}$, and $D_1$.
In the limit $a \to 0$ (equivalently $\tilde{t} \to 0$), matching the leading order in $a$ of each equation gives
\begin{gather}
\alpha=\gamma=2,
\\
D_1= -{1\over 6}\widetilde{m}^2 {\cal D},
\label{coefd1}
\\
A_1= - {27\over 22}\widetilde{m}^2 \widetilde{r} F {\cal D},
\label{coefa1}
\end{gather}
where ${\cal D}$ may be understood as the amplitude of each mode of the perturbations (cf. $\epsilon_1$ and $\epsilon_2$), which will be constrained later with the observational data.
For now, ${\cal D}=1$ may be set for the numerical solution.
Further, the analytic approximations for the evolution of the perturbations in the limit $a\ll 1$ ($t \rightarrow 0, \widetilde t \rightarrow 0$) are found as
\begin{align}
\delta_\ell &\simeq - {27\over 22}{\cal D}\widetilde{m}^2 \widetilde{r} F \tilde{t}^{2}= - {27\over 22}\widetilde{m}^2 \widetilde{r} F \tilde{t}^{2},
\label{eq:ini_deltal}
\\
\widetilde{\phi}_\ell &\simeq {\cal D} \left(1 -{1\over 6}\widetilde{m}^2\tilde{t}^{2}\right)= 1 -{1\over 6}\widetilde{m}^2\tilde{t}^{2},
\label{eq:ini_phil}
\end{align}
allowing us to set the proper initial conditions for $\delta_\ell$ and $\widetilde{\phi}_\ell$.
The equations using $a$ and $\tilde{t}$ as independent variables are mutually transformable using Eq.~(\ref{eq:ini_a}),
as was done in Sec.~\ref{sec:numer_background}.
The analytical solution of the first-order equations in Eqs.~(\ref{eq:11})--(\ref{eq:16}) for the other quantities can be
found in a similar way as,
\begin{align}
&\Phi_\ell\simeq-\Psi_\ell\simeq+\frac{9}{22}{\cal D}\widetilde{m}^2 \widetilde{r} F \tilde{t}^{2}=+\frac{9}{22}\widetilde{m}^2 \widetilde{r} F \tilde{t}^{2} ,
\label{eq:ini_PsiPhi}
\\
&\widetilde{V}_\ell \simeq -\frac{3}{22}{\cal D}\widetilde{m}^2 \widetilde{r} F \tilde{t}^{3}= -\frac{3}{22}\widetilde{m}^2 \widetilde{r} F \tilde{t}^{3}.
\label{eq:ini_v}
\end{align}
We notice that $\delta_\ell$ and $\Psi_\ell$ take negative values, corresponding to the positive values of $\widetilde{\phi}_\ell$ in Eq.~(\ref{eq:ini_phil}). Physically, this means that an increase in the dark energy $\phi$
makes the matter density perturbation $\delta_\ell$ negative and the curvature potential $\Phi_\ell$ positive.
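For later numerical use, the small-$a$ solutions above can be packaged into a single helper. The snippet below (our own naming, with ${\cal D}=1$) simply encodes Eqs.~(\ref{eq:ini_deltal}), (\ref{eq:ini_phil}), (\ref{eq:ini_PsiPhi}) and (\ref{eq:ini_v}) for a given starting scale factor.
\begin{verbatim}
# Hedged helper encoding the analytic initial data at small a (D = 1).
def initial_data(a, m, r, F, Om):
    B = (9 * Om / 4) ** (1 / 3)
    t = (a / B) ** 1.5                         # from a = B t^(2/3), Eq. (ini_a)
    delta = -(27 / 22) * m**2 * r * F * t**2   # Eq. (ini_deltal)
    phi   = 1 - m**2 * t**2 / 6                # Eq. (ini_phil)
    Phi   = (9 / 22) * m**2 * r * F * t**2     # Eq. (ini_PsiPhi), Phi = -Psi
    V     = -(3 / 22) * m**2 * r * F * t**3    # Eq. (ini_v)
    return delta, phi, Phi, V

print(initial_data(1e-3, 1 / 3, 6.3, 1.0, 0.3))
\end{verbatim}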
The first-order equations, Eqs.~(\ref{eq:11})--(\ref{eq:16}), can be
solved numerically without further approximation. We present examples of the numerical solutions for the perturbations $\tilde{\phi}_\ell(a)$ and $\delta_\ell(a)$ in Fig.~\ref{fig:numer-analytic}, where
we adopted ${\cal D}=1$ with the same typical parameter sets $\widetilde{r}$ and $\widetilde{m}$ chosen in Sec.~\ref{sec:numer_background}. Fig.~\ref{fig:numer-analytic} also demonstrates the consistency between the analytic approximations of Eqs.~(\ref{eq:ini_deltal}) and (\ref{eq:ini_phil}) (dashed lines) and the numerical results (solid lines) for $a\lesssim0.5$, corresponding to the matter-dominated initial condition, while the analytic approximation deviates from the numerical solution
for $a\gtrsim0.5$.
We show how $\widetilde{m}$ affects the solution $\tilde{\phi}_\ell$ in Fig.~\ref{fig:philm}.
It should be noted that there is a slight dependence on $\Omega_m$ for $\tilde{\phi}_\ell$, similar to the behavior of $\widetilde{\phi}_0$ in Fig.~\ref{fig:phi0omegam}.
The behaviors of $\tilde{\phi}_\ell$ can be roughly understood from Eq.~(\ref{eq:ini_phil}), which is valid for $a \lesssim 0.5$.
Here $\widetilde{m}$ is important for the evolution of $\tilde{\phi}_\ell$,
whereas $\widetilde{r}$ is not. On the other hand, Eq.~(\ref{eq:12}) indicates that the solution of $\tilde{\phi}_\ell$ depends on $\tilde{\phi}_0$; hence, it slightly depends on $\Omega_m$, which can be understood by a discussion similar to that on the behavior of $\tilde{\phi}_0$ in Sec.~\ref{sec:numer_background} (see Eq.~(\ref{eq:phi0-discuss4})).
The dependence of $\delta_\ell$ on the parameters is shown in Fig.~\ref{fig:delta-lcdm-omegam}. From Eq.~(\ref{eq:ini_deltal}), we can conclude that $\widetilde{m}$ and $\widetilde{r}$ affect $\delta_\ell$, as demonstrated in the upper left and upper right panels of Fig.~\ref{fig:delta-lcdm-omegam}, respectively. However, for natural choices mimicking the standard $\Lambda$CDM scenario, satisfying Eq.~(\ref{eq:lcdmparameter}), the coefficient $F\approx1$ holds; hence, we have $\delta_\ell \simeq -(27/22)(1-\Omega_m)\tilde{t}^2$, which explains the behavior of $\delta_\ell$ in the lower panels of Fig.~\ref{fig:delta-lcdm-omegam}.
\begin{figure}
\begin{minipage}{0.45\hsize}
\begin{center}
\includegraphics[width=\linewidth]{fig-deltam.pdf}
\end{center}
\vspace{-0.cm}
\end{minipage}
\begin{minipage}{0.45\hsize}
\begin{center}
\includegraphics[width=\linewidth]{fig-deltar.pdf}
\end{center}
\vspace{-0.cm}
\end{minipage}
\begin{minipage}{0.45\hsize}
\begin{center}
\includegraphics[width=\linewidth]{fig-deltalcdm.pdf}
\end{center}
\vspace{-0.cm}
\end{minipage}
\begin{minipage}{0.45\hsize}
\begin{center}
\includegraphics[width=\linewidth]{fig-deltaomegam.pdf}
\end{center}
\vspace{-0.cm}
\end{minipage}
\caption{
Numerical solutions for the matter perturbation $\delta_\ell$.
The upper left and upper right panels demonstrate the dependence of $\delta_\ell$ on
$\widetilde{m}$ and $\widetilde{r}$, respectively.
The lower left panel assumes the same value of $\Omega_m=0.3$,
while the lower right panel assumes slightly different values of
$\Omega_m$,
where $\widetilde{r}=70$ and $\widetilde{m}=1/10$ are fixed.
The lower panels show that $\delta_\ell$ will
be almost independent of $\widetilde{r}$ or $\widetilde{m}$ values,
as long as they satisfy Eq.~(\ref{eq:lcdmparameter}).
\label{fig:delta-lcdm-omegam}}
\end{figure}
\section{Applications}
\label{sec:appli}
In this section, we consider two applications of our model for CMB temperature fluctuations and luminosity distance.
The first is the integrated Sachs-Wolfe (ISW) effect~\cite{scmde1,WHuthesis}. Some aspects of this effect were
investigated in a previous paper \cite{scmde1}, which relies on
the statistical argument based on the two-point correlation function.
We revisit this problem by applying the formulations developed in the present study. The second is the impact on the luminosity distances, which is related to the observations of SNe Ia.
As noted following the definition of $\phi$ in Eq.~(\ref{def:field}), $\epsilon_\ell$
was introduced to make explicit the smallness of the perturbations, and it is related to the coordinates chosen to define the multipoles of the perturbations in Eqs.~(\ref{def:Psi})--(\ref{def:velo}). For the purpose of numerical evaluation, where we are mainly interested in the evolution of the perturbations, these amplitudes are taken to be unity, that is, $\epsilon_\ell \sim {\cal D}_{(\ell m)} \equiv 1$ (see also Eqs.~(\ref{assump:delta}), (\ref{assump:phil}), (\ref{coefd1}) and (\ref{coefa1})). The amplitudes will be constrained by reintroducing the
parameters $\varepsilon_1$ and $\varepsilon_2$ when comparing with the actually observed CMB multipoles.
\subsection{CMB temperature fluctuations}
\label{sec:appli_CMB}
Through the integrated Sachs-Wolfe (ISW) effect, the perturbations to the metric caused by
the large-scale inhomogeneities of the dark energy $\phi$ affect the observations of the CMB anisotropies.
By using the relation between the comoving distance and the conformal time on the photon's path on the background, $\chi=\eta_0-\eta$,
we can evaluate the ISW effect on the temperature fluctuations of the CMB as
\begin{align}
{\Delta T\over T}
\simeq &
2\int_{\eta_d}^{\eta_0}\mathop{}\!\mathrm{d}\eta\left({\partial\Psi(\eta,\chi,\theta,\varphi)\over\partial\eta}\right)\Bigg|_{\chi=\eta_0-\eta}
\nonumber
\\
=&
2 \int_{\eta_d}^{\eta_0} \mathop{}\!\mathrm{d}\eta \left(
\sum^3_{m=1}{\partial\Psi_{1(m)}(\eta)\over\partial\eta} P^{(m)}_i x^i +
\sum^5_{m=1}{\partial \Psi_{2(m)}(\eta)\over\partial\eta} P^{(m)}_{ij} x^i x^j \right) \Bigg|_{\chi=\eta_0-\eta}
\nonumber
\\
=&
2
\int_{\eta_d}^{\eta_0} \mathop{}\!\mathrm{d}\eta \left({\partial\Psi_{1(m)}(\eta) \over\partial\eta} \sum^3_{m=1} P^{(m)}_i x^i \right)\Bigg|_{\chi=\eta_0-\eta}
+
2
\int_{\eta_d}^{\eta_0} \mathop{}\!\mathrm{d}\eta \left({\partial\Psi_{2(m)}(\eta) \over\partial\eta} \sum^5_{m=1} P^{(m)}_{ij} x^i x^j \right)\Bigg|_{\chi=\eta_0-\eta},
\label{eq:tempfluc1}
\end{align}
where $\eta_d$ denotes the era of the photon decoupling.
In the last line of Eq.~(\ref{eq:tempfluc1}), we used the Einstein summation convention
with respect to the index of $m$.
We note that $\Psi_{\ell(m)}$, which are denoted as $\Psi_{\ell}$ with the index $m$ omitted in the previous section for simplicity, are only functions of
the conformal time $\eta$.
It can also be confirmed that the matrices $P_{ij}^{(m)}$ and $P_{i}^{(m)}$ introduced in Sec.~\ref{sec:basic} are related to the
real-basis spherical harmonics $Y^{m}_\ell(\theta,\varphi)$ (see Appendix~\ref{appen:matrix}).
By utilizing the relations in Eqs.~(\ref{def:y1m}) and (\ref{def:y2m}), it follows that
\begin{align}
{\Delta T\over T}
&=
2\sum_m
\int_{\eta_d}^{\eta_0} \mathop{}\!\mathrm{d}\eta \left({\partial\Psi_{1(m)} \over\partial\eta} \chi Y_{\ell=1}^{(m)}(\theta,\varphi) \right)\Bigg|_{\chi=\eta_0-\eta}
+
2\sum_m\int_{\eta_d}^{\eta_0} \mathop{}\!\mathrm{d}\eta \left({\partial\Psi_{2(m)} \over\partial\eta} \chi^2 Y_{\ell=2}^{(m)}(\theta,\varphi) \right)\Bigg|_{\chi=\eta_0-\eta}
\nonumber
\\
&\equiv
2\sum_{\ell=1}^2 \sum_{m=1}^{2\ell+1}
Q_{\ell(m)} Y_{\ell}^{(m)}(\theta,\varphi),
\label{eq:tempfluc2}
\end{align}
where we defined
\begin{align}
&Q_{\ell (m)}\equiv \int_{\eta_d}^{\eta_0} \mathop{}\!\mathrm{d}\eta (\eta_0-\eta)^{\ell} {\partial\Psi_{\ell(m)} \over\partial\eta}.
\end{align}
Because we have obtained the evolution of the perturbation $\Psi_{\ell(m)}$ in the previous numerical solution in Sec.~\ref{sec:numer}, $Q_{\ell(m)}$ can be numerically evaluated.
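Concretely, once $\Psi_{\ell(m)}(\eta)$ is tabulated on a grid from the numerical solution, $Q_{\ell(m)}$ reduces to a one-dimensional quadrature. The sketch below uses our own, hypothetical names, and the $\Psi$ profile shown is a placeholder for the actual numerical solution.
\begin{verbatim}
# Sketch: Q_{l(m)} = \int_{eta_d}^{eta_0} (eta0 - eta)^l dPsi/deta deta
import numpy as np

def Q(eta, Psi, ell, eta0):
    dPsi = np.gradient(Psi, eta)           # dPsi/deta on the grid
    return np.trapz((eta0 - eta) ** ell * dPsi, eta)

eta0 = 3.19                                # H0*eta0 for model No. (1)
eta = np.linspace(0.05, eta0, 2000)        # from eta_d ~ 0 up to today
Psi1 = -0.04 * (eta / eta0) ** 2           # placeholder Psi_{1(m)}(eta)
print(Q(eta, Psi1, ell=1, eta0=eta0))
\end{verbatim}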
On the other hand, the angular two-point correlation function can be written in multipole expansion as~\cite{Bielewicz2004}
\begin{align}
\langle {\Delta T\over T}(\bm \gamma) {\Delta T\over T } (\bm \gamma') \rangle
=\sum_\ell \frac{2\ell+1}{4\pi}C_\ell P_\ell(\cos\theta),
\end{align}
where $\bm \gamma$ and $\bm \gamma'$ represent different unit line-of-sight directions with included angle $\theta$, i.e., $\bm \gamma \cdot \bm\gamma'=\cos\theta$.
The angular power spectrum $C_\ell$ is defined by the ensemble of squared expansion coefficients as follows:
\begin{gather}
C_\ell \equiv \displaystyle{\sum_{m=1}^{2\ell+1} |A_{\ell m}|^2\over 2\ell+1},
\end{gather}
where the coefficients are defined by
\begin{gather}
{\Delta T\over T}=\sum_\ell \sum_{m=1}^{2\ell+1} A_{\ell m} Y_{\ell}^{(m)}(\theta,\varphi).
\label{def:tempfluc1}
\end{gather}
Here we used $1\leq m\leq 2\ell+1$ to denote the magnetic quantum number.
By comparing Eq.~(\ref{eq:tempfluc2}) with (\ref{def:tempfluc1}), we find that
\begin{align}
A_{\ell m}=2
Q_{\ell(m)}=
2\left(\int_{\eta_d}^{\eta_0} \mathop{}\!\mathrm{d}\eta (\eta_0-\eta)^\ell {\partial\Psi_{\ell(m)} \over\partial\eta} \right).
\end{align}
A constraint on our model from the observational CMB power spectrum is $
C_\ell \leqslant C_\ell^{\rm obs}$,
meaning that the contribution of the large-scale mode perturbations to the CMB power spectrum multipoles should not exceed what is actually observed, since
there may be other sources contributing to the anisotropies, as long as cancellations do not occur.
Consequently, we have two constraints from the $\ell=1$ dipole and the $\ell=2$ quadrupole respectively as
\begin{align}
{4\sum_{m=1}^{2\ell+1} Q_{\ell(m)}^2\over 2\ell+1} \leqslant C_\ell^{\rm obs}.
\end{align}
Thanks to the Planck Legacy Archive\footnote{Based on observations obtained with Planck (http://www.esa.int/Planck), an ESA science mission with instruments and contributions directly funded by ESA Member States, NASA, and Canada.},
we can apply the observational upper limits $C_1^{\rm obs}<6.3\times10^{-6}$ and $C_2^{\rm obs}<(2\pi/6)\times(1.0\times10^{-10})$ to put
constraints on the amplitudes of the perturbations.
For example, for both parameter sets $(\widetilde{r}=70, \widetilde{m}=1/10)$ and $(\widetilde{r}=6.3, \widetilde{m}=1/3)$, or, more generally, for models close to $\Lambda$CDM sets labeled with No.~(1,2,7,8) in Table~\ref{tab:para}, where the condition in Eq.~(\ref{eq:lcdmparameter}) is satisfied, the calculations on $Q_{1(m)}$ and $Q_{2(m)}$ give consistent results as
\begin{align}
&Q_{1(m)}=-1.1\times10^{-1}{\cal D}_{(1 m)},
\\
&Q_{2(m)}=-9.0\times10^{-2}{\cal D}_{(2 m)},
\end{align}
where the amplitude of the perturbations for each mode,
${\cal D}_{(\ell m)}$, is recovered,
which leads to the following constraints
\begin{align}
&\varepsilon_1\equiv
\left[\sum_{m=1}^{2\ell+1} {\cal D}_{(1 m)}^2\over 2\ell+1\right]^{1/2}
\leqslant 1.2\times10^{-2},
\label{constr1}
\\
&\varepsilon_2\equiv\left[\sum_{m=1}^{2\ell+1} {\cal D}_{(2 m)}^2\over 2\ell+1\right]^{1/2}
\leqslant 5.7\times10^{-5} ,
\label{constr2}
\end{align}
since both parameter sets mimic a cosmology close to the $\Lambda$CDM model, the resulting observational constraints coincide.
We also present numerical evaluations with different parameter choices in Table~\ref{tab:para}.
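These bounds follow from elementary arithmetic: writing $Q_{\ell(m)} = \bar{Q}_\ell\, {\cal D}_{(\ell m)}$, the constraint $4\sum_m Q_{\ell(m)}^2/(2\ell+1) \leqslant C_\ell^{\rm obs}$ reduces to $\varepsilon_\ell \leqslant \sqrt{C_\ell^{\rm obs}}/(2|\bar{Q}_\ell|)$. The short check below reproduces the quoted values for model No.~(1).
\begin{verbatim}
# Worked check of Eqs. (constr1)-(constr2) for model No. (1).
import math

C1_obs = 6.3e-6
C2_obs = (2 * math.pi / 6) * 1.0e-10
Q1bar, Q2bar = -0.107, -0.0895                # Table I, model No. (1)

print(math.sqrt(C1_obs) / (2 * abs(Q1bar)))   # ~1.17e-2 = eps1^max
print(math.sqrt(C2_obs) / (2 * abs(Q2bar)))   # ~5.72e-5 = eps2^max
\end{verbatim}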
\begin{table}[h]
\begin{center}
\begin{tabular}{c||cc||cc|cc|cc||c}
\hline
\hline
$\rm{No.}$&$(\widetilde{r},\widetilde{m})$ & $\Omega_m$ & $Q_{1(m)}$ & $Q_{2(m)}$ & $\varepsilon_1^{\rm max}$ & $\varepsilon_2^{\rm max}$ & $F_{S1(m)}(z=3)$ & $F_{S2(m)}(z=3)$ & $H_0\eta_0$\\
\hline
$(1)$&$(70, 1/10)$ & 0.30 & -0.107 & -0.0895 & $1.17\times10^{-2}$ & $5.72\times10^{-5}$ & -0.0462 & -0.0693 & 3.19\\
$(2)$&$(6.3, 1/3)$ & 0.30 & -0.107 & -0.0896 & $1.17\times10^{-2}$ & $5.71\times10^{-5}$ & -0.0462 & -0.0692 & 3.19\\
$(3)$&$(50, 1/10)$ & 0.30 & -0.0904 & -0.0757 & $1.39\times10^{-2}$ & $6.76\times10^{-5}$ & -0.0390 &-0.0586 & 3.19\\
$(4)$&$(100, 1/10)$ & 0.30 & -0.128 & -0.107 & $9.82\times10^{-3}$ & $4.78\times10^{-5}$ & -0.0552 & -0.0828 & 3.19\\
$(5)$&$(6.3, 1/5)$ & 0.30 & -0.0642 & -0.0537 & $1.96\times10^{-2}$ & $9.52\times10^{-5}$ & -0.0277 & -0.0416 & 3.19\\
$(6)$&$(6.3, 1/10)$ & 0.30 & -0.0321 & -0.0269 & $3.91\times10^{-2}$ & $1.91\times10^{-4}$ & -0.0138 & -0.0208 & 3.19\\
$(7)$&$(2.8, 1/2)$ & 0.30 & -0.107 & -0.0897 & $1.18\times10^{-2}$ & $5.70\times10^{-5}$ & -0.0463 & -0.0692 & 3.19\\
$(8)$&$(280, 1/20)$ & 0.30 & -0.107 & -0.0895 & $1.17\times10^{-2}$ & $5.72\times10^{-5}$ & -0.0461 & -0.0693 & 3.19\\
$(9)$&$(72, 1/10)$ & 0.28 & -0.116 & -0.100 & $1.08\times10^{-2}$ & $5.11\times10^{-5}$ & -0.0503 & -0.0770 & 3.28\\
$(10)$&$(68, 1/10)$ & 0.32 & -0.0985 & -0.0803 & $1.27\times10^{-2}$ & $6.37\times10^{-5}$ & -0.0425 & -0.0626 & 3.11\\
$(11)$&$(1/70, 1/10)$ & 0.30 & -0.00153 & -0.00128 & $8.21\times10^{-1}$ & $4.00\times10^{-5}$ & -0.000659 & -0.000990 & 3.19\\
$(12)$&$(6.3, 1/2)$ & 0.30 & -0.160 & -0.135 & $7.83\times10^{-3}$ & $3.80\times10^{-5}$ & -0.0694 & -0.104 & 3.19\\
$(13)$&$(70, 1/10)$ & 0.32 & -0.100 & -0.0815 & $1.25\times10^{-2}$ & $6.28\times10^{-5}$ & -0.0431 & -0.0635 & 3.11\\
$(14)$&$(70, 1/10)$ & 0.28 & -0.115 & -0.0988 & $1.09\times10^{-2}$ & $5.18\times10^{-5}$ & -0.0496 & -0.0759 & 3.28\\
\hline
\hline
\end{tabular}
\caption{
Numerical results with different model parameters $(\widetilde{r},\widetilde{m})$ and cosmological parameter $\Omega_m$.
The models close to the $\Lambda$CDM model are labeled as Nos.~(1,2,7,8,9,10,13,14).
Within these models, Nos.~(1,2,7,8,9,10) satisfy the condition in Eq.~(\ref{eq:lcdmparameter}) with the equality holding exactly. Note that the values of the present comoving horizon $\eta_0$ also indicate that $\widetilde{r}$ is not important for the background expansion, while $\Omega_m$ does show its expected influence on $\eta_0$.
To see this, we focus on comparing the models labeled Nos.~(1,3,4,6,11), where different values of $\widetilde{r}$ barely change $\eta_0$; on the
other hand, a comparison among Nos.~(1,13,14) shows a slight dependence of $\eta_0$ on $\Omega_m$, as expected.
In particular, No.~(11) is a model extremely close to the $\Lambda$CDM model,
whose dark energy EoS is almost constant, $w_{\phi}\approx-1$, predicting a future evolution
quickly approaching the de Sitter expansion.
}
\label{tab:para}
\end{center}
\end{table}
\subsection{Perturbations to light propagation and luminosity distance}
Following Refs.~\cite{FS1989,AOF2019}, having solved for the metric perturbations $\Psi_\ell$ associated with the large-scale fluctuations of the dark energy, we can evaluate the
perturbation to the luminosity distance introduced by these inhomogeneities.
The relative perturbation of the luminosity distance in an inhomogeneous universe is given as~\cite{FS1989,sasaki1987}
\begin{align}
I &\equiv {\delta d_L \over d_L}
=\int^{\lambda_s}_0 \mathop{}\!\mathrm{d} \lambda {\lambda \over \lambda_s}(\lambda-\lambda_s) \left(\Delta^{(3)}\Psi-\left(\ddot{\Psi}+2{\mathop{}\!\mathrm{d}\dot\Psi \over \mathop{}\!\mathrm{d}\lambda}\right)\right),
\label{eq:deltald}
\end{align}
where $\dot\Psi\equiv
{\partial \Psi(\eta,\chi) \over \partial \eta}$, and we assume a
spatially flat universe.
The traceless property of the matrices $P_{ij}^{(m)}$ entering $\Psi$, as defined by
Eq.~(\ref{def:Psi}), ensures that $\Delta^{(3)}\Psi=0$ (see Eq.~(\ref{laplacian})).
For the term containing differentiation with respect to the propagation parameter $\lambda$, we may write
\begin{align}
{\mathop{}\!\mathrm{d}\over \mathop{}\!\mathrm{d}\lambda}={\mathop{}\!\mathrm{d}\eta\over \mathop{}\!\mathrm{d}\lambda}{\partial \over \partial \eta}
+{\mathop{}\!\mathrm{d}\chi\over \mathop{}\!\mathrm{d}\lambda}{\partial \over \partial \chi}.
\label{eq:dlambda}
\end{align}
Here, we may take the parameter $\lambda$ to be the comoving distance $\chi$; hence, $\lambda\equiv\chi=\eta_0-\eta$ and $\lambda_s\equiv\chi_s=\eta_0-\eta_s$, with an arbitrary light source indicated by the lower index $s$.
We thus have
\begin{align}
I =& \int^{\chi_s}_{0} \mathop{}\!\mathrm{d} \chi {\chi \over \chi_s}(\chi-\chi_s)
\left(\ddot{\Psi}-2 {\partial \dot{\Psi} \over \partial \chi}\right).
\end{align}
Using a procedure similar to that used to transform Eq.~(\ref{eq:tempfluc1}) to Eq.~(\ref{eq:tempfluc2}),
with the definition of $\Psi$ in Eq.~(\ref{def:Psi}) and Eqs.~(\ref{def:y1m})--(\ref{def:y2m}), we can rewrite $I$ as
\begin{align}
I =&
\int^{\chi_s}_{0} \mathop{}\!\mathrm{d} \chi(\chi-\chi_s) {\chi \over \chi_s}
\left[
\left(\ddot{\Psi}_{\ell(m)}-2 \dot{\Psi}_{\ell(m)} {\partial \over \partial \chi}\right)
\left(\sum^{3}_{m=1} \chi Y_{\ell=1}^{(m)}(\theta,\varphi)+\sum_{m=1}^{5} \chi^2 Y_{\ell=2}^{(m)}(\theta,\varphi)\right)
\right]
\nonumber
\\
\equiv& \sum_{\ell=1}^2 \sum_{m=1}^{2\ell+1}
S_{\ell(m)} Y_\ell^{(m)}(\theta,\varphi),
\label{eq:ld}
\end{align}
with the integral defined as
\begin{align}
S_{\ell(m)}\equiv\int^{\chi_s}_{0} \mathop{}\!\mathrm{d} \chi {\chi-\chi_s\over \chi_s} \left(\chi^{\ell+1}\ddot{\Psi}_{\ell(m)}-2\ell\chi^\ell
\dot{\Psi}_{\ell(m)}
\right).
\end{align}
It is worth recalling that $\Psi_{\ell(m)}(\eta)$ is a function of $\eta$ only.
$S_{\ell(m)}$ is the quantity that reflects the cumulative corrections to the luminosity distance from the inhomogeneities of the dark energy, and it can be evaluated numerically.
We evaluate $S_{\ell(m)}$, due to the perturbation of $\Psi$ caused by the dark energy inhomogeneity, as a function of $a$ or the cosmological redshift $z$, corresponding to light sources from different epochs,
\begin{align}
S_{\ell(m)}(a)=F_{S\ell(m)}(a){\cal D}_{(\ell m)}.
\label{eq:slma}
\end{align}
Then we have
\begin{align}
F_{S\ell(m)}(a)
&\equiv
\int^{\eta_s(a)}_{\eta_0} \mathop{}\!\mathrm{d}\eta \left((\eta_0-\eta)^{\ell+1}{\partial^2\Psi_{\ell(m)}\over\partial\eta^2}
-2\ell(\eta_0-\eta)^{\ell}{\partial\Psi_{\ell(m)}\over\partial\eta}
\right)
{\eta-\eta_s(a) \over \eta_0-\eta_s(a)},
\label{eq:fsaeta}
\end{align}
or, written more explicitly for numerical evaluation with respect to the scale factor $a$, using $a_1$ as the integration variable,
\begin{align}
F_{S\ell(m)}(a)
&=
-\int^{1}_{a} \mathop{}\!\mathrm{d} a_1
\left[
\left(\eta_0-\eta(a_1)\right)^{\ell+1}
{\partial \over \partial a_1} \left(a_1^2 H(a_1) {\partial\Psi_{\ell(m)} \over \partial a_1} \right)
- 2\ell \left(\eta_0-\eta(a_1)\right)^{\ell}{\partial\Psi_{\ell(m)}\over\partial a_1}
\right]{\eta(a_1)-\eta_s(a) \over \eta_0-\eta_s(a)}.
\label{eq:fsaa}
\end{align}
We notice that $F_{S\ell(m)}(a)$ is not monotonic; its typical behavior is illustrated as a function of $a$ or $z$ in Fig.~\ref{fig:fsaz}.
The scale factor is related to the cosmological redshift by $z=a^{-1}-1$, which is used to
convert between the two variables.
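In practice, Eq.~(\ref{eq:fsaa}) is a one-dimensional quadrature over $a_1$. A hedged sketch with our own naming is given below; the background and $\Psi_{\ell(m)}$ arrays are placeholders for the actual numerical solution of Sec.~\ref{sec:numer}.
\begin{verbatim}
# Sketch: evaluate F_{Sl(m)}(a) via Eq. (fsaa) by quadrature over a1.
import numpy as np

def F_S(a_s, ell, a, eta, H, Psi, eta0):
    sel = a >= a_s                          # integrate from a_s up to 1
    a1, e1 = a[sel], eta[sel]
    dPsi = np.gradient(Psi[sel], a1)
    inner = np.gradient(a1**2 * H[sel] * dPsi, a1)
    eta_s = np.interp(a_s, a, eta)
    w = (e1 - eta_s) / (eta0 - eta_s)
    f = ((eta0 - e1)**(ell + 1) * inner
         - 2 * ell * (eta0 - e1)**ell * dPsi) * w
    return -np.trapz(f, a1)

a = np.linspace(0.01, 1.0, 5000)
eta0 = 3.19                                 # H0*eta0, model No. (1)
eta = eta0 * a / (a + 0.5)                  # placeholder eta(a)
H = np.sqrt(0.3 * a**-3 + 0.7)              # placeholder H(a)/H0
Psi = -0.04 * a**2                          # placeholder Psi_{l(m)}(a)
print(F_S(0.25, 1, a, eta, H, Psi, eta0))   # F_{S1} at z = 3
\end{verbatim}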
\begin{figure}[t]
\includegraphics[width=0.47\linewidth]{fig-sa.pdf}
\hspace{5mm}
\includegraphics[width=0.47\linewidth]{fig-sz.pdf}
\caption{Multipole components of the perturbations to the luminosity distance defined as $F_{S\ell(m)}(a;z)$
as a function of scale factor $a$ (left panel) and redshift $z$ (right panel).
In each panel, the solid curve is the dipole, $\ell=1$, and the dashed curve is the quadrupole $\ell=2$.
Here we adopted model No.~(1) in Table~\ref{tab:para}.
}
\label{fig:fsaz}
\end{figure}
Because we have solved for the system as functions of the scale factor $a$ in Sec.~\ref{sec:numer}, the quantities $\Psi_\ell(a)$, $\eta(a)$, $H(a)$, and the particle horizon $\eta_0$ are already known for the given parameters $\widetilde{r}$ and $\widetilde{m}$. If necessary, we can also transform these quantities using the conformal time $\eta$
as the independent variable (see Appendix~\ref{appen:transf}). On the other hand, having placed constraints on $\varepsilon_1$ and $\varepsilon_2$ in Sec.~\ref{sec:appli_CMB},
we can evaluate the modification to the luminosity distance $I$ in Eq.~(\ref{eq:ld}) by numerically evaluating $S_{\ell(m)}$
under the constraints of Eqs.~(\ref{constr1})~and~(\ref{constr2}).
Our numerical results for $F_{S\ell(m)}$ are shown in Fig.~\ref{fig:fsaz} as a function of $a$ (left panel)
and $z$ (right panel), respectively.
Our results with different parameters can be found in Table~\ref{tab:para}.
We evaluated $F_{S\ell(m)}({a})$ at $a=0.25$, which corresponds to $z=3$.
We estimate the multipole components of $I$ as
\begin{align}
I_\ell\equiv\sum_{m=1}^{2\ell+1} S_{\ell(m)}
\sim(2\ell+1) S_{\ell(m)}.
\end{align}
Allowed values of ${\cal D}_{(
\ell m)}\sim\mathcal{O}(\varepsilon_\ell)$ ($\ell=1,2$) were found
in Sec.~\ref{sec:appli_CMB} (see Eqs.~(\ref{constr1}) and (\ref{constr2})).
For example,
with $\varepsilon_1<1.2\times10^{-2}$ and $\varepsilon_2<5.7\times10^{-5}$,
we can evaluate via Eq.~(\ref{eq:ld}) the modification to the luminosity distance caused by the large-scale dipole and quadrupole modes:
the magnitude of the correction from the $\ell=1$ component is $\mathcal{O}(10^{-3})$, whereas it is $\mathcal{O}(10^{-5})$ for the $\ell=2$ component.
We obtain consistent results for the modification to the luminosity distance $I_\ell$,
\begin{align}
I_{\ell=1}
\simeq-1.6\times10^{-3},
\label{eq:estima1}
\\
I_{\ell=2}
\simeq-2.0\times10^{-5},
\label{eq:estima2}
\end{align}
at the redshift $z=3$ for all models in Table~\ref{tab:para}.
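These values follow directly from Table~\ref{tab:para}: with $I_\ell \sim (2\ell+1)F_{S\ell(m)}(z=3)\,\varepsilon_\ell$, a two-line check reproduces them.
\begin{verbatim}
# Worked check of Eqs. (estima1)-(estima2), model No. (1) values.
print(3 * (-0.0462) * 1.17e-2)   # ~ -1.6e-3  (dipole, l = 1)
print(5 * (-0.0693) * 5.72e-5)   # ~ -2.0e-5  (quadrupole, l = 2)
\end{verbatim}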
\section{Discussions and Conclusions}
We formulated a cosmological model with inhomogeneous dark energy sourced by a dynamical scalar field with extremely large-scale fluctuations,
treating the fluctuations as cosmological perturbations about a homogeneous background so as to focus on the local observable universe.
This model is capable of reproducing an observable universe that mimics the $\Lambda$CDM flat universe favored by the observations but with inhomogeneity and anisotropy of small amplitudes on very large scales.
We derived the basic equations governing the evolution of the universe for the background and the perturbations,
and presented numerical solutions of these equations for parameters chosen to reproduce cosmological models close to the $\Lambda$CDM universe.
As examples of applications of these results, we investigated
the impact of the extremely large-scale inhomogeneity of the dark energy
on cosmological observations in the late-time universe, where dark energy becomes important for the background evolution.
In our numerical evaluations, we chose the parameters of the models close to the $\Lambda$CDM model, for example, $(\widetilde{r}=70, \widetilde{m}=1/10)$
and $(\widetilde{r}=6.3, \widetilde{m}=1/3)$, which satisfy the condition in Eq.~(\ref{eq:lcdmparameter}). However, the predictions of
the models are robust against different choices of the parameters ($\widetilde{r}$, $\widetilde{m}$), as shown in Table~\ref{tab:para} in Sec.~\ref{sec:numer}.
We also showed that slight changes in the values of $\Omega_m$ do not
alter the results.
The observational constraints on cosmological parameters allow deviation from the standard $\Lambda$CDM scenario to some extent~\cite{TSJ,Jassal2010}, potentially suggesting that dynamical quintessence models for dark energy EoS are favored~\cite{DiValentino2020}.
Hence it is interesting
to investigate constraints on the parameter space consistent with these observations.
Using numerical solutions, we focused our investigation on the impact of the large-scale inhomogeneities of the dark energy on the large angular anisotropies in the CMB temperature map and in the luminosity distance.
The time variations of the metric perturbations give rise to the ISW effect, which affects the temperature anisotropies.
In contrast to the previous work \cite{scmde1}, we investigated the
multipole spectrum in the spatially flat universe using numerical solutions without approximations.
We obtained the constraints Eqs.~(\ref{constr1})~and~(\ref{constr2}) on the
amplitude of the models from the observational data.
The contribution from the large-scale inhomogeneities of the dark energy on the dipole of the CMB temperature power spectrum may partly account for
the anomalies in the dipole and low multipoles of the CMB power spectra~\cite{Bielewicz2004,Polastri2015}.
The inhomogeneities of the dark energy affect the cosmic distance, which may
impact the observations of SNe Ia and BAO measurements.
We used the formula in Eq.~(\ref{eq:deltald}), following Refs.~\cite{FS1989,AOF2019}, to evaluate these effects.
Our numerical calculations showed that
the relative correction to the luminosity distance
could be $\mathcal{O}(10^{-3})$ for the
dipole and $\mathcal{O}(10^{-5})$ for the quadrupole components.
For general parameter choices in Table~\ref{tab:para},
these corrections seem too small to resolve the Hubble tension, which is becoming increasingly conspicuous between measurements via CMB and via standard candles such as SNe Ia \cite{Nielsen2016,Mohayaee2020,Colin2019a}, as addressed in Sec.~\ref{sec:intro}.
However, comprehensive analyses, covering wide ranges of the model parameters and the various observational results with systematics taken into account, will be interesting~\cite{Rubin2016,Rubin2020,DiValentino2020}.
In particular, future progress in gravitational-wave observations with associated
electromagnetic counterparts promises to provide standard sirens \cite{Holz2005,Dalal2006,Vitale2018,Zhang2019}.
Our model presented here is a possible dark energy model predicting the anisotropic expansion rate or anisotropic dark energy density and equation of state.
Using the solutions in the present paper, we can realize dynamical
dark energy models with inhomogeneous
density on large scales on top of the smooth background of the local universe.
The inhomogeneous dark energy model is also interesting in that it
is potentially verifiable/falsifiable
by ongoing/planned data releases of existing observations and by future-generation observations (for example,
DES, DESI, LSST~\cite{LSST}, Euclid~\cite{Euclid}, and the Roman Space Telescope (formerly known as WFIRST)~\cite{WFIRST}; cf. \cite{Yamauchi2018}).
Additionally, the neutral hydrogen cosmology from the 21-cm spectrum survey planned by SKA~\cite{SKA} may link BAO with redshift-space distortions and contribute to a better understanding of dark energy.
The future data of these surveys may help to test the
inhomogeneous properties of the dark energy.
The work in the present paper is inspired by a previous work \cite{scmde1},
in which large-scale dark energy perturbations are
generated by the quantum fluctuations of a scalar field
according to an open-inflation scenario. The original model predicts a cosmological model with negative spatial curvature. However, in the present study, we considered a spatially flat universe, $\Omega_K=0$;
therefore, the origin of the scalar field as the candidate for dark energy in our model remains a subject for further discussion.
Recently, ultralight scalar fields such as axion-like particles have attracted great interest as cosmological candidates for dark energy and dark matter~\cite{Visinelli:2018utg}, linked with the
strong CP problem and motivated by the string axiverse and the swampland conjectures~\cite{Arvanitaki,Witten,Ooguri,Heisenberg,Mizuno2019}.
Exploring the possibility of generating the initial conditions necessary for the scalar
field in our model within the framework of these scenarios would be an interesting future investigation.
\acknowledgments
This work was supported by MEXT/JSPS KAKENHI Grant Numbers JP20J13640 (Y.~N.), JP15H05895,
JP16H03977, JP17K05444, and JP17H06359 (K.~Y.).
We would like to thank K. Yamashita, Y. Sugiyama, Y. Kojima, N. Okabe,
A. Naruko and M. Sasaki for fruitful discussions and helpful comments.
\section{Introduction}
\begin{figure}[t]
\centering
\includegraphics[width=0.9\textwidth]{teaser.pdf}
\vspace{-1em}
\caption{Given a set of trained/seen attribute detectors (e.g. ``red wing", ``red head", ``blue breast", and ``green breast"), our ZSLA can synthesize a novel detector for the unseen attribute (e.g. ``red breast") by the following process:
(1) applying the intersection operation on the subsets $\{$``red wing", ``red head"$\}$ and $\{$``blue breast", ``green breast"$\}$ respectively to extract the common semantics of each subset, i.e. ``red" and ``breast", as the \emph{\textbf{base attributes}}; (2) combining the base attributes via the union operation to realize the novel/unseen attribute detector, i.e. ``red breast". The novel attribute detectors can later be applied to annotate the dataset.}
\label{Fig.teaser}
\vspace{+0.25cm}
\end{figure}
\par \new{Zero-shot learning (ZSL) algorithms for classification aim to recognize novel categories without observing any of their instances during model training; thus, the cost of collecting training samples for the novel categories can be eliminated. Typically, the core challenge behind zero-shot classification lies in associating novel categories with the seen ones during training. Various existing approaches leverage different auxiliary semantic information to construct such associations across categories, thus being able to generalize the learned models for classifying novel categories~\cite{DAP_IAP, ALE, ESZSL, SAE, LAGO, AGZSL} or synthesize the training samples for each novel category~\cite{f-WGAN, f-VAEGAN-D2, CADAVAE, EPGN, tfVAEGAN}. Among different types of auxiliary semantic information adopted for ZSL, defining a group of attributes shared among categories becomes one of the most popular choices, where each category is described by multiple attributes (i.e., multi-labeled by the attributes), and the attribute-based representations are discriminative across categories.
However, it comes with the expensive cost of manually annotating the samples in the dataset with their attribute labels at a much more granular level.
For example, CUB dataset~\cite{CUB}, one of the most widely-used benchmarks for learning zero-shot classification, is built by spending a great deal of time and effort to label 312 attributes for 11788 images.}
\par \new{As motivated by the issue of annotation efficiency on attribute labels, this paper aims to \emph{\textbf{develop ZSL on known attributes to annotate novel attributes for a dataset automatically}}.
That is, analogous to the zero-shot classification scenario, we now advance to annotate novel attributes for a dataset by utilizing the knowledge from a few types of seen/given manual attributes, as illustrated in Figure \ref{Fig.teaser}. Specifically, we take the well-known CUB dataset~\cite{CUB} as our main test-bed and investigate its attributes in depth. We discover that many attributes in the CUB dataset (e.g. ``red head" or ``blue belly") follow the form of combinations over \emph{\textbf{base attributes}} (e.g. ``red", ``blue", ``head" and ``belly" respectively).
Building upon this observation, given a defined set of attributes in the form of the ones used in the CUB dataset and labels of a few \emph{\textbf{seen attributes}} (whose number is far less than that of all defined attributes), we propose \textbf{Z}ero-\textbf{S}hot \textbf{L}earning for \textbf{A}ttributes (ZSLA), a method that trains the \emph{\textbf{seen attribute detectors}} and then tackles the ZSL problem of synthesizing unseen attribute detectors in a \emph{\textbf{decompose-and-reassemble}} manner. In detail, the seen attribute detectors are first decomposed into base attribute representations, which are then reassembled in novel combinations into novel attribute detectors, as illustrated in Figure~\ref{Fig.teaser}. Here, the decomposition and reassembly steps are achieved via set operations (i.e., the intersection and union operators, respectively). Together with the seen ones, the novel attribute detectors can be utilized to annotate the attribute labels for the dataset automatically.
}
\par \new{
To demonstrate the efficiency of ZSLA, we synthesize 207 novel attribute detectors by leveraging only 32 seen ones from the CUB dataset.
These novel attribute detectors are shown to effectively capture their corresponding semantic information, benefiting both attribute detection and localization for the samples in the CUB dataset.
Besides, we also synthesize the $\alpha$-CLEVR dataset based on \cite{clevr} to conduct controlled experiments and further discuss the influence of noisy seen-attribute labels. The results show that ZSLA provides more robust annotations than the other baseline methods under this noisy scenario. Below, we highlight the contributions of this paper:
}
\begin{itemize}
\item To the best of our knowledge, we are the first to propose ZSL for attributes to automatically annotate attribute labels for the zero-shot classification datasets.
\item We propose a novel decompose-and-reassemble approach to single out the base attribute representations via applying intersection on the seen ones and synthesize the unseen attribute detectors by having the union operation over the base attributes representations.
\item We show on the CUB dataset that, given only 32 attributes with manual annotations, ZSLA can synthesize novel attribute detectors to provide high-quality annotations for the dataset. By using the auto-annotated attributes, generalized zero-shot classification algorithms can also achieve comparable or even better performance than that using 312 manually-annotated attributes.
\end{itemize}
\section{Related Works}
\walon{Zero-shot learning (ZSL) was originally proposed to tackle the specific classification problem, where the model is expected to be capable of classifying the samples belonging to the novel categories which are not seen previously during training. The problem setup has been extended to other applications such as detection~\cite{bansal2018zero, rahman2018zero, demirel2018zero} and segmentation~\cite{bucher2019zero, zheng2021zero}. Here we provide a brief review of the works of zero-shot classification~\cite{DAP_IAP, ESZSL, ALE, SJE, SAE, CADAVAE, xu2020attribute,f-WGAN, f-VAEGAN-D2, CADAVAE, EPGN, tfVAEGAN}. Without loss of generality, the ZSL approaches rely on utilizing the auxiliary information (such as attributes, word embeddings, or text descriptions) as the basis for describing the categories and building the semantic relation among seen and unseen categories, and the existing methods can be roughly categorized into two groups: the embedding-based methods~\cite{ALE, SJE, SAE, CADAVAE, xu2020attribute} and generative methods~\cite{f-WGAN, f-VAEGAN-D2, CADAVAE, EPGN, tfVAEGAN}. The embedding-based methods basically aim to learn a latent space that connects between the feature representations of training samples and the embeddings of their corresponding auxiliary information (e.g., the visual features and the embeddings of attribute labels for the training images in the CUB dataset), such that the test samples can be classified as the novel categories once their feature representations are close to the embeddings of novel categories (which are defined upon auxiliary information without requiring any additional training samples). The generative methods instead utilize the deep-generative models (e.g., generative adversarial networks~\cite{goodfellow2014generative}, variational autoencoder~\cite{kingma2013auto}, or their hybrids/variants) for learning to synthesize the samples or features of the unseen categories based on their auxiliary semantic information. Though saving the effort of collecting the training samples to recognize novel categories via ZSL techniques, manually annotating the auxiliary semantic information for the samples in the zero-shot training dataset is still quite expensive and time-consuming. The proposed ZSL for novel attribute learning helps to reduce such costs for the scenario of zero-shot classification where the auxiliary information is defined on attributes.}
\walon{In addition to the typical zero-shot classification problem, recently there exists another specific zero-shot task that our work is also conceptually related to: \textit{compositional zero-shot learning} (CZSL)~\cite{misra2017red, nagarajan2018attributes, atzmon2020causal, mancini2021open, huynh2020compositional, naeem2021learning}. Also known as the \textit{state-object compositionality} problem, CZSL aims to recognize the novel compositions (e.g. ``ripe tomato'') given the seen visual primitives of states/attributes (e.g. ``ripe'', ``rotten'') and objects (e.g. ``apple'', ``tomato'') in the training dataset, where various models have been proposed and we just name a few here:~\cite{misra2017red} utilizes the state and object classifiers pretrained on a large-scale dataset, and learns a transformation network to compose these classifiers into a novel classifier for their combination;~\cite{nagarajan2018attributes} proposes to treat the attributes as the linear operators which are applied upon the word-embeddings of objects to produce the embedded vectors of their compositions. \cite{atzmon2020causal} models the causal graph from the intervention
between attributes and objects to the corresponding image observation. In comparison, our proposed problem scenario is different from CZSL under several perspectives: (1) CSZL studies the compositionality between states/attributes and objects, while our proposed problem scenario focuses on decomposing and reassembling attributes; (2) An image in our problem scenario would have multiple attributes while there usually exists only a single state-object composition for CZSL; (3) Our synthesized attribute detectors are able to provide labels of novel attributes for all samples thus leading to more detailed descriptions for all categories, while CZSL typically aims to increase the number of categories (i.e. each novel composition is treated as a new fine-grained class).
}
\section{ZSLA: Proposed Method}
\walon{Given a zero-shot classification dataset $\{\mathbf{X}, \mathbf{Y}, \mathbf{A}^s\}$, each image $x \in \mathbf{X}$ has its class label $y \in \mathbf{Y}$ and the multi-attribute labels $\phi^s(x)$, where $\phi^s(x)$ is a binary vector whose each element denotes whether $x$ has a certain attribute $a \in \mathbf{A}^s$. ZSLA starts by using $\{\mathbf{X}, \mathbf{A}^s\}$ to train the detectors $M^s$ for all the attributes in $\mathbf{A}^s$, which are treated as seen attributes; it then adopts the seen attribute detectors $M^s$ to synthesize the detectors $M^u$ for the unseen attributes $\mathbf{A}^u$ via a decompose-and-reassemble procedure, where $\mathbf{A}^s \cap \mathbf{A}^u = \emptyset$.} Without loss of generality,
we use the most popular zero-shot classification dataset, CUB~\cite{CUB}, to illustrate how these steps are realized in the following subsections.
\subsection{Training Seen Attribute Detectors}
\vspace{-0.5em}
\walon{
Our attribute detectors are built on top of the image feature space produced by the image feature extractor $f$. Given an input image $x$ and its feature map $f(x) \in \mathbb{R}^{W\times H\times C}$ where each $C$-dimensional feature vector at position $(i, j)$ of $f(x)$, denoted as $f(x)[i, j]$, is the feature representation of the corresponding image patch on $x$, the attribute detectors $M^s \in \mathbb{R}^{C \times N^s}$ (in which $N^s$ denotes the number of attributes in $\mathbf{A}^s$) aim to give high response on the image patches containing the visual appearance related to the attributes in $\mathbf{A}^s$. Specifically, each column in $M^s$ is acting as the embedding of a certain attribute. We use $m^s_k$ to indicate the $k$-th column of $M^s$. The response of the corresponding $k$-th attribute in $\mathbf{A}^s$
with respect to the patch-wise feature vector $f(x)[i, j]$ is calculated by
a specific form of their cosine similarity, $\texttt{cos}(\left|m^s_k\right|, f(x)[i, j])$, where $\left|m^s_k\right|$ denotes applying the element-wise absolute-value operator on $m^s_k$. We use $\left|m^s_k\right|$ in the cosine-similarity computation for the following reason: each dimension along the channels of $f(x)$ is considered to capture a specific visual pattern. $\left|m^s_k\right|$ hence acts as a weighted combination over these various visual patterns for representing the characteristics of the $k$-th attribute in $\mathbf{A}^s$, and the absolute-value operator over $m^s_k$ ensures that the combination weights are non-negative.}
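As a minimal PyTorch sketch (with our own, hypothetical function and variable names rather than the released implementation), the patch-wise response described above can be computed as follows, where \texttt{feat} plays the role of $f(x)$ and \texttt{M} stacks the embeddings $m^s_k$ column-wise.
\begin{verbatim}
# Hedged sketch: cosine similarity between |m_k| and each f(x)[i, j].
import torch
import torch.nn.functional as F

def response_map(feat, M):
    # feat: (B, C, H, W) non-negative features; M: (C, Ns) embeddings
    f = F.normalize(feat.flatten(2), dim=1)   # unit-norm channel vectors
    w = F.normalize(M.abs(), dim=0)           # unit-norm |m_k|
    R = torch.einsum('bcp,cn->bpn', f, w)     # cosine responses in [0, 1]
    B, C, H, W = feat.shape
    return R.view(B, H, W, -1)                # response map (B, H, W, Ns)
\end{verbatim}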
\begin{figure*}[ht]
\centering
\includegraphics[width=0.95\textwidth]{approach.pdf}
\vspace{-1em}
\caption{\walon{Overview of Our ZSLA}. \textbf{(1)}~Training the seen attribute detectors: \walon{Seen attribute detectors, defined as the embeddings for each seen attribute, are built on top of the image features and their training is guided by two objectives: $\mathcal{L}_{bce}$ and $\mathcal{L}_{umc}$, where the former drives the trained detectors to perform binary classification for attributes on image patches (cf. Eq.~\ref{eq:seen_ce_no_location}) while the latter enforces the uni-modal constraint on the response map $\mathcal{R}^s(x)$ of patch-wise image features with respect to each attribute, in order to make it compact and concentrated (cf. Eq.~\ref{eq:umc}).} \textbf{(2)}~Learning to synthesize novel/unseen attribute detectors via a decompose-and-reassemble procedure:
\walon{Given the trained detectors of seen attributes, the intersection operation is firstly applied on them to extract base attributes, and then these base attributes are further combined by union operation to synthesize the novel/unseen attributes. The training of these operations is driven by the reconstruction loss $\mathcal{L}_{rec}$ (cf. Eq.~\ref{eq:rec}) once the synthesized attribute coincides with any of the seen ones.
}
}
\vspace{+0.25cm}
\label{Fig.train_process}
\end{figure*}
\new{
We denote $\mathcal{R}^s(x) \in \mathbb{R}^{W\times H \times N^s}$ as the response map that collects the cosine similarities of all the seen attributes $\mathbf{A}^s$ at each position on $f(x)$. Note that, as our feature extractor $f$ adopts the ReLU activation function in its last layer (similar to most image feature extractors based on convolutional networks), the values in $f(x)$ are non-negative. Furthermore, as both $\left|m^s_k\right|$ and $f(x)[i, j]$ are non-negative vectors, all entries of $\mathcal{R}^s(x)$ lie within the range $[0, 1]$. Following the popular tricks for ZSL and deep learning pointed out in~\cite{skorokhodov2020class}, where adopting scaled cosine similarity in the logits computation is important to achieve better model training, we use the computation below to calibrate the values of the elements in $\mathcal{R}^s(x)$:
\begin{equation}
\tilde{R}^s(x) = \gamma^2 \cdot (2\cdot \mathcal{R}^s(x) -1 )
\label{eq:scaled_cos_sim}
\end{equation}
where the calculation within the brackets shifts and expands the values in $\mathcal{R}^s(x)$ towards $[-1, 1]$ to match the typical value range of cosine similarity, and the hyperparameter $\gamma$ is set to 5 as suggested by~\cite{skorokhodov2020class}.
Then, we perform the max-pooling operation on $\tilde{R}^s(x)$ to obtain the image-wise attribute response $\tilde{r}^s(x) \in \mathbb{R}^{N^s}$.
These logits over attributes then drive the model training (i.e. optimization over $M^s$ and $f$) via the error between the attribute detection results and the ground-truth attribute labels $\phi^s(x)$. The objective function $\mathcal{L}_{bce}$, evaluating the error between the attribute-detection logits $\tilde{r}^s(x)$ and the ground-truth labels $\phi^s(x)$, is defined via the binary cross-entropy:
\vspace{-0.5em}
\begin{equation}
\begin{aligned}
\mathcal{L}_{bce} = -\sum \limits_{k}^{N^s} &\phi^s_k(x) \cdot \log(\sigma(\tilde{r}^s_k(x))) + (1-\phi^s_k(x)) \cdot \log(1-\sigma(\tilde{r}^s_k(x)))
\label{eq:seen_ce_no_location}
\end{aligned}
\end{equation}
where $\phi^s_k(x)$ and $\tilde{r}^s_k(x)$ denote the $k$-th elements in $\phi^s(x)$ and $\tilde{r}^s(x)$ respectively, and $\sigma$ is the sigmoid function.
}
\new{
In addition to the $\mathcal{L}_{bce}$ loss, we introduce another objective function $\mathcal{L}_{umc}$ to place the \emph{\textbf{uni-modal constraint}} on the response map $\tilde{R}^s(x)$, which
encourages the response map for a certain attribute (e.g. $\tilde{R}_k^s(x)$, the $k$-th channel of $\tilde{R}^s(x)$) to be uni-modal and concentrated. In other words, we expect that an attribute only appears at a single location or within a small region of the image $x$.
\begin{equation}
\begin{aligned}
\mathcal{L}_{umc} = \sum^{N^s}_k\sum_{(i,j)} \sigma(\tilde{R}^s_k(x)[i, j]) \cdot (\left\| i-\breve{i}_k \right\|^2 + \left\| j-\breve{j}_k \right\|^2),
\end{aligned}
\label{eq:umc}
\end{equation}
where $\breve{i}_k, \breve{j}_k = \mathop{\arg\max}_{i, j} \tilde{R}^s_k(x)[i,j]$ and $\|\cdot\|$ denotes the Euclidean norm.
}
\new{
The overall objective to train the feature extractor $f$ and the seen attribute detectors $M^s$ is illustrated in the left portion of Figure~\ref{Fig.train_process} and summarized as:
$\mathcal{L}_{bce}+\lambda\mathcal{L}_{umc}$,
where the hyperparameter $\lambda$ controls the balance between losses and is set to $0.2$ in our experiments.
Moreover, we are aware that in the CUB dataset the additional annotations indicating the ground-truth locations for the attributes which an image $x$ has are also available (e.g. we know where the attribute ``brown wing'' appears on an image of ``gadwall''). Hence, in addition to max-pooling the response map $\mathcal{R}^s(x)$ to obtain the image-wise response $r^s(x)$ for attributes, we experiment with another way to obtain $r^s(x)$: (1) If $\phi^s_k(x)$ is true, the $k$-th element in $r^s(x)$, i.e. $r^s_k(x)$, is assigned the value $\mathcal{R}^s(x)[i,j]$, where the centre of the ground-truth location for the $k$-th attribute in $\mathbf{A}^s$ falls on the patch related to the position $(i, j)$ of $\mathcal{R}^s$; (2) If $\phi^s_k(x)$ is false, $r^s_k(x)$ is assigned by average pooling over the $k$-th channel of $\mathcal{R}^s(x)$. We provide in the supplement an analysis of the impact of using such additional annotations of attribute locations on the performance of ZSLA.
}
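A hedged sketch of the two objectives (our own naming; the helper below is hypothetical, and averaging over the batch is one possible reduction) follows Eqs.~(\ref{eq:scaled_cos_sim})--(\ref{eq:umc}), taking the raw response map and the binary attribute labels as inputs.
\begin{verbatim}
# Hedged sketch of the training losses L_bce + lambda * L_umc.
import torch
import torch.nn.functional as F

def attribute_losses(R, phi, gamma=5.0, lam=0.2):
    # R: (B, H, W, Ns) raw cosine responses; phi: (B, Ns) float labels
    Rt = gamma**2 * (2 * R - 1)                 # Eq. (scaled_cos_sim)
    B, H, W, Ns = Rt.shape
    logits = Rt.flatten(1, 2).max(dim=1).values # image-wise max-pooling
    l_bce = F.binary_cross_entropy_with_logits(logits, phi)

    # uni-modal constraint: penalize mass far from each map's argmax
    idx = Rt.flatten(1, 2).argmax(dim=1)        # flat argmax, (B, Ns)
    ii = torch.div(idx, W, rounding_mode='floor').float()
    jj = (idx % W).float()
    gi = torch.arange(H, dtype=torch.float).view(1, H, 1, 1)
    gj = torch.arange(W, dtype=torch.float).view(1, 1, W, 1)
    d2 = (gi - ii.view(B, 1, 1, Ns))**2 + (gj - jj.view(B, 1, 1, Ns))**2
    l_umc = (torch.sigmoid(Rt) * d2).sum(dim=(1, 2)).mean()
    return l_bce + lam * l_umc
\end{verbatim}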
\subsection{Decompose-and-Reassemble for Synthesizing Novel Attribute Detectors}
\label{sec:DandR}
\vspace{-0.5em}
After obtaining the seen attribute detectors $M^s$, we now aim to perform the decompose-and-reassemble procedure (as shown in the right-half of Figure~\ref{Fig.train_process}) for generating the detectors $M^u \in \mathbb{R}^{C\times N^u}$ of the novel attributes $\mathbf{A}^u$ (where $N^u$ is the number of attributes in $\mathbf{A}^u$) by leveraging $M^s$.
\walon{First, we observe that most of the attributes in the CUB dataset (the most popular zero-shot classification dataset and also our test-bed in this work) follow the form of ``\textit{adjective} + \textit{object part}'', for instance: ``black eye'', ``brown forehead'', ``red upper-tail'', or ``buff breast''. Starting from this observation, we define two disjoint sets of \emph{\textbf{base attributes}}, $\mathbf{B}^c$ and $\mathbf{B}^p$, representing the \textit{adjectives} and \textit{object parts} used in the seen attributes, respectively (e.g. ``blue'', ``yellow'', ``solid'', and ``perching-like'' for $\mathbf{B}^c$; ``leg'', ``beak'', ``belly'', and ``throat'' for $\mathbf{B}^p$). Please note that the concepts behind the adjectives $\mathbf{B}^c$ in the CUB dataset include not only color but also texture, shape, and others. Formally, given an attribute $a$, we use $\beta^c(a)$ and $\beta^p(a)$ to denote its corresponding base attributes of adjective and object part, respectively (i.e. $\beta^c(a) \in \mathbf{B}^c$ and $\beta^p(a) \in \mathbf{B}^p$), where $\beta^c(\cdot)$ and $\beta^p(\cdot)$ are functions indicating the base attributes in $\mathbf{B}^c$ and $\mathbf{B}^p$ for an attribute $a$.}
\walon{Now, given two seen attributes $a_k$ and $a_l \in \mathbf{A}^s$ in which $a_k = \{\beta^c(a_k), \beta^p(a_k)\}$ and $a_l = \{\beta^c(a_l), \beta^p(a_l)\}$, if $a_k$ and $a_l$ have common ground in either the base attribute of adjectives (i.e. $\beta^c(a_k) = \beta^c(a_l) \in \mathbf{B}^c$) or the one of object parts (i.e. $\beta^p(a_k) = \beta^p(a_l) \in \mathbf{B}^p$) but not both, then we can use the \emph{\textbf{intersection operation}} $\mathbb{I}$ to extract such common base attribute from $a_k$ and $a_l$:
\begin{equation}
\begin{aligned}
\mathbb{I}(a_k, a_l) = \begin{cases}
\beta^c(a_k) & \text{ if } \beta^c(a_k) = \beta^c(a_l),~\beta^p(a_k) \neq \beta^p(a_l)\\
\beta^p(a_k)& \text{ if } \beta^c(a_k) \neq \beta^c(a_l),~\beta^p(a_k) = \beta^p(a_l)
\end{cases}
\end{aligned}
\end{equation}
For instance, the intersection operation $\mathbb{I}$ is able to extract the base attribute ``red'' from the seen attributes ``red wing'' and ``red breast''; or the base attribute ``tail'' from the seen attributes ``buff tail'' and ``black tail''.
}
\walon{Once we obtain the base attributes via intersection over seen attributes, we further adopt the \emph{\textbf{union operation}} $\mathbb{U}$ to create novel attributes. Given two pairs of seen attributes $\{a_k, a_l\}$ and $\{a_{{k}'}, a_{{l}'}\}$ in which $\beta^c(a_k) = \mathbb{I}(a_k, a_l)$ and $\beta^p(a_{{k}'}) = \mathbb{I}(a_{{k}'}, a_{{l}'})$, i.e. $\{a_k, a_l\}$ share the same base attribute of adjective while $\{a_{{k}'}, a_{{l}'}\}$ share the same base attribute of object part, a novel attribute $\tilde{a}$ can be synthesized by combining $\beta^c(a_k)$ and $\beta^p(a_{{k}'})$, i.e. $\tilde{a} = \mathbb{U}(\beta^c(a_k), \beta^p(a_{{k}'}))$. In particular, if such combination of base attributes has been seen in $\mathbf{A}^s$, i.e. there exists an attribute $a \in \mathbf{A}^s$ where $\beta^c(a) = \beta^c(\tilde{a})$ and $\beta^p(a) = \beta^p(\tilde{a})$, we say the seen attribute $a$ is \emph{\textbf{reconstructed}} by $\tilde{a}$. Otherwise, if none of the seen attributes has the identical combination as our synthesized $\tilde{a}$, we denote $\tilde{a}$ a \emph{\textbf{novel attribute}} and $\tilde{a} \in \mathbf{A}^u$. In summary, extracting base attributes from seen attributes via intersection, followed by combining the base attributes into novel attributes via union, holistically forms our \emph{\textbf{decompose-and-reassemble}} procedure to synthesize the novel attributes.}
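Conceptually, when attributes are represented as (adjective, object part) pairs, the intersection and union operations behave as in the following toy sketch (purely symbolic and illustrative; the learned, embedding-level realizations of $\mathbb{I}$ and $\mathbb{U}$ are described next):
\begin{verbatim}
def intersect(a_k, a_l):
    """Extract the shared base attribute of two seen attributes,
    modeled as (adjective, part) tuples such as ('red', 'wing')."""
    (adj_k, part_k), (adj_l, part_l) = a_k, a_l
    if adj_k == adj_l and part_k != part_l:
        return adj_k                      # shared adjective
    if part_k == part_l and adj_k != adj_l:
        return part_k                     # shared object part
    raise ValueError('exactly one component must be shared')

def union(adj, part):
    """Combine an adjective and an object part into an attribute."""
    return (adj, part)

# 'red' from 'red wing'/'red breast', 'tail' from 'buff tail'/
# 'black tail'; their union yields the (possibly novel) 'red tail'.
assert union(intersect(('red', 'wing'), ('red', 'breast')),
             intersect(('buff', 'tail'), ('black', 'tail'))) \
       == ('red', 'tail')
\end{verbatim}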
\begin{figure}[t!]
\centering
\includegraphics[width=0.6\textwidth]{architecture.pdf}
\caption{\walon{The implementation of our intersection $\mathbb{I}$ and union $\mathbb{U}$ operations to realize the decompose-and-reassemble procedure, where $\mathbb{I}$ adopts the architecture extended from the vision transformer~\cite{dosovitskiy2020image} while $\mathbb{U}$ simply adopts the average operation.}}
\label{Fig.architecture}
\vspace{+0.25cm}
\end{figure}
\walon{In practice, the implementation of our intersection function $\mathbb{I}$, as illustrated in Figure~\ref{Fig.architecture}, is built upon the encoder architecture of the vision transformer~\cite{dosovitskiy2020image} (ViT), in which its input is the embeddings of the seen attributes, i.e. the transformer takes $m^s_k$ and $m^s_l$ from $M^s$ as input when performing $\mathbb{I}(a_k, a_l)$, where $a_k, a_l \in \mathbf{A}^s$. In detail, there are several modifications in our transformer for the intersection $\mathbb{I}$ with respect to the original ViT: (1) We remove the position embedding in order to fulfil the commutative property of intersection, i.e. $\mathbb{I}(a_k, a_l) = \mathbb{I}(a_l, a_k)$; (2) We attach a learnable token named ``intersection head'' to the input sequence of the transformer, which is similar to the extra class embedding in ViT. The corresponding output of this intersection head after going through the transformer encoder represents the embedding of the resultant base attribute, on which we apply the element-wise absolute-value operation to make it a non-negative vector (analogous to what we did for the seen attributes). Please note that the embedding of a base attribute is also a $C$-dimensional vector. Regarding our union function $\mathbb{U}$, we simply adopt the average operation for its implementation, that is: given two base attributes $b^c \in \mathbf{B}^c$ and $b^p \in \mathbf{B}^p$, we obtain the embedding $\tilde{m}$ of the synthesized attribute $\tilde{a} = \mathbb{U}(b^c, b^p)$ by averaging the embeddings of $b^c$ and $b^p$. Such a $C$-dimensional embedding $\tilde{m}$ is also defined upon the image feature and acts as the detector for the synthesized attribute $\tilde{a}$.}
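The following minimal PyTorch sketch illustrates these two operations (a hedged reconstruction of Figure~\ref{Fig.architecture}; the depth, number of heads, and variable names are placeholders rather than the exact configuration used in our experiments):
\begin{verbatim}
import torch
import torch.nn as nn

class Intersection(nn.Module):
    """Transformer-based intersection over two attribute embeddings."""
    def __init__(self, dim, depth=2, heads=4):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)
        self.head = nn.Parameter(torch.randn(1, 1, dim))  # intersection head

    def forward(self, m_k, m_l):
        # m_k, m_l: (B, C) embeddings of two seen attributes; no position
        # embedding is added, so the operation is commutative
        tokens = torch.stack([m_k, m_l], dim=1)           # (B, 2, C)
        head = self.head.expand(tokens.size(0), -1, -1)   # (B, 1, C)
        out = self.encoder(torch.cat([head, tokens], dim=1))
        return out[:, 0].abs()      # non-negative base-attribute embedding

def union(b_c, b_p):
    """Average two base-attribute embeddings into a new detector."""
    return 0.5 * (b_c + b_p)
\end{verbatim}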
\walon{The training of our proposed decompose-and-reassemble procedure for synthesizing novel attributes is simply based on the reconstruction loss $\mathcal{L}_{rec}$ of the seen attributes. Given a synthesized attribute $\tilde{a}$, if there exists a seen attribute $a_k \in \mathbf{A}^s$ with $\beta^c(a_k) = \beta^c(\tilde{a})$ and $\beta^p(a_k) = \beta^p(\tilde{a})$, the embedding $\tilde{m}$ of $\tilde{a}$ and the embedding $m^s_k$ of $a_k$ are expected to be identical to each other, and $\mathcal{L}_{rec}$ is thus defined as:
\begin{equation}
\mathcal{L}_{rec} = \left \| m^s_k - \tilde{m} \right\|
\label{eq:rec}
\end{equation}
Note that, as our union function $\mathbb{U}$ has no trainable parameters (since it is simply an average operation), the gradient of $\mathcal{L}_{rec}$ is propagated to focus on learning the parameters of our transformer for the intersection function $\mathbb{I}$. In other words, we expect the transformer to be capable of extracting base attributes whose averages are informative enough to act as detectors for the synthesized attributes. Furthermore, in order to fully leverage the seen attributes for training our decompose-and-reassemble procedure, we adopt the training scheme given in Algorithm~\ref{alg:1}.}
\begin{algorithm}[!hp]
\SetAlgoLined
\SetKwInput{KwData}{Given}
\KwData{trained detectors $M^s$ of seen attributes $\mathbf{A}^s$}
\KwResult{parameters $\theta$ of the transformer for $\mathbb{I}$}
\For{\text{every attribute } $a \in \mathbf{A}^s$}{
randomly sample attributes $a_k$, $a_l$ from $\mathbf{A}^s$ with\\ $\beta^c(a)=\beta^c(a_k)=\beta^c(a_l)$, $\beta^p(a_k)\neq\beta^p(a_l)$;\\
obtain the embedding $m^c$ of base attribute $\beta^c(a)$ via intersection $\mathbb{I}(a_k, a_l)$;\\
randomly sample attributes $a_{{k}'}$, $a_{{l}'}$ from $\mathbf{A}^s$ with\\ $\beta^p(a)=\beta^p(a_{{k}'})=\beta^p(a_{{l}'})$, $\beta^c(a_{{k}'})\neq\beta^c(a_{{l}'})$;\\
obtain the embedding $m^p$ of base attribute $\beta^p(a)$ via intersection $\mathbb{I}(a_{{k}'}, a_{{l}'})$;\\
synthesize attribute $\tilde{a}$ via union $\mathbb{U}(\beta^c(a), \beta^p(a))$ with its embedding $\tilde{m} = (1/2) \cdot (m^c+m^p)$;\\
$\theta \leftarrow \arg\min\limits_{\theta} \mathcal{L}_{rec}(m^s, \tilde{m})$, where $m^s$ denotes the embedding of attribute $a$ in $M^s$;
}
\caption{Training scheme of the decompose-and-reassemble procedure for learning the parameters of the intersection function $\mathbb{I}$.}
\label{alg:1}
\end{algorithm}
\section{Experimental Results}
\begin{figure*}[t]
\centering
\includegraphics[width=1\textwidth]{retrieval.pdf}
\caption{Examples of attribute retrieval and localization. Each set shows the top-5 retrieved images and their response maps for a synthesized novel attribute, where the images marked with red borders are the false positives according to CUB ground-truth.
}
\label{retrieval}
\vspace{+0.25cm}
\end{figure*}
\noindent\textbf{Dataset.}
Our experiments are mainly conducted on the Caltech-UCSD Birds-200-2011 dataset~\cite{CUB} (usually abbreviated as CUB) for zero-shot classification. The CUB dataset contains 11,788 images of 200 bird categories, where each image is annotated with 312 attributes. We select 32 attributes as our seen attributes $\mathbf{A}^s$, which can be decomposed into 15 base attributes of adjective $\mathbf{B}^c$ and 16 base attributes of object part $\mathbf{B}^p$; from these base attributes we can synthesize 207 novel attributes $\mathbf{A}^u$.
We follow the setting proposed by~\cite{ZSLGBU} for the task of generalized zero-shot learning (GZSL) to split the CUB dataset, where such training and testing sets are used to train and evaluate our proposed scenario of ZSL on attributes, respectively.
\begin{figure*}[t]
\centering
\includegraphics[width=1\textwidth]{base_retrieval.pdf}
\caption{Examples of showing the retrieval and localization ability of base attributes. Each set shows the top-5 retrieved images and their corresponding response map for a base attribute representation (extracted by applying our intersection operation on seen attributes detectors).}
\label{Fig.base_retrieval}
\vspace{+0.25cm}
\end{figure*}
\noindent\textbf{Baselines.}
As our task of ZSL on attributes for dataset annotation is novel, there is no prior work that we can directly compare with. However, as ZSLA follows the decompose-and-reassemble procedure, which has a hierarchy between attributes and base attributes, we adapt two representative zero-shot classification methods that explicitly have the class--attribute hierarchy behind their formulation to be our baselines, using the analogy between the two hierarchies (i.e. our attribute--base-attribute hierarchy versus their class--attribute one). These two baselines are ESZSL~\cite{ESZSL} and LAGO singleton~\cite{LAGO} (note that both of them realize classification with the help of attribute prediction); we rename their adaptations to our scenario of ZSL on attributes as $\mathbf{A}$-\textbf{ESZSL} and $\mathbf{A}$-\textbf{LAGO}, respectively, to avoid confusion. Several modifications to their original formulations achieve the adaptation: (1) replacing class/attribute with attribute/base-attribute,
(2) changing the task setting from multi-class to multi-attribute binary classification, and (3) switching image-wise feature representations to patch-wise ones. Note that, in the following experiments, both the baselines and ZSLA use the additional ground-truth of attribute locations (i.e. knowing where an attribute appears on the image) provided by CUB to train the seen attribute detectors, unless stated otherwise.
\subsection{Evaluation on Unseen Attributes}
\begin{center}
\setlength{\tabcolsep}{2mm}
\begin{table}[t]
\centering
\begin{tabular}{c|c|ccc}
& $N^s$ & mAUROC & mAP@50 & mLA \\ \hline\hline
& {\color[HTML]{3531FF} 32} & {\color[HTML]{3531FF} .626} & {\color[HTML]{3531FF} .223} & {\color[HTML]{3531FF} .756} \\
& {\color[HTML]{009901} 64} & {\color[HTML]{009901} .614} & {\color[HTML]{009901} .200} & {\color[HTML]{009901} .769} \\
\multirow{-3}{*}{\textbf{A-ESZSL}} & {\color[HTML]{963400} 96} & {\color[HTML]{963400} .632} & {\color[HTML]{963400} .234} & {\color[HTML]{963400} .756} \\ \hline
& {\color[HTML]{3531FF} 32} & {\color[HTML]{3531FF} .600} & {\color[HTML]{3531FF} .173} & {\color[HTML]{3531FF} .782} \\
& {\color[HTML]{009901} 64} & {\color[HTML]{009901} .612} & {\color[HTML]{009901} .180} & {\color[HTML]{009901} .787} \\
\multirow{-3}{*}{\textbf{A-LAGO}} & {\color[HTML]{963400} 96} & {\color[HTML]{963400} .627} & {\color[HTML]{963400} .222} & {\color[HTML]{963400} .795} \\ \hline
& {\color[HTML]{3531FF} \textbf{32}} & {\color[HTML]{3531FF} \textbf{.689}} & {\color[HTML]{3531FF} \textbf{.320}} & {\color[HTML]{3531FF} \textbf{.846}} \\
& {\color[HTML]{009901} \textbf{64}} & {\color[HTML]{009901} \textbf{.704}} & {\color[HTML]{009901} \textbf{.327}} & {\color[HTML]{009901} \textbf{.860}} \\
\multirow{-3}{*}{\textbf{Our ZSLA}} & {\color[HTML]{963400} \textbf{96}} & {\color[HTML]{963400} \textbf{.717}} & {\color[HTML]{963400} \textbf{.329}} & {\color[HTML]{963400} \textbf{.867}}
\end{tabular}
\caption{Evaluation of synthesized novel/unseen attributes on attribute classification (mAUROC), retrieval (mAP@50), and localization (mLA). $N^s$ is the number of seen attributes.
}\label{tab:table1}
\vspace{+0.25cm}
\end{table}
\end{center}
We design three schemes to evaluate the quality of the synthesized novel attribute detectors learnt by ZSLA: (1) \textbf{Attribute Classification.} Based on the ground-truth attribute annotation of the test images (note that each image typically has multiple attributes), we measure the performance of our synthesized attribute detectors on recognizing their corresponding attributes in the test images. We adopt the area under the receiver operating characteristic curve (AUROC) as our metric for the classification accuracy of each attribute, and we report the average over the AUROCs (denoted as mAUROC) of all synthesized attribute detectors; (2) \textbf{Attribute Retrieval.} We rank the test images according to their image-wise responses with respect to a given attribute detector, to simulate the application scenario of retrieving, from an image set, the images which are most likely to own the target attribute. Note that the image-wise response is computed by max-pooling over the responses of patch-wise image features with respect to the attribute detector. For each attribute detector we compute the average precision (AP) of its top 50 retrieved images, and report the average AP (denoted as mAP@50) over all detectors as the metric; (3) \textbf{Attribute Localization.} As CUB provides the ground-truth locations where an attribute appears on the test images, we introduce the localization accuracy (LA) to measure how well the location having the highest response to an attribute detector matches the ground-truth ones (counted as correct if they are located on the same or neighboring patches). We average the LA over all attributes as the metric (denoted as mLA).
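The three metrics can be computed along the lines of the following sketch (a simplified illustration using NumPy and scikit-learn; it assumes that each attribute has both positive and negative test images, and the mLA neighbourhood test is expressed in patch coordinates):
\begin{verbatim}
import numpy as np
from sklearn.metrics import roc_auc_score

def mean_auroc(scores, labels):
    # scores, labels: (num_images, num_attributes)
    return np.mean([roc_auc_score(labels[:, k], scores[:, k])
                    for k in range(scores.shape[1])])

def mean_ap_at_50(scores, labels, top=50):
    aps = []
    for k in range(scores.shape[1]):
        order = np.argsort(-scores[:, k])[:top]  # top-50 retrieved images
        rel = labels[order, k]
        prec = np.cumsum(rel) / (np.arange(top) + 1)  # precision@rank
        aps.append((prec * rel).sum() / max(rel.sum(), 1))
    return np.mean(aps)

def mean_la(pred_loc, gt_loc):
    # pred_loc, gt_loc: (num_cases, 2) patch coordinates of the highest
    # response and of the ground truth; correct if on the same patch or
    # a neighboring one
    return (np.abs(pred_loc - gt_loc).max(axis=1) <= 1).mean()
\end{verbatim}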
Table~\ref{tab:table1} summarizes the performance in terms of mAUROC, mAP@50, and mLA obtained by the baselines and ZSLA, with the number $N^s$ of seen attributes $\mathbf{A}^s$ set to $\{32, 64, 96\}$. It is clear that ZSLA provides superior performance in comparison to the baselines on all settings of $N^s$ and all evaluation schemes, particularly localization accuracy. Moreover, by using merely 32 seen attributes to perform the synthesis of novel attribute detectors, ZSLA achieves results comparable to the baselines using 64 or 96 seen attributes. Qualitative examples showing the results of attribute retrieval and attribute localization for the novel attributes synthesized by ZSLA are provided in Figure~\ref{retrieval}.
Besides these quantitative and qualitative results demonstrating the efficacy of ZSLA on novel attributes, we also provide some qualitative examples in Figure~\ref{Fig.base_retrieval} to showcase the localization and retrieval ability of our base attribute representations extracted from the seen attribute detectors.
\subsection{Automatic Annotations for Learning Generalized Zero-Shot Image Classification}
\label{sec:reannotate}
\begin{table*}[t!]
\resizebox{\textwidth}{!}{
\begin{tabular}{c|cccc|cccc|cccc|cccc|}
\cline{2-17}
\multicolumn{1}{l|}{} & \multicolumn{4}{c|}{CADAVAE} & \multicolumn{4}{c|}{TFVAEGAN} & \multicolumn{4}{c|}{ALE} & \multicolumn{4}{c|}{ESZSL} \\ \cline{2-17}
& S & U & H & {\color[HTML]{B234FC} \textbf{GAIN}} & S & U & H & {\color[HTML]{B234FC} \textbf{GAIN}} & S & U & H & {\color[HTML]{B234FC} \textbf{GAIN}} & S & U & H & {\color[HTML]{B234FC} \textbf{GAIN}} \\ \hline
\multicolumn{1}{|c|}{\cellcolor[HTML]{ECF4FF}} & \cellcolor[HTML]{ECF4FF}{\color[HTML]{000000} 42.9} & \cellcolor[HTML]{ECF4FF}{\color[HTML]{000000} 27.3} & \cellcolor[HTML]{ECF4FF}{\color[HTML]{000000} 33.4} & \cellcolor[HTML]{ECF4FF}{\color[HTML]{000000} -} & \cellcolor[HTML]{ECF4FF}{\color[HTML]{000000} 45.5} & \cellcolor[HTML]{ECF4FF}{\color[HTML]{000000} 31.2} & \cellcolor[HTML]{ECF4FF}{\color[HTML]{000000} 37.1} & \cellcolor[HTML]{ECF4FF}{\color[HTML]{B234FC} -} & \cellcolor[HTML]{ECF4FF}{\color[HTML]{000000} 26.4} & \cellcolor[HTML]{ECF4FF}{\color[HTML]{000000} 9.2} & \cellcolor[HTML]{ECF4FF}{\color[HTML]{000000} 13.7} & \cellcolor[HTML]{ECF4FF}{\color[HTML]{B234FC} -} & \cellcolor[HTML]{ECF4FF}{\color[HTML]{000000} 29.8} & \cellcolor[HTML]{ECF4FF}{\color[HTML]{000000} 10.8} & \cellcolor[HTML]{ECF4FF}{\color[HTML]{000000} 15.9} & \cellcolor[HTML]{ECF4FF}{\color[HTML]{B234FC} \textbf{-}} \\
\multicolumn{1}{|c|}{\multirow{-2}{*}{\cellcolor[HTML]{ECF4FF}\begin{tabular}[c]{@{}c@{}}Manual\\ ($N^s$=32 for CUB)\end{tabular}}} & \cellcolor[HTML]{ECF4FF} & \cellcolor[HTML]{ECF4FF} & \cellcolor[HTML]{ECF4FF} & \cellcolor[HTML]{ECF4FF}\textbf{} & \cellcolor[HTML]{ECF4FF} & \cellcolor[HTML]{ECF4FF} & \cellcolor[HTML]{ECF4FF} & \cellcolor[HTML]{ECF4FF}{\color[HTML]{B234FC} \textbf{}} & \cellcolor[HTML]{ECF4FF} & \cellcolor[HTML]{ECF4FF} & \cellcolor[HTML]{ECF4FF} & \cellcolor[HTML]{ECF4FF}{\color[HTML]{B234FC} \textbf{}} & \cellcolor[HTML]{ECF4FF} & \cellcolor[HTML]{ECF4FF} & \cellcolor[HTML]{ECF4FF} & \cellcolor[HTML]{ECF4FF}{\color[HTML]{B234FC} \textbf{}} \\ \cline{2-17}
\multicolumn{1}{|c|}{\cellcolor[HTML]{E6FFE6}} & \cellcolor[HTML]{E6FFE6}{\color[HTML]{3531FF} \textbf{53.5}} & \cellcolor[HTML]{E6FFE6}51.6 & \cellcolor[HTML]{E6FFE6}{\color[HTML]{3531FF} \textbf{52.4}} & \cellcolor[HTML]{E6FFE6}{\color[HTML]{B234FC} +19.0} & \cellcolor[HTML]{E6FFE6}{\color[HTML]{FF0000} \textbf{64.7}} & \cellcolor[HTML]{E6FFE6}52.8 & \cellcolor[HTML]{E6FFE6}{\color[HTML]{FF0000} \textbf{58.1}} & \cellcolor[HTML]{E6FFE6}{\color[HTML]{B234FC} {\ul \textbf{+21.0}}} & \cellcolor[HTML]{E6FFE6}{\color[HTML]{FF0000} \textbf{62.8}} & \cellcolor[HTML]{E6FFE6}23.7 & \cellcolor[HTML]{E6FFE6}34.4 & \cellcolor[HTML]{E6FFE6}{\color[HTML]{B234FC} +20.7} & \cellcolor[HTML]{E6FFE6}63.8 & \cellcolor[HTML]{E6FFE6}12.6 & \cellcolor[HTML]{E6FFE6}21.0 & \cellcolor[HTML]{E6FFE6}{\color[HTML]{B234FC} +5.1} \\
\multicolumn{1}{|c|}{\multirow{-2}{*}{\cellcolor[HTML]{E6FFE6}\begin{tabular}[c]{@{}c@{}}Manual\\ ($N^s$=312 for CUB)\end{tabular}}} & \cellcolor[HTML]{E6FFE6} & \cellcolor[HTML]{E6FFE6} & \cellcolor[HTML]{E6FFE6} & \cellcolor[HTML]{E6FFE6}{\ul } & \cellcolor[HTML]{E6FFE6} & \cellcolor[HTML]{E6FFE6} & \cellcolor[HTML]{E6FFE6} & \cellcolor[HTML]{E6FFE6}{\color[HTML]{B234FC} \textbf{}} & \cellcolor[HTML]{E6FFE6} & \cellcolor[HTML]{E6FFE6} & \cellcolor[HTML]{E6FFE6} & \cellcolor[HTML]{E6FFE6}{\color[HTML]{B234FC} } & \cellcolor[HTML]{E6FFE6} & \cellcolor[HTML]{E6FFE6} & \cellcolor[HTML]{E6FFE6} & \cellcolor[HTML]{E6FFE6}{\color[HTML]{B234FC} } \\ \cline{2-17}
\multicolumn{1}{|c|}{\cellcolor[HTML]{FFF3E4}A-LAGO} & \cellcolor[HTML]{FFF3E4}45.4 & \cellcolor[HTML]{FFF3E4}{\color[HTML]{3531FF} \textbf{55.4}} & \cellcolor[HTML]{FFF3E4}{\color[HTML]{000000} 49.9} & \cellcolor[HTML]{FFF3E4}{\color[HTML]{B234FC} +16.5} & \cellcolor[HTML]{FFF3E4}{\color[HTML]{000000} 57.4} & \cellcolor[HTML]{FFF3E4}{\color[HTML]{3531FF} \textbf{53.0}} & \cellcolor[HTML]{FFF3E4}{\color[HTML]{333333} 55.1} & \cellcolor[HTML]{FFF3E4}{\color[HTML]{B234FC} +18.0} & \cellcolor[HTML]{FFF3E4}51.8 & \cellcolor[HTML]{FFF3E4}{\color[HTML]{3531FF} \textbf{27.2}} & \cellcolor[HTML]{FFF3E4}{\color[HTML]{3531FF} \textbf{35.6}} & \cellcolor[HTML]{FFF3E4}{\color[HTML]{B234FC} +21.9} & \cellcolor[HTML]{FFF3E4}49.7 & \cellcolor[HTML]{FFF3E4}{\color[HTML]{FF0000} \textbf{17.1}} & \cellcolor[HTML]{FFF3E4}{\color[HTML]{3531FF} \textbf{25.4}} & \cellcolor[HTML]{FFF3E4}{\color[HTML]{B234FC} +9.5} \\
\multicolumn{1}{|c|}{\cellcolor[HTML]{FFF3E4}A-ESZSL} & \cellcolor[HTML]{FFF3E4}41.5 & \cellcolor[HTML]{FFF3E4}48.7 & \cellcolor[HTML]{FFF3E4}44.8 & \cellcolor[HTML]{FFF3E4}{\color[HTML]{B234FC} +11.4} & \cellcolor[HTML]{FFF3E4}56.0 & \cellcolor[HTML]{FFF3E4}48.5 & \cellcolor[HTML]{FFF3E4}52.0 & \cellcolor[HTML]{FFF3E4}{\color[HTML]{B234FC} +14.9} & \cellcolor[HTML]{FFF3E4}46.1 & \cellcolor[HTML]{FFF3E4}19.0 & \cellcolor[HTML]{FFF3E4}26.9 & \cellcolor[HTML]{FFF3E4}{\color[HTML]{B234FC} +13.2} & \cellcolor[HTML]{FFF3E4}{\color[HTML]{3531FF} \textbf{61.3}} & \cellcolor[HTML]{FFF3E4}9.2 & \cellcolor[HTML]{FFF3E4}16.0 & \cellcolor[HTML]{FFF3E4}{\color[HTML]{B234FC} +0.1} \\
\multicolumn{1}{|c|}{\cellcolor[HTML]{FFF3E4}} & \cellcolor[HTML]{FFF3E4}{\color[HTML]{000000} 50.3} & \cellcolor[HTML]{FFF3E4}{\color[HTML]{FF0000} \textbf{56.4}} & \cellcolor[HTML]{FFF3E4}{\color[HTML]{FF0000} \textbf{53.2}} & \cellcolor[HTML]{FFF3E4}{\color[HTML]{B234FC} {\ul \textbf{+19.8}}} & \cellcolor[HTML]{FFF3E4}{\color[HTML]{3531FF} \textbf{59.0}} & \cellcolor[HTML]{FFF3E4}{\color[HTML]{FF0000} \textbf{55.9}} & \cellcolor[HTML]{FFF3E4}{\color[HTML]{3531FF} \textbf{57.4}} & \cellcolor[HTML]{FFF3E4}{\color[HTML]{B234FC} +20.3} & \cellcolor[HTML]{FFF3E4}{\color[HTML]{3531FF} \textbf{52.4}} & \cellcolor[HTML]{FFF3E4}{\color[HTML]{FF0000} \textbf{27.5}} & \cellcolor[HTML]{FFF3E4}{\color[HTML]{FF0000} \textbf{36.1}} & \cellcolor[HTML]{FFF3E4}{\color[HTML]{B234FC} {\ul \textbf{+22.4}}} & \cellcolor[HTML]{FFF3E4}{\color[HTML]{FF0000} \textbf{65.1}} & \cellcolor[HTML]{FFF3E4}{\color[HTML]{3531FF} \textbf{16.4}} & \cellcolor[HTML]{FFF3E4}{\color[HTML]{FE0000} \textbf{26.2}} & \cellcolor[HTML]{FFF3E4}{\color[HTML]{B234FC} {\ul \textbf{+10.3}}} \\
\multicolumn{1}{|c|}{\multirow{-2}{*}{\cellcolor[HTML]{FFF3E4}\begin{tabular}[c]{@{}c@{}}Our ZSLA\\ ($N^s$=32, $N^u$=207 for $\delta$-CUB)\end{tabular}}} & \cellcolor[HTML]{FFF3E4} & \cellcolor[HTML]{FFF3E4} & \cellcolor[HTML]{FFF3E4} & \cellcolor[HTML]{FFF3E4}\textbf{} & \cellcolor[HTML]{FFF3E4} & \cellcolor[HTML]{FFF3E4} & \cellcolor[HTML]{FFF3E4} & \cellcolor[HTML]{FFF3E4}{\color[HTML]{B234FC} \textbf{}} & \cellcolor[HTML]{FFF3E4} & \cellcolor[HTML]{FFF3E4} & \cellcolor[HTML]{FFF3E4} & \cellcolor[HTML]{FFF3E4}{\color[HTML]{B234FC} \textbf{}} & \cellcolor[HTML]{FFF3E4} & \cellcolor[HTML]{FFF3E4} & \cellcolor[HTML]{FFF3E4} & \cellcolor[HTML]{FFF3E4}{\color[HTML]{B234FC} \textbf{}} \\ \hline
\end{tabular}
}
\caption{
Experimental results of training and evaluating four representative GZSL methods (i.e. CADAVAE, TFVAEGAN, ALE, ESZSL) on the datasets built upon different sources of attribute annotation (e.g. manual annotation given by the original CUB dataset, and re-annotation provided by ZSLA or the baselines; please refer to Section~\ref{sec:reannotate} for more details). As for the columns, $\textbf{S}$ and $\textbf{U}$ represent the accuracy on seen and unseen classes respectively, while $\textbf{H}$ represents the harmonic mean of $\textbf{S}$ and $\textbf{U}$. The highest scores are marked in bold \textcolor{red}{red}, while the second-highest ones are marked in bold \textcolor{blue}{blue}. \textbf{GAIN} columns show the difference in terms of harmonic mean with respect to the results obtained by using 32 manually-labelled seen attributes for GZSL (i.e. the results on the blue-shaded row for the CUB dataset).
}\label{tab:deltaCUB_GZSL}
\vspace{+0.25cm}
\end{table*}
To further assess the quality of our synthesized attribute detectors, we adopt the 32 seen attribute detectors and the 207 novel attribute detectors (i.e. $N^s$=32, $N^u$=207) learned by ZSLA to \textit{re-annotate the attribute labels for the whole CUB dataset}, simulating the labeling process of constructing a new dataset, and name the resultant new dataset ``$\delta$-CUB''. Then we adopt $\delta$-CUB to train and evaluate four representative
GZSL algorithms, i.e. ALE~\cite{ALE}, ESZSL~\cite{ESZSL}, CADAVAE~\cite{CADAVAE}, and TFVAEGAN~\cite{tfVAEGAN}, using the settings proposed by~\cite{ZSLGBU} (i.e. for $\delta$-CUB and CUB, training with samples from the 150 seen classes, then evaluating the performance on all 200 classes including the 50 unseen ones). Note that the class-attribute matrix, which shows the composition of attributes for each class and is needed for GZSL (i.e. the semantic information of classes), is computed from the statistics in $\delta$-CUB. Similarly, we also use the $\mathbf{A}$-\textbf{ESZSL} and $\mathbf{A}$-\textbf{LAGO} baselines to re-annotate the CUB dataset and perform GZSL under the same aforementioned setting. The results related to ZSLA and the baselines are summarized in the orange-shaded rows of Table~\ref{tab:deltaCUB_GZSL}.
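The re-annotation and the derivation of the class-attribute matrix can be summarized by the sketch below (our own simplified illustration; the decision threshold and variable names are hypothetical):
\begin{verbatim}
import numpy as np

def reannotate(scores, threshold=0.0):
    # scores: (num_images, num_attributes) image-wise logits produced by
    # the seen and synthesized detectors; binarize to attribute labels
    return (scores > threshold).astype(np.float32)

def class_attribute_matrix(labels, class_ids, num_classes):
    # entry (c, k): fraction of images of class c annotated with
    # attribute k, i.e. the semantic information used by GZSL methods
    mat = np.zeros((num_classes, labels.shape[1]))
    for c in range(num_classes):
        mat[c] = labels[class_ids == c].mean(axis=0)
    return mat
\end{verbatim}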
Moreover, we additionally experiment on training the four GZSL algorithms by using only 32 attributes or using all 312 attributes obtained from the original CUB dataset as the semantic information, where their results are summarized in the rows shaded by the blue and green color of Table~\ref{tab:deltaCUB_GZSL}, respectively.
\new{
From the results, we observe that using $\delta$-CUB for training, where our ZSLA automatically annotates all the attribute labels, can largely benefit the performance of GZSL algorithms. By treating the harmonic mean over the accuracy numbers on both seen and unseen categories as the metric for GZSL, $\delta$-CUB is superior to those datasets annotated by baselines or even the one using manual annotations.
Specifically, the gain obtained by using our $\delta$-CUB with respect to the setting of using 32 manually-labeled attributes (i.e., the blue-shaded row of Table~\ref{tab:deltaCUB_GZSL}) demonstrates the practical value of our proposed problem scenario of ZSL on attributes: Without additional cost for collecting annotation, we provide more attribute labels via synthesizing novel attribute detectors from the seen ones, and thus different categories can be better distinguished by more fine-grained/detailed attribute-based representations.
Moreover, regarding the results that our automatic re-annotation leads to better performance than the manual one (i.e., the green-shaded row of Table~\ref{tab:deltaCUB_GZSL}), we believe that this is mainly due to the biased semantic information caused by noisy labels stemming from the inconsistency between different human annotators when building CUB dataset. In comparison, our attribute detectors can produce consistent attribute annotations as we use the same set of attribute detectors for labeling all images; it eventually contributes to a more suitable semantic for learning zero-shot classification. We provide more discussions on such issues in the supplementary. }
\begin{comment}
\setlength{\tabcolsep}{0.8mm}
\begin{table*}[ht]
\begin{tabular}{c|cccc|cccc}
\textbf{ANC} & \multicolumn{4}{c|}{\textbf{\ding{51}}} & \multicolumn{4}{c}{\textbf{}} \\ \hline
\textbf{method} & \textbf{UB} & \textbf{AndOr} & \textbf{AndAvg} & \textbf{LinearAvg} & \textbf{UB} & \textbf{AndOr} & \textbf{AndAvg} & \textbf{LinearAvg} \\ \hline
\textbf{mAUROC} & .747 & .624 & .699 & .679 & .678 & .567 & .510 & .500 \\
\textbf{mAP@50} & .418 & .217 & .329 & .298 & .332 & .174 & .157 & .135 \\
\textbf{mLA} & .890 & .827 & .842 & .807 & .529 & .410 & .123 & .218
\end{tabular}
\caption{Ablation study on attribute nonnegative constraint}\label{tab:table5}
\end{table*}
\vspace{+0.25cm}
\end{comment}
\subsection{Robustness against Noisy Attribute Labels}\label{sec:robustness}
\new{
Due to the preference bias among different annotators mentioned in Section~\ref{sec:reannotate}, it is hard to obtain perfect seen attribute labels for training. It is thus interesting to study the effect of the noise level of the seen attribute labels (used for training) on the final annotation quality produced by different auto-annotation methods. To conduct controlled experiments on the effect of noisy labels, we additionally synthesize a toy dataset (via~\cite{clevr}), $\alpha$-CLEVR, which provides perfect attribute labels and adjustable label noise for analysis.}
\new{ Specifically, the $\alpha$-CLEVR dataset is composed of 24 attributes, which are the combinations of 8 colors (i.e., base attributes of adjective $\mathbf{B}^c$) and 3 shapes (i.e., base attributes of object part $\mathbf{B}^p$). Among them, 16 attributes, which can be decomposed into the 11 base attributes, are selected as seen attributes $\mathbf{A}^s$ for training the annotation algorithms. On the other hand, to perform the GZSL task and evaluate the annotation quality, we create 160 classes in $\alpha$-CLEVR, where images containing the same set of toy bricks are treated as the same class. Each class has 30 images; 80 classes are set as seen and the other 80 classes as unseen. In the GZSL inference phase, testing images from both seen and unseen classes are used. More details about the $\alpha$-CLEVR dataset and image examples are provided in our supplementary.
}
\new{To measure the performance drop caused by noisy seen attribute labels, we define the \textbf{wrong attribute label rate} (abbreviated as \textbf{WALR}) to represent the noise level of the attribute labels.
For instance, when WALR is set to 0.3, any toy brick in the training images has a 30\% chance of being inaccurately annotated (e.g., a blue cube annotated as a purple sphere). Considering the uncertainty when injecting noise into randomly-selected labels, our evaluation is calculated over five runs of the experiments. Thus, for each noisy-label training set, we report both the mean performance and its 95\% confidence interval (cf. Figures~\ref{Fig.noisy_label_95_CI} and~\ref{Fig.noisy_label_GZSLAH}).
}
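The injection of label noise at a given WALR can be sketched as follows (an illustration with hypothetical attribute encodings):
\begin{verbatim}
import random

def corrupt(bricks, walr, colors, shapes):
    # bricks: list of (color, shape) annotations of the toy bricks in an
    # image; each brick is wrongly re-annotated with probability `walr`
    noisy = []
    for color, shape in bricks:
        if random.random() < walr:
            color = random.choice([c for c in colors if c != color])
            shape = random.choice([s for s in shapes if s != shape])
        noisy.append((color, shape))
    return noisy
\end{verbatim}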
\begin{figure}[h]
\begin{minipage}[t]{0.5\linewidth}
\centering
\includegraphics[width=3in]{CI_mAUROC.pdf}
\captionsetup{width=0.95\textwidth}
\caption{\new{
Evaluation (in terms of attribute classification, with mAUROC as metric) on the robustness against noisy attribute labels for various methods which learn to synthesize the novel attributes. The shaded bands around each curve represent the 95\% confidence interval over 5 runs of different noisy label sets.
}}
\label{Fig.noisy_label_95_CI}
\end{minipage}%
\begin{minipage}[t]{0.5\linewidth}
\centering
\includegraphics[width=3in]{ClevrGZSLAvgH.pdf}
\captionsetup{width=0.95\textwidth}
\caption{\walon{Evaluation on the quality of automatic re-annotation produced by different methods, where the performance is based on the average harmonic mean of four GZSL algorithms using the re-annotated attributes (cf. last paragraph of Sec.~\ref{sec:robustness} for details).}}
\label{Fig.noisy_label_GZSLAH}
\end{minipage}
\end{figure}
\new{
As shown by the mAUROC curves in Figure~\ref{Fig.noisy_label_95_CI}, we can observe that: (1) ZSLA outperforms the baselines in attribute classification, no matter how noisy the training data is; (2) the performance drop of ZSLA with respect to WALR is much smaller than that of the baselines; (3) the baselines have a larger variance than ours, i.e., they are more sensitive to different combinations of the noisy labels even when the noise level is the same. These observations demonstrate the robustness of ZSLA against noisy labels of training attributes. We also show in our supplementary that mAP@50 and mLA (for attribute retrieval and localization) follow a similar trend as mAUROC.}
\walon{Moreover, similar to the experimental setting of Section~\ref{sec:reannotate}, we use the novel attribute detectors, which are synthesized by different methods under various WALR settings, to automatically re-annotate the dataset. The resultant dataset is used for learning four GZSL algorithms (i.e., CADAVAE, TFVAEGAN, ALE, and ESZSL). The average of their harmonic means is reported in Figure~\ref{Fig.noisy_label_GZSLAH}. We can observe the superior quality of the automatic re-annotation produced by our ZSLA (i.e., the red curve) compared to the other baselines (i.e., the blue and green curves for A-ESZSL and A-LAGO, respectively) under all WALR settings. Specifically, we also simulate the situation where humans annotate all attribute labels for the dataset while maintaining the corresponding WALRs (i.e., the purple curve). It leads to a similar observation as in the CUB dataset: once WALR is high (i.e., quite noisy labeling), the performance of GZSL algorithms trained with the semantic information provided by our ZSLA (i.e., the red curve) becomes superior to the one trained with the noisy manual labels.
}
\section{Conclusion}
\new{This paper proposes a new scenario of zero-shot learning on novel attributes, which reduces the attribute annotation cost of constructing a zero-shot classification dataset. By leveraging the trained detectors of seen attributes, our model learns to decompose them into base attributes and to further synthesize novel unseen attributes by reassembling pairs of base attributes.
Experimental results show that our method is able to exploit the information embedded in the seen attributes to generate high-quality unseen attributes, validated by various evaluation schemes for attribute classification, retrieval, and localization. We also demonstrate that the semantic information based on our automatic re-annotation is beneficial for the GZSL task.
}
\section*{Appendix}
\begin{figure*}[h]
\centering
\includegraphics[width=1\textwidth]{attribute_selection.pdf}
\caption{
\walon{Colorized cells in this table present the indexes of the 239 CUB attributes used in our experiments (i.e. $\mathbf{A}^s \cup \mathbf{A}^u$), in which their corresponding base attributes are indicated in the black-shaded cells (i.e. $\mathbf{B}^c$ on the left-most column and $\mathbf{B}^p$ on the top row). For instance, the $279^{th}$ attribute in CUB is ``blue beak'', so we put ``279'' in the cell whose horizontal position in the table coincides with that of the base attribute ``beak'' and whose vertical position coincides with that of the base attribute ``blue''.
Cells with the same background color are in the same group.}}
\label{fig:att_selection}
\vspace{+0.5cm}
\end{figure*}
\subsection*{Attribute Selection} \label{sec:att_selection}
\walon{
As previously stated, the CUB dataset has 312 attributes in total, each of which can be decomposed into an adjective and an object part (e.g., ``solid'' and ``breast'' for the attribute ``solid breast''; ``red'' and ``throat'' for the attribute ``red throat''). The meanings behind the adjectives cover color, texture, shape, and others, while color (to which 239 of the 312 attributes are related) is the dominant one. We thus focus on these 239 attributes (which have color adjectives) in CUB and construct a table summarizing their corresponding base attributes (in total, 16 base attributes of object parts and 15 base attributes of colors)
as shown in Figure \ref{fig:att_selection} (please check the caption for interpreting this table).
Please note that, though ideally there should be 240 attributes produced by all the combinations of 16 base attributes of object parts and 15 base attributes of colors, we do not have the attribute ``iridescent eye'' as it has no example in the CUB dataset. Therefore, the number of attributes used in our experiments is one less than 240 (i.e., 239 attributes in total).}
\walon{
We divide the 239 attributes into 15 groups such that each of them has all the base attributes (i.e., 16 for object parts and 15 for colors) included (except for group 10, owing to the absent attribute ``iridescent eye''). The attributes assigned to each of these 15 groups can be found in Figure \ref{fig:att_selection} (grouped by the cells with different background colors). Such grouping helps us select, in a more efficient way, the minimum number of seen attributes required for learning to synthesize the novel ones, as the attributes from any two different groups (excluding group 10) can be used to factor out all the base attributes via our intersection function $\mathbb{I}$. Please note that there exists more than one possible way of grouping to achieve the same goal; here, we only describe the way used in our experiments.}
\walon{In our experimental settings, we use group1 and group2 as seen attributes $\mathbf{A}^s$ for the experiments of $N^s=32$ (cf. Table.1 and Table.2 in our main manuscript). For the experiments of $N^s=64$, group1, group2, group3, and group4 are used as seen attributes. Moreover, for the experiments of $N^s=96$, group1 to group8 are used together as seen attributes. Next, we conducted a study to verify the consistency of our proposed method to different combinations of seen attributes. We randomly select two groups as seen attributes (i.e., $N^s=32$) to train our decompose-and-reassemble procedure and evaluate the performance of synthesized novel attribute detectors. In total, we repeat this experiment for six rounds. The standard deviations of three metrics (i.e., mAUROC, mAP@50, and mLA) among these 6 rounds are $0.0056$, $0.0124$, and $0.0175$, respectively. The relatively low variance thus successfully verifies the consistency of our proposed method to various combinations of seen attributes.}
\walon{
\subsection*{Ablation Study}
Here, we conduct an ablation study and investigate the influence/impact of \textbf{1)} the ``\textbf{uni-modal constraint}'' (abbreviated as UMC, implemented by $\mathcal{L}_{umc}$ in our proposed method, cf. Equation~3
of our main manuscript), and \textbf{2)} the usage of the ground-truth of the attribute locations (i.e. knowing where an attribute appears on the image, denoted as ``\textbf{location information}'') in training the seen attribute detectors.
Ideally, we expect that if the seen attribute detectors are better trained, it is more likely to obtain synthesized attribute detectors with better performance (as those seen attribute detectors are the input materials for learning the decompose-and-reassemble procedure).
The evaluation results on the synthesized novel attributes learnt by adopting different usage combinations of the uni-modal constraint and the location information are summarized in Table~\ref{tab:ablation_study}. We are able to observe that: (1)
With the help of uni-modal constraint, the mLA (i.e. average localization accuracy) of synthesized novel attributes clearly improves (i.e. from 0.348 to 0.613); (2) In addition to the uni-modal constraint, if the location information is also considered during the model training, the mLA can even go further to gain an extra boost by 0.233 (i.e. from 0.613 to 0.846). The overall improvements in terms of mLA made by having both uni-modal constraint and location information adopted in training our proposed method clearly indicate their effectiveness to help precisely extract and synthesize novel attributes.
}
\walon{
This study also finds that, as both the mAUROC and mAP@50 metrics (which are related to attribute classification and retrieval) do not aim to localize the image regions of the target attributes, they are relatively insensitive to the usage of the uni-modal constraint and the location information.
Some qualitative examples of this ablation study are provided in Figure~\ref{Fig.ablation_retrieval}. We can see that, without using the uni-modal constraint and the location information (cf. the right portion of Figure~\ref{Fig.ablation_retrieval}), the response maps of the target novel attributes show multiple modes at wrong locations; after introducing the uni-modal constraint, the response maps have a more concentrated (i.e. uni-modal) distribution but occasionally place their modes at incorrect locations for the target attributes (cf. the middle portion of Figure~\ref{Fig.ablation_retrieval}); upon further taking the location information into consideration for model training, the localization of the target attributes is improved and becomes more accurate (cf. the left portion of Figure~\ref{Fig.ablation_retrieval}).
}
\setlength{\tabcolsep}{0.9mm}
\begin{table}[h]
\centering
\begin{tabular}{cc|ccc}
\textbf{Loc Info} & \textbf{UMC} & \textbf{mAUROC} & \textbf{mAP@50} & {\color[HTML]{333333} \textbf{mLA}} \\ \hline\hline
\ding{51} & \ding{51} & .689 & .320 & .846 \\
\ding{55} & \ding{51} & .701 & .296 & .613 \\
\ding{55} & \ding{55} & .702 & .325 & .348
\end{tabular}
\caption{
\walon{
Quantitative evaluation (in terms of attribute classification, retrieval, and localization) on the novel attribute detectors learnt by three model variants, in order to have ablation study on the usages of uni-modal constraint (abbreviated as ``UMC'', implemented by $\mathcal{L}_{umc}$) and location information (abbreviated as ``Loc Info'').}}
\label{tab:ablation_study}
\vspace{+0.5cm}
\end{table}
\begin{figure*}[t]
\centering
\includegraphics[width=1\textwidth]{ablation_by_retrieval.pdf}
\caption{
\walon{Example results of attribute retrieval and localization for the novel attribute detectors learnt by three model variants, in order to have ablation study on the usages of uni-modal constraint (abbreviated as UMC, implemented by $\mathcal{L}_{umc}$) and location information. These three model variants are trained (left) with UMC and location information, (middle) with UMC but without location information, and (right) with neither UMC nor location information. For each example set, we show the top-5 retrieved images and their response maps for a synthesized novel attribute, where the images marked with red borders are the false positives according to CUB ground-truth.}
}
\label{Fig.ablation_retrieval}
\vspace{+0.5cm}
\end{figure*}
\subsection*{Details of Obtaining Class-attribute Matrix for $\delta$-CUB}
Here we give a detailed discussion on how we generate the class-attribute matrix for~$\delta$-CUB. The class-attribute matrix plays an essential role in the zero-shot classification task, as it associates the categories by describing them as compositions of attributes. The meaning of each entry in the class-attribute matrix (of size ``number of categories'' $\times$ ``number of attributes'') can be roughly understood as ``what percentage of instances in a category are considered to have a certain attribute''. In the CUB dataset, to build the class-attribute matrix, the dataset creators randomly sample some images from a category and ask multiple workers to annotate these images several times; the percentage of workers assigning an attribute to the images is then treated as the attribute composition of this category.
\section{Introduction}
Cross-lingual transfer learning provides a way to train a model using a dataset in one or more languages and use this model to make inferences in other languages. This type of transfer learning can benefit applications such as question answering \cite{lee2019cross}, dialogue systems \cite{schuster2018cross}, machine translation \cite{ji2020cross}, named entity recognition \cite{johnson2019cross}, as in all of these applications it is essential to have good representations of words and texts. These representations should be independent of the language and capture high-level semantic relations.
Contextual word embeddings (such as ELMo \cite{peters2018deep}, GPT \cite{radford2018improving}, or BERT \cite{devlin2018bert}) have shown state-of-the-art performance on many NLP tasks. Their performance depends on the availability of a large amount of labeled text data. Recent work with Multilingual BERT (M-BERT) has demonstrated that the model performs well in zero-shot settings \cite{conneau2018xnli}. In this case, only labeled English data are necessary to train the model and use it to make inferences in another language.
Large-scale Multi-label Text Classification (LMTC) is the task of assigning a subset from a collection of thousands of labels to a given document. There are many challenges connected with this task. First, the distribution of labels is usually sparse and follows a power-law distribution. Another challenge is the availability of a large dataset to train a good model that generalizes well to unseen data. Collecting and annotating such datasets is an expensive and cumbersome process; annotators need to read the entire document and check against all available labels to decide which labels to assign to the document. Furthermore, it is very likely that annotators miss some potentially correct tags.
Cross-lingual transfer learning (CLTL) can mitigate the issue of dataset availability for LMTC tasks by jointly training an LTMC model for several languages. It is also possible to train an LTMC for low-resources languages in zero-shot settings using available data in other languages and then making inferences in the unseen target language.
French and German, alongside English, are the main focus of this paper. Ethnologue's method of calculating lexical similarity between languages \cite{rensch1992calculating} shows that English has a lexical similarity of 60\% with German and 27\% with French. Ethnologue's method compares a regionally standardized wordlist and counts those forms that show similarity in both form and meaning.
In this work, we focus on cross-lingual transfer learning for the LMTC task, based on the JRC-Acquis dataset \cite{steinberger2006jrc} and an extended version of the EURLEX57K dataset \cite{chalkidis2019large}. Both datasets contain documents from EUR-Lex, the legal database of the European Union (EU), and they are annotated using descriptors from the European Union's multilingual and multidisciplinary thesaurus EuroVoc. JRC-Acquis is a large parallel corpus of documents available in 25 languages including English, French and German. EURLEX57K is available in English; we extended this dataset to include parallel documents in French and German.
The goal of this work is to establish a baseline for LMTC based on these two multilingual datasets, which contain parallel documents in English, French and German. We compare two CLTL settings for this task: (i) a zero-shot setting in which we train a multilingual model using the English training set and then test using the French and German test sets; (ii) a joint training setting in which we train the model using all training data, including the English, French and German training sets.
The main findings and contributions of this work are: (i) the experiments with multilingual-BERT and multilingual-DistilBERT with gradual unfreezing and language model finetuning (ii) providing a new standardized multilingual dataset for further investigation, (iii) ablation studies to measure the impact and benefits of various training strategies.
The remainder of the paper is organized as follows: After a discussion of related work in Section \ref{sec-relatedworks}, we discuss CLTL (Section \ref{sec-cross-lingual}) and the multilingual datasets (Section \ref{sec-datasets}). Then we present the main methods (BERT, DistilBERT) and strategies for training multilingual models in Section \ref{section-methods}. Section \ref{sec_results} contains extensive evaluations of the methods on both datasets as well as ablation studies, and after a discussion of results (Section \ref{sec-discussion}) we conclude the paper in Section \ref{sec-conclusion}.
\section{Related Works}
\label{sec-relatedworks}
In the realm of cross-lingual transfer learning, Eriguchi et al.~\cite{eriguchi2018zero} performed zero-shot binary sentiment classification by reusing an encoder from multilingual neural machine translation; they extended this encoder with a task-specific classifier component to perform text classification in a new language, where training data in this particular language was not used. On Amazon reviews, their model achieves 73.88\% accuracy on the French test set in zero-shot settings when training using English training data only; meanwhile, including French training data in the training process increases the accuracy on the French test set to 83.10\%. As a result, the zero-shot model obtains 92.8\% of the accuracy achieved after including French training data. \\
Pelicon et al.~\cite{pelicon2020zero} used multilingual BERT to perform zero-shot sentiment classification by training a classifier in Slovene and making inferences using texts in other languages. The model trained using the Slovene training set obtains a $52.41 \pm 2.58$ F1-score on the Croatian test set; however, on the Slovene test set its performance reaches a $63.39 \pm 2.42$ F1-score.\\
Keung et al.~\cite{keung2019adversarial} improved zero-shot cross-lingual transfer learning for text classification and named entity recognition by incorporating language-adversarial training to extract language-independent representations of the texts and align the embeddings of English documents and their translations. Regarding the classification task, they trained a classifier using English training data of the MLDoc dataset, they report 85.7\% and 88.1\% accuracy on French and German test sets correspondingly after using language-adversarial training.\\
Chalkidis et al.~\cite{chalkidis2019large} published a new EURLEX57K dataset, a dataset of European legal documents in English. Steinberger et al.~\cite{steinberger2006jrc} presented JRC-Acquis, a freely available parallel corpus containing European Union documents. This dataset is available in 20 official EU languages, including English, French, and German.\\
In our previous work~\cite{shaheen2020large}, we used transformer-based pre-trained models (BERT, DistilBERT, RoBERTa, XLNet) to extract high-level vector representations from legal documents. First, we applied Language Model Finetuning (LMFT) to such a model using documents from the training set; the goal here is to improve the quality of the document representations extracted from the model. Then, we extended the previously finetuned model with a classifier. Later, the transformer model and the classifier were jointly trained while gradually unfreezing the layers of the transformer model during training. This approach led to a significant improvement in the quality of the model.
In this work, we experiment with Multilingual-BERT and Multilingual-DistilBERT under cross-lingual zero-shot and joint-training transfer settings. We provide ablation studies to measure the impact of various training strategies and heuristics. Moreover, we provide new standardized multilingual dataset for further investigation by the research community.
\section{Cross-Lingual Transfer Learning}
\label{sec-cross-lingual}
The idea behind Cross-Lingual Transfer Learning (CLTL) in text classification tasks is to use a representation of words or documents extracted using a multilingual model; this representation should be independent of the language and capture high-level semantic and syntactic relations. Through transfer learning, it is possible to train a classifier using a dataset in one or more languages (source languages) and then transfer knowledge to different languages (target languages). This transfer learning approach is well-suited for low-resource languages and for tasks requiring a lot of data. The performance obtained with CLTL aims to be as close as possible to that of training the entire system on language-specific resources.
There are different schemes for cross and multilingual document classification, which can be distinguished by the source and target languages, as well as the approach of selecting the best model. In a Zero-Shot Learning (ZSL) scheme, the source languages are different from the target languages, and the selection of the best model is performed using a development set from the source languages. In the Target Learning (TL) scheme, the source and target languages do not overlap, but the model selection is performed using the development set of target languages. In a Joint Learning (JL) scheme, the source and target languages are the same, and the selection method is applied using the development set of these languages.
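For our language setup, the three schemes can be summarized by the following configuration sketch (an illustration; language codes are ISO abbreviations):
\begin{verbatim}
# Training languages, model-selection (dev) languages, and target
# (test) languages for each scheme used in this work.
SCHEMES = {
    'ZSL': {'train': ['en'], 'dev': ['en'], 'test': ['fr', 'de']},
    'TL':  {'train': ['en'], 'dev': ['fr', 'de'], 'test': ['fr', 'de']},
    'JL':  {'train': ['en', 'fr', 'de'],
            'dev':   ['en', 'fr', 'de'],
            'test':  ['en', 'fr', 'de']},
}
\end{verbatim}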
\begin{figure}[htbp]
\centering
\includegraphics[width=\linewidth]{figs/cross.png}
\caption{Zero-shot Cross-lingual transfer learning.}
\end{figure}
\section{Datasets}
\label{sec-datasets}
In this section, we introduce the multilingual EuroVoc thesaurus used to classify legal documents in both JRC-Aquis and EURLEX57K datasets. Then we explore the multilingual version of JRC-Aquis V3. We also describe how we extended EURLEX57K dataset by adding parallel documents available in French and German.
\subsection{EuroVoc Thesaurus}
The EuroVoc thesaurus is a multilingual thesaurus thematically covering many of the activities of the EU. It contains 20 domains, and each domain contains a number of micro-thesauri. Descriptors in EuroVoc are classified under these micro-thesauri, and each descriptor belongs to one or more micro-thesauri. Relations between descriptors are represented using the SKOS ontology\footnote{https://www.w3.org/2004/02/skos}. Hierarchical relations between descriptors are specified with the SKOS \emph{broader} relation.
The \emph{use instead} relation identifies the relation between a descriptor and its replacement.
The SKOS \emph{related} link maps a descriptor to its related descriptors, and the \emph{used for} relation maps each descriptor to its alternative labels. In total, there are 127 micro-thesauri and 7221 descriptors.
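As an illustration, such SKOS relations can be traversed with standard RDF tooling, e.g. the Python rdflib library (a hedged sketch; the file name is a placeholder for an RDF/SKOS export of EuroVoc):
\begin{verbatim}
from rdflib import Graph
from rdflib.namespace import SKOS

g = Graph()
g.parse('eurovoc.rdf')  # placeholder path to a SKOS export of EuroVoc

# hierarchical (broader) links between descriptors
for child, parent in g.subject_objects(SKOS.broader):
    print(child, '-> broader ->', parent)

# associative (related) links between descriptors
for a, b in g.subject_objects(SKOS.related):
    print(a, '<-> related <->', b)
\end{verbatim}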
\subsection{JRC-Acquis multilingual}
\label{secjrc}
JRC-Acquis dataset is a smaller dataset with parallel documents in 20 languages; this dataset overlaps with EURLEX57K dataset and contains additional documents. It is labeled using descriptors from EuroVoc. We selected documents in English, French, and German for our experiments; we show statistics about this dataset in table~\ref{tab_stats_jrc}. We do not use unlabeled documents for classifier finetuning. Therefore, we do not assign them to any training split, and we use them only for language model finetuning.
\begin{table}[htbp]
\small
\centering
\caption{JRC-Acquis dataset in English (EN), French (Fr) and German (DE). Number of documents in train, development and test sets in addition to the number of documents with no split and the total number of documents.}
\label{tab_stats_jrc}
\begin{tabular}{|c|c|c|c|c|c|}
\hline
Language & train & development & test & no split & total \\\hline
EN&16454&1960&1968&3163&23545\\\hline
FR&16434&1959&1967&3267&23627\\\hline
DE&16363&1957&1965&3256&23541\\\hline
\end{tabular}
\end{table}
\subsection{EURLEX57K multilingual}
\label{seceurlex}
EUR-Lex documents are legal documents from the European Union labeled using descriptors from the EuroVoc thesaurus.
We collected German and French documents parallel to those in the EURLEX57K dataset. We use the CELEX ID from the original EURLEX57K dataset to divide the data into train, development, and test sets. The documents from the parallel corpora are assigned the same splits as in the original monolingual EURLEX57K dataset. Therefore, our final dataset contains parallel texts in 3 languages. Statistics about this dataset are found in Table~\ref{tab_stats_eur}.
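The split alignment can be expressed as in the following sketch (our own illustration; document records are assumed to carry a celex\_id field):
\begin{verbatim}
def align_splits(en_docs, parallel_docs):
    # en_docs: documents of the original English EURLEX57K, each with a
    # CELEX ID and a split in {'train', 'dev', 'test'}
    split_of = {d['celex_id']: d['split'] for d in en_docs}
    for doc in parallel_docs:               # French / German documents
        # parallel documents inherit the split of their English version;
        # unmatched documents get no split (used for LM finetuning only)
        doc['split'] = split_of.get(doc['celex_id'])
    return parallel_docs
\end{verbatim}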
\begin{table}[htbp]
\small
\centering
\caption{Multilingual EURLEX57K dataset in English (EN), French (Fr) and German (DE). Number of documents in train, development and test sets in addition to the number of documents with no split and the total number of documents.}
\label{tab_stats_eur}
\begin{tabular}{|c|c|c|c|c|c|}
\hline
Language & train & development & test & no split & total \\\hline
EN&44428&5929&5921&24004&80282\\\hline
FR&44427&5929&5921&24452&80729\\\hline
DE&43749&5842&5820&23942&79353\\\hline
\end{tabular}
\end{table}
We extended our dataset by including EUR-Lex documents that are not available in EURLEX57K.
We use these additional documents only for Language Model finetuning stage (see section \ref{sub_sec_training_strategies}), so they do not have a training split, and we do not use them in classifier finetuning.
\section{Methods}
\label{section-methods}
In this section we describe the methods used in the ZSL and JL experiments presented in the results section, as well as the multilingual training process.
We also discuss important related points such as language model finetuning and gradual unfreezing.
\subsection{Multilingual Transformer Based Models}
\label{secmultimodels}
\textbf{BERT} is a transformer-based architecture trained using the masked language model (MLM) and next sentence prediction (NSP) objectives. In MLM, 15\% of the tokens are randomly masked, and the model tries to predict them from the context. BERT learns rich contextual representations of words and the relations between them.
BERT uses a special [CLS] token for classification, which is added at the beginning of the text by the tokenizer.
Its hidden representation in the last BERT layer aggregates the sequence representation.
BERT appeared in 2019 and has since been successfully applied in many natural language processing and understanding tasks.
In this work, we utilize the multilingual version of BERT called M-BERT. \\
\textbf{DistilBERT} is a distilled version of BERT; it achieves over 95\% of BERT's performance while having 40\% fewer parameters. In our experiments, we used DistilBERT to select the best training strategy for computationally expensive experiments, and then applied that strategy to M-BERT. We refer to the multilingual version of DistilBERT as M-DistilBERT.
\subsection{Multilingual Training}
To train our multilingual cross-lingual model, we finetune transformer-based models (see Section \ref{secmultimodels}) using multilingual documents from the legal domain (see Sections \ref{seceurlex} and \ref{secjrc}).
The classifier is built upon the document representation produced by the M-BERT and M-DistilBERT models. We pass the representation of the [CLS] token through a fully connected layer and then project the output to a vector whose size equals the number of target classes.\\
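A minimal sketch of this classification head (assuming PyTorch and the Hugging Face transformers library; the class name and the intermediate activation are illustrative choices rather than a verbatim reproduction of the training code) is:
\begin{verbatim}
import torch
from transformers import AutoModel

class DescriptorClassifier(torch.nn.Module):
    def __init__(self, n_labels, name="bert-base-multilingual-cased"):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(name)
        hidden = self.encoder.config.hidden_size
        self.fc = torch.nn.Linear(hidden, hidden)     # fully connected
        self.out = torch.nn.Linear(hidden, n_labels)  # one logit per class

    def forward(self, input_ids, attention_mask):
        states = self.encoder(input_ids=input_ids,
                              attention_mask=attention_mask).last_hidden_state
        cls = states[:, 0]  # representation of the [CLS] token
        return self.out(torch.tanh(self.fc(cls)))
\end{verbatim}
Since LMTC is a multi-label problem, the logits would be trained with a binary cross-entropy loss such as \texttt{BCEWithLogitsLoss} rather than a softmax cross-entropy.\\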
In finetuning the language model, we experimented with different numbers of epochs and different combinations of datasets; an ablation study is found in Section \ref{ablation}.\\
In the ZSL scheme, the classifier is trained using the English part of the dataset; we pick the model configuration with the best F1-score on the English test set and evaluate it on the French and German test sets independently.\\
In the JT scheme, the model is trained by including all the languages in the training and model picking process.
We evaluate the selected model using the test sets in English, French, and German independently.
To evaluate the effect of having parallel languages in the training process, we compare the model trained in the ZSL scheme and the model trained in the JT scheme on the English test set; the results of this ablation study are given in Section \ref{ablation}.
\subsection{Training Strategies}
\label{sub_sec_training_strategies}
In line with Shaheen et al.~\cite{shaheen2020large}, we train multilingual classifiers using the training strategies described below.
The first strategy is \emph{language model finetuning} of the transformer model before using it in classification. Finetuning is done on all training documents, and additionally on unlabeled documents available
in the EurLex database. This step aims at improving the model's representation of legal documents.
Secondly, in \emph{gradual unfreezing}, we freeze all of the model's layers with the exception of the last few layers, and we start by training only those layers.
Later, the number of unfrozen layers is gradually increased during training. An ablation study on the effect of these training strategies on multilingual models trained in the ZSL and JT schemes is found in Section \ref{ablation}.
Both training strategies are proposed by Howard and Ruder \cite{howard2018universal}.
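A minimal sketch of gradual unfreezing (assuming a PyTorch BERT-style encoder that exposes its layers as \texttt{model.encoder.layer}; attribute names differ slightly across model families, and the schedule shown is only one possible choice):
\begin{verbatim}
def unfreeze_last(model, n):
    # Freeze every parameter, then unfreeze the last n encoder layers.
    for p in model.parameters():
        p.requires_grad = False
    for layer in list(model.encoder.layer)[-n:]:
        for p in layer.parameters():
            p.requires_grad = True

# Example schedule: start with the last 2 layers, add one per stage.
# for n in range(2, 7):
#     unfreeze_last(encoder, n)
#     run_training_stage(encoder)  # hypothetical training loop
\end{verbatim}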
\subsection{Baseline}
Shaheen et al.~\cite{shaheen2020large} investigated the performance of various transformer-based models (including BERT, RoBERTa, and DistilBERT) in combination with training strategies such as language model finetuning and gradual unfreezing. The authors report their results on the English parts of the JRC-Acquis and EURLEX57K datasets.
Here, we use these results as a baseline for our results on the English part of the datasets.\\
However, to the best of our knowledge, no baseline exists for text classification on EurLex and JRC-Acquis in French and German,
for which we provide a reference evaluation for both the JT and ZSL schemes.
\subsection{Evaluation}
Following Shaheen et al. \cite{shaheen2020large}, we use the F1-score as a decision support metric. This metric measures how well the system recommends correct labels: it aims at selecting relevant labels and avoiding irrelevant ones. Precision is the percentage of selected labels that are relevant to the document; its focus is on recommending mostly related labels. Recall is the percentage of relevant labels that the system selected; its focus is on not missing relevant labels. The F1-score is the harmonic mean of precision and recall. These metrics have a major drawback: they target predicting relevant labels regardless of their position in the list of predicted labels, and as a result they are not suitable for applications like recommendation systems.
Shaheen et al. \cite{shaheen2020large} use additional retrieval measures for evaluation: R-Precision@K (RP@K) and Normalized Discounted Cumulative Gain (nDCG@K). These rank-aware metrics emphasize finding and ranking labels well; they reward putting relevant labels high up in the list of recommendations and penalize late recommendation of relevant labels.
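For reference, a minimal sketch of these rank-aware metrics (following the definitions commonly used in the LMTC literature; \texttt{relevant} is the gold label set and \texttt{ranked} is the label list sorted by predicted score):
\begin{verbatim}
import math

def rp_at_k(relevant, ranked, k):
    hits = len(set(ranked[:k]) & set(relevant))
    return hits / min(k, len(relevant))  # R-Precision@K

def ndcg_at_k(relevant, ranked, k):
    dcg = sum(1.0 / math.log2(i + 2)
              for i, lab in enumerate(ranked[:k]) if lab in relevant)
    idcg = sum(1.0 / math.log2(i + 2)
               for i in range(min(k, len(relevant))))
    return dcg / idcg  # nDCG@K with binary gains
\end{verbatim}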
\section{Results}
\label{sec_results}
This section reports the results of multilingual transformer-based models trained in the ZSL scheme (Section~\ref{sec_results_zsl}), the JT scheme (Section~\ref{sec_results_jt}) and the ablation studies (Section~\ref{ablation}).
\subsection{Zero-Shot Results}
\label{sec_results_zsl}
First, we evaluate multilingual transformer-based models (M-BERT and M-DistilBERT) trained in the ZSL scheme to classify French and German texts -- using only English texts as training data.
Table~\ref{tab_zero_shot_jrc} shows the results on the JRC-Acquis dataset, followed by Table~\ref{tab_zero_shot_eurlex} with the results for the multilingual EURLEX57K.
The French and German test sets are evaluated separately.\\
In our experiments, M-BERT consistently outperforms M-DistilBERT in the ZSL setting by a large margin across both datasets.
Further, we observe better classification performance on the French test sets than on the respective German test sets, for both the M-BERT and M-DistilBERT models.
\begin{table*}[htbp]
\caption{The results of multilingual models (M-BERT, M-DistilBERT) trained in the ZSL scheme using the English part of JRC-Acquis, evaluated on the French (FR) and German (DE) parallel test sets.}
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
Language & Model & F1-score & RP@3 & RP@5 & nDCG@3 & nDCG@5\\\hline
FR & M-DistilBERT & 0.504 & 0.628 & 0.56 & 0.66 & 0.604\\
FR & M-BERT & \textbf{0.55} & \textbf{0.674} & \textbf{0.604} & \textbf{0.704} & \textbf{0.648}\\\hline\hline
DE & M-DistilBERT & 0.473 & 0.583 & 0.527 & 0.613 & 0.566\\
DE & M-BERT & \textbf{0.519} & \textbf{0.637} & \textbf{0.571} & \textbf{0.667} & \textbf{0.613}\\\hline
\end{tabular}
\label{tab_zero_shot_jrc}
\end{center}
\end{table*}
\begin{table*}[htbp]
\centering
\caption{The results of multilingual models (M-BERT, M-DistilBERT) trained in the ZSL scheme using the English part of the multilingual EURLEX57K dataset, evaluated on the French (FR) and German (DE) test sets.}
\label{tab_zero_shot_eurlex}
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
Language & Model & F1-score & RP@3 & RP@5 & nDCG@3 & nDCG@5\\\hline
FR & M-DistilBERT & 0.614 & 0.718 & 0.677 & 0.741 & 0.706\\
FR & M-BERT & \textbf{0.67} & \textbf{0.771} & \textbf{0.726} & \textbf{0.795} & \textbf{0.757}\\\hline\hline
DE & M-DistilBERT & 0.594 & 0.7 & 0.652 & 0.723 & 0.683\\
DE & M-BERT & \textbf{0.648} & \textbf{0.751} & \textbf{0.7} & \textbf{0.776} & \textbf{0.733}\\\hline
\end{tabular}
\end{table*}
\subsection{Joint Training Results}
\label{sec_results_jt}
We continue with the evaluation of the multilingual transformer-based models (M-BERT and M-DistilBERT) trained in the JT scheme for the English, French and German languages.
The results of monolingual models (BERT, RoBERTa, and DistilBERT), as reported in Shaheen et al.~\cite{shaheen2020large}, serve as a baseline on the English test set.
\textbf{JRC Acquis:} Table~\ref{tab_jrc_results} presents an overview of the results on JRC-Acquis. We observe that transformer-based models trained on JRC-Acquis in the JT scheme fail to reach the performance of monolingual models on the English test set: the multilingual models achieve about 96.83-98.39\% of the performance of the monolingual baseline models. Interestingly, M-DistilBERT and M-BERT perform similarly according to all metrics, with slightly better performance for M-BERT on the F1-score and slightly better performance for M-DistilBERT on the remaining metrics (RP@3, RP@5, nDCG@3, nDCG@5).
\begin{table*}[t]
\centering
\caption{M-BERT and M-DistilBERT results trained in the JT scheme for the JRC-Acquis dataset in English (EN), French (FR) and German (DE), plus baseline results of monolingual models (BERT, DistilBERT, RoBERTa) on the English test set.}
\label{tab_jrc_results}
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
Language & Model & F1-score & RP@3 & RP@5 & nDCG@3 & nDCG@5\\\hline
FR & M-DistilBERT & 0.637 & \textbf{0.766} & 0.692 & \textbf{0.79} & 0.732\\
FR & M-BERT & \textbf{0.642} & 0.763 & \textbf{0.696} & 0.785 & \textbf{0.733}\\\hline\hline
DE & M-DistilBERT & 0.634 & \textbf{0.762} & 0.691 & \textbf{0.787} & \textbf{0.731}\\
DE & M-BERT & \textbf{0.641} & 0.759 & \textbf{0.693} & 0.781 & 0.729\\\hline\hline
EN & M-DistilBERT & 0.638 & 0.768 & 0.697 & 0.794 & 0.737\\
EN & M-BERT & 0.644 & 0.763 & 0.695 & 0.785 & 0.733\\\hline
EN & DistilBERT & 0.652 & 0.78 & 0.711 & 0.805 & 0.75\\
EN & BERT & \textbf{0.661} & 0.784 & 0.715 & 0.803 & 0.750\\
EN & RoBERTa & 0.659 & \textbf{0.788} & \textbf{0.716} & \textbf{0.807} & \textbf{0.753}\\\hline
\end{tabular}
\end{table*}
\textbf{EURLEX57K:} In contrast to JRC-Acquis, on the multilingual EURLEX57K (see Table~\ref{tab_eur_results}),
when comparing multilingual models to the monolingual baseline,
M-BERT achieves similar or slightly better results on all metrics than RoBERTa (the best baseline model).
Also, M-BERT provides an improvement of 1\% over monolingual (English) BERT on all metrics. Although monolingual DistilBERT achieves slightly better results than M-DistilBERT, the results are nearly identical.
\begin{table*}[htbp]
\small
\centering
\caption{M-BERT and M-DistilBERT results trained in the JT scheme for the EURLEX57K dataset in English (EN), French (FR) and German (DE), plus baseline results of monolingual models (BERT, DistilBERT, RoBERTa) on the English test set.}
\label{tab_eur_results}
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
Language & Model & F1-score & RP@3 & RP@5 & nDCG@3 & nDCG@5\\\hline
FR & M-DistilBERT & 0.754 & 0.846 & 0.803 & 0.864 & 0.829\\
FR & M-BERT & \textbf{0.761} & \textbf{0.851} & \textbf{0.811} & \textbf{0.867} & \textbf{0.833}\\\hline\hline
DE & M-DistilBERT & 0.751 & 0.843 & 0.801 & 0.862 & 0.827\\
DE & M-BERT & \textbf{0.759} & \textbf{0.847} & \textbf{0.807} & \textbf{0.864} & \textbf{0.831}\\\hline\hline
EN & M-DistilBERT & 0.753 & 0.847 & 0.803 & 0.865 & 0.829\\
EN & M-BERT & \textbf{0.761} & \textbf{0.85} & \textbf{0.812} & \textbf{0.867} & \textbf{0.836}\\\hline
EN & DistilBERT & 0.754 & 0.848 & 0.807 & 0.866 & 0.833\\
EN & BERT & 0.751 & 0.843 & 0.805 & 0.859 & 0.828\\
EN & RoBERTa & 0.758 & \textbf{0.85} & \textbf{0.812} & 0.866 & 0.835\\\hline
\end{tabular}
\end{table*}
\subsection{Ablation Studies}
\label{ablation}
In this set of experiments, we study the contributions of different training components and strategies to the ZSL model -- by excluding some of those components individually or reducing the number of training epochs. We focus on three components:
(i) the use of gradual unfreezing, (ii) the number of unfrozen layers, and (iii) the number of language model finetuning epochs.
In all these experiments, we train the models using the English training data of JRC-Acquis, and we test using the French and German test sets.\\
Table~\ref{tab_abl_nogduf} provides a comparison of the evaluation metrics with and without gradual unfreezing. For both French and German, we can see a consistent improvement of the results when using gradual unfreezing. The relative improvement for French is in the range 38-45\%, and for German in the range 58-70\%. In conclusion, gradual unfreezing is a crucial component for good classification performance of a model trained in the ZSL scheme.\\
Next, we examine the effect of freezing the network layers at the start of training and gradually unfreezing some of the layers during training (Table~\ref{tab_abl_gduf}).
\begin{table*}[htbp]
\small
\centering
\caption{Ablation Study: ZSL M-DistilBERT performance on JRC-Acquis depending on the number of unfrozen layers. Again, we train on the English training set and test on French and German.}
\label{tab_abl_gduf}
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
Language & Unfrozen Layers & F1-score & RP@3 & RP@5 & nDCG@3 & nDCG@5\\\hline
FR & Last 2 layers & 0.434 & 0.543 & 0.486 & 0.574 & 0.527\\
FR & Last 3 layers & 0.442 & 0.547 & 0.493 & 0.58 & 0.533\\
FR & Last 4 layers & 0.439 & 0.549 & 0.491 & 0.579 & 0.532\\
FR & Last 5 layers & \textbf{0.455} & \textbf{0.567} & \textbf{0.505} & \textbf{0.597} & \textbf{0.547}\\
FR & All 6 layers & 0.451 & 0.563 & 0.5 & 0.593 & 0.542\\
FR & All 6 layers + EMB & \textbf{0.455} & 0.566 & 0.504 & 0.596 & 0.546\\\hline
DE & Last 2 layers & 0.388 & 0.471 & 0.429 & 0.501 & 0.463\\
DE & Last 3 layers & 0.393 & 0.484 & 0.434 & 0.509 & 0.468\\
DE & Last 4 layers & 0.381 & 0.466 & 0.418 & 0.495 & 0.454\\
DE & Last 5 layers & \textbf{0.395} & \textbf{0.488} & \textbf{0.442} & \textbf{0.516} & \textbf{0.477}\\
DE & All 6 layers & 0.384 & 0.468 & 0.42 & 0.497 & 0.456\\
DE & All 6 layers + EMB & 0.391 & 0.474 & 0.428 & 0.504 & 0.464\\\hline
\end{tabular}
\end{table*}
Gradually unfreezing the last five layers while keeping the first and embedding (EMB) layers frozen achieves the best performance on the French and German test sets.
Unfreezing all layers (including the embedding layer) obtains results very close to the best results on the French test set, while the difference on the German test set is somewhat larger.\\
In Table~\ref{tab_abl_lmft}, we test the effect of the number of language model finetuning epochs.
On the French test set, one cycle of language model finetuning leads to a relative gain of 18.6-20.48\% compared to no LM finetuning at all. Increasing the number of epochs to 5 and 10 increases the relative gain to 29.6-32.53\% and 32.0-34.94\%, respectively. The difference is much bigger on the German test set: compared to no LM finetuning, the relative gain is 42.82-49.47\%, 70.69-81.49\% and 76.15-87.54\% for 1, 5 and 10 epochs of LM finetuning, respectively.
\begin{table*}[htbp]
\small
\centering
\caption{Ablation Study: ZSL M-DistilBERT performance on JRC-Acquis depending on the number of language model finetuning cycles (LMFT-cycles) -- with 6 layers unfrozen and training on the English training set.}
\label{tab_abl_lmft}
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
Language & \#LMFT-cycles & F1-score & RP@3 & RP@5 & nDCG@3 & nDCG@5\\\hline
FR & 0 & 0.379 & 0.47 & 0.415 & 0.5 & 0.454\\
FR & 1 & 0.451 & 0.563 & 0.5 & 0.593 & 0.542\\
FR & 5 & 0.498 & 0.615 & 0.55 & 0.648 & 0.595\\
FR & 10 & \textbf{0.504} & \textbf{0.628} & \textbf{0.56} & \textbf{0.66} & \textbf{0.604}\\\hline
DE & 0 & 0.267 & 0.32 & 0.281 & 0.348 & 0.313\\
DE & 1 & 0.384 & 0.468 & 0.42 & 0.497 & 0.456\\
DE & 5 & 0.459 & 0.563 & 0.51 & 0.594 & 0.549\\
DE & 10 & \textbf{0.473} & \textbf{0.583} & \textbf{0.527} & \textbf{0.613} & \textbf{0.566}\\\hline
\end{tabular}
\end{table*}
\begin{table*}[htbp]
\small
\centering
\caption{Ablation Study: ZSL M-DistilBERT performance on JRC-Acquis regarding the use of gradual unfreezing (GDUF). We unfreeze 6 layers and train on the English training set.}
\label{tab_abl_nogduf}
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
Language & GDUF & F1-score & RP@3 & RP@5 & nDCG@3 & nDCG@5\\\hline
FR & False & 0.327 & 0.385 & 0.351 & 0.406 & 0.377\\
FR & True & \textbf{0.451} & \textbf{0.563} & \textbf{0.5} & \textbf{0.593} & \textbf{0.542}\\\hline
DE & False & 0.243 & 0.274 & 0.248 & 0.291 & 0.267\\
DE & True & \textbf{0.384} & \textbf{0.468} & \textbf{0.42} & \textbf{0.497} & \textbf{0.456}\\\hline
\end{tabular}
\end{table*}
\section{Discussion}
\label{sec-discussion}
We included much of the detailed discussion in
the results section (Section \ref{sec_results}), so here we summarize and extend some of the key findings.\\
Comparing the results of the ZSL scheme (Tables~\ref{tab_zero_shot_jrc} and \ref{tab_zero_shot_eurlex}) to the JT scheme (Tables~\ref{tab_jrc_results} and \ref{tab_eur_results}) on the French and German test sets, the experiments show that M-BERT trained in the ZSL scheme reaches about 86\% of the performance of a model trained in the JT scheme. In the same way, M-DistilBERT in the ZSL setting achieves about 79\% of the performance of the JT scheme.
Additionally, the multilingual models (M-BERT, M-DistilBERT) trained in the JT scheme on English, French and German provide similar performance on their respective test sets (see Tables \ref{tab_jrc_results} and \ref{tab_eur_results}).
However, when using the ZSL scheme, there is a discrepancy between the French and German results, indicating that the multilingual models can more easily transfer from the English to the French representations (Tables \ref{tab_zero_shot_jrc} and \ref{tab_zero_shot_eurlex}).
\section{Conclusion}
\label{sec-conclusion}
In this work, we evaluate cross-lingual transfer learning for the LMTC task, based on the JRC-Acquis dataset and an extended version of the EURLEX57K dataset. We establish a baseline for LMTC based on these two multilingual datasets, which contain parallel documents in English, French and German. We also compare two CLTL settings for this task: the zero-shot setting and the joint training setting.
The main contributions of this work are: (i) the experiments with multilingual BERT and multilingual DistilBERT with gradual unfreezing and language model finetuning, (ii) providing a new standardized multilingual dataset for further investigation, and (iii) ablation studies to measure the impact and benefits of various training strategies on zero-shot and joint-training transfer learning.
There are multiple angles for future work, including potentially achieving higher performance by using hand-picked learning rates and other hyperparameters for each model individually. Moreover, experiments with language adversarial training and various data augmentation techniques are candidates to improve classification performance.
\bibliographystyle{IEEEtran}
\section{Introduction}
\label{sec:intro}
The quality of atmospheric conditions is vital for the existence of all life forms, including humankind, animals, and plants \cite{N1,N2}. Nonetheless, over the past years, a subsequent deterioration of air quality has been noticed due to the increasing emissions of pollutants into the atmosphere from industries, automobiles, and burnt areas. Even though many people have scarcely recognized the depth of the problem \cite{N3,N4}, the World Health Organization (WHO) continuously emphasizes the abominable statistics of the issue: $90\%$ of the population breathes polluted air, leading to seven million deaths per year \cite{N5,N6}. Further, poor air quality results in a deleterious impact on the ecosystem of the planet, such as accelerating the depletion of the atmospheric ozone layer \cite{N7}.
Recent scientific studies \cite{N8,N9} convey the correlation between the ongoing coronavirus disease 2019 (COVID-19) pandemic and air pollution, highlighting that particulate matter (PM) could carry RNA samples of the SARS-CoV-2 virus \cite{N9} and that the increase of the COVID-19 mortality rate is associated with the increment of the concentration of $PM_{2.5}$ in the air \cite{N8}. Therefore, it is evident that real-time, accurate monitoring of air quality is essential for the sustainable long-term health of humankind and the natural ecosystem; in addition, it also assists in the prevailing combat against the COVID-19 pandemic.
\begin{figure*}[t!]
\centering
\begin{minipage}[b]{1.0\linewidth}
\centerline{\includegraphics[width=15cm]{Block_diagram_new.png}}
\end{minipage}
\vspace{-1.1cm}
\caption{The high-level architecture of the proposed system}
\label{fig:system}
\end{figure*}
Internet of Things (IoT) is a promising technology which facilitates the communication between objects, machines and people in a uniquely addressable manner via a set of standard communication protocols. Since IoT systems are dynamic, distributed, and built upon a vast number of smart heterogeneous objects, the need for semantic inter-operability and energy optimization, while remaining scalable, is of utmost importance. The constrained environments of IoT systems further stress the requirement of resource optimization, leading to light-weight communication protocols and low-power hardware implementations. These requirements position IoT as an integrative technology which can be deployed in various fields, including real-time air quality monitoring. Even though many scholars have studied utilizing IoT for air quality monitoring \cite{N10,N11,N12,N13,N14}, the semantic interpretation of the acquired data towards a futuristic perspective is yet to be explored in a thorough manner.
Thus, this paper presents a semantically distributed and easily implementable IoT predictive framework integrated with a machine learning model to detect and predict air quality parameters including $PM_1$, $PM_{2.5}$ and $PM_{10}$, along with temporal and spatial humidity, temperature and pressure distributions. The proposed system retrieves primary data through a public air quality sensor network, airly \cite{Nweb1}, and is equipped with a NodeRED dashboard \cite{Nweb2} which operates as a client of the sensor network to process, visualize and store the acquired air quality and weather data. The NodeRED dashboard is further responsible for delivering the predictive outputs via the time-series decision-tree machine learning model embedded in the dashboard back-end. The system incorporates an ESP8266 NodeMCU node \cite{Nweb3, Nweb4} operated as a subscriber to the NodeRED dashboard via the message queuing telemetry transport (MQTT) protocol to deliver the quantitative air quality data to the end-users via a publish-subscribe architecture. The end-users of the proposed system can access the sensor data as well as the predictions in quantitative and visualized formats via the developed mobile and web applications.
\section{Methods}
\label{sec:pagestyle}
\subsection{Background}
\subsubsection{Message Queuing Telemetry Transport Protocol}
Message Queuing Telemetry Transport \cite{don} is a lightweight publish-subscribe network protocol. To receive messages that are published under a particular topic by a publisher, the message receiver, i.e., the subscriber, must subscribe to that specific topic. The core of MQTT is a broker that distributes messages received from publishers to the corresponding subscribers by filtering the messages by topic. The MQTT protocol can be applied in IoT applications with limited resources, as it is a lightweight protocol. It is standardized by the Organization for the Advancement of Structured Information Standards (OASIS) and as ISO/IEC 20922.
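For illustration, a minimal publish-subscribe sketch (assuming the Python paho-mqtt client and a broker reachable at \texttt{localhost}; the topic name is a placeholder):
\begin{verbatim}
import paho.mqtt.client as mqtt

def on_message(client, userdata, msg):
    # Invoked for every message whose topic matches a subscription.
    print(msg.topic, msg.payload.decode())

client = mqtt.Client()
client.on_message = on_message
client.connect("localhost", 1883)          # broker host, default port
client.subscribe("airquality/pm25")        # topic-based filtering
client.publish("airquality/pm25", "21.3")  # any client may also publish
client.loop_forever()                      # dispatch incoming messages
\end{verbatim}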
\subsubsection{NodeRED framework}
NodeRED is a programming tool for wiring hardware devices, application programming interfaces (APIs) and online services together. It provides a browser-based editor as a user-ergonomic platform to connect flows using a palette of nodes that can be deployed to the run-time with a single click. NodeRED enables users to stitch web services and hardware together, replacing common low-level coding tasks with a visual drag-and-drop interface. Various components in NodeRED are connected to create such flows.
\subsubsection{NodeMCU module}
NodeMCU is open-source firmware developed for the ESP8266, a low-cost Wi-Fi enabled chip. Exposing the functionality of the ESP8266 chip, the NodeMCU firmware ships with the ESP8266 development board. Since NodeMCU is an open-source platform, its hardware design is open for editing, modifying, and building.
\subsubsection{Decision-Tree based machine learning model}
The decision-tree learner is one of the widely utilized machine learning algorithms for classification purposes because of its ability to perform the task with low computational cost in both the training and testing phases. In addition, the algorithm guarantees the interpretability of the deduced models through the traditional recursive top-down induction of decision trees, in which the algorithm chooses the most effective data attribute to divide the dataset by considering the gain ratio criterion. A leaf of the tree is created once the pre-defined minimum number of instances is reached through the dividing process. The deduced model is generalized thereafter in order to optimize its size.
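As an illustration of this idea in the time-series setting, the following sketch (using scikit-learn's CART-style regressor rather than the exact gain-ratio learner described above; the data values are invented) forecasts the next hourly reading from lagged readings:
\begin{verbatim}
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def make_lagged(series, n_lags=3):
    # Each target hour is predicted from the n_lags preceding hours.
    X = np.array([series[i:i + n_lags]
                  for i in range(len(series) - n_lags)])
    y = np.array(series[n_lags:])
    return X, y

pm25 = [12.0, 14.5, 13.2, 18.7, 21.3, 19.8, 17.4, 16.1]  # toy data
X, y = make_lagged(pm25)
model = DecisionTreeRegressor(min_samples_leaf=2).fit(X, y)
next_hour = model.predict([pm25[-3:]])  # one-hour-ahead forecast
\end{verbatim}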
\subsection{System Overview}
At first, the airly air monitoring API is used to obtain the real-time sensor data of the defined air quality parameters such as $PM_{1}$ and $PM_{2.5}$. The NodeRED framework is utilized for the initial processing and visualization of the data obtained from the airly API, and it is supported by the deployed conventional decision-tree machine learning model which predicts the time-series values of the air parameters. Furthermore, the NodeRED framework communicates with the NodeMCU via the assigned MQTT broker, and thus with the mobile application, in order to process the requests from the users within the implementation arena.
The NodeRED framework is supported by the following libraries, which relate to different functions of the proposed system.
\begin{itemize}[noitemsep,nolistsep]
\item node-red-contrib-machine-learning-v2
\SubItem{To build the machine learning model}
\item node-red-contrib-credentials
\SubItem{To include credentials which are related to access airly}
\item node-red-node-email
\SubItem{To send emergency emails when the air quality parameters of the requested location are above the corresponding safe levels}
\item node-red-contrib-fs
\SubItem{To organize files which are related to obtaining primary sensor data and training the machine learning model}
\end{itemize}
\begin{figure}[htb]
\begin{minipage}[b]{1.0\linewidth}
\centering
\centerline{\includegraphics[width=8.7cm]{Sample_flow.PNG}}
\end{minipage}
\caption{Flow management design for $PM_1$ in the NodeRED framework. The shown flow is integrated and extended to obtain and process the remaining air quality parameters.}
\label{fig:res}
\end{figure}
Users who have access to the NodeRED dashboard can obtain primary sensor data for certain air quality parameters, such as particulate matter, air quality index, temperature, pressure, and humidity, based on location and time. In addition, users can view previous data as well as one-hour forecast data, which simplifies any decision-making process relevant to the user. The NodeRED dashboard also contains a file management system that allows users to see and download current and previous data files as needed.
\begin{figure}[htb]
\begin{minipage}[b]{1.0\linewidth}
\centering
\centerline{\includegraphics[width=8.0cm]{ML_nodeRED.PNG}}
\end{minipage}
\caption{Flow management design for deploying the machine learning model: (1) creating and extending the dataset with user-requested sensor data, (2) training the model on the dataset, and (3) testing the model for time-series prediction.}
\label{fig:res}
\end{figure}
The publish-subscribe network protocol between NodeRED and NodeMCU is established by the Mosquitto broker \cite{Nweb5} in the application layer. The NodeMCU web server is implemented through an Arduino sketch. The processed data is then communicated to the mobile end-user by the mobile application through the NodeMCU server, and users can also view the processed data from the NodeRED dashboard via the subscription through Mosquitto.
Here, the ESP8266 is used to implement the local server for the data transmission between the NodeRED dashboard and the mobile or web application. Data communication between the ESP8266 and NodeRED occurs through the MQTT communication protocol via the selected broker service, Mosquitto. Therefore, the NodeMCU works as a subscriber to the NodeRED framework under the publish-subscribe architecture, while the NodeRED dashboard works as a client of the primary sensor network under a server-client architecture. Data communication between the ESP8266 and the mobile application occurs through the hyper-text transfer protocol (HTTP). A web application is also implemented to communicate with the web server and view the requested data. Since the ESP8266 is utilized as the local server, the mobile application and the ESP8266 are connected within the same Wi-Fi network.
The following resources are utilized to implement the NodeMCU server.
\begin{itemize}[noitemsep,nolistsep]
\item ESP8266WiFi.h Arduino library
\SubItem{To connect with the Wi-Fi network}
\item PubSubClient.h Arduino library
\SubItem{To implement the MQTT client}
\item ESP8266WebServer.h Arduino library
\SubItem{To create a local web server}
\end{itemize}
Finally, the web application and the mobile application are used to access the requested data in present, past, and forecast formats via real-time notifications. The mobile application is implemented using Android Studio 4.2.2 and facilitates users in obtaining air quality data at their current location. As an additional feature, the location coordinates of the user can be automatically generated, and hence the location can be viewed on a map via the Google Maps application which is embedded in the mobile application. The generated or inserted location coordinates can further be used to request the corresponding air quality data in real-time.
The following resources are utilized to implement the mobile application.
\begin{itemize}[noitemsep,nolistsep]
\item Android Studio \cite{C2}
\SubItem{Android Studio 4.2.2 is used as the integrated development environment for the mobile application development. The mobile application is based on a default Android Studio project, which consists of modules with source code and resource files. These modules include Android application modules, library modules, and Google App Engine modules. Java is used as the programming language in developing the mobile application.}
\item Volley HTTP Library \cite{C3}
\SubItem{Volley, an HTTP library capable of handling networking for mobile applications, is used here. It also provides automatic scheduling of network requests.}
\item Postman API client \cite{G1}
\SubItem{The Postman API client allows users to create and save both simple and complex HTTP/s requests and to read their responses. This API client is employed in the application development project to test the APIs.}
\item Google Maps \cite{G2}
\SubItem{Google Maps application is used to display the current location of the user and navigate the direction of the location.}
\end{itemize}
Further, following resources are employed to implement the web application.
\begin{itemize}[noitemsep,nolistsep]
\item Hyper-text markup language (HTML)
\SubItem{To design the content to be displayed in the web page}
\item Cascaded style sheets (CSS)
\SubItem{To describe the presentation of the document written HTML}
\end{itemize}
\section{Results}
\label{sec:typestyle}
The proposed system consists of both a software implementation and a hardware implementation. In the hardware implementation, the ESP8266 is used to create a local server which is capable of interacting with the web application, the NodeRED dashboard, and the mobile application, which targets the Android operating system.
\subsection{NodeRED Dashboard}
The end-user can enter the coordinates of the location for which the air quality is required. The NodeRED dashboard then visualizes the temporal variations and the hourly sensor data provided by the system through a wide array of charts. Parameter values from the previous hours can also be obtained and compared with present and forecast values.
\begin{figure}[htb]
\begin{minipage}[b]{1.0\linewidth}
\centering
\centerline{\includegraphics[width=8.5cm]{dash1_1.PNG}}
\end{minipage}
\caption{The proposed NodeRED dashboard which visualizes the primary and forecast air quality and weather data}
\label{fig:res}
\end{figure}
Furthermore, the end-user can request and visualize the immediate past data and obtain the forecast data through the API and the trained machine learning model. Each air quality parameter has a safe level, and if the parameter values of the requested location exceed the corresponding safe levels, the system informs the user about the threat through an emergency email.
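The alert logic itself is simple; the following is a sketch in Python for readability (the actual dashboard uses the node-red-node-email node, and the thresholds and addresses below are placeholders only):
\begin{verbatim}
import smtplib
from email.message import EmailMessage

SAFE_LEVELS = {"pm2_5": 25.0, "pm10": 50.0}   # illustrative values only

def alert_if_unsafe(readings, user_email):
    exceeded = {k: v for k, v in readings.items()
                if k in SAFE_LEVELS and v > SAFE_LEVELS[k]}
    if not exceeded:
        return
    msg = EmailMessage()
    msg["Subject"] = "Air quality alert"
    msg["From"] = "[email protected]"
    msg["To"] = user_email
    msg.set_content("Parameters above safe levels: %s" % exceeded)
    with smtplib.SMTP("localhost") as server:  # assumes a local relay
        server.send_message(msg)
\end{verbatim}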
\begin{figure}[htb]
\begin{minipage}[b]{1.0\linewidth}
\centering
\centerline{\includegraphics[width=5cm]{email_new_new.png}}
\end{minipage}
\caption{The emergency email which is sent by the NodeRED framework for an end-user when the air quality index value of the requested location is above the recommended safe level}
\label{fig:res}
\end{figure}
The dashboard is integrated to work with the MQTT requests through the NodeMCU local server and the web and mobile applications. Users who have access to the dashboard can observe the air quality parameters on the relevant page of the dashboard.
\subsection{Mobile Application}
The user can click the "obtain location coordinates" button on the mobile application interface and enter the latitude and longitude values of the preferred location. According to the requested location, the corresponding results are added to the result tab of the mobile application. In the result tab, the user can observe the returned data in various forms:
\begin{itemize}[noitemsep,nolistsep]
\item Current data
\SubItem {Current sensor data values sent by the system}
\item Last 24 hours data
\SubItem{Sensor data values from the past 24 hours at the requested location, stored by the system}
\item Forecast data
\SubItem{Predicted air quality parameter values for the next three hours on an hourly basis}
\end{itemize}
Users can visualize the required outputs by clicking the "obtain current data", "obtain last 24 hours data", and "obtain forecast data" buttons. The user interface of the mobile application is simple, attractive, and user-ergonomic. Additionally, mobile users have the option to connect to the Google Maps application as a free add-on.
\begin{figure}[htb]
\begin{minipage}[b]{.3\linewidth}
\centering
\centerline{\includegraphics[width=2.0cm]{mobresult1.jpeg}}
\end{minipage}
\hfill
\begin{minipage}[b]{0.3\linewidth}
\centering
\centerline{\includegraphics[width=2.0cm]{mobresult2.jpeg}}
\end{minipage}
\hfill
\begin{minipage}[b]{0.3\linewidth}
\centering
\centerline{\includegraphics[width=2.0cm]{mobresult3.jpeg}}
\end{minipage}
\caption{Mobile application interface with three pages for requesting the preferred air quality data and integrating with Google Maps application}
\label{fig:res}
\end{figure}
\vspace{-0.35cm}
\subsection{Web Application}
\begin{figure}[htb]
\begin{minipage}[b]{.3\linewidth}
\centering
\centerline{\includegraphics[width=2.0cm]{wp3.jpeg}}
\end{minipage}
\hfill
\begin{minipage}[b]{0.3\linewidth}
\centering
\centerline{\includegraphics[width=2.0cm]{wp2.jpeg}}
\end{minipage}
\hfill
\begin{minipage}[b]{0.3\linewidth}
\centering
\centerline{\includegraphics[width=2.0cm]{wp1.jpeg}}
\end{minipage}
\caption{Web application interface with the homepage to enter the coordinates of a specific location in order to obtain the air quality data, and the corresponding result pages displaying the requested data with respect to the recommended range of each air quality parameter}
\label{fig:res}
\end{figure}
Web users can input the preferred location coordinates on the web application's homepage, and the corresponding output is directed to the results page. On the homepage, the user can obtain a clear idea about the air quality parameters and their standard definitions from the WHO \cite{N2}. On the results page, web users can observe the corresponding air quality parameter values according to their requested locations, along with the average safe range of those parameters.
\section{Conclusion}
\label{sec:majhead}
The proposed system presents a novel, semantically distributed, easily expandable, and real-time IoT framework empowered by a machine learning model to identify and forecast air quality parameters in a low-cost implementation. The NodeRED framework obtains primary data from airly, and the integrated NodeRED dashboard processes, visualizes, and stores the collected air quality and weather data as a client of the sensor network. End-users may access sensor data as well as forecast data through quantitative and visual representations via the built-in mobile and web applications.
Since the primary sensor data comes from airly, the proposed system has limited control over the primary sensor data. Therefore, in order to extend the capability of the system in acquiring primary sensor data, the system can be integrated with an alternative on-site sensor network to obtain localized primary air quality data. Furthermore, the proposed system has the potential to be developed into a real-time, accurate, and location-precise health alarming system in the future.
\section{Acknowledgments}
\label{sec:acknowledgments}
The authors would like to extend their gratitude to Prof. Dileeka Dias of the Department of Electronic and Telecommunication Engineering (ENTC), University of Moratuwa, for providing valuable guidance. Further, we would like to thank our colleagues at ENTC for their helpful suggestions and feedback.
\bibliographystyle{IEEEtran}
\section{Introduction}
In light of the detection of the gravitational wave (GW) events \cite{LIGOScientific:2016aoc,LIGOScientific:2017vwq} as well as the current and forthcoming GW experiments, including LISA \cite{LISA:2017pwj}, BBO \cite{Harry:2006fi}, KAGRA \cite{Kawamura:2011zz}, ET \cite{Sathyaprakash:2012jk}, Taiji \cite{Hu:2017mde,Ruan:2018tsw}, TianQin \cite{TianQin:2015yph} and Ali \cite{Li:2017drr}, GWs have supplied us with a new tool to explore the nature of gravity.
General Relativity (GR) propagates two polarization modes of the GWs, with exactly the same amplitude and propagating speed as that of light.
One topic of interest concerning the GWs is the possible parity violation in the gravity theory as well as in the early universe, which induces different behavior of the two polarization modes of the GWs.
One of the parity violating (PV) theories is the Chern-Simons (CS) modified gravity \cite{Lue:1998mq,Jackiw:2003pm}, which has been studied extensively in cosmology and GWs \cite{Satoh:2007gn,Saito:2007kt,Satoh:2007gn,Alexander:2009tp,Yunes:2010yf,Gluscevic:2010vv,Yagi:2012ya,Dyda:2012rj,Myung:2014jha,Alexander:2017jmt,Yagi:2017zhb,Kawai:2017kqt,Bartolo:2017szm,Bartolo:2018elp,Nair:2019iur,Nishizawa:2018srh,Odintsov:2019mlf,Fu:2020tlw,Fronimos:2021czc,Odintsov:2022hxu,Odintsov:2022cbm,Li:2022grj,Cai:2022lec,Peng:2022ttg}.
Recently, the induced GWs from the scalar perturbations in CS modified gravity have also been studied \cite{Zhang:2022xmm}.
The CS modified gravity can be extended by including higher order derivatives of the scalar field \cite{Crisostomi:2017ugk}, which is proven to be ghost-free on a cosmological background.
While on the cosmological background or generally when the scalar field is timelike, the scalar-tensor theory is equivalent to a metric theory respecting only the spatial covariance, which we refer to as the spatially covariant gravity (SCG) \cite{Gao:2014soa,Gao:2014fra,Gao:2020yzr,Hu:2021bbo,Hu:2021yaq}.
The well-studied effective field theory of inflation \cite{Creminelli:2006xe,Cheung:2007st} as well as the Ho\v{r}ava gravity \cite{Horava:2009uw,Blas:2009qj} can be viewed as subclasses of the SCG theories.
The Ho\v{r}ava gravity with parity violation was explored in \cite{Takahashi:2009wc,Wang:2012fi,Zhu:2013fja}.
The polarized GWs in such Lorentz breaking PV gravity models have been studied in \cite{Myung:2009ug,Cannone:2015rra,Zhao:2019szi,Zhao:2019xmm,Qiao:2019hkz,Qiao:2019wsh,Qiao:2021fwi,Gong:2021jgg}.
Within the framework of SCG, the general equations of motion for the polarized gravitational waves on a cosmological background was derived in \cite{Gao:2019liu}.
The polarized GWs exhibit interesting features such as the velocity and amplitude birefringence phenomena, i.e., the propagating velocities and the frictional terms in the equations of motion for the two polarized modes of GWs become different \cite{Alexander:2004wk,Mylova:2019jrj,Biagetti:2020lpx,Wang:2020pgu,Wang:2021gqm,Wang:2020cub,Hu:2020rub,Bartolo:2020gsh,Orlando:2022rih,Chen:2022soq,Zhao:2022pun}.
See \cite{Zhu:2022dfq,Qiao:2022mln} for recent reviews and more references therein.
Recently there also arises interest on modified gravity theories based on non-Riemannian geometry, i.e., with torsion and/or nonmetricity tensors.
In particular, with nonmetricity tensor $Q_{\rho\mu\nu}\equiv \nabla_{\rho}g_{\mu\nu}$ and vanishing curvature tensor, symmetric teleparallel gravity and its extensions (e.g., $f(Q)$ gravity) have also been studied \cite{Nester:1998mp,Adak:2005cd,Adak:2006rx,Adak:2008gd,Mol:2014ooa,Lu:2019hra,Xu:2020yeg,BeltranJimenez:2018vdo,BeltranJimenez:2019tme,Lu:2019hra,Lazkoz:2019sjl,Albuquerque:2022eac,Dimakis:2022rkd,Zhao:2021zab,Jimenez:2022uvo}.
Through the so-called ``geometric trinity'' \citep{BeltranJimenez:2017tkd,BeltranJimenez:2019odq,Jimenez:2019woj,BeltranJimenez:2019tme,Gomes:2022vrc}, it can be shown that the curvature, torsion and nonmetricity tensors provide three equivalent and complementary approaches to describing gravity.
The ``scalar-torsion'' and ``scalar-nonmetricity'' theories, i.e., general couplings between the scalar field and torsion and/or nonmetricity tensor have also been considered in \citep{Bahamonde:2017wwk,Bahamonde:2019shr,Runkla:2018xrv,Jarv:2018bgs,Runkla:2018xrv,Hohmann:2018wxu,Hohmann:2018xnb,Soudi:2018dhv}.
See \cite{Hehl:1994ue,Heisenberg:2018vsk,Krssak:2018ywd,Bahamonde:2021gfp,Lu:2021wif} for reviews and more references therein.
The simplest term corresponding to the CS term in the presence of torsion is the so-called Nieh-Yan (NY) term \cite{Nieh:1981ww}.
The polarized GWs have been extensively studied with NY term and its extensions (i.e., the parity violating extension of teleparallel equivalent General Relativity) \cite{Chatzistavrakidis:2020wum,Cai:2021uup,Wu:2021ndf,Langvik:2020nrs,Li:2020xjt,Li:2021wij,Rao:2021azn,Li:2022mti}, as well as in more general models with non-vanishing torsion and/or nonmetricity tensors \cite{Hohmann:2020dgy,Bombacigno:2021bpk,Iosifidis:2020dck,Hohmann:2022wrk,Conroy:2019ibo,Iosifidis:2021bad,Pagani:2015ema,Boudet:2022nub,Bombacigno:2022naf,Li:2021mdp,Li:2022vtn,Iosifidis:2018zwo}.
In this work we investigate the polarized gravitational waves in a class of parity violating scalar-nonmetricity gravity theories.
We concentrate on scalar monomials built of the nonmetricity tensor and a scalar field.
The nonmetricity tensor can be coupled to the first derivative of the scalar field.
The authors of \citep{Conroy:2019ibo} considered 3 types of PV scalar-nonmetricity monomials: $\epsilon\phi\phi QQ$, $\epsilon\phi Q\nabla Q$ and $\epsilon\phi\phi\nabla Q\nabla Q$, where $\epsilon$, $Q$ and $\phi$ stand for the Levi-Civita tensor, the nonmetricity tensor and the first order derivative of the scalar field, respectively.
Generally, monomials of the form $\sim \nabla Q\nabla Q$ are quadratic in the second order derivatives of the metric and thus possibly suffer from the ghost problem.
According to the order of derivatives, monomials that are cubic in the nonmetricity tensor, i.e., of the form $\sim QQQ$ and $\sim \epsilon QQQ$, have the same importance as those of the form $\sim Q\nabla Q$.
It is thus natural to build monomials of cubic order in the nonmetricity tensor and to investigate their implications for the GWs.
It is well-known that in the usual scalar-tensor theory, a non-canonical (i.e., non-quadratic) kinetic term for the scalar field will modify the propagating speed of the scalar perturbation.
In our case, the monomials in the form $\sim QQQ$ just play the same role as the non-canonical kinetic term for the spacetime metric.
One would expect that the propagating speed of the GWs will also get modified.
Therefore the observations on the propagating speed of the GWs can be used to constrain the theory.
This paper is devoted to this issue.
This paper is organized as follows. In Sec. \ref{sec:pvsq}, we build the scalar monomials in both the parity preserving and violating cases up to the cubic order in the nonmetricity tensor.
In Sec. \ref{sec:gws}, we consider the linear tensor perturbations in our model on a cosmological background, and derive the equations of motion for the gravitational waves.
In Sec. \ref{sec:con} we summarize our result.
Throughout this paper we choose the unit $8\pi G =1$ and the convention for the metric $\{-,+,+,+\}$.
\section{Parity violating scalar-nonmetricity theory} \label{sec:pvsq}
The nonmetricity tensor is defined by
\begin{equation}
Q_{\rho\mu\nu} \coloneqq \nabla_{\rho} g_{\mu\nu},
\end{equation}
where $g_{\mu\nu}$ is the spacetime metric and $\nabla$ is a general affine connection.
For later convenience, we denote
\begin{equation}
Q_{\mu}\equiv Q_{\mu\phantom{\rho}\rho}^{\phantom{\mu}\rho} = g^{\rho\sigma} Q_{\mu\rho\sigma},\quad q_{\mu}\equiv Q_{\phantom{\rho}\rho\mu}^{\rho} = g^{\rho\sigma} Q_{\rho\sigma\mu},
\end{equation}
for shorthands.
As in the symmetric teleparallelism, we assume that the affine connection is free of curvature and torsion,
\begin{equation}
R_{\phantom{\mu}\nu\rho\sigma}^{\mu} \equiv \partial_{\rho}\Gamma_{\phantom{\mu}\nu\sigma}^{\mu}-\partial_{\sigma}\Gamma_{\phantom{\mu}\nu\rho}^{\mu}+\Gamma_{\phantom{\mu}\lambda\rho}^{\mu}\Gamma_{\phantom{\lambda}\nu\sigma}^{\lambda}-\Gamma_{\phantom{\mu}\lambda\sigma}^{\mu}\Gamma_{\phantom{\lambda}\nu\rho}^{\lambda} = 0,
\end{equation}
and
\begin{equation}
T_{\phantom{\rho}\mu\nu}^{\rho} \equiv \Gamma_{\phantom{\rho}\nu\mu}^{\rho}-\Gamma_{\phantom{\rho}\mu\nu}^{\rho}=0.
\end{equation}
As a result, the coefficients of the affine connection take the general form \cite{BeltranJimenez:2017tkd,DAmbrosio:2020nqu}
\begin{equation}
\Gamma_{\phantom{\alpha}\beta\mu}^{\alpha}=\frac{\partial x^{\alpha}}{\partial\xi^{a}}\frac{\partial^{2}\xi^{a}}{\partial x^{\mu}\partial x^{\beta}}, \label{Gamma}
\end{equation}
where $\xi^{a} = \xi^{a}(x)$ with $a=0,1,2,3$ are four general scalar fields.
It is well-known that in the presence of nonmetricity tensor (with vanishing torsion tensor), the Ricci scalar is given by
\begin{equation}
R=\mathring{R}+Q+\mathring{\nabla}_{\mu}\left(Q^{\mu}-q^{\mu}\right),\label{RicS_dec_xpl}
\end{equation}
where the $\mathring{R}$ and $\mathring{\nabla}$ are the Ricci scalar and the covariant derivative adapted to the metric compatible Levi-Civita connection, and $Q$ is the nonmetricity scalar defined by
\begin{equation}
Q\coloneqq\frac{1}{4}Q_{\rho\mu\nu}Q^{\rho\mu\nu}-\frac{1}{2}Q^{\rho\mu\nu}Q_{\mu\nu\rho}-\frac{1}{4}Q_{\mu}Q^{\mu}+\frac{1}{2}q^{\mu}Q_{\mu}. \label{Qnms}
\end{equation}
With our assumption of vanishing curvature $R=0$, we have $-Q \simeq \mathring{R}$, which is thus equivalent to the Einstein-Hilbert Lagrangian.
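As a simple worked example, choose $\xi^{a}=x^{a}$ in (\ref{Gamma}) so that $\Gamma_{\phantom{\alpha}\beta\mu}^{\alpha}=0$ and $Q_{\rho\mu\nu}=\partial_{\rho}g_{\mu\nu}$, and take a spatially flat Friedmann-Robertson-Walker metric $\mathrm{d}s^{2}=-\mathrm{d}t^{2}+a^{2}(t)\delta_{ij}\mathrm{d}x^{i}\mathrm{d}x^{j}$. The only nonvanishing components of the nonmetricity tensor are
\begin{equation}
Q_{0ij}=2a\dot{a}\delta_{ij},
\end{equation}
so that $Q_{\mu}=6H\delta_{\mu}^{0}$ with $H\equiv\dot{a}/a$ and $q_{\mu}=0$, and (\ref{Qnms}) yields $Q=6H^{2}$, consistently with $-Q\simeq\mathring{R}=6\big(\dot{H}+2H^{2}\big)$ up to the total derivative term in (\ref{RicS_dec_xpl}).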
Since (\ref{Qnms}) is equivalent to GR in the symmetric teleparallel gravity, i.e., in the framework of nonmetricity theory, it is natural to consider modifications of GR by extending (\ref{Qnms}) with more general monomials built of the nonmetricity tensor with couplings to a scalar field.
At the quadratic order in the nonmetricity tensor, besides the 4 monomials in the nonmetricity scalar (\ref{Qnms}), there is another one $q_{\mu}q^{\mu}$. Generally one may consider the linear combination of these five monomials \cite{Jimenez:2019woj}.
In this work, however, since we concentrate on the modification of GR (i.e., the equivalent nonmetricity scalar $Q$), we simply choose the nonmetricity scalar as the parity preserving term that is quadratic order in the nonmetricity tensor.
In the case of parity violation, the monomials quadratic in the nonmetricity tensor take the schematical form $\epsilon QQ\phi\cdots \phi$, where $\epsilon$ stands for the Levi-Civita tensor $\epsilon_{\mu\nu\rho\sigma} = \sqrt{-g} \varepsilon_{\mu\nu\rho\sigma}$ with $\varepsilon_{0123} = 1$, $Q$ stands for the nonmetricity tensor and $\phi$ stands for the first order derivative of the scalar field $\phi_{\mu} \equiv \nabla_{\mu}\phi$.
We find 6 independent monomials:
\begin{eqnarray}
\mathcal{F}_{1} & \coloneqq & \epsilon_{\mu\nu\rho\sigma}Q_{\phantom{\mu\nu}\lambda}^{\mu\nu}Q^{\rho\sigma\lambda},\label{calF1}\\
\mathcal{F}_{2} & \coloneqq & \epsilon_{\mu\nu\rho\sigma}Q_{\phantom{\mu\nu}\alpha}^{\mu\nu}Q_{\phantom{\rho\sigma}\beta}^{\rho\sigma}\phi^{\alpha}\phi^{\beta},\\
\mathcal{F}_{3} & \coloneqq & \epsilon_{\mu\nu\rho\beta}Q_{\phantom{\mu\nu}\lambda}^{\mu\nu}Q_{\alpha}^{\phantom{\alpha}\rho\lambda}\phi^{\alpha}\phi^{\beta},\\
\mathcal{F}_{4} & \coloneqq & \epsilon_{\mu\nu\rho\beta}Q_{\phantom{\mu\nu}\lambda}^{\mu\nu}Q_{\phantom{\lambda\rho}\alpha}^{\lambda\rho}\phi^{\alpha}\phi^{\beta},\\
\mathcal{F}_{5} & \coloneqq & \epsilon_{\mu\nu\rho\beta}Q_{\phantom{\mu\nu}\alpha}^{\mu\nu}Q_{\phantom{\rho\sigma}\sigma}^{\rho\sigma}\phi^{\alpha}\phi^{\beta},\\
\mathcal{F}_{6} & \coloneqq & \epsilon_{\mu\nu\lambda\beta}Q_{\phantom{\mu\nu}\alpha}^{\mu\nu}Q_{\phantom{\lambda}\rho\sigma}^{\lambda}\phi^{\alpha}\phi^{\beta}\phi^{\rho}\phi^{\sigma}. \label{calF6}
\end{eqnarray}
Note $\mathcal{F}_{1},\cdots,\mathcal{F}_{5}$ have been considered in \cite{Conroy:2019ibo,Li:2022vtn} (where 7 monomials are listed, of which only 5 are independent, see Appendix \ref{app:indep}).
$\mathcal{F}_{2},\cdots,\mathcal{F}_{5}$ are quadratic in the derivative of the scalar field.
Here we also include $\mathcal{F}_{6}$, which is fourth order in the derivative of the scalar field.
In summary, $\mathcal{F}_{1},\cdots ,\mathcal{F}_{6}$ are the most general parity violating monomials that are quadratic in the nonmetricity tensor with couplings to the first order derivative of a scalar field.
Next we consider the derivative of the nonmetricity tensor $\nabla_{\sigma}Q_{\rho\mu\nu} = \nabla_{\sigma}\nabla_{\rho}g_{\mu\nu}$.
In order to keep our model free from the ghost problem, we consider monomials that are linear in the derivative of the nonmetricity tensor.
As stated before, we choose only the nonmetricity scalar $Q$ as the parity preserving term quadratic in the nonmetricity tensor; therefore we do not consider parity preserving monomials of the form $Q\nabla Q$.
At the lowest order, we focus on the parity violating monomials of the form $\epsilon Q \nabla Q \phi$, where again $\phi$ stands for the first order derivative of the scalar field.
After taking into account the vanishing of curvature tensor, there are 12 contractions \cite{Conroy:2019ibo}:
\begin{align}
\mathcal{G}_{1} & =\epsilon^{\mu\nu\rho\sigma}\phi_{\alpha}Q_{\mu\nu}^{\phantom{\mu\nu}\alpha}\nabla_{\beta}Q_{\rho\sigma}^{\phantom{\rho\sigma}\beta}, & \mathcal{G}_{2} & =\epsilon^{\mu\nu\rho\sigma}\phi^{\alpha}Q_{\mu\nu}^{\phantom{\mu\nu}\beta}\nabla_{\alpha}Q_{\rho\sigma\beta},\nonumber \\
\mathcal{G}_{3} & =\epsilon^{\mu\nu\rho\sigma}\phi^{\alpha}Q_{\mu\nu}^{\phantom{\mu\nu}\beta}\nabla_{\beta}Q_{\rho\sigma\alpha}, & \mathcal{G}_{4} & =\epsilon^{\mu\nu\rho\sigma}\phi_{\mu}q_{\nu}\nabla_{\beta}Q_{\rho\sigma}^{\phantom{\rho\sigma}\beta},\nonumber \\
\mathcal{G}_{5} & =\epsilon^{\mu\nu\rho\sigma}\phi_{\mu}Q_{\nu}\nabla_{\beta}Q_{\rho\sigma}^{\phantom{\rho\sigma}\beta}, & \mathcal{G}_{6} & =\epsilon^{\mu\nu\rho\sigma}\phi_{\mu}Q_{\phantom{\alpha\beta}\nu}^{\alpha\beta}\nabla_{\alpha}Q_{\rho\sigma\beta},\nonumber \\
\mathcal{G}_{7} & =\epsilon^{\mu\nu\rho\sigma}\phi_{\mu}Q_{\phantom{\alpha\beta}\nu}^{\alpha\beta}\nabla_{\beta}Q_{\rho\sigma\alpha}, & \mathcal{G}_{8} & =\epsilon^{\mu\nu\rho\sigma}\phi_{\mu}Q_{\nu}^{\phantom{\nu}\alpha\beta}\nabla_{\alpha}Q_{\rho\sigma\beta},\nonumber \\
\mathcal{G}_{9} & =\epsilon^{\mu\nu\rho\sigma}\phi_{\mu}Q_{\nu\rho}^{\phantom{\nu\rho}\alpha}\nabla_{\alpha}Q_{\beta\phantom{\beta}\sigma}^{\phantom{\beta}\beta}, & \mathcal{G}_{10} & =\epsilon^{\mu\nu\rho\sigma}\phi_{\mu}Q_{\nu\rho}^{\phantom{\nu\rho}\alpha}\nabla_{\alpha}Q_{\sigma\beta}^{\phantom{\sigma\beta}\beta},\nonumber \\
\mathcal{G}_{11} & =\epsilon^{\mu\nu\rho\sigma}\phi_{\mu}Q_{\nu\rho}^{\phantom{\nu\rho}\alpha}\nabla_{\beta}Q_{\phantom{\beta}\sigma\alpha}^{\beta}, & \mathcal{G}_{12} & =\epsilon^{\mu\nu\rho\sigma}\phi_{\mu}Q_{\nu\rho}^{\phantom{\nu\rho}\alpha}\nabla_{\beta}Q_{\sigma\alpha}^{\phantom{\sigma\alpha}\beta}. \label{calGn}
\end{align}
In the above, upper indices of $\nabla Q$ are raised by the metric from ``outside'', e.g., $\nabla_{\beta}Q_{\rho\sigma}^{\phantom{\rho\sigma}\beta}\equiv g^{\alpha\beta}\nabla_{\alpha}Q_{\rho\sigma\beta}$, etc..
This is to ensure that $\mathcal{G}_n$'s are quadratic order in $Q_{\rho\mu\nu}$.
On the other hand, if one raises indices from ``inside'', since $\nabla_{\rho}g^{\mu\nu}=-g^{\mu\mu'}g^{\nu\nu'}\nabla_{\rho}g_{\mu'\nu'}\equiv-Q_{\rho}^{\phantom{\rho}\mu\nu}$, one has (e.g.)
\[
\nabla_{\alpha}\left(Q_{\rho\sigma}^{\phantom{\rho\sigma}\alpha}\right)\equiv\nabla_{\alpha}\left(g^{\alpha\beta}Q_{\rho\sigma\beta}\right)=\nabla_{\alpha}Q_{\rho\sigma}^{\phantom{\rho\sigma}\alpha}-Q_{\alpha}^{\phantom{\alpha}\alpha\beta}Q_{\rho\sigma\beta},
\]
which indicates that $QQQ$ terms naturally arise at the same order as $Q\nabla Q$ terms.
This fact also motivates the inclusion of monomials that are cubic order in the nonmetricity tensor, as we shall construct below.
Another motivation of considering monomials cubic order in the nonmetricity tensor comes from the analogue of the ``k-essence'' model of the scalar field theory.
For the Lagrangian of the scalar field in the form $P(X,\phi)$ with $X=-\frac{1}{2}(\partial\phi)^2$ the canonical kinetic term, it is well-known that the non-canonical kinetic term changes the propagating speed of the scalar perturbation to $c_{s}^2 = P_{,X}/(P_{,X}+2X P_{,XX})$.
The same feature appears for the tensor perturbations (see, e.g., \cite{Gao:2011vs}).
In a word, higher order terms in the nonmetricity tensor act as the non-canonical kinetic terms for the metric, and will result in a change of the propagating speed of the gravitational waves.
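For a simple illustration of this statement on the scalar side, take $P=X+\alpha X^{2}$ with a constant $\alpha$; then
\begin{equation}
c_{s}^{2}=\frac{P_{,X}}{P_{,X}+2XP_{,XX}}=\frac{1+2\alpha X}{1+6\alpha X},
\end{equation}
which reduces to $c_{s}^{2}=1$ as $\alpha\rightarrow0$ and deviates from unity as soon as the non-quadratic piece is switched on. We expect the cubic monomials constructed below to play an analogous role for the tensor modes.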
In this work, we concentrate on the monomials that are cubic order in the nonmetricity tensor, which are coupled to the first order derivative of the scalar field. For simplicity, we consider monomials that are linear in $\phi_{\mu}$.
In order to investigate their effect on the propagation of the gravitational waves, we consider both the parity preserving and parity violating cases.
For the parity preserving case, we find 36 monomials of the form $QQQ\phi$:
\begin{align}
\mathcal{C}_{1} & =Q_{\alpha\mu\rho}Q_{\phantom{\mu}\nu\sigma}^{\mu}Q^{\rho\nu\sigma}\phi^{\alpha}, & \mathcal{C}_{2} & =Q_{\mu\rho\nu}Q_{\phantom{\mu}\alpha\sigma}^{\mu}Q^{\sigma\rho\nu}\phi^{\alpha}, & \mathcal{C}_{3} & =Q_{\alpha\mu\rho}Q^{\mu}Q^{\rho}\phi^{\alpha},\nonumber \\
\mathcal{C}_{4} & =Q_{\mu}Q_{\phantom{\mu}\alpha\rho}^{\mu}Q^{\rho}\phi^{\alpha}, & \mathcal{C}_{5} & =Q_{\mu}Q^{\mu}Q_{\alpha}\phi^{\alpha}, & \mathcal{C}_{6} & =Q_{\mu}Q^{\mu}q_{\alpha}\phi^{\alpha},\nonumber \\
\mathcal{C}_{7} & =Q_{\rho\mu\nu}Q^{\rho\mu\nu}Q_{\alpha}\phi^{\alpha}, & \mathcal{C}_{8} & =Q_{\rho\mu\nu}Q^{\rho\mu\nu}q_{\alpha}\phi^{\alpha}, & \mathcal{C}_{9} & =Q_{\mu\alpha\rho}Q_{\phantom{\rho}\nu\sigma}^{\rho}Q^{\nu\mu\sigma}\phi^{\alpha},\nonumber \\
\mathcal{C}_{10} & =Q_{\alpha\mu\rho}Q_{\phantom{\mu}\nu\sigma}^{\mu}Q^{\nu\rho\sigma}\phi^{\alpha}, & \mathcal{C}_{11} & =Q_{\mu\nu\sigma}Q_{\phantom{\mu}\alpha\rho}^{\mu}Q^{\nu\rho\sigma}\phi^{\alpha}, & \mathcal{C}_{12} & =Q_{\rho\mu\nu}Q^{\mu\rho\nu}Q_{\alpha}\phi^{\alpha},\nonumber \\
\mathcal{C}_{13} & =Q_{\rho\mu\nu}Q^{\mu\rho\nu}q_{\alpha}\phi^{\alpha}, & \mathcal{C}_{14} & =Q_{\alpha\mu\rho}Q_{\phantom{\mu\rho}\nu}^{\mu\rho}Q^{\nu}\phi^{\alpha}, & \mathcal{C}_{15} & =Q_{\mu\rho\nu}Q_{\phantom{\mu\rho}\alpha}^{\mu\rho}Q^{\nu}\phi^{\alpha},\nonumber \\
\mathcal{C}_{16} & =Q_{\mu\alpha\rho}Q_{\phantom{\rho\mu}\nu}^{\rho\mu}Q^{\nu}\phi^{\alpha}, & \mathcal{C}_{17} & =q_{\mu}Q^{\mu}Q_{\alpha}\phi^{\alpha}, & \mathcal{C}_{18} & =q_{\mu}Q^{\mu}q_{\alpha}\phi^{\alpha},\nonumber \\
\mathcal{C}_{19} & =Q_{\alpha\mu\rho}Q_{\nu}Q^{\nu\mu\rho}\phi^{\alpha}, & \mathcal{C}_{20} & =Q_{\mu\alpha\rho}Q_{\nu}Q^{\nu\mu\rho}\phi^{\alpha}, & \mathcal{C}_{21} & =Q_{\alpha\mu\rho}Q_{\nu\sigma}^{\phantom{\nu\sigma}\rho}Q^{\nu\mu\sigma}\phi^{\alpha},\nonumber \\
\mathcal{C}_{22} & =Q_{\mu\alpha\rho}Q_{\nu\sigma}^{\phantom{\nu\sigma}\rho}Q^{\nu\mu\sigma}\phi^{\alpha}, & \mathcal{C}_{23} & =Q_{\alpha\mu\rho}Q_{\nu\sigma}^{\phantom{\nu\sigma}\mu}Q^{\sigma\rho\nu}\phi^{\alpha}, & \mathcal{C}_{24} & =Q_{\mu\alpha\rho}Q_{\nu\sigma}^{\phantom{\nu\sigma}\mu}Q^{\sigma\rho\nu}\phi^{\alpha},\nonumber \\
\mathcal{C}_{25} & =Q_{\mu\alpha\rho}Q^{\rho}q^{\mu}\phi^{\alpha}, & \mathcal{C}_{26} & =Q_{\alpha\mu\rho}Q^{\mu}q^{\rho}\phi^{\alpha}, & \mathcal{C}_{27} & =Q_{\mu}Q_{\phantom{\nu}\alpha\rho}^{\mu}q^{\rho}\phi^{\alpha},\nonumber \\
\mathcal{C}_{28} & =Q_{\alpha\mu\rho}q^{\mu}q^{\rho}\phi^{\alpha}, & \mathcal{C}_{29} & =Q_{\mu\alpha\rho}q^{\mu}q^{\rho}\phi^{\alpha}, & \mathcal{C}_{30} & =Q_{\alpha\mu\rho}Q_{\phantom{\mu\rho}\nu}^{\mu\rho}q^{\nu}\phi^{\alpha},\nonumber \\
\mathcal{C}_{31} & =Q_{\mu\rho\nu}Q_{\phantom{\mu\rho}\alpha}^{\mu\rho}q^{\nu}\phi^{\alpha}, & \mathcal{C}_{32} & =Q_{\mu\alpha\rho}Q_{\phantom{\rho\mu}\nu}^{\rho\mu}q^{\nu}\phi^{\alpha}, & \mathcal{C}_{33} & =q_{\mu}q^{\mu}Q_{\alpha}\phi^{\alpha},\nonumber \\
\mathcal{C}_{34} & =q_{\mu}q^{\mu}q_{\alpha}\phi^{\alpha}, & \mathcal{C}_{35} & =Q_{\alpha\mu\rho}Q_{\nu}^{\phantom{\zeta}\mu\rho}q^{\nu}\phi^{\alpha}, & \mathcal{C}_{36} & =Q_{\mu\alpha\rho}Q_{\nu}^{\phantom{\zeta}\mu\rho}q^{\nu}\phi^{\alpha}. \label{calCn}
\end{align}
For the parity violating case, we find 44 monomials of the form $\epsilon QQQ\phi$:
\begin{align}
\mathcal{D}_{1} & =\epsilon^{\mu\nu\rho\sigma}Q_{\phantom{\alpha\beta}\rho}^{\alpha\beta}Q_{\alpha\nu}^{\phantom{\alpha\nu}\delta}Q_{\delta\beta\sigma}\phi_{\mu}, & \mathcal{D}_{2} & =\epsilon^{\mu\nu\rho\sigma}Q_{\phantom{\alpha\beta}\nu}^{\alpha\beta}Q_{\beta\rho}^{\phantom{\beta\rho}\delta}Q_{\delta\alpha\sigma}\phi_{\mu}, & \mathcal{D}_{3} & =\epsilon^{\mu\nu\rho\sigma}Q_{\nu}^{\phantom{\nu}\alpha\beta}Q_{\alpha\rho}^{\phantom{\alpha\rho}\delta}Q_{\delta\beta\sigma}\phi_{\mu},\nonumber \\
\mathcal{D}_{4} & =\epsilon^{\mu\nu\rho\sigma}Q_{\mu\nu}^{\phantom{\mu\nu}\alpha}Q_{\rho}^{\phantom{\rho}\beta\delta}Q_{\beta\delta\sigma}\phi_{\alpha}, & \mathcal{D}_{5} & =\epsilon^{\mu\nu\rho\sigma}q_{\nu}Q_{\rho}^{\phantom{\rho}\delta\beta}Q_{\beta\delta\sigma}\phi_{\mu}, & \mathcal{D}_{6} & =\epsilon^{\mu\nu\rho\sigma}Q_{\nu}Q_{\rho}^{\phantom{\rho}\alpha\beta}Q_{\alpha\beta\sigma}\phi_{\mu},\nonumber \\
\mathcal{D}_{7} & =\epsilon^{\mu\nu\rho\sigma}Q_{\alpha}^{\phantom{\alpha}\beta\delta}Q_{\beta\delta\nu}Q_{\rho\sigma}^{\phantom{\rho\sigma}\alpha}\phi_{\mu}, & \mathcal{D}_{8} & =\epsilon^{\mu\nu\rho\sigma}Q_{\alpha}^{\phantom{\alpha}\beta\delta}Q_{\mu\nu\beta}Q_{\rho\sigma\delta}\phi^{\alpha}, & \mathcal{D}_{9} & =\epsilon^{\mu\nu\rho\sigma}Q_{\beta\delta\alpha}Q_{\mu\nu}^{\phantom{\mu\nu}\beta}Q_{\rho\sigma}^{\phantom{\rho\sigma}\delta}\phi^{\alpha},\nonumber \\
\mathcal{D}_{10} & =\epsilon^{\mu\nu\rho\sigma}Q_{\alpha\mu}^{\phantom{\alpha\mu}\beta}Q_{\beta\nu}^{\phantom{\beta\nu}\delta}Q_{\rho\sigma\delta}\phi^{\alpha}, & \mathcal{D}_{11} & =\epsilon^{\mu\nu\rho\sigma}Q_{\phantom{\beta\delta}\nu}^{\beta\delta}Q_{\beta\mu}^{\phantom{\beta\mu}\alpha}Q_{\rho\sigma\delta}\phi_{\alpha}, & \mathcal{D}_{12} & =\epsilon^{\mu\nu\rho\sigma}Q_{\alpha\nu}^{\phantom{\alpha\nu}\delta}Q^{\alpha}Q_{\rho\sigma\delta}\phi_{\mu},\nonumber \\
\mathcal{D}_{13} & =\epsilon^{\mu\nu\rho\sigma}Q_{\alpha\beta\nu}Q^{\alpha\beta\delta}Q_{\rho\delta\sigma}\phi_{\mu}, & \mathcal{D}_{14} & =\epsilon^{\mu\nu\rho\sigma}Q_{\alpha}^{\phantom{\alpha}\delta\beta}Q_{\beta\nu}^{\phantom{\beta\nu}\alpha}Q_{\rho\sigma\delta}\phi_{\mu}, & \mathcal{D}_{15} & =\epsilon^{\mu\nu\rho\sigma}Q_{\mu}^{\phantom{\mu}\beta\alpha}Q_{\beta\nu}^{\phantom{\beta\nu}\delta}Q_{\rho\sigma\delta}\phi_{\alpha},\nonumber \\
\mathcal{D}_{16} & =\epsilon^{\mu\nu\rho\sigma}q^{\beta}Q_{\beta\nu}^{\phantom{\beta\nu}\delta}Q_{\rho\sigma\delta}\phi_{\mu}, & \mathcal{D}_{17} & =\epsilon^{\mu\nu\rho\sigma}Q_{\alpha}Q_{\mu\nu}^{\phantom{\mu\nu}\delta}Q_{\rho\sigma\delta}\phi^{\alpha}, & \mathcal{D}_{18} & =\epsilon^{\mu\nu\rho\sigma}q_{\alpha}Q_{\mu\nu}^{\phantom{\mu\nu}\delta}Q_{\rho\sigma\delta}\phi^{\alpha},\nonumber \\
\mathcal{D}_{19} & =\epsilon^{\mu\nu\rho\sigma}Q_{\alpha\mu}^{\phantom{\alpha\mu}\beta}Q_{\delta\beta\nu}Q_{\rho\sigma}^{\phantom{\rho\sigma}\delta}\phi^{\alpha}, & \mathcal{D}_{20} & =\epsilon^{\mu\nu\rho\sigma}Q_{\beta\alpha\mu}Q_{\delta\nu}^{\phantom{\delta\nu}\beta}Q_{\rho\sigma}^{\phantom{\rho\sigma}\delta}\phi^{\alpha}, & \mathcal{D}_{21} & =\epsilon^{\mu\nu\rho\sigma}Q_{\beta}Q_{\delta\nu}^{\phantom{\delta\nu}\beta}Q_{\rho\sigma}^{\phantom{\rho\sigma}\delta}\phi_{\mu},\nonumber \\
\mathcal{D}_{22} & =\epsilon^{\mu\nu\rho\sigma}Q_{\mu\alpha\beta}Q_{\delta\nu}^{\phantom{\delta\nu}\beta}Q_{\rho\sigma}^{\phantom{\rho\sigma}\delta}\phi^{\alpha}, & \mathcal{D}_{23} & =\epsilon^{\mu\nu\rho\sigma}q_{\beta}Q_{\delta\nu}^{\phantom{\delta\nu}\beta}Q_{\rho\sigma}^{\phantom{\rho\sigma}\delta}\phi_{\mu}, & \mathcal{D}_{24} & =\epsilon^{\mu\nu\rho\sigma}Q_{\mu\nu\alpha}Q_{\delta}Q_{\rho\sigma}^{\phantom{\rho\sigma}\delta}\phi^{\alpha},\nonumber \\
\mathcal{D}_{25} & =\epsilon^{\mu\nu\rho\sigma}q_{\nu}Q_{\delta}Q_{\rho\sigma}^{\phantom{\rho\sigma}\delta}\phi_{\mu}, & \mathcal{D}_{26} & =\epsilon^{\mu\nu\rho\sigma}Q_{\nu\delta\alpha}Q_{\phantom{\alpha\delta}\beta}^{\alpha\delta}Q_{\rho\sigma}^{\phantom{\rho\sigma}\beta}\phi_{\mu}, & \mathcal{D}_{27} & =\epsilon^{\mu\nu\rho\sigma}Q_{\alpha\mu\beta}Q_{\nu\delta}^{\phantom{\nu\delta}\beta}Q_{\rho\sigma}^{\phantom{\rho\sigma}\delta}\phi^{\alpha},\nonumber \\
\mathcal{D}_{28} & =\epsilon^{\mu\nu\rho\sigma}Q_{\beta\alpha\mu}Q_{\nu\delta}^{\phantom{\nu\delta}\beta}Q_{\rho\sigma}^{\phantom{\rho\sigma}\delta}\phi^{\alpha}, & \mathcal{D}_{29} & =\epsilon^{\mu\nu\rho\sigma}Q_{\alpha}Q_{\nu\delta}^{\phantom{\nu\delta}\alpha}Q_{\rho\sigma}^{\phantom{\rho\sigma}\delta}\phi_{\mu}, & \mathcal{D}_{30} & =\epsilon^{\mu\nu\rho\sigma}Q_{\mu\beta}^{\phantom{\mu\beta}\alpha}Q_{\nu\delta}^{\phantom{\nu\delta}\beta}Q_{\rho\sigma}^{\phantom{\rho\sigma}\delta}\phi_{\alpha},\nonumber \\
\mathcal{D}_{31} & =\epsilon^{\mu\nu\rho\sigma}q_{\beta}Q_{\nu\delta}^{\phantom{\nu\delta}\beta}Q_{\rho\sigma}^{\phantom{\rho\sigma}\delta}\phi_{\mu}, & \mathcal{D}_{32} & =\epsilon^{\mu\nu\rho\sigma}Q_{\mu\nu}^{\phantom{\mu\nu}\alpha}q^{\delta}Q_{\rho\sigma\delta}\phi_{\alpha}, & \mathcal{D}_{33} & =\epsilon^{\mu\nu\rho\sigma}Q_{\alpha\mu}^{\phantom{\alpha\mu}\beta}Q_{\nu\beta\rho}q_{\sigma}\phi^{\alpha},\nonumber \\
\mathcal{D}_{34} & =\epsilon^{\mu\nu\rho\sigma}Q_{\beta\alpha\mu}Q_{\nu\rho}^{\phantom{\nu\rho}\beta}q_{\sigma}\phi^{\alpha}, & \mathcal{D}_{35} & =\epsilon^{\mu\nu\rho\sigma}Q_{\mu\alpha\beta}Q_{\nu\rho}^{\phantom{\nu\rho}\beta}q_{\sigma}\phi^{\alpha}, & \mathcal{D}_{36} & =\epsilon^{\mu\nu\rho\sigma}q_{\beta}Q_{\nu\rho}^{\phantom{\nu\rho}\beta}q_{\sigma}\phi_{\mu},\nonumber \\
\mathcal{D}_{37} & =\epsilon^{\mu\nu\rho\sigma}Q_{\alpha\beta\mu}Q_{\nu\rho}^{\phantom{\nu\rho}\beta}Q_{\sigma}\phi^{\alpha}, & \mathcal{D}_{38} & =\epsilon^{\mu\nu\rho\sigma}Q_{\beta\alpha\mu}Q_{\nu\rho}^{\phantom{\nu\rho}\beta}Q_{\sigma}\phi^{\alpha}, & \mathcal{D}_{39} & =\epsilon^{\mu\nu\rho\sigma}Q_{\alpha}Q_{\nu\rho}^{\phantom{\nu\rho}\alpha}Q_{\sigma}\phi_{\mu},\nonumber \\
\mathcal{D}_{40} & =\epsilon^{\mu\nu\rho\sigma}Q_{\mu\alpha\beta}Q_{\nu\rho}^{\phantom{\nu\rho}\beta}Q_{\sigma}\phi^{\alpha}, & \mathcal{D}_{41} & =\epsilon^{\mu\nu\rho\sigma}q_{\beta}Q_{\nu\rho}^{\phantom{\nu\rho}\beta}Q_{\sigma}\phi_{\mu}, & \mathcal{D}_{42} & =\epsilon^{\mu\nu\rho\sigma}Q_{\mu\nu\alpha}q_{\rho}Q_{\sigma}\phi^{\alpha},\nonumber \\
\mathcal{D}_{43} & =\epsilon^{\mu\nu\rho\sigma}Q_{\nu\alpha\beta}Q_{\phantom{\alpha\delta}\rho}^{\alpha\delta}Q_{\sigma\delta}^{\phantom{\sigma\delta}\beta}\phi_{\mu}, & \mathcal{D}_{44} & =\epsilon^{\mu\nu\rho\sigma}Q_{\nu\rho}^{\phantom{\nu\rho}\alpha}Q_{\alpha}^{\phantom{\alpha}\delta\beta}Q_{\sigma\beta\delta}\phi_{\mu}. \label{calDn}
\end{align}
We emphasize that in constructing the above monomials, we only made use of the (anti-)symmetries of the $\epsilon$-tensor and the nonmetricity tensor. In particular, we have not examined whether additional relations among these monomials arise from the identity (\ref{antisymm_id}).
Combining together all the monomials in the above, the full action we shall consider is given by
\begin{equation}
S=\int\mathrm{d}^{4}x\sqrt{-g}\left(-\frac{Q}{2}+\sum_{n=1}^{6}f_{n}\mathcal{F}_{n}+\sum_{n=1}^{12}g_{n}\mathcal{G}_{n}+\sum_{n=1}^{36}c_{n}\mathcal{C}_{n}+\sum_{n=1}^{44}d_{n}\mathcal{D}_{n}-\frac{1}{2}\partial_{\mu}\phi\partial^{\mu}\phi-V\left(\phi\right)\right), \label{action}
\end{equation}
where $\mathcal{F}_{n}$, $\mathcal{G}_{n}$, $\mathcal{C}_{n}$ and $\mathcal{D}_{n}$ are the scalar-nonmetricity monomials listed in the above, and the coefficients $f_{n}$, $g_{n}$, $c_{n}$ and $d_{n}$ are functions of the scalar field $\phi$ only.
In (\ref{action}) we have also introduced a canonical kinetic term for the scalar field, while neglecting other matter content in the universe.
\section{Polarization of the gravitational waves} \label{sec:gws}
In this section, we investigate the propagation of the gravitational waves in the model (\ref{action}) on a cosmological background.
Since only the tensor perturbations are involved, the perturbed metric can be parametrized by
\begin{equation}
\mathrm{d}s^{2}=-\mathrm{d}t^{2}+a^{2}\left(t\right)\mathfrak{g}_{ij}\mathrm{d}x^{i}\mathrm{d}x^{j},
\end{equation}
where $\mathfrak{g}_{ij}\equiv\delta_{ik}\left(e^{\bm{\gamma}}\right)_{\phantom{k}j}^{k}$ with
\begin{equation}
\left(e^{\bm{\gamma}}\right)_{\phantom{i}j}^{i}=\delta_{\phantom{i}j}^{i}+\gamma_{\phantom{i}j}^{i}+\frac{1}{2}\gamma_{\phantom{i}k}^{i}\gamma_{\phantom{k}j}^{k}+\cdots.
\end{equation}
Here $\gamma_{\phantom{i}j}^{i}$ are the transverse and traceless tensor perturbations satisfying $\gamma_{\phantom{i}i}^{i}=0$ and $\partial_{i}\gamma_{\phantom{i}j}^{i}=0$.
In the following, spatial indices are raised and lowered by $\delta^{ij}$ and $\delta_{ij}$.
When the nonmetricity tensor is present, the associated affine connection is independent of the spacetime metric. In our case with vanishing curvature and torsion tensors, the affine connection is given by (\ref{Gamma}). Therefore the four scalar fields $\xi^{a}$ are treated as independent variables, which must be determined by their own equations of motion.
Nevertheless, in the cosmological background it is possible to have the solution $\xi^{a} \propto \delta^{a}_{\mu}x^{\mu}$ such that the affine connection is identically vanishing.
Such a choice of a vanishing affine connection is dubbed the ``coincident gauge'' in the literature \cite{BeltranJimenez:2017tkd,Zhao:2021zab}.
In the following we shall work in the coincident gauge, in which the nonmetricity tensor reduces to $Q_{\rho\mu\nu} = \partial_{\rho}g_{\mu\nu}$.
In practice, since in this paper we consider only the linear tensor perturbations, a general affine connection of the form (\ref{Gamma}) actually affects only the scalar and vector perturbations, and does not contribute to the tensor perturbations.
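Concretely, in the coincident gauge the only nonvanishing components of the nonmetricity tensor for the above parametrization are
\[
Q_{0ij}=\partial_{t}\left(a^{2}\mathfrak{g}_{ij}\right)=2a\dot{a}\mathfrak{g}_{ij}+a^{2}\dot{\mathfrak{g}}_{ij},\qquad Q_{kij}=a^{2}\partial_{k}\mathfrak{g}_{ij},
\]
which are the expressions entering the expansion of the monomials below.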
Our next task is to derive the quadratic action for the tensor perturbations in the cosmological background.
Not all the monomials in (\ref{action}) will contribute to the tensor perturbations.
For the $\mathcal{F}_{n}$'s, only $\mathcal{F}_{1}$ and $\mathcal{F}_{3}$ contribute to the tensor perturbations:
\begin{eqnarray}
\mathcal{F}_{1} & \supset & -2a^{2}\varepsilon^{ijk}\dot{\gamma}_{\phantom{l}i}^{l}\partial_{j}\gamma_{lk},\\
\mathcal{F}_{3} & \supset & -a^{2}\dot{\phi}^{2}\varepsilon^{ijk}\dot{\gamma}_{\phantom{l}i}^{l}\partial_{j}\gamma_{lk},
\end{eqnarray}
where $\varepsilon^{ijk}$ is the 3-dimensional Levi-Civita symbol with $\varepsilon_{123}= \varepsilon^{123} = 1$.
In the above no integration by parts has been made.
Similarly, the contributions from the $\mathcal{G}_{n}$'s are
\begin{eqnarray}
\mathcal{G}_{2} & \supset & -6a\dot{a}\dot{\phi}\varepsilon^{ijk}\partial_{j}\gamma_{\phantom{l}i}^{l}\dot{\gamma}_{lk}-a^{2}\dot{\phi}\varepsilon^{ijk}\dot{\gamma}_{\phantom{l}i}^{l}\partial_{k}\dot{\gamma}_{lj}-a^{2}\dot{\phi}\varepsilon^{ijk}\partial_{j}\gamma_{\phantom{l}i}^{l}\ddot{\gamma}_{lk},\\
\mathcal{G}_{6} & \supset & \dot{\phi}\varepsilon^{ijk}\partial_{k}\partial^{l}\gamma_{mj}\partial_{l}\gamma_{\phantom{m}i}^{m}-2a\dot{a}\dot{\phi}\varepsilon^{ijk}\partial_{j}\gamma_{\phantom{l}i}^{l}\dot{\gamma}_{lk}-a^{2}\dot{\phi}\varepsilon^{ijk}\dot{\gamma}_{\phantom{l}i}^{l}\partial_{k}\dot{\gamma}_{lj}, \label{calG6}\\
\mathcal{G}_{11} & \supset & \dot{\phi}\varepsilon^{ijk}\partial_{j}\gamma_{\phantom{l}i}^{l}\partial^{2}\gamma_{lk}-4a\dot{a}\dot{\phi}\varepsilon^{ijk}\partial_{j}\gamma_{\phantom{l}i}^{l}\dot{\gamma}_{lk}-a^{2}\dot{\phi}\varepsilon^{ijk}\partial_{j}\gamma_{\phantom{l}i}^{l}\ddot{\gamma}_{lk}. \label{calG11}
\end{eqnarray}
For monomials that are cubic in the nonmetricity tensor, we find, for the parity preserving contributions:
\begin{eqnarray}
\mathcal{C}_{1} & \supset & -2\dot{a}\dot{\phi}\partial_{i}\gamma_{jk}\partial^{i}\gamma^{jk},\\
\mathcal{C}_{7} & \supset & -6\dot{a}\dot{\phi}\partial_{i}\gamma_{jk}\partial^{i}\gamma^{jk}+6a^{2}\dot{a}\dot{\phi}\dot{\gamma}_{ij}\dot{\gamma}^{ij},\\
\mathcal{C}_{19} & \supset & 6a^{2}\dot{a}\dot{\phi}\dot{\gamma}_{ij}\dot{\gamma}^{ij},\\
\mathcal{C}_{21} & \supset & -2\dot{a}\dot{\phi}\partial_{i}\gamma_{jk}\partial^{i}\gamma^{jk}+6a^{2}\dot{a}\dot{\phi}\dot{\gamma}_{ij}\dot{\gamma}^{ij},
\end{eqnarray}
and for the parity violating contributions:
\begin{eqnarray}
\mathcal{D}_{1} & \supset & 2a\dot{a}\dot{\phi}\varepsilon^{ijk}\partial_{j}\gamma_{li}\dot{\gamma}_{\phantom{l}k}^{l},\\
\mathcal{D}_{8} & \supset & -8a\dot{a}\dot{\phi}\varepsilon^{ijk}\partial_{j}\gamma_{li}\dot{\gamma}_{\phantom{l}k}^{l},\\
\mathcal{D}_{10} & \supset & -2a\dot{a}\dot{\phi}\varepsilon^{ijk}\partial_{j}\gamma_{li}\dot{\gamma}_{\phantom{l}k}^{l},\\
\mathcal{D}_{12} & \supset & -6a\dot{a}\dot{\phi}\varepsilon^{ijk}\partial_{j}\gamma_{li}\dot{\gamma}_{\phantom{l}k}^{l},\\
\mathcal{D}_{13} & \supset & -4a\dot{a}\dot{\phi}\varepsilon^{ijk}\partial_{j}\gamma_{li}\dot{\gamma}_{\phantom{l}k}^{l},\\
\mathcal{D}_{17} & \supset & -12a\dot{a}\dot{\phi}\varepsilon^{ijk}\partial_{j}\gamma_{li}\dot{\gamma}_{\phantom{l}k}^{l},\\
\mathcal{D}_{19} & \supset & 2a\dot{a}\dot{\phi}\varepsilon^{ijk}\partial_{j}\gamma_{li}\dot{\gamma}_{\phantom{l}k}^{l},\\
\mathcal{D}_{27} & \supset & 4a\dot{a}\dot{\phi}\varepsilon^{ijk}\partial_{j}\gamma_{li}\dot{\gamma}_{\phantom{l}k}^{l},\\
\mathcal{D}_{37} & \supset & 6a\dot{a}\dot{\phi}\varepsilon^{ijk}\partial_{j}\gamma_{li}\dot{\gamma}_{\phantom{l}k}^{l}.
\end{eqnarray}
Putting all the contributions together and performing integrations by parts, the full quadratic action for the tensor perturbations is found to be
\begin{eqnarray}
S_{\text{T}}^{\left(2\right)} & = & \int\mathrm{d}t\frac{\mathrm{d}^{3}k}{\left(2\pi\right)^{3}}\frac{a^{3}}{2}\bigg[\mathcal{G}_{0}\dot{\gamma}^{ij}\left(t,\bm{k}\right)\dot{\gamma}_{ij}\left(t,-\bm{k}\right)+\mathcal{G}_{1}\varepsilon^{ijk}\left(\frac{-ik_{j}}{a}\right)\dot{\gamma}_{\phantom{l}i}^{l}\left(t,\bm{k}\right)\dot{\gamma}_{lk}\left(t,-\bm{k}\right)\nonumber \\
& & \qquad\qquad -\mathcal{W}_{0}\frac{k^{2}}{a^{2}}\gamma_{ij}\left(t,\bm{k}\right)\gamma^{ij}\left(t,-\bm{k}\right)-\mathcal{W}_{-1}\left(\frac{-ik_{j}}{a}\right)\varepsilon^{ijk}\gamma_{\phantom{l}i}^{l}\left(t,\bm{k}\right)\gamma_{lk}\left(t,-\bm{k}\right)\nonumber \\
& & \qquad\qquad -\mathcal{W}_{1}\varepsilon^{ijk}\gamma_{\phantom{l}i}^{l}\left(t,\bm{k}\right)\frac{k^{2}}{a^{2}}\left(\frac{-ik_{j}}{a}\right)\gamma_{lk}\left(t,-\bm{k}\right)\bigg], \label{ST2}
\end{eqnarray}
where we have moved to the Fourier space and the coefficients $\mathcal{G}_{0}$, $\mathcal{W}_{0}$ etc. are given by
\begin{eqnarray}
\mathcal{G}_{0} & = & \frac{1}{4}+12\left(c_{7}+c_{19}+c_{21}\right)H\dot{\phi}, \label{calG0}\\
\mathcal{W}_{0} & = & \frac{1}{4}+4\left(c_{1}+3c_{7}+c_{21}\right)H\dot{\phi}, \label{calW0}\\
\mathcal{W}_{-1} & = & -2\dot{f}_{1}-4f_{1}H-\dot{f}_{3}\dot{\phi}^{2}-2f_{3}H\dot{\phi}^{2}-2f_{3}\dot{\phi}\ddot{\phi}\nonumber \\
& & +\frac{2}{a^{2}}\partial_{t}\left[\dot{\phi}a^{2}H\left(2g_{2}+g_{6}+g_{11}\right)-\frac{1}{2}\dot{\phi}a^{2}\left(\dot{g}_{2}+\dot{g}_{11}\right)-\frac{1}{2}a^{2}\ddot{\phi}\left(g_{2}+g_{11}\right)\right]\nonumber \\
& & -\frac{2}{a^{2}}\partial_{t}\left[\left(d_{1}-4d_{8}-d_{10}-3d_{12}-2d_{13}-6d_{17}+d_{19}+2d_{27}+3d_{37}\right)a^{2}H\dot{\phi}\right],\\
\mathcal{G}_{1}=\mathcal{W}_{1} & = & -2\dot{\phi}\left(g_{6}-g_{11}\right). \label{calG1calW1}
\end{eqnarray}
In (\ref{ST2}), terms proportional to $\mathcal{G}_{0}$ and $\mathcal{W}_{0}$ take the same form as those in GR, which are parity preserving. The standard results in GR, i.e., with all $\mathcal{F}_{n} = \mathcal{G}_{n} = \mathcal{C}_{n} = \mathcal{D}_{n} = 0$, correspond to $\mathcal{G}_{0} = \mathcal{W}_{0} = \frac{1}{4}$.
Terms proportional to $\mathcal{G}_{1}$, $\mathcal{W}_{-1}$ and $\mathcal{W}_{1}$ involve $\varepsilon^{ijk}$, which signal the parity violating effects in the tensor perturbations.
The quadratic action (\ref{ST2}) explicitly shows that in the parity violating scalar-nonmetricity theory (\ref{action}) there are neither extra polarization modes of the GWs nor any mixing between the two polarization modes.
The quadratic action (\ref{ST2}) takes the general form in \cite{Gao:2019liu}, which is based on the framework of spatially covariant gravity.
Compared with the case of spatially covariant gravity with parity violation, however, here there arises a contribution proportional to $\mathcal{W}_{-1}$, which is of first order in the spatial derivatives (i.e., linear in $\bm{k}$ in Fourier space).
Such contributions have been reported in \cite{Conroy:2019ibo,Li:2021mdp,Li:2022vtn}, as well as in the Nieh-Yan (NY) modified gravity with torsion \cite{Li:2020xjt,Li:2021wij,Wu:2021ndf,Cai:2021uup}.
Such contributions come from the terms with ``mixed'' temporal and spatial derivatives, which can be reduced by integration by parts using
\begin{equation}
f\left(t\right)\varepsilon^{ijk}\partial_{j}\gamma_{\phantom{l}i}^{l}\dot{\gamma}_{lk}\simeq-\frac{1}{2}\dot{f}\left(t\right)\varepsilon^{ijk}\partial_{j}\gamma_{\phantom{l}i}^{l}\gamma_{lk},
\end{equation}
which yields terms with a single spatial derivative.
It is interesting that such terms do not arise within the framework of spatially covariant gravity considered in \cite{Gao:2019liu}.
By contrast, such contributions are generally present in the parity violating scalar-nonmetricity theory: indeed, one can see from $\mathcal{W}_{-1}$ that it receives contributions from all types of $\mathcal{F}_{n}$, $\mathcal{G}_{n}$ and $\mathcal{D}_{n}$ monomials.
As a result, such terms, as well as their imprint on the gravitational waves, can be regarded as a characteristic feature of parity violating scalar-nonmetricity theories.
The tensor perturbations $\gamma_{ij}$ can be decomposed into the polarization modes
\begin{equation}
\gamma_{ij}\left(t,\bm{k}\right)=\sum_{s=\pm2}e_{ij}^{\left(s\right)}(\hat{\bm{k}})\gamma^{\left(s\right)}\left(t,\bm{k}\right),
\end{equation}
where $\hat{\bm{k}} = \frac{\bm{k}}{|\bm{k}|}$, $e_{ij}^{\left(s\right)}(\hat{\bm{k}})$ are the circular polarization tensors with $s=\pm 2$ the helicity states. The polarization tensors satisfy the transverse and traceless conditions
\begin{equation}
\delta^{ij}e_{ij}^{\left(s\right)}(\hat{\bm{k}})=0,\quad k^{i}e_{ij}^{\left(s\right)}(\hat{\bm{k}})=0.
\end{equation}
In order to fully determine the polarization tensors, we choose the phase of $e_{ij}^{\left(s\right)}(\hat{\bm{k}})$ such that \cite{Gao:2011vs}
\begin{equation}
e_{ij}^{\left(s\right)*}(\hat{\bm{k}})=e_{ij}^{\left(-s\right)}(\hat{\bm{k}})=e_{ij}^{\left(s\right)}(-\hat{\bm{k}}),
\end{equation}
where an asterisk stands for the complex conjugate. The polarization tensors are normalized to be
\begin{equation}
e_{ij}^{\left(s\right)}(\hat{\bm{k}})e^{\left(-s'\right)ij}(\hat{\bm{k}})=\delta^{ss'}.
\end{equation}
With these settings, there is a useful relation \cite{Alexander:2004wk,Satoh:2007gn,Bartolo:2017szm}
\begin{equation}
i\hat{k}^{l}\varepsilon_{lij}e_{m}^{\left(s\right)i}(\hat{\bm{k}})e^{\left(s'\right)jm}(-\hat{\bm{k}})=\frac{s}{2}\delta^{ss'}.
\end{equation}
After some manipulations, the full quadratic action for the circular polarization modes is
\begin{eqnarray}
S_{\mathrm{T}}^{\left(2\right)} & = & \int\mathrm{d}\tau\frac{\mathrm{d}^{3}k}{\left(2\pi\right)^{3}}\sum_{s=\pm2}\frac{a^{2}}{2}\bigg[\mathcal{G}^{\left(s\right)}\left(\tau,k\right)\gamma'^{\left(s\right)}\left(\tau,\bm{k}\right)\gamma'^{\left(s\right)}\left(\tau,-\bm{k}\right)\nonumber \\
& & \qquad-k^{2}\mathcal{W}^{\left(s\right)}\left(\tau,k\right)\gamma^{\left(s\right)}\left(\tau,\bm{k}\right)\gamma^{\left(s\right)}\left(\tau,-\bm{k}\right)\bigg], \label{ST2p}
\end{eqnarray}
where we have used the conformal time $\tau$ with $\mathrm{d}t = a \mathrm{d}\tau$, and a prime denotes a derivative with respect to $\tau$.
Following \cite{Gao:2019liu}, in (\ref{ST2p}) we have defined
\begin{eqnarray}
\mathcal{G}^{\left(s\right)}\left(\tau,k\right) & = & \sum_{n=0}^{1}\mathcal{G}_{n}\left(\tau\right)\left(\frac{sk}{2a}\right)^{n}=\mathcal{G}_{0}\left(\tau\right)+\mathcal{G}_{1}\left(\tau\right)\frac{sk}{2a},\\
\mathcal{W}^{\left(s\right)}\left(\tau,k\right) & = & \sum_{n=-1}^{1}\mathcal{W}_{n}\left(\tau\right)\left(\frac{sk}{2a}\right)^{n}=\mathcal{W}_{-1}\left(\tau\right)\frac{sa}{2k}+\mathcal{W}_{0}\left(\tau\right)+\mathcal{W}_{1}\left(\tau\right)\frac{sk}{2a}.
\end{eqnarray}
We emphasize that the coefficients $\mathcal{G}_{0}(\tau)$ and $\mathcal{G}_{1}(\tau)$ should not be confused with the monomials $\mathcal{G}_{n}$'s in (\ref{calGn}).
The equations of motion for the circularly polarized modes are
\begin{equation}
\gamma''{}^{\left(s\right)}\left(\tau,\bm{k}\right)+\mathcal{H}\left(2+\nu^{\left(s\right)}\right)\gamma'{}^{\left(s\right)}\left(\tau,\bm{k}\right)+\left(c_{\text{T}}^{\left(s\right)}\right)^{2}k^{2}\gamma^{\left(s\right)}\left(\tau,\bm{k}\right)=0, \quad s=\pm 2, \label{eom}
\end{equation}
where $\mathcal{H} = \frac{a'}{a}$ is the comoving Hubble parameter,
\begin{equation}
\nu^{\left(s\right)}\left(\tau,k\right) =\frac{1}{\mathcal{H}}\frac{\partial_{\tau}\mathcal{G}^{\left(s\right)}\left(\tau,k\right)}{\mathcal{G}{}^{\left(s\right)}\left(\tau,k\right)}, \label{nu}
\end{equation}
and
\begin{equation}
\left(c_{\mathrm{T}}^{\left(s\right)}\right)^{2}=\frac{\mathcal{W}_{-1}\frac{2a}{sk}+\mathcal{W}_{0}+\mathcal{W}_{1}\frac{sk}{2a}}{\mathcal{G}_{0}+\mathcal{G}_{1}\frac{sk}{2a}}. \label{cT2}
\end{equation}
The parameter $\nu^{(s)}$ defined in (\ref{nu}) is the running rate of the effective Planck mass \cite{Lagos:2019kds}, while $c_{\mathrm{T}}^{(s)}$ is regarded as the propagating speed of the GWs \cite{Saltas:2014dha,Sawicki:2016klv,Nishizawa:2017nef}.
Generally, due to the presence of parity violating terms in the action (\ref{action}), $\nu^{(+2)} \neq \nu^{(-2)}$ and thus the left/right-hand polarizations of the GWs acquire different dampings, which is the effect of ``amplitude birefringence'' \cite{Alexander:2004wk,Yunes:2010yf,Yagi:2012ya,Alexander:2017jmt,Yagi:2017zhb}.
On the other hand, $c_{\mathrm{T}}^{(+2)} \neq c_{\mathrm{T}}^{(-2)}$ implies that the left/right-hand polarizations of the GWs propagate with different velocities, which is the effect of ``velocity birefringence'' \cite{Takahashi:2009wc,Wang:2012fi,Zhu:2013fja,Nishizawa:2018srh}.
From (\ref{calG0}) and (\ref{calG1calW1}), $\mathcal{G}_{0}(\tau)$ and $\mathcal{G}_{1}(\tau)$ receive contributions from the $\mathcal{G}_{n}$ and $\mathcal{C}_{n}$ terms in the action (\ref{action}), but not from the $\mathcal{F}_{n}$ terms.
In other words, the parameter $\nu^{(s)}$ vanishes if only the $\mathcal{F}_{n}$ terms are present (besides the nonmetricity scalar $Q$).
This is also consistent with the result in \cite{Conroy:2019ibo}.
From (\ref{calG0}) and (\ref{calW0}), it is interesting to note that even without the parity violating terms (i.e., by setting $\mathcal{F}_{n} = \mathcal{G}_{n} = \mathcal{D}_{n}=0$), $\mathcal{G}_{0}(\tau)$ and $\mathcal{W}_{0}(\tau)$ will receive modifications from the $\mathcal{C}_{n}$'s, i.e., monomials of the form $QQQ$.
This justifies our motivation of introducing monomials of the form $QQQ$, which act as the non-canonical (non-quadratic) kinetic terms for the metric as the analogue of ``k-essence'', and result in the change of the propagating speed of the GWs.
Due to the modification of $c_{\mathrm{T}}^{(s)}$, generally the polarization modes of the GWs may suffer from the gradient instability if $\left(c_{\mathrm{T}}^{\left(s\right)}\right)^{2} <0 $.
In particular, for long wavelength modes, the term $\mathcal{W}_{-1}\frac{2a}{sk}$ in $\left(c_{\mathrm{T}}^{\left(s\right)}\right)^{2}$ dominates.
Without loss of generality, we assume $\mathcal{W}_{-1}>0$ and thus the left-handed polarization mode (with $s=-2$) experiences a gradient instability since $\left(c_{\mathrm{T}}^{\left(-2\right)}\right)^{2} \approx -\mathcal{W}_{-1} \frac{a}{k}<0$, which yields an exponential growth of the perturbation mode.
The situation becomes even worse as the wavelength increases, i.e., on larger scales.
As we have argued before, the contributions to $\mathcal{W}_{-1}$ are generally present in the parity violating scalar-nonmetricity theory, therefore such a gradient instability is a characteristic feature of the parity violating scalar-nonmetricity theory.
\section{Conclusion} \label{sec:con}
Parity violating features in the gravity theory, in particular in the gravitational waves, have attracted much attention recently.
In this work, we consider a class of scalar-nonmetricity gravity theories with parity violation, and investigate the propagation of the gravitational waves in this kind of theory.
The Lagrangian of the scalar-nonmetricity theory is a polynomial built of the nonmetricity tensor $Q_{\rho\mu\nu}$ coupled with the first order derivative of a scalar field $\phi$.
The parity violating monomials in the form $\sim\epsilon QQ$ and $\sim\epsilon Q\nabla Q$ have been considered in \cite{Conroy:2019ibo}.
In this work, instead of considering higher derivative terms, we consider a ``k-essence''-like generalization, i.e., terms of higher order in the nonmetricity tensor $Q_{\rho\mu\nu}$.
Such monomials, which involve only first-order derivatives of the spacetime metric, are manifestly ghost-free.
For completeness, we build monomials in both the parity preserving and parity violating cases, which are of the form $\sim QQQ\phi$ given in $\mathcal{C}_{n}$'s in (\ref{calCn}) and of the form $\sim \epsilon QQQ\phi$ given in $\mathcal{D}_{n}$'s in (\ref{calDn}), respectively.
The full action of our scalar-nonmetricity theory is given in (\ref{action}). We then investigated the tensor perturbations of this model in a cosmological background.
By deriving the quadratic action for the tensor perturbations (\ref{ST2}) and the corresponding equations of motion for the polarization modes (\ref{eom}), we examined the presence of both amplitude and velocity birefringence in the propagation of the gravitational waves in our model.
In particular, due to the presence of $\mathcal{W}_{-1}\frac{2a}{sk}$ term in the effective propagating speeds (\ref{cT2}), one of the two polarization modes of the GWs suffers from the gradient instability on large scales.
\acknowledgments
This work was partly supported by the National Natural Science Foundation of China (NSFC) under grants No. 11975020 and No. 12005309.
\label{intro}
We have frequently pointed out that there are two important things in science: (A) Finding answers to given questions, and (B) Coming up with good questions, e.g.,~\cite{Schmidhuber:90sab,powerplay2011and13,schmidhuber2020bit,schmidhuber2021cur}.
(A) is arguably just the standard problem of computer science. But how to implement the creative part (B) in artificial systems through reinforcement learning (RL), gradient-based artificial neural networks (NNs), and other machine learning methods?
To answer this question,
for three decades we have published work on artificial scientists equipped with artificial curiosity and creativity, e.g.,~\cite{Schmidhuber:90diffenglish,Schmidhuber:90sab,Schmidhuber:91singaporecur,Storck:95,Schmidhuber:97interesting,Schmidhuber:06cs,Schmidhuber:10ieeetamd,sunyi2011agi,powerplay2011and13,Srivastava2013first,ramesh2022exploring}.
Our first artificial Q\&A system designed to invent and answer questions was the intrinsic motivation-based {\bf adversarial system} from 1990~\cite{Schmidhuber:90diffenglish,Schmidhuber:90sab}.
It uses two artificial NNs.
The first NN is called the controller $C$.
$C$ probabilistically generates outputs that may influence an environment.
The second NN is called the world model $M$.
It predicts the environmental reactions to $C$'s outputs.
Using gradient descent, M minimizes its error, thus becoming a better predictor.
But in a zero-sum game, the reward-maximizing $C$ tries to find sequences of output actions that maximize the error of $M$.
$M$'s loss is the gain of $C$ (like in the later application of artificial curiosity called GANs~\cite{schmidhuber2020gan}, but also for the more general cases of sequential data and RL~\cite{Kaelbling:96,Sutton:98,wiering2012}).
4 years before a 2014 paper on GANs~\cite{goodfellow2014generative}, a well-known 2010 survey \cite{Schmidhuber:10ieeetamd} summarised the generative adversarial NNs of 1990 as follows: a ``neural network as a predictive world model is used to maximize the controller's intrinsic reward, which is proportional to the model's prediction errors'' (which are minimized).
The 2014 GANs are an instance of this where the trials are very short (like in bandit problems) and the environment simply returns 1 or 0 depending on whether the controller's (or generator's) output is in a given set \cite{schmidhuber2020gan, schmidhuber2021cur, schmidhuber2022integr}.
$C$ is asking questions through its action sequences:
What happens if I do that?
$M$ is learning to answer those questions.
$C$ is motivated to come up with questions where $M$ does not yet know the answer and loses interest in questions with known answers.
This was the start of a long series of papers on artificial curiosity and artificial scientists~\cite{Schmidhuber:10ieeetamd,Schmidhuber:04cur,Schmidhuber:12cur,schmidhuber2020mir}.
Not only the 1990s but also more recent years saw successful applications of this simple principle (and variants thereof) in sequential settings, e.g.,~\cite{Singh:05nips,Oudeyer:12intrinsic,pathak2017curiosity,burda2018curious}.
Q\&As help to understand the world which is necessary for planning~\cite{Schmidhuber:90sandiego,Schmidhuber:90diffenglish,Schmidhuber:90sab} and may boost external reward~\cite{Schmidhuber:91singaporecur,Schmidhuber:02predictable,Schmidhuber:04cur,Schmidhuber:12cur,pathak2017curiosity,burda2018curious}.
The approach of 1990~\cite{Schmidhuber:90diffenglish,Schmidhuber:90sab} makes for a fine exploration strategy in many deterministic environments.
{\bf In stochastic environments, however, it might fail.}
$C$ might learn to focus on those parts of the environment where $M$ can always get high prediction errors due to randomness, or due to computational limitations of $M$.
For example, an agent controlled by $C$ might get stuck in front of a TV screen showing highly unpredictable white noise, e.g.,~\cite{Schmidhuber:10ieeetamd} (see also~\cite{burda2018curious}).
Therefore, as pointed out in 1991, in stochastic environments, $C$'s reward should not be the errors of $M$, but (an approximation of) the {\em first derivative} of $M$'s errors across subsequent training iterations,
that is, $M$'s {\bf learning progress or improvements}~\cite{Schmidhuber:91singaporecur,Schmidhuber:07alt}.
As a consequence, despite $M$'s high errors in front of a noisy TV screen, $C$ won't get rewarded for getting stuck there, simply because $M$'s errors won't improve.
Both the totally predictable and the fundamentally unpredictable will get boring.
This simple insight led to lots of follow-up work~\cite{Schmidhuber:10ieeetamd}.
For example, one particular RL approach for artificial curiosity in stochastic environments was published in 1995~\cite{Storck:95}.
A simple $M$ learned to predict or estimate the probabilities of the environment's possible responses, given $C$'s actions.
After each interaction with the environment, $C$'s intrinsic reward was the KL-Divergence~\cite{kullback1951} between $M$'s estimated probability distributions
before and after the resulting new experience---the {\bf information gain}~\cite{Storck:95}.
This was later also called {\em Bayesian Surprise}~\cite{itti:05}.
Compare earlier work on information gain~\cite{Shannon:48} and its maximization {\em without} RL \& NNs~\cite{Fedorov:72}.
In the general RL setting where the environment is only partially observable~\cite[Sec.~6]{888}, $C$ and $M$ may greatly profit from a memory of previous events~\cite{Schmidhuber:90sandiego,Schmidhuber:90diffenglish,Schmidhuber:91nips}.
Towards this end, both $C$ and $M$ can be implemented as
LSTMs~\cite{lstm97and95,Gers:2000nc,Graves:09tpami,888}
which have become highly commercial
~\cite{googlevoice2015,wu2016google,amazon2016,facebook2017} and widely used in RL~\cite{wierstra2010,openai2019dota,deepmind2019starcraft,openai2020dex}.
The better the predictions of $M$, the fewer bits are required to encode the history $H$ of observations because short codes can be used for observations that $M$ considers highly probable~\cite{Huffman:52,Witten:87}.
That is, the learning progress of $M$ has a lot to do with the concept of {\em compression progress}~\cite{Schmidhuber:06cs,Schmidhuber:09abials,Schmidhuber:09videos,Schmidhuber:10ieeetamd}.
But it's not quite the same thing.
In particular, it does not take into account the bits of information needed to specify $M$.
A more general approach is based on algorithmic information theory, e.g.,~\cite{Solomonoff:64,Kolmogorov:65,Wallace:68,Wallace:87,LiVitanyi:97,Schmidhuber:02ijfcs}.
Here $C$'s intrinsic reward is indeed based on {\bf algorithmic compression progress}~\cite{Schmidhuber:06cs,Schmidhuber:09abials,Schmidhuber:09videos,Schmidhuber:10ieeetamd} based on
some coding scheme for the weights of the model network, e.g.,~\cite{Hochreiter:97nc1,Schmidhuber:95kol+,Schmidhuber:97nn,koutnik:gecco10,ppsn2012cncs,koutnik:gecco13,steenkiste2016wavelet}, and also a coding scheme for the history of all observations so far, given the model~\cite{Huffman:52,Wallace:68,Rissanen:78,Witten:87,Hochreiter:97nc1,Schmidhuber:06cs}.
Note that the history of science is a history of compression progress through incremental discovery of simple laws that govern seemingly complex observation sequences~\cite{Schmidhuber:06cs,Schmidhuber:09abials,Schmidhuber:09videos,Schmidhuber:10ieeetamd}.
Back in 1990, the questions asked by $C$ were restricted in the sense that they always referred to all the details of future inputs, e.g., pixels~\cite{Schmidhuber:90diffenglish,Schmidhuber:90sab}.
That’s why in 1997, a more general adversarial RL machine was built that could ignore many or all of these details and ask {\bf arbitrary abstract questions} with computable answers~\cite{Schmidhuber:97interesting,Schmidhuber:99cec,Schmidhuber:02predictable}.
Example question: if we run this policy (or program) for a while until it executes a special interrupt action, will the internal storage cell number 15 contain the value 5, or not? Again there are two learning, reward-maximising adversaries playing a zero-sum game, occasionally betting on different yes/no outcomes of such computational experiments.
The winner of such a bet gets a reward of 1, the loser -1.
So each adversary is motivated to come up with questions whose answers surprise the other.
And both are motivated to avoid seemingly trivial questions where both already agree on the outcome, or seemingly hard questions that none of them can reliably answer for now.
This is the approach closest to what we will present in the following sections.
All the systems above (now often called CM systems~\cite{learningtothink2015}) actually maximize the sum of the standard external rewards (for achieving user-given goals) and the intrinsic rewards.
{\bf Does this distort the basic RL problem?}
It turns out not so much. Unlike the external reward for eating three times a day, the curiosity reward in the systems above is ephemeral, because once something is known, there is no additional intrinsic reward for discovering it again.
That is, the external reward tends to dominate the total reward. In totally learnable environments, in the long run, the intrinsic reward even {\em vanishes} next to the external reward.
Which is nice, because in most RL applications we care only for the external reward.
Our RL Q\&A systems of the 1990s did not {\bf explicitly, formally enumerate their questions.}
But the more recent {\sc PowerPlay} framework (2011)~\cite{powerplay2011and13,Srivastava2013first} does.
Let us step back for a moment.
What is the set of all formalisable questions?
How to decide whether a given question has been answered by a learning machine?
To define a question, we need a computational procedure that takes a solution candidate (possibly proposed by a policy) and decides whether it is an answer to the question or not.
{\sc PowerPlay} essentially enumerates the set of all such procedures (or some user-defined subset thereof), thus enumerating all possible questions or problems.
{\bf It searches for the simplest question that the current policy cannot yet answer but can quickly {\em learn} to answer {\em without} forgetting the answers to previously answered questions.}
What is the simplest such Q\&A to be added to the repertoire?
It is the cheapest one---the one that is found first.
Then the next trial starts, where new Q\&As may build on previous Q\&As.
Compare also the {\em One Big Net For Everything}~\cite{onebignet2018} which offers a simplified, less strict NN version of {\sc PowerPlay}.
In our empirical investigation of Section~\ref{sec:empiricial}, we will revisit the above-mentioned concepts of complex computational experiments with yes/no outcomes, focusing on two settings:
(1) The generation of experiments driven by model prediction error in a deterministic reinforcement-providing environment,
and (2) An approach where $C$ (driven by information gain) generates pure thought experiments in form of weight matrices of RNNs.
\section{Self-Invented Experiments Encoded as Neural Networks}
\label{exabs}
We present a $CM$ system where $C$ can design essentially arbitrary computational experiments (including thought experiments) with binary yes/no outcomes.
Experiments may run for several time steps.
However, $C$ will prefer simple experiments whose outcomes still surprise $M$, until they become boring.
In general, both the controller $C$ and the model $M$ can be implemented as (potentially multi-dimensional) LSTMs \cite{graves:icann2007}.
At each time step $t=1,2, \ldots$, $C$'s input includes the current sensory input vector $in(t)$, the external reward vector $R_e(t)$, and the intrinsic curiosity reward $R_i(t)$.
$C$ may or may not interact directly with the environment through action outputs.
How does $C$ ask questions and propose experiments?
$C$ has an output unit called the START unit. Once it becomes active ($>0.5$),
$C$ uses a set of extra output units for producing the {\em weight matrix or program} $\theta$ of a separate RNN or LSTM called $E$ (for Experiment), in fast weight programmer style~\cite{Schmidhuber:92ncfastweights,Schmidhuber:91singaporefastweights,Schmidhuber:93ratioicann,Gomez:05icann,schlag2018tensor, faccio2020parameter, kirsch2021meta, schlag2021linear, irie2021going}.
$E$ takes sensory inputs from the environment and produces actions as outputs.
It also has two additional output units, the HALT unit~\cite{Schmidhuber:12slimnn} and the RESULT unit.
Once the weights $\theta$ are generated at time step $t'$, $E$ is tested in a trial, interacting with some environment.
Once $E$'s HALT unit exceeds 0.5 in a later time step $t''$,
the current experiment ends. That is, the experiment computes its own runtime~\cite{Schmidhuber:12slimnn}.
The experimental outcome $r(t'')$ is $1$ if the activation {\em result}$(t'')$ of $E$'s RESULT unit exceeds $0.5$, and $0$ otherwise.
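To make this protocol concrete, the following is a minimal Python sketch of one experiment trial; the interface names \texttt{E.step}, \texttt{env.observe} and \texttt{env.execute} are illustrative placeholders rather than part of the actual implementation:
\begin{verbatim}
def run_experiment(E, env, max_steps=1000):
    # Run experiment network E until its HALT unit exceeds 0.5.
    # E.step maps a sensory input to (action, halt, result) activations.
    obs = env.observe()
    for t in range(max_steps):
        action, halt, result = E.step(obs)
        obs = env.execute(action)
        if halt > 0.5:  # the experiment computes its own runtime
            return int(result > 0.5), t + 1
    return int(result > 0.5), max_steps  # time limit reached
\end{verbatim}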
At time $t'$, i.e., before the experiment is executed, $M$ has to compute its output {\em pr}$(t') \in [0,1]$ from $\theta$ (and the history of $C$'s inputs and actions up to $t'$, which includes all previous experiments and their outcomes).
Here, {\em pr}$(t')$ models $M$'s (un)certainty that the final binary outcome of the experiment will be 1 (YES) or 0 (NO).
Then the experiment is run.
In short, $C$ is proposing an experimental question in form of $\theta$ that will yield a binary answer (unless some time limit is reached).
$M$ is trying to predict this answer before the experiment is executed.
Since $E$ is an RNN and thus a general computer whose weight matrix can implement any program executable on a traditional computer~\cite{siegelmann91turing}, any computable experiment with a binary outcome can be implemented in its weight matrix (ignoring storage limitations of finite RNNs or other computers).
That is, by generating an appropriate weight matrix $\theta$, $C$ can ask any scientific question with a computable solution.
In other words, $C$ can propose any scientific hypothesis that is experimentally verifiable or falsifiable.
At $t''$, $M$'s previous prediction {\em pr}$(t')$ is compared to the later observed outcome $r(t'')$ of C's experiment (which spans $t''-t'$ time steps), and $C$'s intrinsic curiosity reward $R_i(t'')$ is proportional to $M$'s surprise.
To calculate it, we interpret {\em pr}$(t')$ as $M$'s estimated probability of $r(t'')$, given the history of observations so far.
Then we train $M$ by gradient descent (with regularization to avoid overfitting) for a fixed amount of time to improve all of its previous predictions including the most recent one.
This yields an updated version of $M$ called $M^*$.
In general, $M^*$ will compute a different prediction {\em PR}$(t')$ of $r(t'')$, given the history up to $t'-1$.
At time $t''$, the contribution $R_{IG}(t'')$ to $C$'s curiosity reward is proportional to the apparent resulting information gain, the KL-divergence
\[
R_{IG}(t'') \sim D_{KL} \big(PR(t') || pr(t')\big).
\]
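Since the outcomes here are binary, {\em pr}$(t')$ and {\em PR}$(t')$ parametrize Bernoulli distributions, and the reward reduces to a two-term KL divergence. A minimal Python sketch (the clipping constant \texttt{eps} is our own addition for numerical stability):
\begin{verbatim}
import math

def bernoulli_kl(p_new, p_old, eps=1e-7):
    # KL( Bernoulli(p_new) || Bernoulli(p_old) )
    p_new = min(max(p_new, eps), 1 - eps)
    p_old = min(max(p_old, eps), 1 - eps)
    return (p_new * math.log(p_new / p_old)
            + (1 - p_new) * math.log((1 - p_new) / (1 - p_old)))

# intrinsic reward at t'':  R_IG ~ bernoulli_kl(PR, pr)
\end{verbatim}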
If $M$ had a confident belief in a particular experimental outcome, but this belief gets shattered in the wake of $C$'s experiment, there will be a major surprise and a big insight for $M$, as well as lots of intrinsic curiosity reward for $C$.
On the other hand, if $M$ was quite unsure about the experimental outcome, and remains quite unsure afterwards, then $C$'s experiment can hardly surprise $M$ and $C$ will fail to profit much.
$C$ is motivated to propose {\em interesting} hypotheses or experiments that violate $M$'s current deep beliefs and expand its horizon.
An alternative intrinsic curiosity reward would be based on compression progress~\cite{Schmidhuber:06cs,Schmidhuber:09abials,Schmidhuber:09videos,Schmidhuber:10ieeetamd}.
Note that the entire experimental protocol is the responsibility of $\theta$.
Through $\theta$, $E$ must initialize the experiment (e.g., by resetting the environment or moving the agent to some start position if that is important to obtain reliable results), then run the experiment by executing a sequence of computational steps or actions, and translate the incoming data sequence into some final abstract binary outcome YES or NO.
$C$ is motivated to design experimental protocols $\theta$ that surprise $M$.
$C$ will get bored by experiments whose outcomes are predicted by $M$ with little confidence (recall the noisy TV), as well as by experiments whose outcomes are correctly predicted by $M$ with high confidence.
{\em $C$ will get rewarded for surprising experiments whose outcomes are incorrectly predicted by $M$ with high confidence.}
A negative reward per time step encourages $C$ to be efficient and lazy and come up with simple and fast still surprising experiments.
If physical actions in the environment cost much more energy (resulting in immediate negative reward) than $E$'s internal computations per time step,
$C$ is motivated to propose a $\theta$ defining a ``thought experiment'' requiring only internal computations, without executing physical actions in the (typically non-differentiable) environment.
In fact, due to $C$'s bias towards the computationally cheapest and least costly experiments that are still surprising to $M$, most of $C$'s initial experiments may be thought experiments.
Hence, since $C$, $E$ and $M$ are differentiable, not only $M$ but also $C$ may be often trainable by backpropagation~\cite{faccio2020parameter} rather than the generally slower policy gradient methods~\cite{wierstra2010,openai2019dota,deepmind2019starcraft,openai2020dex}.
Of course, this is only true if the reward function is also differentiable with respect to $C$'s parameters.
\section{Experimental Evaluation}
\label{sec:empiricial}
Here we present initial studies of the automatic generation of interesting experiments encoded as NNs.
We evaluate these systems empirically and discuss the associated challenges.
This includes two setups: (1) Adversarial intrinsic reward encourages experiments executed in a differentiable environment through sequences of continuous control actions.
We demonstrate that these experiments aid the discovery of goal states in a sparse reward setting.
(2) Pure thought experiments encoded as RNNs (without any environmental interactions) are guided by an information gain reward.
Together, these two setups cover the important aspects discussed in Section~\ref{exabs}:
the use of abstract experiments with binary outcomes as a method for curious exploration, and the creation of interesting pure thought experiments encoded as RNNs.
We leave the integration of both setups into a single system (as described in section~\ref{exabs}) for future work.
\subsection{Generating Experiments in a Differentiable Environment}
\label{sec:differentiable_env}
Reinforcement learning (RL) usually involves exploration in an environment with non-differentiable dynamics.
This requires RL methods such as policy gradients~\cite{Williams:92}.
To simplify our investigation and focus solely on the generation of self-invented experiments, we introduce a fully differentiable environment that allows for computing analytical policy gradients via backpropagation.
This does not limit the generality of our approach, as standard RL methods can be used instead.
Our continuous force field environment is depicted in Figure~\ref{fig:environment}.
The agent has to navigate through a 2D environment with a fixed external force field.
This force field can have different levels of complexity.
The states in this environment are the position and velocity of the agent.
The agent's actions are real-valued force vectors applied to itself. To encourage laziness and a bias towards simple experiments,
each time step is associated with a small negative reward ($-0.1$).
A sparse large reward ($100$) is given whenever the agent gets very close to the goal state.
We operate in the single life setting without episodic resets.
Additional information about the force field environment can be found in Appendix~\ref{app:environment}.
Since the environment is deterministic, it is sufficient for $C$ to generate experiments whose results the current $M$ cannot predict.
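A minimal differentiable sketch of one environment step in PyTorch is shown below; the time step, goal radius and the concrete force field are illustrative stand-ins for the values detailed in Appendix~\ref{app:environment}:
\begin{verbatim}
import torch

def env_step(pos, vel, action, force_field, goal, dt=0.1):
    # One environment step; all tensor operations are differentiable.
    # pos, vel, action: tensors of shape (2,).
    vel = vel + dt * (action + force_field(pos))
    pos = pos + dt * vel
    reward = torch.tensor(-0.1)        # small per-step cost
    if torch.norm(pos - goal) < 0.05:  # sparse, non-differentiable bonus
        reward = reward + 100.0
    return pos, vel, reward
\end{verbatim}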
\begin{figure}[t]
\centering
\begin{subfigure}[t]{0.45\textwidth}
\centering
\includegraphics[width=\textwidth]{images/force_field_3_best.pdf}
\end{subfigure}
\caption{\textbf{A differentiable force field environment}. The agent (red) has to navigate to the goal state (yellow) while the external force field exerts forces on the agent.}
\label{fig:environment}
\hfill
\end{figure}
\begin{figure}[ht]
\centering
\begin{subfigure}[t]{0.45\textwidth}
\centering
\includegraphics[width=\textwidth]{images/adversarial_env.pdf}
\end{subfigure}
\caption{
\textbf{Generating self-invented experiments in a differentiable environment.}
A controller $C_\phi$ is motivated to generate experiments $E_\theta$ that still surprise the model $M_\mathbf{w}$.
After execution in the environment, the experiments and their binary results are stored in memory. The model is trained on the history of previous experiments.
}
\label{fig:adversarial_env}
\hfill
\end{figure}
\subsubsection{Method}
Algorithm~\ref{alg:adversarial} and Figure~\ref{fig:adversarial_env} summarize the process for generating a sequence of interesting abstract experiments with binary outcomes.
The goal is to test the following three hypotheses:
\begin{itemize}
\item Generated experiments implement exploratory behavior, facilitating the reaching of goal states.
\item If there are negative rewards in proportion to the runtime of experiments, then the average runtime will increase over time, as the controller will find it harder and harder to come up with new short experiments whose outcomes the model cannot yet predict.
\item As the model learns to predict the yes/no results of more and more experiments, it becomes harder for the controller to create experiments whose outcomes surprise the model.
\end{itemize}
\noindent The generated experiments have the form $E_\psi(s) = (a, \hat{r})$, where $E_\psi$ is a linear feedforward network with parameters $\psi$, $s$ is the environment state, $a$ are the actions and $\hat{r} \in [0, 1]$ is the experimental result.
Both $s$ and $a$ are real-valued vectors.
Instead of a HALT unit, a single scalar $\tau \in \mathbb{R}^+$ determines the number of steps for which an experiment will run.
To further simplify the setup, the experiment network is a feedforward NN without recurrence.
To make the experimental result differentiable with respect to the runtime parameter, $\tau$ predicts the mean of a Gaussian distribution with fixed variance over the number of steps.
The actual result $\tilde{r}$ is the expectation of the result unit $\hat{r}$ over the distribution defined by $\tau$ (more details on this can be found in Appendix~\ref{app:experiments}).
The binarized result $r$ has the value 1 if $\tilde{r} > 0.5$, and $0$ otherwise.
The parameters $\theta$ of the experiment are the network parameters $\psi$ together with the runtime parameter $\tau$, i.e. $\theta := (\psi, \tau)$.
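A sketch of this differentiable smoothing in PyTorch (the fixed standard deviation \texttt{sigma} is illustrative):
\begin{verbatim}
import torch

def expected_result(result_per_step, tau, sigma=2.0):
    # result_per_step: tensor (T,) of result-unit activations r_hat_t.
    # tau: scalar tensor, mean of the Gaussian halting distribution.
    T = result_per_step.shape[0]
    steps = torch.arange(T, dtype=torch.float32)
    w = torch.exp(-0.5 * ((steps - tau) / sigma) ** 2)
    w = w / w.sum()                     # renormalize the truncated Gaussian
    return (w * result_per_step).sum()  # r_tilde, differentiable w.r.t. tau
\end{verbatim}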
For a given starting state $s$, the controller $C_\phi$ generates experiments: $C_\phi(s) = \theta$.
$C_\phi$ is a multi-layer perceptron (MLP) with parameters $\phi$, and $\theta$ denotes the parameters of the generated experiment.
The model $M_\mathbf{w}$ is an MLP with parameters $\mathbf{w}$.
It makes a prediction $M_\mathbf{w}(s, \theta) = \hat{o}$, with $\hat{o} \in [0, 1]$, for an experiment defined by the starting state $s$ and the parameters $\theta$.
During each iteration of the algorithm, $C_\phi$ generates an experiment based on the current state $s$ of the environment.
This experiment is executed until the cumulative halting probability defined by the generated $\tau$ exceeds a certain threshold (e.g., 99\%).
The starting state $s$, experiment parameters $\theta$ and binary result $r$ are saved in a memory buffer $\mathcal{D}$ of experiments.
Every state encountered during the experiment is saved to the state memory buffer $\mathcal{B}$.
After the experiment execution, the model $M_\mathbf{w}$ is trained for a fixed number of steps of stochastic gradient descent (SGD) to minimize the loss
\begin{equation}
\label{eq:loss_m}
\mathcal{L}_M = \mathbb{E}_{(s, \theta, r) \sim \mathcal{D}}[\text{bce}(M_\mathbf{w}(s, \theta), r)],
\end{equation}
where $\text{bce}$ is the binary cross-entropy loss function.
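In code, one such model update could look as follows (a sketch; the sampling of the minibatch from $\mathcal{D}$ is omitted):
\begin{verbatim}
import torch

bce = torch.nn.BCELoss()

def model_update(M, batch, optimizer):
    # One SGD step on L_M over a minibatch (s, theta, r) from memory D.
    s, theta, r = batch
    loss = bce(M(s, theta), r.float())
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
\end{verbatim}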
The third and last part of each iteration is the training of the controller $C_\phi$.
The loss that is being minimized via SGD is
\begin{equation}
\label{eq:loss_c}
\mathcal{L}_C = \mathbb{E}_{s \sim \mathcal{B}} [- \text{bce}\big( M_\mathbf{w}(s, C_\phi(s)), \tilde{r}(C_\phi(s), s) \big) - R_e(C_\phi(s), s)].
\end{equation}
The function $\tilde{r}$ maps the experiment parameters and starting state to the continuous result of the experiment.
The function $R_e$ maps the experiment parameters and starting state to the external reward.
Note that gradient information will flow back from $\tilde{r}$ and $R_e$ to $\phi$ through the execution of the experiment in the differentiable environment.
The first term corresponds to the intrinsic reward for the controller, which encourages it to generate experiments whose outcomes $M_\textbf{w}$ cannot predict.
The second term is the external reward from the environment, which punishes long experiments.
Since the reward for reaching the goal is sparse and not differentiable with respect to the experiment's actions, no information about the goal state reaches $C_\phi$ through the gradient.
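A condensed PyTorch-style sketch of one controller update is given below; \texttt{rollout} stands for the differentiable unrolling of the generated experiment in the environment and is assumed rather than shown:
\begin{verbatim}
import torch

bce = torch.nn.BCELoss()

def controller_update(C, M, rollout, s, optimizer):
    # One SGD step on L_C: surprise M while keeping experiments cheap.
    theta = C(s)                             # generate experiment params
    r_tilde, ext_reward = rollout(theta, s)  # differentiable execution
    o_hat = M(s, theta)                      # model's outcome prediction
    loss = -bce(o_hat, r_tilde) - ext_reward
    optimizer.zero_grad()
    loss.backward()   # gradients flow through the rollout into C
    optimizer.step()  # the optimizer updates only C's parameters
\end{verbatim}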
\begin{figure}[t]
\centering
\begin{subfigure}[t]{0.44\textwidth}
\centering
\includegraphics[width=\textwidth]{images/goal_states_reached.pdf}
\caption{Number of times the goal state was reached, adjusted by the number of environment interactions. Experiments generated with adversarial intrinsic reward benefit exploration more than random experiments. Without intrinsic motivation, the agent usually fails to reach any goal states in the sparse reward setting. Mean with bootstrapped 95\% confidence intervals across 30 seeds.}
\label{fig:force_field_goal_states}
\end{subfigure}
\hfill
\begin{subfigure}[t]{0.5\textwidth}
\centering
\includegraphics[width=\textwidth]{images/runtime_accuracy_diff.pdf}
\caption{Blue: the average runtime of each experiment generated by $C_\phi$. Purple: the difference between the result prediction accuracy of the current $M_\mathbf{w}$ for the generated experiment and the average prediction accuracy of the current $M_\mathbf{w}$ for random experiments. Mean with bootstrapped 95\% confidence intervals across 30 seeds.}
\label{fig:force_field_runtime}
\end{subfigure}
\caption{Experiments in the differentiable force field environment}
\label{fig:force_field_results}
\end{figure}
\subsubsection{Results and Discussion}
\label{sec:force_results}
To investigate our first hypothesis, Figure~\ref{fig:force_field_goal_states} shows the cumulative number of times a goal state was reached during an experiment, adjusted by the number of environment interactions of each experiment.
Specifically, it shows $h(j) = \sum_{k=1}^j \frac{g_k}{n_k}$, where $j = 1, 2, \ldots$ is the index of the generated experiment, $g_k$ is $1$ if the goal state was reached during the $k$th experiment and $0$ otherwise, and $n_k$ is the runtime of the $k$th experiment.
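For reference, this running metric can be computed as follows:
\begin{verbatim}
def goal_rate_curve(goals, runtimes):
    # h(j) = sum_{k<=j} g_k / n_k: goal hits per environment interaction.
    h, total = [], 0.0
    for g, n in zip(goals, runtimes):
        total += g / n
        h.append(total)
    return h
\end{verbatim}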
Our method, as described above and in Algorithm~\ref{alg:adversarial}, reaches the most goal states per environment interaction.
Purely random experiments also discover goal states, but less frequently.
Note that such random exploration in parameter space has been shown to be a powerful exploration strategy \cite{rueckstiess2008b, plappert2017parameter, vemula2019contrasting}.
The average runtime of the random experiments is $50$ steps, compared to $22.9$ for the experiments generated by $C_\phi$.
To rule out a potential unfair bias due to different runtimes, Figure~\ref{fig:additional_goal_states} in the Appendix shows an additional baseline of random experiments with an average runtime of $20$ steps, leading to results very similar to those of longer running random experiments.
If we remove the intrinsic adversarial reward, the controller is left only with the external reward.
This means that there is no $\text{bce}$ term in Equation~\ref{eq:loss_c}.
It is not surprising that in this setting, $C_\phi$ fails to generate experiments that discover goal states, since the gradient of $\mathcal{L}_C$ contains no information about the sparse goal reward.
Figure~\ref{fig:force_field_runtime} addresses our second and third hypotheses.
$C_\phi$ indeed tends to prolong experiments as $M_\mathbf{w}$ has been trained on more experiments, even though experiments with long runtimes are discouraged through the punitive external reward.
Our explanation for this is that it becomes harder with time for $C_\phi$ to come up with short experiments for which $M_\mathbf{w}$ cannot yet accurately predict the correct results.
This is supported by the fact that the prediction accuracy of $M_\mathbf{w}$ for newly generated experiments goes up. Specifically,
Figure~\ref{fig:force_field_runtime} shows the difference between the prediction accuracy of the current $M_\mathbf{w}$ for the newly generated experiment and the expected prediction accuracy of the current $M_\mathbf{w}$ for randomly sampled experiments.
This accounts for the general gain of $M_\mathbf{w}$'s prediction accuracy over the course of training.
It can be seen that in the beginning, $C_\phi$ is successful at creating adversarial experiments that surprise $M_\mathbf{w}$.
With time, however, it fails to continue doing so and is forced to create longer experiments to challenge $M_\mathbf{w}$.
\begin{algorithm*}[ht]
\caption{Adversarial yes/no experiments in a differentiable environment}
\label{alg:adversarial}
\textbf{Input}: Randomly initialized differentiable Controller $C_\phi: \text{S} \rightarrow \Theta$, randomly initialized differentiable Model $M_\mathbf{w}: \text{S} \times \Theta \rightarrow \mathbb{R}$, empty experiment memory $\mathcal{D}$, empty state memory $\mathcal{B}$, set of random initial experiments $\mathcal{E}_\text{init}$, a differentiable environment
\textbf{Output}: An experiment memory populated with (formerly) interesting experiments
\begin{algorithmic}[1]
\FOR {$\theta \in \mathcal{E}_\text{init}$}
\STATE $s \leftarrow$ current environment state
\STATE Execute the experiment parametrized by $\theta$ in the environment, obtain binary result $r$
\STATE Save the tuple $(s, \theta, r)$ to $\mathcal{D}$
\STATE Save all encountered states during the experiment to $\mathcal{B}$
\ENDFOR
\REPEAT
\STATE $s \leftarrow$ current environment state
\STATE $\theta \leftarrow C_\phi(s)$
\STATE Execute the experiment parametrized by $\theta$ in the environment, obtain binary result $r$
\STATE Save tuple $(s, \theta, r)$ to $\mathcal{D}$
\STATE $\hat{s} \leftarrow$ current environment state
\FOR {some steps}
\STATE Sample tuple $(s, \theta, r)$ from $\mathcal{D}$
\STATE Update the model using SGD: $\nabla_{\mathbf{w}}\text{bce}(M_\mathbf{w}(s, \theta), r)$
\ENDFOR
\FOR {some steps}
\STATE Sample starting state $s$ from $\mathcal{B}$
\STATE Set environment to state $s$
\STATE Execute the experiment parametrized by $C_\phi(s)$ in the environment, obtain continuous result $\tilde{r}$ and external reward $R_e$
\STATE Update the controller using SGD: $\nabla_{\phi}\big( -\text{bce}(M_\mathbf{w}(s, C_\phi(s)), \tilde{r}) - R_e \big)$
\ENDFOR
\STATE Set environment to state $\hat{s}$
\UNTIL{no more interesting experiments are found}
\end{algorithmic}
\end{algorithm*}
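For concreteness, the two gradient updates in Algorithm~\ref{alg:adversarial} can be sketched in a few lines of PyTorch-style Python. The objects \texttt{M}, \texttt{C}, the optimizers, and the differentiable rollout \texttt{env.run} are illustrative placeholders, not a reference implementation:
\begin{verbatim}
import torch.nn.functional as F

def model_step(M, opt_M, s, theta, r):
    # Fit M_w to a stored (state, experiment, result) tuple from D.
    loss = F.binary_cross_entropy(M(s, theta), r)
    opt_M.zero_grad(); loss.backward(); opt_M.step()

def controller_step(C, M, opt_C, env, s):
    # Maximize the model's surprise (bce) plus the external reward
    # R_e, which punishes long experiments; gradients flow back
    # through the differentiable experiment rollout.
    theta = C(s)
    r_tilde, R_e = env.run(s, theta)
    loss = -F.binary_cross_entropy(M(s, theta), r_tilde) - R_e
    opt_C.zero_grad(); loss.backward(); opt_C.step()
\end{verbatim}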
\subsection{Pure RNN Thought Experiments}
\label{sec:pure_thought}
The previous experimental setup uses feedforward NNs as experiments and an intrinsic reward function that is differentiable with respect to the controller's weights.
This section investigates a complementary
setup: interesting pure thought experiments (with no environment interactions) are generated in the form of RNNs without any inputs, driven by an intrinsic curiosity reward based on information gain which we treat as non-differentiable.
\subsubsection{Method}
In many ways, this new setup (depicted in Figure~\ref{fig:rnn_schematic} and described in Algorithm~\ref{alg:pure_thought} in the Appendix) is similar to the one presented in Section~\ref{sec:differentiable_env}.
In what follows, we highlight the important differences.
An experiment $E_\theta$ is an RNN of the form $(h_{t+1}, r_{t+1}, \gamma_{t+1}) = E_\theta(h_t) $, where $h_t$ is the hidden state vector, $r_t \in \{0, 1\}$ is the binary result at experiment time step $t$, and $\gamma_t \in [0, 1]$ is the HALT unit.
The result $r$ of $E_\theta$ is the $r_t$ at the first experiment step $t$ for which $\gamma_t$ exceeds $0.5$.
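A minimal numpy sketch of this halting rule (the parameter shapes and the tanh/sigmoid readouts are illustrative choices, not the exact architecture used in our experiments):
\begin{verbatim}
import numpy as np

def run_experiment(theta, h_dim=8, max_steps=100):
    # Roll out an RNN experiment until its HALT unit gamma_t > 0.5.
    W, b = theta                 # W: (h_dim + 2, h_dim), b: (h_dim + 2,)
    sig = lambda x: 1.0 / (1.0 + np.exp(-x))
    h, r_t = np.zeros(h_dim), 0.0
    for t in range(max_steps):
        pre = W @ h + b
        h = np.tanh(pre[:h_dim])                     # next hidden state
        r_t, gamma_t = sig(pre[h_dim]), sig(pre[h_dim + 1])
        if gamma_t > 0.5:                            # HALT fires
            return int(r_t > 0.5), t + 1
    return int(r_t > 0.5), max_steps  # fallback if HALT never fires
\end{verbatim}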
Since there is no external environment and the experiments are independent of each other, the model $M_\mathbf{w}$ is again a simple MLP with parameters $\mathbf{w}$.
It takes only the experiment parameters $\theta$ as input and makes a result prediction $\hat{o} = M_\mathbf{w}(\theta), \hat{o} \in [0, 1]$.
As mentioned above, here we treat the intrinsic reward signal as non-differentiable.
This means that---in contrast to the method presented in Section~\ref{sec:differentiable_env}---the controller cannot receive information about $M_\mathbf{w}$ from gradients that are backpropagated through the model.
Instead, it has to infer the learning behavior of $M_\mathbf{w}$ from the history $\omega$ of previous experiments and intrinsic rewards to come up with new surprising experiments.
The controller $C_\phi$ is now an LSTM that is trained by DDPG \cite{lillicrap2015continuous} and generates new experiments solely based on the history of past experiments: $C_\phi(\omega) = \theta$.
The history $\omega$ is a sequence of tuples $(\theta_i, r_i, R_i)$, where $i = 1, 2, \ldots$ is the index of the experiment.
It contains experiments up to the last one that has been executed.
More details can be found in Appendix~\ref{app:pure_thought}.
For these pure thought experiments, we use a reward based on information gain.
Let $\mathbf{w}$ be $M$'s weights before training for a fixed number of SGD steps on data that includes the newly generated experiment $\theta$ that has just been added to the memory $\mathcal{D}$, and $\mathbf{w}^*$ the weights after training.
Then, the information gain reward associated with experiment $\theta$ is
\begin{equation}
\label{eq:inf_gain}
R_{IG}(\theta, \mathbf{w}, \mathbf{w}^*) = \frac{1}{|\mathcal{D}|} \sum_{\tilde{\theta} \in \mathcal{D}} D_{KL}(M_{\mathbf{w}^*}(\tilde{\theta}) || M_\mathbf{w}(\tilde{\theta})),
\end{equation}
where we interpret the output of the model as a Bernoulli distribution.
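A direct transcription of Equation~\ref{eq:inf_gain} (a sketch; \texttt{model\_before} and \texttt{model\_after} stand for $M_\mathbf{w}$ and $M_{\mathbf{w}^*}$, and \texttt{memory} for the experiment parameters stored in $\mathcal{D}$):
\begin{verbatim}
import numpy as np

def bernoulli_kl(p, q, eps=1e-7):
    # KL divergence between Bernoulli(p) and Bernoulli(q).
    p, q = np.clip(p, eps, 1 - eps), np.clip(q, eps, 1 - eps)
    return p * np.log(p / q) + (1 - p) * np.log((1 - p) / (1 - q))

def information_gain(model_after, model_before, memory):
    # Mean KL of the model's predictions over all experiments in D.
    return float(np.mean([bernoulli_kl(model_after(th), model_before(th))
                          for th in memory]))
\end{verbatim}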
\begin{figure}[t]
\centering
\begin{minipage}{0.45\textwidth}
\centering
\includegraphics[width=0.9\textwidth]{images/abstract_ddpg.pdf}
\caption{\textbf{Generating abstract thought experiments encoded as RNNs.}
The model is trained to predict the results of previous experiments.
The controller generates new interesting thought experiments (without environment interactions) based on the history of previous experiments and their results and rewards.
}
\label{fig:rnn_schematic}
\end{minipage}\hfill
\begin{minipage}{0.5\textwidth}
\centering
\raisebox{-5.1cm}{\includegraphics[width=1.0\textwidth]{images/rnn_runtime_inf_gain.pdf}}
\caption{\textbf{Empirical results for pure thought experiments encoded as RNNs.} Blue: the average runtime of each experiment generated by $C_\phi$. Purple: information gain reward (Equation~\ref{eq:inf_gain}) for $C_\phi$ associated with each experiment. Mean with bootstrapped 95\% confidence intervals across 20 seeds.}
\label{fig:rnn_results}
\end{minipage}
\end{figure}
\subsubsection{Results and Discussion}
Figure~\ref{fig:rnn_results} shows the information gain reward associated with each new experiment that $C_\phi$ generates.
We observe that, after a short initial phase, the intrinsic information gain reward steadily declines.
This is similar to what we observe for the prediction accuracy in Section~\ref{sec:force_results}: it becomes harder for the controller to generate experiments that surprise the model.
It should be mentioned that this is a natural effect, since---as the model is trained on more and more experiments---every new additional experiment contributes on average less to the model's change during training, and thus is associated with less information gain reward.
An interesting, albeit minor, effect shown in Figure~\ref{fig:rnn_results} is that, in this setup too, the average runtime of the generated experiments increases slightly over time, even though there is no negative reward for longer thought experiments.
For shorter experiments, however, it is apparently easier for the model to learn to predict the results.
Hence, at least in the beginning, they yield more learning progress and more information gain.
Later, however, longer experiments become more interesting.
In comparison to the experiments generated in Section~\ref{sec:differentiable_env}, the present ones have a much shorter runtime.
This is a side-effect of the experiments being RNNs with a HALT unit; for randomly initialized experiments, the average runtime is approximately $1.6$ steps.
\section{Conclusion and Future Work}
We extended the neural Controller-Model (CM) framework through the notion of arbitrary self-invented computational experiments with binary outcomes:
experimental protocols are essentially programs interacting with the environment, encoded as the weight matrices of RNNs generated by the controller.
The model has to predict the outcome of an experiment based solely on the experiment's parameters.
By creating experiments whose outcomes surprise the model, the controller curiously explores its environment and what can be done in it.
Such a system is analogous to a scientist who designs experiments to gain insights about the physical world.
However, an experiment does not necessarily involve actions taken in the environment: it may be a pure thought experiment akin to those of mathematicians.
We provide an empirical evaluation of two simple instances of such systems, focusing on different and complementary aspects of the idea.
In the first setup, we show that self-invented abstract experiments encoded as feedforward networks interacting with a continuous control environment facilitate the discovery of rewarding goal states.
Furthermore, we see that over time the controller is forced to create longer experiments (even though this is associated with a larger negative external reward) as short experiments start failing to surprise the model.
In the second setup, the controller generates pure abstract thought experiments in the form of RNNs.
We observe that over time, newly generated experiments result in less intrinsic information gain reward.
Again, later experiments tend to have slightly longer runtime.
We hypothesize that this is because simple experiments initially lead to a lot of information gain per time interval, but later do not provide much insight anymore.
These two empirical setups should be seen as initial steps towards more capable systems such as the one proposed in Section~\ref{exabs}.
Scaling these methods to more complex environments and the generation of more sophisticated experiments, however, is not without challenges.
Direct generation and interpretation of NN weights may not be very effective for large and deep networks.
Previous work~\cite{faccio2022goal} already combined hypernetworks~\cite{ha2016hypernetworks} and policy fingerprinting~\cite{harb2020policy, faccio2022general} to generate and evaluate policies.
Similar innovations will facilitate the generation of abstract self-invented experiments beyond the small scale setups presented in this paper.
\section{Acknowledgments}
\label{ack}
We are grateful to our friends for useful comments.
This work was supported in part by a European Research Council Advanced Grant (no: 742870).
\section{Introduction}
The outcomes of high energy collider experiments depend to a large extent on event simulations obtained with Monte Carlo (MC) generators. So do the planning and development of future machines and measurements \cite{Azzi:2019yne,Feng:2022inv,Mangano:2016jyj,LHeC:2020van,Proceedings:2020eah}. The baseline MCs are based on the description of hadron structure provided by collinear PDFs \cite{Kovarik:2019xvh}, while a more complete, 3D description of hadron structure is given by TMD PDFs \cite{Angeles-Martinez:2015sea}. There are thus efforts to include elements of TMD physics in the modern MC generators and in the parton-branching algorithms on which they are based. The idea of the work \cite{Hautmann:2022xuc} described in this article is to include the TMD splitting functions obtained from the high-energy (or small-$x$) limit of partonic amplitudes \cite{Catani:1994sq} in a parton branching algorithm, with the goal to incorporate in the parton evolution both small-$x$ and Sudakov contributions. Thanks to its applicability over a wide kinematic region, the algorithm provided by the TMD Parton Branching (PB) method \cite{Hautmann:2017xtx,Hautmann:2017fcj} was chosen to perform this research.
\section{The TMD Parton Branching method}
The PB method is a flexible, widely applicable MC approach to obtain QCD high energy predictions based on TMD PDFs, simply called TMDs.
One of its main ingredients is a forward evolution equation \cite{Hautmann:2017xtx,Hautmann:2017fcj}.
The evolution of the parton density is expressed in terms of real, resolvable branchings and virtual and non-resolvable contributions, which are treated with Sudakov form factors.
Thanks to the momentum sum rule \footnote{Momentum sum rule for the DGLAP splitting functions $P_{ab}(z,\mu^2)$ yields $\sum_a\int_0^1 \textrm{d} z \; z P_{ab}(z,\mu^2) = 0$. }
and unitarity, the Sudakov form factor can be written in terms of real, resolvable splittings and interpreted as a non-emission probability.
Owing to the simple, intuitive picture of the evolution in terms of cascade of branchings and the probabilistic interpretation of the splitting functions and the Sudakov form factors, the PB evolution equation can be solved with MC techniques using a parton branching algorithm.
In addition to the evolution equation, the PB method also provides a procedure to fit the parameters of the initial distribution to experimental data using the \texttt{xFitter} platform \cite{Alekhin:2014irh}. The obtained PB TMDs and PDFs \cite{BermudezMartinez:2018fsv,Jung:2021vym,Jung:2021mox} are accessible via TMDlib \cite{Abdulov:2021ivr} and in LHAPDF \cite{Buckley:2014ana} format for use in (TMD) MC generators. A generator of special importance is the TMD MC generator Cascade \cite{Baranov:2021uol}, where
the TMD initial state parton shower is implemented with the backward evolution guided by the PB TMDs.
The PB method provides the procedure to match PB TMDs with next-to-leading order (NLO) matrix elements \cite{BermudezMartinez:2019anj} to obtain predictions. Recently, there was also a merging procedure developed \cite{BermudezMartinez:2021lxz}.
The PB method was used to study different evolution scenarios
like ordering conditions or resolution scales, see e.g. \cite{Hautmann:2017xtx,Hautmann:2019biw}. The PB predictions have been calculated for multiple measurements, in very different energy and mass regimes, including hadron colliders, fixed target experiments and $ep$ collider \cite{BermudezMartinez:2018fsv,BermudezMartinez:2019anj,BermudezMartinez:2020tys,Yang:2022qgk,Abdulhamid:2021xtt,H1:2021wkz}.
All those successful PB studies were performed with the DGLAP \cite{Gribov:1972ri,Lipatov:1974qm,Altarelli:1977zs,Dokshitzer:1977sg} splitting functions calculated in the collinear approximation. However, in some infrared-sensitive phase space regions, the collinear approximation is not enough
\cite{Dooling:2012uw,Dooling:2014kia}. In this work the PB approach was extended by using the TMD splitting functions \cite{Catani:1994sq,Gituliar:2015agu,Hentschinski:2016wya,Hentschinski:2017ayz}.
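To make the non-emission interpretation of the Sudakov form factor concrete, the scale of the next resolvable branching in such a parton branching algorithm can be drawn by inverse-transform sampling. A minimal sketch for a toy, flavour- and $k_\bot$-independent kernel with fixed coupling (purely illustrative, not the actual PB implementation):
\begin{verbatim}
import numpy as np

def next_branching_scale(mu0, mu_max, lam, rng):
    # Toy Sudakov Delta(mu^2, mu0^2) = (mu0^2/mu^2)**lam, where lam
    # stands for int_0^{zM} dz z P(z) at fixed alpha_s.
    R = 1.0 - rng.uniform()               # uniform on (0, 1]
    mu2 = mu0**2 * R**(-1.0 / lam)        # solve Delta(mu^2) = R
    return np.sqrt(mu2) if mu2 < mu_max**2 else None  # None: no emission

rng = np.random.default_rng(1)
scales = [next_branching_scale(1.0, 100.0, 0.3, rng) for _ in range(5)]
\end{verbatim}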
\section{TMD splitting functions}
The concept of the TMD splitting functions originates from the high energy factorization \cite{Catani:1994sq}, where the TMD splitting function for the splitting of an off-shell gluon into quark, $\widetilde{P}_{qg}$, was calculated. The other channels were obtained in \cite{Gituliar:2015agu,Hentschinski:2016wya,Hentschinski:2017ayz}.
The splitting functions have well defined collinear and high energy limits.
It was demonstrated that in the limit of small incoming transverse momenta, after angular averaging, the TMD splitting functions converge to the DGLAP leading order (LO) splitting functions. For finite transverse momenta, the TMD splitting function \cite{Catani:1994sq} can be written as an expansion in powers of the transverse momenta with $z$-dependent coefficients, which, after convolution with the TMD gluon Green's functions \cite{Kuraev:1977fs,Balitsky:1978ic}, yield
corrections to the splitting function that are logarithmically enhanced for $z\rightarrow 0$. Therefore, the work presented next on the implementation of
TMD splitting functions in the PB method can be viewed as a step toward
constructing full MC generators for small-$x$ physics (see e.g. \cite{Chachamis:2015zzp,Andersen:2011zd,Jung:2010si,Hoeche:2007hlb,Golec-Biernat:2007tjf}).
\section{TMD splitting functions in the PB method}
The DGLAP splitting functions $P_{ab}^R (z, \mu^{\prime})$ were replaced by the TMD ones $\tilde{P}_{ab}^{R}\left(z, k_{\bot} +(1-z)\mu_{\bot}^{\prime}, \mu_{\bot}^{\prime}\right)$ in the PB evolution equation for the momentum weighted parton density $x{\mathcal{A}}_a = \tilde{\mathcal{A}}_a$ \cite{Hautmann:2017fcj}
\begin{multline}
\tilde{\mathcal{A}}_a\left( x,k_{\bot}^2, \mu^2\right) =
\Delta_a\left(\mu^2,k_{\bot}^2\right)\tilde{\mathcal{A}}_a\left( x,k_{\bot}^2, \mu_0^2\right) +
\sum_b\int\frac{d^2\mu_{\bot}^{\prime}}{\pi\mu_{\bot}^{\prime 2}}\Theta(\mu_{\bot}^{\prime 2}-\mu_0^2)\Theta(\mu^2-\mu_{\bot}^{\prime 2})
\\
\times \int\limits_x^{z_M }\textrm{d}z\, \frac{ \Delta_a\left(\mu^2, k_{\bot}^2 \right) }
{ \Delta_a\left(\mu_{\bot}^{\prime 2}, k_{\bot}^2 \right)} \tilde{P}_{ab}^{R}\left(z, k_{\bot} +(1-z)\mu_{\bot}^{\prime}, \mu_{\bot}^{\prime}\right)
\tilde{\mathcal{A}}_b\left( \frac{x}{z}, (k_{\bot}+(1-z)\mu_{\bot}^{\prime})^2, \mu_{\bot}^{\prime 2}\right),
\label{EvolEq}
\end{multline}
where $a,b$ are the flavour indices, $x$ is the fraction of the proton's longitudinal momentum carried by the parton $a$, $k_{\bot}$ the transverse momentum, $\mu$ the evolution scale, $\mu_0$ the initial evolution scale, $z$ the momentum transfer in the splitting and $z_M$ the soft gluon resolution scale, which can be scale dependent.
To treat the virtual/non-resolvable emissions, a new TMD Sudakov form factor was introduced \cite{Hautmann:2022xuc}
\begin{equation}
\Delta_a(\mu^2,\mu_0^2,k_{\bot}^2)\equiv\Delta_a(\mu^2,k_{\bot}^2)=\exp\left(-\sum_b\int_{\mu_0^2}^{\mu^2}\frac{d\mu'^2}{\mu'^2}\int_0^{z_M}dz\ z\bar P^R_{ba}(z,k_{\bot}^2,\mu'^2)\right),
\label{TMDSud}
\end{equation}
using the angular averaged TMD splitting functions $\bar P^R_{ba}(z,k_{\bot}^2,\mu'^2)$. This construction was possible thanks to the momentum sum rule and unitarity.
As an intermediate step, a scenario with the TMD splittings included in the real resolvable emissions but with
the default PB Sudakov form factor
\begin{equation}
\Delta_a(\mu^2,\mu_0^2)\equiv\Delta_a(\mu^2)=\exp\left(-\sum_b\int_{\mu_0^2}^{\mu^2}\frac{d\mu'^2}{\mu'^2}\int_0^{z_M}dz\ z P^R_{ba}(z,\mu^{\prime 2})\right)
\label{CollSud}
\end{equation}
was studied.
It was shown analytically \cite{Hautmann:2022xuc}, that only when the same type of splitting functions are used both in the real emissions and Sudakov form factors, the evolution equation from Eq.~\ref{EvolEq} satisfies the momentum sum rule.
In other words, for the evolution equation Eq.~\ref{EvolEq} with the TMD Sudakov form factor in the form given by Eq.~\ref{TMDSud} the momentum sum rule holds, whereas with the collinear Sudakov form factor from Eq.~\ref{CollSud} it is broken.
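For orientation, the collinear Sudakov form factor of Equation~\ref{CollSud} can be evaluated numerically for a toy kernel. A sketch with a fixed coupling and the LO $q\to qg$ splitting function, where the soft singularity is regulated by $z_M<1$ (illustrative only, not the actual PB implementation):
\begin{verbatim}
import numpy as np

def sudakov(mu2, mu02, zM, P, n=4000):
    # exp( - int dmu'^2/mu'^2 int_0^{zM} dz z P(z) ); the toy kernel
    # is scale independent, so the mu' integral gives log(mu2/mu02).
    z = np.linspace(1e-6, zM, n)
    inner = np.sum(z * P(z)) * (z[1] - z[0])
    return np.exp(-inner * np.log(mu2 / mu02))

alpha_s = 0.118
P_qq = lambda z: alpha_s / (2 * np.pi) * (4.0 / 3.0) * (1 + z**2) / (1 - z)
print(sudakov(mu2=100.0**2, mu02=2.0**2, zM=0.99, P=P_qq))
\end{verbatim}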
\begin{figure}[htb]
\begin{minipage}{0.49\textwidth}
\includegraphics[width=5.0cm]{asmu_iTMDx-down-mu100.pdf}
\end{minipage}
\hfill
\begin{minipage}{0.49\textwidth}
\includegraphics[width=5.0cm]{asmu_iTMDx-gluon-mu100.pdf}
\end{minipage}
\hfill
\begin{minipage}{0.49\textwidth}
\includegraphics[width=5.0cm]{asmu_kt-down-x1e-3-mu100_linear.pdf}
\end{minipage}
\hfill
\begin{minipage}{0.49\textwidth}
\includegraphics[width=5.0cm]{asmu_kt-gluon-x1e-3-mu100_linear.pdf}
\end{minipage}
\hfill
\caption[]{
Down quark and gluon distributions for scenarios with the collinear splitting functions (red), with the TMD splitting functions in the real emissions and the collinear Sudakov form factor (blue) and with the TMD splitting functions both in the real emissions and in the Sudakov form factor (purple).
Top: integrated TMDs as a function of $x$ at $\mu=100\;\textrm{GeV}$. Bottom: TMDs as a function of $|k_{\bot}|$ at $x=0.001$ and $\mu=100\;\textrm{GeV}$ \cite{Hautmann:2022xuc}. }
\label{Fig:Distributions}
\end{figure}
\section{Numerical results}
In the upper part of Fig.~\ref{Fig:Distributions}, the integrated distributions (iTMDs) as a function of $x$ at the scale $\mu=100\;\textrm{GeV}$ are shown for the down quark and the gluon for three evolution scenarios: the dashed red curve is obtained from the PB evolution equation with collinear splitting functions, the blue dotted curve from the scenario with TMD splitting functions in the real resolvable emissions but with collinear Sudakov form factors, and the solid magenta line from the scenario with TMD splitting functions both in the real resolvable emissions and in the Sudakov form factors. In the bottom row of Fig.~\ref{Fig:Distributions} the down quark and gluon TMDs as a function of $|k_{\bot}|$ are shown at $x=0.001$ and $\mu=100\;\textrm{GeV}$ for the same three scenarios.
The bottom panel of each plot shows the ratios obtained with respect to the fully collinear scenario.
For the purpose of this study, the same starting distribution was used for all three models, which means that the differences between the curves come only from the evolution, i.e. purely from the treatment of the splitting functions. For the iTMDs, the effect of the TMD splitting functions is visible especially at low $x$; for the TMDs, the effects are visible in the whole $k_{\bot}$ region. It is worth recalling that for both the red and magenta curves the momentum sum rule holds, whereas the blue curve violates it. The numerical check of the momentum sum rule was performed in \cite{Hautmann:2022xuc}.
\section{Conclusions}
In this work a parton branching algorithm to obtain TMDs and integrated distributions, which for the first time includes TMD splitting functions and fulfils momentum sum rule, was presented.
A new TMD Sudakov form factor was constructed using the momentum sum rule and unitarity.
The studies presented here are at the level of the forward evolution but it is a
first step towards a full TMD MC generator covering the small-$x$ phase space.
\section*{Acknowledgements}
Presented results were obtained in collaboration with F. Hautmann, M. Hentschinski, L. Keersmaekers, A. Kusina and K. Kutak.
A. Lelek acknowledges funding by Research Foundation-Flanders (FWO) (application number: 1272421N).
\bibliographystyle{mybibstyle}
\section{Introduction}
Dirac's equation describes the behavior of
particles with mass and spin and how they couple to the
electromagnetic field. The usual form of Dirac's equation is
\[ (\imath\gamma^{\mu}\partial_{\mu}-m)\Psi(x)=0
\]
The electromagnetic field is introduced by the minimal coupling
prescription\cite{ref:Peskin} $\partial_{\mu}\rightarrow D_{\mu}$,
with
\[D_{\mu}=\partial_{\mu}+\imath A_{\mu}(x)
\]
where $A_{\mu}$ is the electromagnetic vector potential. Dirac's
equation can be further coupled to gravity (at the classical level)
using the prescription\cite{ref:Brill_Wheeler}
\[
\partial_\mu\rightarrow\partial_\mu-\Gamma_\mu
\]
and the equation then takes the
form\cite{ref:Finster,ref:Brill_Wheeler,ref:Brill_Cohen,ref:Smoller_Finster,ref:Smoller_Finster2}
\begin{equation}
\label{eq:full_dirac}
\tilde{\gamma}^{\mu}[\imath\partial_{\mu}-\imath\Gamma_{\mu}-A_\mu]\Psi(x)-m\Psi(x)=0
\end{equation}
where $\Gamma_\mu$ is known as the spin connection, $A_\mu$ is the
electromagnetic vector potential, and $m$ is the mass. The
gravitational coupling enters through the modified dirac matrices
$\tilde{\gamma}_{\mu}$ which satisfy the anticommutation relation
\[\{\tilde{\gamma}^{\mu},\tilde{\gamma}^{\nu}\}=I g^{\mu\nu}.
\]
and the operator $\tilde{\gamma}^{\mu}[\partial_{\mu}-\Gamma_{\mu}]$
is (in the absence of electromagnetic interactions) the covariant
derivative for spinor fields in a curved
space\cite{ref:Brill_Wheeler}.
The above form of Dirac's equation describes the dynamics of the
spinor field $\Psi$ when coupled to the vector field $A_{\mu}$ and
gravity. There are two additional equations which describe the
dynamics of $A_{\mu}$ and $g_{\mu\nu}$, these are the Einstein field
equations
\begin{equation}
\label{eq:einstein_field}
R_{\mu\nu}-\frac{1}{2}R=T_{\mu\nu}
\end{equation}
and Maxwell's equations
\begin{equation}
\label{eq:maxwell}
\nabla_\mu F^{\mu\nu}=4\pi
e\bar{\Psi}\gamma^\nu\Psi
\end{equation}
The Equations~(\ref{eq:full_dirac}), (\ref{eq:einstein_field}) and
(\ref{eq:maxwell}) are collectively known as the
Einstein-Dirac-Maxwell
equations\cite{ref:Smoller_Finster,ref:Smoller_Finster2,ref:Krori}.
The subject of this paper is Equation~(\ref{eq:full_dirac}). We
will show that the equations of motion of an elastic solid have the
same form as Equation~(\ref{eq:full_dirac}), with the mass and
electromagnetic terms emerging naturally from the formalism.
\section{Elasticity Theory}
\label{sec:elasticity_theory}
The theory of elasticity is usually
concerned with the infinitesimal deformations of an elastic
body\cite{ref:Love,ref:Sokolnikoff,ref:Landau_Lifshitz,ref:Green_Zerna,ref:Novozhilov}.
We assume that the material points of a body are continuous and can
be assigned a unique label $\vec{a}$. For definiteness the elastic
body can be taken to be a three dimensional object so each point of
the body may be labeled with three coordinate numbers $a_{i}$ with
$i=1,2,3$.
If this three dimensional elastic body is placed in a large ambient
three dimensional space then the material coordinates $a_{i}$ can be
described by their positions in the 3-D fixed space coordinates
$x_{i}$ with $i=1,2,3$. In this description the material points
$a_{i}(x_1,x_2,x_3)$ are functions of $\vec{x}$. A deformation of
the elastic body results in infinitesimal displacements of these
material points. If before deformation a material point $a_0$ is
located at fixed space coordinates $x_1,x_2,x_3$ then after
deformation it will be located at some other coordinate
$x'_1,x'_2,x'_3$. The deformation of the medium is characterized at
each point by the displacement vector
\[u_i=x'_i-x_i
\]
which measures the displacement of each point in the body after
deformation.
It is the aim of this paper to take this model of an elastic medium
and derive from it equations of motion that have the same form as
Dirac's equation.
We first consider the effect of a deformation on the measurement of
distance. After our elastic body is deformed, the distances between
its points changes as measured with the fixed space coordinates. If
two points which are very close together are separated by a radius
vector $dx_i$ before deformation, these same two points are
separated by a vector $dx'_i=dx_i+du_i$. The square distance between
the points before deformation is then $ds^2=dx_1^2+dx_2^2+dx_3^2$.
Since these coincide with the material points in the undeformed
state, this can be written $ds^2=da_1^2+da_2^2+da_3^2$. The squared
distance after deformation can be written\cite{ref:Landau_Lifshitz}
$ds'^{2}=dx_1'^2+dx_2'^2+dx_3'^2=\sum_i
dx_i'^2=\sum_i(da_i+du_i)^2$. The differential element $du_i$ can be
written as $du_i=\sum_i\frac{\partial u_i}{\partial a_k }da_k$,
which gives for the distance between the points
\begin{eqnarray*}
ds'^2&=&\sum_i\left(da_i + \sum_k\frac{\partial u_i}{\partial
a_k}da_k\right) \left(da_i + \sum_l\frac{\partial u_i}{\partial
a_l}da_l\right)\\
&=&\sum_i\left(da_i da_i + \sum_k\frac{\partial u_i}{\partial
a_k}da_i da_k+ \sum_l\frac{\partial u_i}{\partial a_l}da_i da_l +
\sum_k\sum_l\frac{\partial u_i}{\partial a_k}\frac{\partial
u_i}{\partial a_l}\right)\\
&=&\sum_i\sum_k\left(\delta_{ik}+\left(\frac{\partial u_i}{\partial
a_k}+\frac{\partial u_k}{\partial a_i}\right)+\sum_l\frac{\partial
u_l}{\partial
a_i}\frac{\partial u_l}{\partial a_k}\right) da_k da_l\\
&=&\sum_{ik}\left(\delta_{ik}+2\epsilon'_{ik}\right)da_i da_k
\end{eqnarray*}
where $\epsilon'_{ik}$ is
\begin{equation}
\label{eq:strain_tensor}
\epsilon'_{ik}=\frac{1}{2}\left(\frac{\partial u_i}{\partial
a_k}+\frac{\partial u_k}{\partial a_i}+\sum_l \frac{\partial
u_l}{\partial a_i}\frac{\partial u_l}{\partial a_k}\right)
\end{equation}
The
quantity $\epsilon'_{ik}$ is known as the strain tensor. It is
fundamental in the theory of elasticity. In most treatments of
elasticity it is assumed that the displacements $u_i$ as well as
their derivatives are infinitesimal so the last term in
Equation~(\ref{eq:strain_tensor}) is dropped. This is an
approximation that we will not make in this derivation.
The quantity
\begin{eqnarray}
\label{eq:metric}
g_{ik}&=&\delta_{i,k}+\frac{\partial u_i}{\partial
a_k}+\frac{\partial u_k}{\partial a_i}+\sum_l \frac{\partial
u_l}{\partial a_i}\frac{\partial u_l}{\partial a_k}\\
&=&\delta_{i,k}+2\epsilon'_{ik}\nonumber
\end{eqnarray}
is the metric for our system and
determines the distance between any two points.
That this metric is simply the result of a coordinate transformation
from the flat space metric can be seen by writing the metric in the
form\cite{ref:Millman_Parker}
\[
g_{\mu\nu}=
\left( \begin{array}{lll}{\displaystyle
\frac{\partial x'_1}{\partial a_1}}& {\displaystyle\frac{\partial
x'_2}{\partial
a_1}} & {\displaystyle\frac{\partial x'_3}{\partial a_1}}\\[15pt]
{\displaystyle\frac{\partial x'_1}{\partial a_2}}& {\displaystyle\frac{\partial x'_2}{\partial a_2}} &
{\displaystyle\frac{\partial x'_3}{\partial a_2}}\\[15pt]
{\displaystyle\frac{\partial x'_1}{\partial a_3}}&
{\displaystyle\frac{\partial x'_2}{\partial a_3}} &
{\displaystyle\frac{\partial x'_3}{\partial a_3}}
\end{array}
\right)
\left(\begin{array}{lll}
{\displaystyle 1}& {\displaystyle 0} & {\displaystyle 0}\\[15pt]
{\displaystyle 0}& {\displaystyle 1}& {\displaystyle 0}\\[15 pt]
{\displaystyle 0}& {\displaystyle 0} & {\displaystyle 1}
\end{array}
\right)
\left(\begin{array}{lll}
{\displaystyle \frac{\partial x'_1}{\partial a_1}}&
{\displaystyle\frac{\partial x'_1}{\partial
a_2}} & {\displaystyle \frac{\partial x'_1}{\partial a_3}}\\[15pt]
{\displaystyle\frac{\partial x'_2}{\partial a_1}}& {\displaystyle\frac{\partial x'_2}{\partial a_2}} &
{\displaystyle\frac{\partial x'_2}{\partial a_3}}\\[15pt]
{\displaystyle\frac{\partial x'_3}{\partial a_1}}&
{\displaystyle\frac{\partial x'_3}{\partial a_2}} &
{\displaystyle\frac{\partial x'_3}{\partial a_3}}
\end{array}
\right)
\]
\[
=J^TIJ
\]
where
\[
\frac{\partial x'_\mu}{\partial
a_\nu}=\delta_{\mu\nu}+\frac{\partial u_\mu}{\partial a_\nu}.
\]
and $J$ is the Jacobian of the transformation. Later in section
\ref{sec:fourier_transform} we will show that the metric for the
Fourier modes of our system is not a simple coordinate
transformation.
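As a quick consistency check of Equations~(\ref{eq:strain_tensor}) and (\ref{eq:metric}), the following sketch verifies $g=J^TIJ$ numerically for a homogeneous (affine) deformation, for which the displacement gradients are constant (the deformation matrix is arbitrary illustrative input):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
F = np.eye(3) + 0.3 * rng.normal(size=(3, 3))   # affine map x' = F a
U = F - np.eye(3)                               # U[i, k] = du_i/da_k

eps = 0.5 * (U + U.T + U.T @ U)                 # strain tensor
g = np.eye(3) + 2.0 * eps                       # metric
print(np.allclose(g, F.T @ F))                  # g = J^T I J -> True
\end{verbatim}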
The inverse matrix $(g^{ik})=(g_{ik})^{-1}$ is given by
$(g^{ik})=(J^{-1})(J^{-1})^T$ where
\begin{equation}
J^{-1}=\left(\begin{array}{lll} {\displaystyle\frac{\partial
a_1}{\partial x'_1}}& {\displaystyle\frac{\partial a_1}{\partial
x'_2}} & {\displaystyle\frac{\partial a_1}{\partial x'_3}}\\[15pt]
{\displaystyle\frac{\partial a_2}{\partial x'_1}}& {\displaystyle\frac{\partial a_2}{\partial x'_2}} &
{\displaystyle\frac{\partial a_2}{\partial x'_3}}\\[15pt]
{\displaystyle\frac{\partial a_3}{\partial x'_1}}&
{\displaystyle\frac{\partial a_3}{\partial x'_2}} &
{\displaystyle\frac{\partial a_3}{\partial x'_3}}
\end{array}
\right)
\end{equation}
This yields for the inverse metric
\begin{eqnarray}
g^{ik}&=&\delta_{ik}-\frac{\partial
u_i}{\partial x_k}-\frac{\partial u_k}{\partial x_i}+\sum_l
\frac{\partial u_l}{\partial x_i}\frac{\partial u_l}{\partial x_k}\\
&=&\delta_{ik}-2\epsilon_{ik}\nonumber
\end{eqnarray}
where $\epsilon_{ik}$ is defined by
\[
\epsilon_{ik}=\frac{1}{2}\left(\frac{\partial
u_i}{\partial x_k}+\frac{\partial u_k}{\partial x_i}-\sum_l
\frac{\partial u_l}{\partial x_i}\frac{\partial u_l}{\partial
x_k}\right)
\]
We see that the metric components involves derivatives of the
displacement vector with respect to the internal coordinates and the
inverse metric involves derivatives with respect to the fixed space
coordinates.
\section{Equations of Motion}
\label{sec:EOM}
In the following we will use the notation
\[
u_{\mu\nu}=\frac{\partial u_\mu}{\partial x_\nu}
\]
and therefore the inverse strain tensor is
\[
\epsilon_{\mu\nu}=\frac{1}{2}\left(u_{\mu\nu}+u_{\nu\mu}-\sum_\beta
u_{\beta \mu}u_{\beta\nu}\right).
\]
We will use the lagrangian method to derive the equations of motion
for our system. Our model consists of an elastic solid embedded in a
$3$ dimensional Euclidean space.
In the following we work in the fixed space coordinates and take the
strain energy as the lagrangian density of our system. This approach
leads to the usual equations of equilibrium in elasticity
theory\cite{ref:Love,ref:Novozhilov}. The strain energy is quadratic
in the strain tensor $\epsilon^{\mu\nu}$ and can be written as
\[
E=\sum_{\mu \nu\alpha\rho} C_{\mu \nu\alpha\rho}\, \epsilon_{\mu\nu}
\epsilon_{\alpha\rho}
\]
The quantities $C_{\mu \nu\alpha\rho}$ are known as the elastic
stiffness constants of the material\cite{ref:Sokolnikoff}. For an
isotropic space most of the coefficients are zero and in $3$
dimensions, the lagrangian density reduces to
\begin{equation}
\label{eq:lagrangian_3D}
L=(\lambda +
2\mu)\left[\epsilon_{11}^2+\epsilon_{22}^2+\epsilon_{33}^2\right] +
2 \lambda \left[\epsilon_{11} \epsilon_{22}+ \epsilon_{11}
\epsilon_{33} + \epsilon_{22}\epsilon_{33}\right] + 4\mu
\left[\epsilon_{12}^2 + \epsilon_{13}^2 + \epsilon_{23}^2\right]
\end{equation}
where $\lambda$ and $\mu$ are known as Lam\'e
constants\cite{ref:Sokolnikoff}.
The usual Lagrange equations,
\[
\sum_\nu\frac{d}{dx_\nu}\left(\frac{\partial L}{\partial u_{\rho
\nu}}\right) - \frac{\partial L}{\partial u_\rho}=0,
\]
apply with each component of the displacement vector treated as an
independent field variable. Since our Lagrangian contains no terms
in the field $u_\rho$, Lagrange's equations reduce to
\[
\sum_\nu\frac{d}{dx_\nu}\left(\frac{\partial L}{\partial u_{\rho
\nu}}\right)=0.
\]
The quantity
\[
V_\rho=\sum_\nu\frac{d}{dx_\nu}\left(\frac{\partial L}{\partial
u_{\rho \nu}}\right)
\]
is a vector and as such can always be written as the sum of the
gradient of a scalar and the curl of a vector or
\[
\vec{V}=\nabla\phi+\nabla\times \vec{A}.
\]
From this decomposition we can immediately conclude,
\begin{equation}
\label{eq:Laplaces_equation}
\nabla^2 \phi=0
\end{equation}
We see therefore that the scalar quantity $\phi$ in the medium obeys
Laplace's equation.
\subsection{Physical Interpretation of $\phi$}
To understand the physical origin of $\phi$ we derive its form in
the usual infinitesimal theory of elasticity. The advantage of the
infinitesimal theory is that an explicit form of the vector
$\vec{V}$ may be obtained. In the infinitesimal theory of
elasticity the strain components $u_{\mu\nu}$ are assumed to be
small quantities and therefore the quadratic terms in the strain
tensor are dropped and the strain tensor reduces
to\cite{ref:Landau_Lifshitz}
\[
\epsilon_{\mu\nu}=\frac{1}{2}\left(u_{\mu\nu} + u_{\nu\mu}\right)
\]
Using the above Lagrangian we obtain the explicit form
\[
V_\rho=\sum_\nu\frac{d}{dx_\nu}\left(\frac{\partial L}{\partial
u_{\rho \nu}}\right)=(2\mu+2\lambda)\frac{\partial \sigma}{\partial
x_\rho} + 2\mu \nabla^2 u_\rho=0,
\]
where $\sigma=u_{11}+u_{22}+u_{33}\equiv \nabla\cdot\vec{u}$.
Finally taking the divergence of $\vec{V}$ yields
\[
\nabla^2\sigma=0
\]
From this we see that the scalar in the infinitesimal theory is the
divergence of the strain field $\sigma=\nabla \cdot \vec{u}$. It is
an invariant with respect to change of coordinates and in general
varies from point to point in the medium. This exercise exhibits the
physical origin of $\phi$ which to lowest order in the strain
components is the divergence of the strain field.
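The identity used above can be checked symbolically. A small sympy sketch with an arbitrary (illustrative) smooth displacement field confirms that $\nabla\cdot\vec{V}=(2\lambda+4\mu)\nabla^2\sigma$, so that $\vec{V}=0$ indeed forces $\nabla^2\sigma=0$:
\begin{verbatim}
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')
lam, mu = sp.symbols('lambda mu', positive=True)
u = [sp.sin(x1) * x2, sp.cos(x2) * x3, sp.exp(x3) * x1]  # toy field
X = (x1, x2, x3)

sigma = sum(sp.diff(u[i], X[i]) for i in range(3))
lap = lambda f: sum(sp.diff(f, X[i], 2) for i in range(3))
V = [(2*mu + 2*lam) * sp.diff(sigma, X[r]) + 2*mu * lap(u[r])
     for r in range(3)]
divV = sum(sp.diff(V[r], X[r]) for r in range(3))
print(sp.simplify(divV - (2*lam + 4*mu) * lap(sigma)))   # -> 0
\end{verbatim}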
In this work, however, we will not make the infinitesimal approximation
and we will work with the scalar $\phi$ and not $\sigma$. In most of
what follows, the exact form of $\phi$ is not important. It is only
important that such a quantity exists and obeys Laplace's equation.
\subsection{Internal Coordinates}
The central results of this work will be given in sections
\ref{sec:internal_coordinates} and \ref{sec:fourier_transform},
where we will take one of our internal coordinates to be periodic
and we will Fourier transform all quantities in that coordinate. We
therefore need to translate the equations of motion $\nabla^2\phi=0$
from the fixed space coordinates to the internal coordinates. For
clarity, in the remainder of this text we change notation slightly
and write the internal coordinates not as $a_i$ but as $x_i'$ and
the fixed space coordinates will be unprimed and denoted $x_i$. Now
using $u_i=x_i'-x_i$ we can write
\begin{eqnarray}
\label{eq:coordinate_change} \frac{\partial}{\partial
x_i}&=&\sum_j\frac{\partial x'_j}{\partial
x_i}\frac{\partial}{\partial x'_j} \nonumber \\
&=& \sum_j\left(\frac{\partial
x_j}{\partial x_i}+\frac{\partial u_j}{\partial x_i}\right)\frac{\partial}{\partial x'_j}\nonumber\\
&=& \sum_j\left(\delta_{ij}+\frac{\partial u_j}{\partial
x_i}\right)\frac{\partial}{\partial x'_j}
\end{eqnarray}
Equation~(\ref{eq:coordinate_change}) relates derivatives in the
fixed space coordinates $x_i$ to derivatives in the material
coordinates $x'_i$. As mentioned earlier, in the standard treatment
of elastic solids the displacements $u_i$ as well as their
derivatives are assumed to be infinitesimal and so the second term
in Equation~(\ref{eq:coordinate_change}) is dropped and there is no
distinction made between the $x_i$ and the $x'_i$ coordinates. In
this paper we will keep the nonlinear terms in
Equation~(\ref{eq:coordinate_change}) when changing coordinates.
Hence we will make a distinction between the two sets of coordinates
and this will be pivotal in the derivations to follow.
We will now demonstrate that Laplace's equation
(\ref{eq:Laplaces_equation}) implies Dirac's equation.
\section{Cartan's Spinors}
\label{sec:Cartan}
The concept of Spinors was introduced by Eli
Cartan in 1913\cite{ref:Cartan}. In Cartan's original formulation
spinors were motivated by studying isotropic vectors which are
vectors of zero length. In three dimensions the equation of an
isotropic vector is
\begin{equation}
\label{eq:isotropic_vector}
x_1^2 + x_2^2 + x_3^2=0
\end{equation}
for complex quantities $x_i$. A closed form solution to this
equation is realized as
\begin{equation}
\label{eq:Cartan_spinor_solution}
\begin{array}{lccr}
{\displaystyle x_1 =\xi_0^2-\xi_1^2,\ } & {\displaystyle
x_2=i(\xi_0^2+\xi_1^2),}&\ \mathrm{and}\ & {\displaystyle
x_3=-2\xi_0\xi_1}
\end{array}
\end{equation}
where the two quantities $\xi_i$ are
\[
\begin{array}{lcr}
{\displaystyle\xi_0=\pm\sqrt{\frac{x_1-\imath x_2}{2}}}& \
\mathrm{and} \ & {\displaystyle \xi_1=\pm\sqrt{\frac{-x_1-\imath
x_2}{2}}}
\end{array}.
\]
That the two component object $\xi=(\xi_0,\xi_1)$ is a
spinor\cite{ref:Cartan} can be seen by considering a rotation on the
quantities $v_1=x_1-\imath x_2$, and $v_2=-x_1-\imath x_2$. If
$v_i$ is rotated by an angle $\alpha$,
\[v_i\rightarrow v_i \exp(\imath\alpha)
\]
then the spinor
component $\xi_0$ is rotated by $\alpha/2$. It is clear that the
spinor is not periodic in $2\pi$ but in $4\pi$. A quantity of this
type is a spinor and any equation of the form
(\ref{eq:isotropic_vector}) has a spinor solution.
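A quick numerical check of the solution~(\ref{eq:Cartan_spinor_solution}) is given below; because of the square-root branches, $x_3$ is recovered only up to sign, which is why absolute values are compared (a sketch with a randomly chosen isotropic vector):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
x1, x2 = rng.normal(size=2) + 1j * rng.normal(size=2)
x3 = 1j * np.sqrt(x1**2 + x2**2)        # enforce x1^2+x2^2+x3^2 = 0

xi0 = np.sqrt((x1 - 1j * x2) / 2)
xi1 = np.sqrt((-x1 - 1j * x2) / 2)

print(np.isclose(xi0**2 - xi1**2, x1))             # recovers x1
print(np.isclose(1j * (xi0**2 + xi1**2), x2))      # recovers x2
print(np.isclose(abs(2 * xi0 * xi1), abs(x3)))     # x3 up to sign
\end{verbatim}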
Laplace's equation
\[\left(\frac{\partial^2}{\partial x_1^2}+
\frac{\partial^2}{\partial x_2^2} + \frac{\partial^2}{\partial
x_3^2}\right)\phi=0
\]
can be viewed as an isotropic vector in the following way. The
components of the vector are the partial derivative operators
$\partial/\partial x_i$ acting on the quantity $\phi$. As long as
the partial derivatives are restricted to acting on the scalar field
$\phi$ it has a spinor solution given by
\begin{equation}
\label{eq:spinor0}
\hat{\xi}_0^2=\frac{1}{2}\left(\frac{\partial}{\partial
x_3}-i\frac{\partial}{\partial x_2}\right)=\frac{\partial}{\partial
z_0}
\end{equation}
and
\begin{equation}
\label{eq:spinor1}
\hat{\xi}_1^2=-\frac{1}{2}\left(\frac{\partial}{\partial
x_3}+i\frac{\partial}{\partial x_2}\right)=\frac{\partial}{\partial
z_1}
\end{equation}
where
\[
\begin{array}{lcr}
{\displaystyle z_0=x_3+ix_2}& \ \mathrm{and}\ & {\displaystyle
z_1=-x_3+ix_2}
\end{array}
\]and the ``hat'' notation indicates that the quantities
$\hat{\xi}$ are operators. The equations
\[
\hat{\xi}_0^2=\frac{\partial}{\partial z_0}
\]
and
\[
\hat{\xi}_1^2=\frac{\partial}{\partial z_1}.
\]
are equations of fractional derivatives of order $1/2$ denoted
$\hat{\xi}_0=D^{1/2}_{z_0}$ and $\hat{\xi}_1=D^{1/2}_{z_1}$.
Fractional derivatives have the property that\cite{ref:Miller_Ross}
\[
D^{1/2}_{z}D^{1/2}_{z}=\frac{\partial}{\partial z}
\]
and solutions for these fractional derivatives can be
written\cite{ref:Miller_Ross}
\begin{equation}
D^{\frac{1}{2}}_z \phi=\frac{1}{\Gamma
\left(\frac{1}{2}\right)}\frac{\partial}{\partial z}\int^z_0
(z-t)^{-\frac{1}{2}}\phi(t)dt
\end{equation}
The exact form for these fractional derivatives however, is not
important here. The important thing to note is that a solution to
Laplace's equation can be written in terms of spinors which are
fractional derivatives.
If we assume that the fractional derivatives $\hat{\xi}_0$ and
$\hat{\xi}_1$ commute then we also have
\begin{eqnarray*}
(\hat{\xi}_0\hat{\xi}_1)^2&=&\hat{\xi}_0\hat{\xi}_0\hat{\xi}_1\hat{\xi}_1 \nonumber \\
&=&\frac{\partial}{\partial z_0}\frac{\partial}{\partial z_1}\nonumber \\
&=&-\frac{1}{4}\left(\frac{\partial}{\partial x_3}-
\imath\frac{\partial}{\partial x_2}\right)
\left(\frac{\partial}{\partial x_3}+ \imath\frac{\partial}{\partial
x_2}\right)\nonumber \\
&=&-\frac{1}{4}\left(\frac{\partial ^2}{\partial x_2^2}+
\frac{\partial ^2}{\partial x_3^2}\right)\nonumber \\
&=&\frac{1}{4}\frac{\partial^2}{\partial x_1^2}\nonumber\\
\end{eqnarray*}
where the last step uses Laplace's equation, which (when these
operators act on $\phi$) gives $\partial^2/\partial
x_2^2+\partial^2/\partial x_3^2=-\partial^2/\partial x_1^2$.
Using this result combined with Equations~(\ref{eq:spinor0}) and
(\ref{eq:spinor1}) we may write for the components of our vector
\begin{equation}
\label{eq:derivative1_solution}
\frac{\partial}{\partial x_1}=
-2\hat{\xi}_0\hat{\xi}_1
\end{equation}
\begin{equation}
\label{eq:derivative2_solution} \frac{\partial}{\partial
x_2}=\imath(\hat{\xi}_0^2 + \hat{\xi}_1^2)
\end{equation}
and
\begin{equation}
\label{eq:derivative3_solution}
\frac{\partial}{\partial x_3}=\hat{\xi}_0^2 - \hat{\xi}_1^2.
\end{equation}
This result gives the explicit solution of our vector quantities
$\frac{\partial}{\partial x_i}$ in terms of the spinor quantities
$\xi_i$.
\subsection{Matrix Form}
It can be readily verified that our spinors satisfy the following
equations
\begin{eqnarray*}
\left[\hat{\xi}_0 \frac{\partial}{\partial x_1}+ \hat{\xi}_1
\left(\frac{\partial}{\partial x_3}-i\frac{\partial}{\partial
x_2}\right)\right]\phi=0\\
\left[\hat{\xi}_0\left(\frac{\partial}{\partial x_3} +
i\frac{\partial}{\partial
x_2}\right)-\hat{\xi}_1\frac{\partial}{\partial x_1}\right]\phi=0
\end{eqnarray*}
and in matrix form
\begin{equation}
\label{eq:dirac_matrix}
\left(
\begin{array}{lr}
{\displaystyle\frac{\partial}{\partial x_1}} &
{\displaystyle\frac{\partial}{\partial
x_3}-i\frac{\partial}{\partial
x_2}} \\[15pt]
{\displaystyle\frac{\partial}{\partial
x_3}+i\frac{\partial}{\partial
x_2}} & {\displaystyle -\frac{\partial}{\partial x_1}}
\end{array}
\right)
\left(\begin {array}{c}
{\displaystyle\hat{\xi}_0 }\\[20pt]
{\displaystyle\hat{\xi}_1}
\end{array}
\right) \phi=0
\end{equation}
The matrix
\[
X=\left(\begin{array}{lr}
{\displaystyle\frac{\partial}{\partial x_1}} & {\displaystyle\frac{\partial}{\partial x_3}-i\frac{\partial}{\partial
x_2}} \\[15pt]
{\displaystyle\frac{\partial}{\partial
x_3}+i\frac{\partial}{\partial
x_2}} & {\displaystyle -\frac{\partial}{\partial x_1}}
\end{array}
\right)
\]
is equal to the dot product of the vector $\partial_\mu\equiv\partial/\partial x_\mu$ with the Pauli spin matrices
\[
X=\frac{\partial}{\partial x_1}\gamma^1 + \frac{\partial}{\partial
x_2}\gamma^2 + \frac{\partial}{\partial x_3}\gamma^3
\]
where
\[
\begin{array}{ccc}
\gamma_1=\left(\begin{array}{ll}
1 & 0\\
0 & -1
\end{array}
\right),&
\gamma_2=\left(\begin{array}{ll}
0 & -i\\
i & 0
\end{array}
\right),&
\gamma_3=\left(\begin{array}{ll}
0 & 1\\
1 & 0
\end{array}
\right)
\end{array}
\] are the Pauli matrices.
So Equation~(\ref{eq:dirac_matrix}) can be written
\begin{equation}
\label{eq:dirac_unstrained}
\sum_{\mu=1}^3\partial_\mu\gamma^\mu\xi=0.
\end{equation}
where we have used the notation $\xi\equiv \hat{\xi}\phi$. This
equation has the form of Dirac's equation in 3 dimensions. It
describes a spin $1/2$ particle of zero mass that is free of
interactions.
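On a plane wave, the derivative operators in Equation~(\ref{eq:dirac_unstrained}) act as multiplication by the components of a wave vector, and the matrix operator then squares to the Laplacian. A quick numerical check of this Clifford-algebra property (illustrative):
\begin{verbatim}
import numpy as np

g1 = np.array([[1, 0], [0, -1]], dtype=complex)
g2 = np.array([[0, -1j], [1j, 0]])
g3 = np.array([[0, 1], [1, 0]], dtype=complex)

p = np.random.default_rng(0).normal(size=3)     # stand-in for d/dx_mu
X = p[0] * g1 + p[1] * g2 + p[2] * g3
print(np.allclose(X @ X, (p @ p) * np.eye(2)))  # X^2 = |p|^2 I -> True
\end{verbatim}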
\subsection{Relation to the Dirac Decomposition}
The fact that Laplace's equation and Dirac's equation are related is
not new. However, the decomposition used here is not the same as
that used by Dirac. In the usual method, starting with the Dirac
equation $(i\gamma^\mu\partial_\mu)\Psi=0$ and operating with
$-\imath\gamma^\mu\partial_\mu$ yields Laplace's equation for each
component of the spinor field. In other words this method results
in not one Laplace's equation but several (one for each component of
the spinor). Conversely if one starts with Laplace's equation and
tries to recover Dirac's equation one must start with $2$
independent scalars ( $4$ in the usual $4$ dimensional case) in
order to derive the two component spinor equation
(\ref{eq:dirac_unstrained}).
What has been demonstrated in the preceding sections is that
starting with only one scalar quantity satisfying Laplace's equation
Dirac's equation for a two component spinor may be derived.
Furthermore any medium (such as an elastic solid) that has a single
scalar that satisfies Laplace's equation must have a spinor that
satisfies Dirac's equation and such a derivation necessitates the
use of fractional derivatives.
The form of Equation~(\ref{eq:dirac_unstrained}) is relevant for a
massless, non-interacting spin 1/2 particle. We will now
demonstrate that if one of our internal coordinates is taken to be
periodic a mass term as well as gravitational and electromagnetic
interaction terms appear in Dirac's equation.
\section{Transformation to Internal Coordinates}
\label{sec:internal_coordinates} In section
\ref{sec:fourier_transform} we will take the $x_3^\prime$ coordinate
to be periodic and we will derive equations for the Fourier
components of our fields. Since the elastic solid is assumed to be
periodic in the internal coordinates we need to translate our
equations of motion from fixed space coordinates to internal
coordinates. Using Equation~(\ref{eq:coordinate_change}) we can
rewrite Equation~(\ref{eq:dirac_unstrained}) as
\begin{equation}
\label{eq:dirac_before_FT}
\sum_{\mu=1}^3\gamma^\mu\left(\partial_\mu'+\sum_\nu\frac{\partial
u_\nu}{\partial x_\mu} \partial_\nu'\right)\xi=0
\end{equation}
or
\[
\sum_{\mu=1}^3\gamma'^\mu\partial'_\mu\xi=0
\]
where $\partial'_\mu=\partial/\partial x'_\mu$ and $\gamma'^\mu$ is
given by
\begin{equation}
\label{eq:modified_gamma_matrices}
\gamma'^\mu=\gamma^\mu+\sum_\alpha\frac{\partial u_\mu}{\partial
x_\alpha}\gamma^\alpha.
\end{equation}
The anticommutator of these matrices is
\begin{eqnarray*}
\{\gamma'^\mu,\gamma'^\nu\}&=&
\{\gamma^\mu+\sum_\alpha\frac{\partial u_\mu}{\partial
x_\alpha}\gamma^\alpha,\gamma^\nu+\sum_\beta\frac{\partial
u_\nu}{\partial x_\beta}\gamma^\beta\}\\
&=&\{\gamma^\mu,\gamma^\nu\}+\sum_\beta
u_{\nu\beta}\{\gamma^\mu,\gamma^\beta\} + \sum_\alpha
u_{\mu\alpha}\{\gamma^\alpha,\gamma^\nu\}+
\sum_{\alpha\beta}u_{\mu\alpha}u_{\nu\beta}\{
\gamma^\alpha,\gamma^\beta\}\\
&=&\delta_{\mu\nu}+\sum_\beta u_{\nu\beta}\delta_{\mu\beta}+
\sum_\alpha u_{\mu\alpha}\delta_{\alpha\nu}+ \sum_\alpha\sum_\beta
u_{\mu\alpha}u_{\nu\beta} \delta_{\alpha\beta}\\
&=&\delta_{\mu\nu}+u_{\mu\nu}+u_{\nu\mu}+\sum_\alpha
u_{\mu\alpha}u_{\nu\alpha}\\
&\equiv& g^{\mu\nu}
\end{eqnarray*}
This shows that the gamma matrices have the form of the usual Dirac
matrices in a curved space\cite{ref:Brill_Wheeler}.
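This relation is easy to verify numerically; in the sketch below the Pauli anticommutator is halved to match the normalization $\{\gamma^\mu,\gamma^\nu\}=\delta_{\mu\nu}$ used here, and the displacement gradients are random illustrative numbers:
\begin{verbatim}
import numpy as np

g = [np.array([[1, 0], [0, -1]], dtype=complex),
     np.array([[0, -1j], [1j, 0]]),
     np.array([[0, 1], [1, 0]], dtype=complex)]

U = 0.2 * np.random.default_rng(0).normal(size=(3, 3))  # U[m,a]=du_m/dx_a
gp = [g[m] + sum(U[m, a] * g[a] for a in range(3)) for m in range(3)]

anti = lambda A, B: 0.5 * (A @ B + B @ A)   # halved anticommutator
G = np.array([[anti(gp[m], gp[n])[0, 0].real for n in range(3)]
              for m in range(3)])
print(np.allclose(G, np.eye(3) + U + U.T + U @ U.T))    # -> True
\end{verbatim}
To further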
develop the form of Equation~(\ref{eq:dirac_before_FT}) we have to
transform the spinor properties of $\xi$. As currently written
$\xi$ is a spinor with respect to the $x_i$ coordinates not the
$x'_i$ coordinates. To transform its spinor properties we use a
similarity transformation and write $\xi=S\tilde{\xi}$ where $S$ is
a similarity transformation that takes our spinor in $x_\mu$ to a
spinor in $x'_\mu$. We will not attempt to give an explicit form
for $S$. We simply assume (similar to
reference[\cite{ref:Brill_Wheeler}]) that this transformation can be
effected by a real similarity transformation.
We then have
\[
\partial'_\mu\xi=(\partial'_\mu S)\tilde{\xi}+S\partial'_\mu\tilde{\xi}.
\]
Equation~(\ref{eq:dirac_before_FT}) then becomes
\[
\begin{array}{lcl}
0&=&\gamma'_\mu
[S\partial'_\mu\tilde{\xi}+(\partial'_\mu S)\tilde{\xi}]\\
\mbox{}
&=&\gamma'_\mu S[\partial'_\mu\tilde{\xi}+S^{-1}(\partial'_\mu
S)\tilde{\xi}]\\
\mbox{} &=&S\left(S^{-1}\gamma'_\mu
S\right)[\partial'_\mu\tilde{\xi}+S^{-1}(\partial'_\mu S)\tilde{\xi}]
\end{array}
\]
Multiplying on the left by $S^{-1}$ and using $(\partial'_\mu S^{-1})
S=-S^{-1}(\partial'_\mu S)$, this can finally be written as
\begin{equation}
\label{eq:dirac_curved_space} \tilde{\gamma}_\mu
[\partial'_\mu-\Gamma_\mu]\tilde{\xi}=0
\end{equation}
where $\Gamma_\mu=(\partial'_\mu S^{-1})S$ and
$\tilde{\gamma}_\mu=S^{-1}\gamma'_\mu S$.
Equation~(\ref{eq:dirac_curved_space}) has the form of the
Einstein-Dirac equation in 3 dimensions for a free particle of zero
mass. The quantity $\partial'_\mu-\Gamma_\mu$ is the covariant
derivative for an object with spin in a curved
space\cite{ref:Brill_Wheeler}. In order to make this identification,
the field $\Gamma_\mu$ must satisfy the additional
equation\cite{ref:Brill_Wheeler,ref:Brill_Cohen}
\begin{equation}
\label{eq:auxiliary_equation}
\frac{\partial
\tilde{\gamma}^\mu}{\partial
x'^\nu}+\tilde{\gamma}^\beta\Gamma^\mu_{\beta\nu}-\Gamma_\nu\tilde{\gamma}^\mu+\tilde{\gamma}^\mu\Gamma_\nu=0
\end{equation}
where $\Gamma^\mu_{\beta\nu}$ is the usual Christoffel symbol.
We will now show that this equation does hold for this form of
$\Gamma$.
\subsection{Spin Connection}
To show that Equation~(\ref{eq:auxiliary_equation}) holds we
consider the equation $\partial_\nu\vec{\gamma}=0$ where the vector
$\vec{\gamma}$ is
\[
\vec{\gamma}=\sum_{\mu=1}^3\gamma^\mu \vec{e_\mu}
\]
and $\vec{e_\mu}$ is a unit vector in the $x_\mu$ direction. Since
$\vec{\gamma}$ is a vector, the quantity
$\partial_\nu\vec{\gamma}=0$ is a tensor equation. Therefore, in
the primed coordinate system we can immediately write
\[
\sum_{\mu=1}^3 \left(\partial'_\nu
\gamma'^\mu+\gamma'^\beta\Gamma^\mu_{\beta\nu}\right)\vec{e_\mu}=0
\]
where $\gamma'^\mu=\gamma^\mu+\sum_\alpha\frac{\partial
u_\mu}{\partial x_\alpha}\gamma^\alpha$ is the expression of
$\gamma^\mu$ in the primed coordinate system. Using
$\gamma'^\mu=S\tilde{\gamma}^\mu S^{-1}$, we have
\[
\sum_{\mu=1}^3 \left(\partial'_\nu (S\tilde{\gamma}^\mu
S^{-1})+(S\tilde{\gamma}^\beta
S^{-1})\Gamma^\mu_{\beta\nu}\right)\vec{e_\mu} =0
\]
or
\[\sum_{\mu=1}^3 \left((\partial'_\nu
S)\tilde{\gamma}^\mu S^{-1}+ S(\partial'_\nu \tilde{\gamma}^\mu) S^{-1}+ S
\tilde{\gamma}^\mu (\partial'_\nu S^{-1})+(S\tilde{\gamma}^\beta
S^{-1})\Gamma^\mu_{\beta\nu}\right)\vec{e_\mu}=0.
\]
Multiplying by $S^{-1}$ on the left and $S$ on the right yields
\[
\sum_{\mu=1}^3 \left(S^{-1}(\partial'_\nu S)\tilde{\gamma}^\mu +
(\partial'_\nu \tilde{\gamma}^\mu) + \tilde{\gamma}^\mu (\partial'_\nu
S^{-1})S+\tilde{\gamma}^\beta \Gamma^\mu_{\beta\nu}\right)\vec{e_\mu} =0
\]
Finally, using $\Gamma_\mu=(\partial_\mu S^{-1})S$ and again noting
that $\partial_\mu S^{-1}S=-S^{-1}\partial_\mu S$ we have,
\[
\tilde{\gamma}^\mu\Gamma_\nu-\Gamma_\nu\tilde{\gamma}^\mu +\left(\partial'_\nu
\tilde{\gamma}^\mu +\tilde{\gamma}^\beta \Gamma^\mu_{\beta\nu}\right)=0
\]
We have just demonstrated that in the internal coordinates, the
equations of motion of an elastic medium have the same form as the
free-field Einstein-Dirac equation for a massless particle in three
dimensions.
\subsection{Physical Content}
Thus far all of the transformations that have been obtained are
``trivial'' in the sense that they result solely from changing
coordinates from the unprimed coordinates $x_\mu$ to the primed
coordinates $x_\mu'$. Changes of coordinates of course do not
result in any new physical content. In particular the metric
derived in Equation~(\ref{eq:metric}) does not lead to a curved
space. The Riemann curvature tensor calculated from
Equation~(\ref{eq:metric}) is identically zero. Likewise the spin
connection $\Gamma_\mu$ is due solely to a gauge transformation
$\xi\rightarrow S\xi'$ and as such contains no physical content
since it can be removed by transforming $\xi'\rightarrow S^{-1}\xi$.
What we will demonstrate in the following sections is that for a
system where one coordinate is periodic, the resulting $2$
dimensional quantities are NOT trivial. In other words the metric
that determines the dynamics of the Fourier components of $\xi$ does
in fact lead to a curved space and the spin connection cannot be
removed by a gauge transformation. Furthermore the introduction of
the Fourier components will generate extra terms in
Equation~(\ref{eq:dirac_before_FT}) that imply a series of equations
relevant for particles with mass coupled to fields that can be
associated with electromagnetism. We will show that in the low
energy approximation (ie a system in which only the lowest few modes
are present) the equations of motion are identical in form to
Equation~(\ref{eq:full_dirac}).
\section{Interacting particles with mass}
\label{sec:fourier_transform} In this section we again consider a
three dimensional elastic solid but we take the third internal
dimension to be compact with the topology of a circle. All variables
then become periodic functions of $x'_3$ and can be Fourier
transformed.
In preparation for Fourier transforming we isolate the terms
involving $x'_3$ and rewrite Equation~(\ref{eq:dirac_before_FT}) as,
\begin{equation}
\label{eq:dirac_curved_space_separated} \sum_{\mu=1}^2\gamma^\mu
\left(\partial_\mu'+\sum_{\nu=1}^2 \frac{\partial u_\nu}{\partial
x_\mu} \partial_\nu' + \frac{\partial u_3}{\partial x_\mu}
\partial_3' \right)\xi
+ \gamma^3\left(\partial_3'+\sum_{\nu=1}^2 \frac{\partial
u_\nu}{\partial x_3} \partial_\nu' + \frac{\partial u_3}{\partial
x_3}
\partial_3' \right)\xi=0
\end{equation}
We first transform the partial derivatives of the $u_\nu$ in Equation
(\ref{eq:dirac_curved_space_separated}) to obtain
\[
u_{\nu\mu}\equiv\frac{\partial u_\nu}{\partial
x_\mu}=\sum_ku_{\nu\mu,k}e^{ikx_3'}
\]
where $u_{\nu\mu,k}$ is the $k^{th}$ Fourier mode of $\partial
u_\nu/\partial x_\mu$ and $k=2\pi i/a$ with $a$ the length of the
circle formed by the elastic solid in the $x_3'$ direction and $i$
is an integer. Equation~(\ref{eq:dirac_curved_space_separated}) now
becomes,
\begin{eqnarray*}
\lefteqn{\sum_k
e^{ikx_3'}\left[\sum_{\mu=1}^2\gamma^\mu\left(\partial_\mu'\delta_{k,0}+
\sum_{\nu=1}^2
u_{\nu\mu,k} \partial_\nu' +
u_{3\mu,k}\partial_3'
\right)\xi \right.}\hspace{1.5in}\\
& &
\mbox{}+\left.\gamma^3\left(\partial_3'\delta_{k,0}+\sum_{\nu=1}^2
u_{\nu 3,k} \partial_\nu' +
u_{33,k}\partial_3'
\right)\xi\right]=0
\end{eqnarray*}
Next we transform the spinor (noting that, as a spinor, it returns to
itself only after a $4\pi$ rotation and is therefore periodic over
$2a$ in $x_3'$),
\[
\xi=\sum_q \xi_{q/2} e^{i\frac{q}{2}x_3'}
\] with $q=2\pi j/a$ and $j$ an integer.
This yields,
\begin{eqnarray*}
\lefteqn{\sum_k\sum_q
e^{ix_3'(k+q/2)}\left[\sum_{\mu=1}^2\gamma^\mu\left(\partial_\mu'\delta_{k,0}+\sum_{\nu=1}^2
u_{\nu\mu,k} \partial_\nu' +
i(q/2) u_{3\mu,k}
\right)\xi_{q/2}\right.}\hspace{1.5in}\\
& & \left.\mbox{}+
\gamma^3\left(i(q/2)\delta_{k,0}+\sum_{\nu=1}^2
u_{\nu 3,k} \partial_\nu' + i(q/2)
u_{33,k}
\right)\xi_{q/2}\right]=0.
\end{eqnarray*}
This equation holds independently for each distinct value of
$k+q/2=m/2$, i.e. $2k+q=m$, where $k$, $q$ and $m$ label the modes in
units of $2\pi/a$ and $m$ is an integer. Writing $q=m-2k$ yields
finally,
\begin{eqnarray}
\label{eq:dirac_eq_all_modes} \lefteqn{\sum_k
\left[\sum_{\mu=1}^2\gamma^\mu\left(\partial_\mu'\delta_{k,0}+\sum_{\nu=1}^2
u_{\nu\mu,k} \partial_\nu' +
i\frac{(m-2k)}{2} u_{3\mu,k}
\right)\xi_{(m-2k)/2}\right.}\hspace{1.5in}\\
& & \left.\mbox{}+
\gamma^3\left(i\frac{(m-2k)}{2}\delta_{k,0}+\sum_{\nu=1}^2
u_{\nu 3,k} \partial_\nu' + i\frac{(m-2k)}{2}
u_{33,k}
\right)\xi_{(m-2k)/2}\right]=0 \nonumber
\end{eqnarray}
This is an infinite set of equations, one for each integer $m$,
describing the dynamics of the Fourier modes $\xi_{q/2}$. It encodes
the dynamics of our elastic solid and contains the same information
as Laplace's equation.
So far no approximations have been made. In the next section we will
demonstrate that if only the lowest modes are present, this reduces
to an equation that is identical in form to
Equation~(\ref{eq:full_dirac}).
\subsection{Spectrum of Lowest modes}
\label{sec:lowest_modes}
We now consider a theory in which only the lowest few modes in
Equation~(\ref{eq:dirac_eq_all_modes}) are present. Keeping only the
modes $\xi_0$ and $\xi_{\pm1/2}$, we obtain the following three
equations,
\begin{equation}
\label{eq:mode_0}
\sum_{\mu=1}^2\gamma^\mu\left(\partial'_\mu+\sum_{\nu=1}^2u_{\nu\mu,0}\partial_\nu'\right)\xi_0
+ \sum_{\nu=1}^2u_{\nu3,0}\partial'_\nu\gamma^3\xi_0 =0
\end{equation}
\begin{eqnarray}
\label{eq:mode_1}
\lefteqn{\sum_{\mu=1}^2\gamma^\mu\left(\partial'_\mu+\sum_{\nu=1}^2u_{\nu\mu,0}\partial_\nu'+im_{1/2}u_{3\mu,0}\right)\xi_{1/2}
+ \gamma^3\imath m_{1/2}(1+u_{33,0})\xi_{1/2}}\hspace{1.0in} \nonumber \\
& & \hbox{}+ \gamma^3\sum_{\nu=1}^2u_{\nu3,0}\partial'_\nu\xi_{1/2} +
\gamma^3\sum_{\nu=1}^2 u_{\nu 3,1}\partial'_\nu \xi_{-1/2}=0
\end{eqnarray}
\begin{eqnarray}
\label{eq:mode_-1}
\lefteqn{\sum_{\mu=1}^2\gamma^\mu\left(\partial'_\mu+\sum_{\nu=1}^2u_{\nu\mu,0}\partial_\nu'+\imath
m_{-1/2}u_{3\mu,0}\right)\xi_{-1/2}
+ \gamma^3\imath m_{-1/2}(1+u_{33,0})\xi_{-1/2}}\hspace{1.0in}\nonumber\\
& & \hbox{} +
\gamma^3\sum_{\nu=1}^2u_{\nu3,0}\partial'_\nu\xi_{-1/2} +
\gamma^3\sum_{\nu=1}^2 u_{\nu 3,-1}\partial'_\nu \xi_{1/2}=0
\end{eqnarray}
where $m_i=2\pi i/a$ denotes the Fourier mode with $i$ a
half-integer. These equations describe the dynamics of three fields:
$\xi_0$ and the coupled fields $\xi_{1/2}$ and $\xi_{-1/2}$.
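Explicitly, with this convention the lowest nonzero modes carry
\[
m_{\pm 1/2}=\frac{2\pi(\pm 1/2)}{a}=\pm\frac{\pi}{a},
\]
so the mass scale that emerges below is set by the inverse of the
length $a$ of the compact direction.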
The first equation (the $\xi_0$ mode) describes the dynamics of a
massless, free particle. We will not attempt to identify this mode
with any physical particle, but simply note that in this
approximation its equation is completely uncoupled from the
$\xi_{\pm1/2}$ modes; its dynamics are therefore independent and
have no effect on the other modes.
We now examine the equations describing $\xi_{1/2}$ and
$\xi_{-1/2}$. Noting that for real fields
$u_{\mu\nu,k}=u^\ast_{\mu\nu,-k}$, the $m=\pm 1/2$ modes can be
combined into the single equation
\begin{eqnarray}
\label{eq:Psi}
\lefteqn{\sum_{\mu=1}^2\gamma^\mu\left(\partial'_\mu+\sum_{\nu=1}^2u_{\nu\mu,0}\partial_\nu'+im_{1/2}u_{3\mu,0}\right)\Psi
+ \gamma^3\imath m_{1/2}(1+u_{33,0})\Psi}\hspace{1.0in} \nonumber \\
& & \hbox{}+ \gamma^3\sum_{\nu=1}^2u_{\nu3,0}\partial'_\nu\Psi +
\gamma^3\sum_{\nu=1}^2 u_{\nu 3,1}\partial'_\nu \Psi^\ast=0,
\end{eqnarray}
where $\Psi=\xi_{1/2}+\xi^\ast_{-1/2}$. To put this equation into a
more recognizable form we multiply Equation~(\ref{eq:Psi}) by
$\gamma^3$ from the left and define
\begin{equation}
\label{eq:dirac_matrices_FT}
\gamma'^\mu=\gamma^3\gamma^\mu+\sum_{\beta=1}^2\gamma^3\gamma^\beta
u_{\mu\beta,0}.
\end{equation}
These matrices, with $\mu=1,2$ and $\nu=1,2$, satisfy the
anticommutation relations
\[
\left\{\gamma'^\mu,\gamma'^\nu\right\}= \delta_{\mu\nu} +
(u_{\mu\nu,0}+u_{\nu\mu,0})+\sum_{\beta=1}^2u_{\mu\beta,0}u_{\nu\beta,0}.
\]
If we insist that our new matrices satisfy
$\left\{\gamma'^\mu,\gamma'^\nu\right\}=g^{\mu\nu}$ then we are led
to define
\begin{equation}
\label{eq:commutation_relations_FT} g^{\mu\nu}\equiv\delta_{\mu\nu}
+
(u_{\mu\nu,0}+u_{\nu\mu,0})+\sum_{\beta=1}^2u_{\mu\beta,0}u_{\nu\beta,0}.
\end{equation}
This is the metric for our two-dimensional subspace, and it does not
have the form of a simple coordinate transformation of the
flat-space metric like that of Section~\ref{sec:elasticity_theory}.
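For small deformations (the regime invoked below, where terms of
order $u^2_{\mu\nu}$ are neglected), this metric reduces to
\[
g^{\mu\nu}\simeq\delta_{\mu\nu}+u_{\mu\nu,0}+u_{\nu\mu,0},
\]
so the zero mode of the strain plays the role of the metric
perturbation about flat space.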
Equation~(\ref{eq:Psi}) can now be rewritten as
\begin{eqnarray}
\label{eq:dirac_recognizable_form}
\lefteqn{\sum_{\mu=1}^2\gamma'^\mu\left(\partial'_\mu+\imath
m_{1/2}u_{3\mu,0}\right)\Psi -\imath
m_{1/2}u_{3\mu,0}\sum_{\beta=1}^2\gamma^3\gamma^\beta
u_{\mu\beta,0}\Psi+\imath m_{1/2}(1+u_{33,0})\Psi}\hspace{2.5in} \nonumber \\
& & \hbox{}+ \sum_{\nu=1}^2u_{\nu3,0}\partial'_\nu\Psi
+\sum_{\nu=1}^2 u_{\nu 3,1}\partial'_\nu \Psi^\ast=0.
\end{eqnarray}
As we did for Equation~(\ref{eq:dirac_curved_space_separated}), in
going from the $x$ to the $x'$ coordinates we assume that the spinor
may be transformed using a real similarity transformation, writing
$\Psi=S\tilde{\Psi}$. Transforming $\Psi$ in this way and
multiplying on the left by $S^{-1}$ gives the following form for the
$m=\pm1/2$ modes,
\begin{eqnarray}
\label{eq:dirac_final_form} \lefteqn{\sum_{\mu=1}^2
S^{-1}\gamma'^\mu S \left(\partial'_\mu+\imath m_{1/2}u_{3\mu,0}+
S^{-1}\partial'_\mu S\right)\tilde{\Psi} -\imath
m_{1/2}u_{3\mu,0}\sum_{\beta=1}^2 S^{-1}\gamma^\beta S
u_{\mu\beta,0}\tilde{\Psi}} \hspace{.1in}\\
& & \mbox{}+ \imath
m_{1/2}(1+u_{33,0})\tilde{\Psi}+ \sum_{\nu=1}^2
u_{\nu3,0}(\partial'_\nu +S^{-1}\partial'_\nu S)\tilde{\Psi}
+\sum_{\nu=1}^2 u_{\nu 3,1}(\partial'_\nu+S^{-1}\partial'_\nu S)
\Psi^\ast=0\nonumber
\end{eqnarray}
We now examine each quantity in
Equation~(\ref{eq:dirac_final_form}). As before, we identify
$\tilde{\gamma}^\mu=S^{-1}\gamma'^\mu S$ with the transformed gamma
matrix and $\Gamma_\mu=(\partial'_\mu S^{-1})S$ with the spin
connection. We would also like to identify the quantity
$A_\mu=m_{1/2}u_{3\mu,0}$ in the first term with the electromagnetic
potential, and the third term in
Equation~(\ref{eq:dirac_final_form}) as a mass term with
$m=m_{1/2}(1+u_{33,0})$, which implies that the field $u_{33,0}$
provides a mass for our $\Psi$ particle. Let us further assume that
the quantities $u_{\mu\nu}$ are small compared to unity, so that the
second term may be neglected as being of order $u^2_{\mu\nu}$ (i.e.,
we are now assuming that our medium undergoes only small
deformations). With these identifications we can write
Equation~(\ref{eq:dirac_final_form}) in the final form
\begin{eqnarray}
\label{eq:dirac_final_form2} \lefteqn{\sum_{\mu=1}^2
\tilde{\gamma}^\mu \left(\imath\partial'_\mu+ \imath\Gamma_\mu-
A_\mu\right)\tilde{\Psi} -
m\tilde{\Psi} } \hspace{1.5in}\\
& & \mbox{}+ \imath\sum_{\nu=1}^2u_{\nu3,0}(\partial'_\nu-\Gamma_\nu)\tilde{\Psi}
+
\imath\sum_{\nu=1}^2 u_{\nu 3,1}(\partial'_\nu-\Gamma_\nu)
\Psi^\ast=0\nonumber.
\end{eqnarray} Notice the formal similarity of this equation to
Equation~(\ref{eq:full_dirac}). The first two terms have exactly
the form of Dirac's equation for a spin $1/2$ particle of mass $m$
in curved space interacting with the electromagnetic vector
potential $A_\mu$. Note that the mass term and the electromagnetic
potential were not added by hand but emerged naturally from the
formalism. The nature of the last two terms in
Equation~(\ref{eq:dirac_final_form2}) is unclear: they do not appear
in the usual statement of Dirac's equation, and their implications
remain unknown.
Equation~(\ref{eq:dirac_final_form2}) is the central result of this
work. We have not yet shown that the dynamics of the fields $A_\mu$
are consistent with their identification as the electromagnetic
vector potential. To truly claim that the quantity $u_{\mu 3,0}$ is
the electromagnetic potential, it must be shown to satisfy Maxwell's
equations. We believe, however, that the formal correspondence
between Equation~(\ref{eq:dirac_final_form2}) and Dirac's equation
is significant in its own right, and we will not, in this paper,
pursue the question of whether Maxwell's equations or the Einstein
field equations are satisfied. Before concluding, we note that
although our derivation assumed that we were working in
three-dimensional space, the formalism extends to any number of
dimensions\cite{ref:Cartan,ref:Brauer_Weyl}. The major difference is
that in the three-dimensional case we were able to find an explicit
solution for the components of a spinor in terms of the components
of the vector $\partial_\mu$. An explicit solution might not exist
in general. Nevertheless, it can be shown\cite{ref:Cartan} that the
quadratic form in Laplace's equation implies the existence of a
multicomponent spinor $\xi$ satisfying a Dirac-like equation in any
dimension.
\section{Conclusions}
We have taken a model of an elastic medium and derived an equation
of motion that has the same form as Dirac's equation in the presence
of electromagnetism and gravity. We derived our equation by using
the formalism of Cartan to reduce the quadratic form of Laplace's
equation to the linear form of Dirac's equation. We further assumed
that one coordinate was compact and upon Fourier transforming this
coordinate we obtained, in a natural way, a mass term and an
electromagnetic interaction term in the equations of motion.
\section{Introduction}
\PARstart{R}{ate-compatible} codes were introduced for the first
time in \cite{Hag88}, where the concept of punctured codes was
extended to the generation of a family of rate-compatible
punctured convolutional (RCPC) codes. The rate-compatibility
restriction requires that the rates are organized in a hierarchy,
where all code bits of a high rate punctured code are used by all
the lower rate codes. Based on RCPC codes, Hagenauer proposed an
ARQ strategy which provides a flexible way to adapt the code rate
to the error-protection requirements or to varying channel
conditions. Furthermore, rate-compatible codes can be used to
provide unequal error protection (UEP). The concept of
rate-compatible codes has then been extended to parallel and
serial concatenated convolutional codes \cite{Ber93,Bar95,Kim01}.
Recently, a new class of hybrid serial concatenated codes was
proposed in \cite{Cha02} with bit error performance between that
of PCCC and SCCC. A similar concept has been presented in
\cite{Bab04} to obtain well performing rate-compatible SCCC
families. To obtain rate-compatible SCCCs, the puncturing is
limited to inner coded bits. However, in contrast to standard
SCCC, codes in \cite{Bab04} are obtained puncturing both inner
parity bits and systematic bits, thereby obtaining rates beyond
the outer code rate. With this assumption, puncturing is split
into two puncturing patterns, for both systematic and parity bits.
This particular code structure offers very good performance over a
range of rates, including very high ones, and performs better than
standard SCCC.
The optimization problem of this particular code structure
consists in optimizing these two puncturing patterns and finding
the optimal proportion of inner code systematic and parity bits to
be punctured to obtain a given rate. Some design criteria to
obtain good rate-compatible SCCC families are discussed in
\cite{Bab04}. However, the considerations in \cite{Bab04} are
limited to \textit{heuristic} design guidelines, without supporting
theoretical analysis. Thus, a deeper and more formal insight into
the performance of this new class of SCCCs is required, in order to
provide suitable design guidelines aimed at code optimization.
In this paper, we provide a performance analysis of this new class
of concatenated codes. By properly redrawing the SCCC as a
parallel concatenation of two codes, we derive the analytical
upper bounds to the error probability using the concept of
\textit{uniform interleaver}. We then propose suitable design
criteria for the inner code puncturing patterns, and to optimize
the proportion of inner systematic and parity bits to be deleted.
We show that the optimal percentage of bits to be punctured depends
on the SNR region of interest. In particular, it is shown that to
improve the performance in the error-floor region it is advantageous
to increase the proportion of surviving inner code parity bits, as
long as a sufficient number of systematic bits is kept. Moreover,
the optimal puncturing of the inner code systematic bits depends on
the outer encoder and, thus, must be interleaver dependent. Finally,
based on these considerations, we give design guidelines to obtain
well-performing SCCC families.
The paper is organized as follows. In the next section, we
describe the new class of concatenated codes addressed in the
paper. In Section III, the upper bounds to the residual bit error
probability and frame error probability of this new class of codes
are derived and design criteria are outlined. Design guidelines to
obtain well-performing SCCC families are discussed in Section IV.
In Section V, simulation results are compared with the analytical
upper bounds. Finally, in Section VI we draw some conclusions.
\section{A New Class of Serial Concatenated Convolutional Codes}
Throughout the paper we shall refer to the encoder scheme shown in
Fig.~\ref{Fig:SCC_Enc1}.
We consider the serial concatenation of two systematic recursive
convolutional encoders. To obtain high rates both encoders are
punctured. However, in contrast to standard SCCC where high rates
are obtained by concatenating an extensively punctured outer
encoder with an inner encoder of rate $R_c^{i}\leqslant 1$ such
that the rate of the SCCC, $R_{\rm SCCC}$, is at most equal
to the rate of the outer encoder ($R_{\rm SCCC}\leqslant
R_c^{o}$), the inner encoder in Fig.~\ref{Fig:SCC_Enc1} can be
punctured beyond the unitary rate, i.e., the overall code rate
$R_{\rm SCCC}$ can be greater than the outer code rate $R_c^{o}$.
Moreover, as made evident in the figure, puncturing is not
directly applied to the inner code sequence but split into two
different puncturings, in correspondence to inner code systematic
bits and inner code parity bits ($P_i^s$ and $P_i^p$,
respectively). Assuming an inner mother code of rate $1/n$, the
rate of the resulting SCCC is given by
\begin{equation}
R_{\rm SCCC}=R_c^{o'}R_c^{i}=R_c^{o'}\frac{1}{\rho_s+(n-1)\rho_p}
\label{eq:Rsccc}
\end{equation}
where $R_c^{o'}$ is the outer code rate after applying the fixed
puncturing pattern $P_o$, and $\rho_s$ ($\rho_p$) is the
systematic permeability (parity permeability) rate, defined as the
proportion of inner code systematic bits (parity bits) which are
not punctured. Given a certain desired $R_{\rm SCCC}$, $\rho_s$
and $\rho_p$ are related by
\begin{equation}
\rho_s=\frac{R_c^{o'}}{R_{\rm SCCC}}-(n-1)\rho_p . \label{eq:rhou}
\end{equation}
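As a quick numerical illustration of (\ref{eq:Rsccc}) and
(\ref{eq:rhou}), the following sketch computes the systematic
permeability required for a target rate (the function name and the
worked values are ours; the example reproduces the rate-2/3 case
discussed in Section V):
\begin{verbatim}
def required_rho_s(R_o_prime, R_sccc, n, rho_p):
    # rho_s = R_c^{o'} / R_SCCC - (n - 1) * rho_p
    return R_o_prime / R_sccc - (n - 1) * rho_p

# Rate-1/n inner mother code with n = 2, outer rate 2/3,
# target overall rate 2/3: rho_s = 1 - rho_p.
print(required_rho_s(2/3, 2/3, 2, 20/300))   # 280/300
\end{verbatim}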
This particular code structure offers superior performance to that
of standard SCCC, especially at high rates. Notice that, for high
rates, the extensive puncturing of the outer code leads to a poor
code in terms of free distance, and thus to a higher error floor. On
the contrary, the code structure discussed here preserves the
interleaver gain of the low-rate code even at very high rates, since
the heavy puncturing is moved to the inner encoder. Moreover, it is
well suited for rate-compatible schemes.
It is clear that the performance of the overall SCCC code depends
on puncturing patterns $P_o$, $P_i^s$ and $P_i^p$, and,
subsequently, on the permeability rates $\rho_s$ and $\rho_p$,
which should be properly optimized. In \cite{Bab04}, some
\textit{heuristic} design guidelines were given to select $\rho_s$
and $\rho_p$, leading to well-performing families of
rate-compatible SCCCs. However, the work in \cite{Bab04} does not
provide a formal analysis clarifying the behavior of this code
structure, nor a unified framework for properly selecting $\rho_s$
and $\rho_p$. The aim of this paper is to derive design guidelines
that clarify the relevant aspects of this new code structure and
provide the clues for its optimization.
The design of concatenated codes with interleavers involves the
choice of the interleaver and the constituent encoders. The joint
optimization, however, seems to lead to prohibitive complexity
\begin{figure}[t]
\centerline{\psfig{figure=Sccc-01.eps,width=\the\hsize}}
\caption{Block diagram of the Serial concatenated code scheme.}
\label{Fig:SCC_Enc1}
\end{figure}
problems. In \cite{Ben96} Benedetto and Montorsi proposed a method
to evaluate the error probability of parallel concatenated
convolutional codes (PCCC) independently from the interleaver
used. The method consists in a decoupled design, in which one
first designs the constituent encoders, and then tailors the
interleaver on their characteristics. To achieve this goal, the
notion of {\em uniform interleaver} was introduced in
\cite{Ben96}; the actual interleaver is replaced with the {\em
average} interleaver\footnote{This average interleaver is actually
the weighted set of all interleavers.}. The use of the uniform
interleaver drastically simplifies the performance evaluation of
Turbo Codes. Following this approach, the best constituent
encoders for serial code construction are found in \cite{Ben98},
where the analysis in \cite{Ben96} was extended to SCCCs, giving
design criteria for constituent encoders.
In the next section, we gain some analytical insight into the code
structure of Fig.~\ref{Fig:SCC_Enc1} to address design guidelines
to properly select $\rho_s,P_i^s$ and $\rho_p,P_i^p$. To this
purpose, we derive the analytical upper bounds to the bit and
frame error probability, following the concept of uniform
interleaver used in \cite{Ben96} and \cite{Ben98} for PCCC and
SCCC. However, we do not treat the code structure of
Fig.~\ref{Fig:SCC_Enc1} as a standard SCCC, so we cannot directly
apply the considerations in \cite{Ben98}. Indeed, the treatment in
\cite{Ben98} would consider the inner encoder (with its
puncturing) as a unique \textit{entity}, therefore diluting the
contribution of the inner code systeamtic bits and parity bits to
the bound. Instead, our idea is to decouple the contribution of
the inner systematic bits and inner parity bits to the error
probability bound to better identify how to choose $\rho_s,P_i^s$
and $\rho_p,P_i^p$. In fact, we shall show that to obtain good
SCCC codes in the form of Fig.~\ref{Fig:SCC_Enc1}, the selection
of the inner code puncturing directly depends on the outer code,
which has a crucial effect on performance. This dependence cannot
be taken into account by the upper bounds derived in \cite{Ben98}
for SCCC.
\section{Analytical Upper Bounds to the Error Probability}
Following the derivations in \cite{Ben96} and \cite{Ben98} for
PCCC and SCCC, in this section we derive the union bound of the
bit error probability for the code construction of
Fig.~\ref{Fig:SCC_Enc1}.
Recalling \cite{Ben98}, the bit error probability of a SCCC can be
upper bounded through
\begin{equation}\label{eq:Pbe1}
\begin{split}
P_b(e)&<\left.\sum_{w=w_m^o}^{NR_c^{o'}}\frac{w}{NR_c^{o'}}A^{\mathcal{C}_s}(w,H)\right|_{H=\mathrm{e}^{-\frac{R_{\mathrm{SCCC}}E_b}{N_0}}}\\
&=\sum_{h=h_m}^{N/R_c^{i}}\sum_{w=w_m^o}^{NR_c^{o'}}\frac{w}{NR_c^{o'}}A^{\mathcal{C}_s}_{w,h}\mathrm{e}^{-\frac{hR_{\mathrm{SCCC}}E_b}{N_0}}
\end{split}
\end{equation}
where $w_m^o$ is the minimum weight of an input sequence
generating an error event of the outer code, $N$ is the
interleaver length, and $h_m$ is the minimum weight of the
codewords of the SCCC, $\mathcal{C}_s$, of rate
$R_{\mathrm{SCCC}}$. $A^{\mathcal{C}_s}(w,H)$ is the
\textit{Conditional Weight Enumerating Function} (CWEF) of the
overall SCCC code. For a generic serially concatenated code,
consisting of the serial concatenation of an outer code
$\mathcal{C}_o$ with an inner code $\mathcal{C}_i$ through an
interleaver, the CWEF of the overall SCCC code
$A_{w,h}^{\mathcal{C}_s}$ can be calculated replacing the actual
interleaver with the uniform interleaver and exploiting its
properties. The uniform interleaver transforms a codeword of
weight $l$ at the output of the outer encoder into all distinct
${N \choose l}$ permutations. As a consequence, each codeword of
the outer code $\mathcal{C}_o$ of weight $l$, through the action
of the uniform interleaver, enters the inner encoder generating
${N \choose l}$ codewords of the inner code $\mathcal{C}_i$. The
CWEF of the overall SCCC code can then be evaluated from the
knowledge of the CWEFs of the outer and inner codes; the
coefficients $A_{w,h}^{\mathcal{C}_s}$ are given by
\begin{equation}\label{eq:Awh_1}
\begin{split}
A_{w,h}^{\mathcal{C}_s}=\sum_{l=0}^{N}\frac{A_{w,l}^{\mathcal{C}_o}\times
A_{l,h}^{\mathcal{C}_i}}{\left(%
\begin{array}{c}
N \\
l \\
\end{array}%
\right)}
\end{split}
\end{equation}
where $A_{w,l}^{\mathcal{C}_o}$ and $A_{l,h}^{\mathcal{C}_i}$ are
the coefficients of the CWEFs of the outer and inner codes,
respectively.
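For concreteness, (\ref{eq:Awh_1}) can be evaluated directly once
the constituent CWEF coefficients are available. The following
sketch illustrates the computation (the dictionary representation of
the coefficients is ours, not part of any referenced software):
\begin{verbatim}
from math import comb

def sccc_cwef(A_o, A_i, N):
    """Combine outer and inner CWEFs through a uniform interleaver.
    A_o[w][l] and A_i[l][h] hold the coefficients; returns A_s[w][h]."""
    A_s = {}
    for w, row in A_o.items():
        for l, a_wl in row.items():
            for h, a_lh in A_i.get(l, {}).items():
                A_s.setdefault(w, {})
                A_s[w][h] = A_s[w].get(h, 0.0) + a_wl * a_lh / comb(N, l)
    return A_s
\end{verbatim}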
This is basically the same result obtained in \cite{Ben98}.
However, and this is the key novelty of our analysis, to evaluate
the performance of the code structure of Fig.~\ref{Fig:SCC_Enc1},
instead of proceeding as in \cite{Ben98} using (\ref{eq:Awh_1}),
it is more suitable to refer to Fig.~\ref{Fig:SCC_Enc2}, which
properly redraws the encoder of Fig.~\ref{Fig:SCC_Enc1}, for the
derivation of the upper bound. Fig.~\ref{Fig:SCC_Enc2} allows us
to decouple the contributions of the inner code puncturings
$P_i^s$ and $P_i^p$ to the error probability bound. Call
$\mathcal{C}_o^{''}$ the code obtained from the puncturing of the
outer code $\mathcal{C}_o$ through $P_o$ and $P'$, with
$P'=\Pi^{-1}[P_i^s]$, i.e., the de-interleaved version of $P_i^s$,
$\mathcal{C}_o^{'}$ the code obtained from the puncturing of the
outer code $\mathcal{C}_o$ through $P_o$, and $\mathcal{C}_i^{'}$
the inner encoder $\mathcal{C}_i$ generating only parity bits
punctured through $P_i^p$, which is fed with an interleaved
version of codewords generated by
$\mathcal{C}_o^{'}$\footnote{Notice that, in abuse of notation, we
have maintained the terminology \textit{outer encoder} and
\textit{inner encoder} in Fig.~\ref{Fig:SCC_Enc2} though they do
not strictly act as outer and inner encoders. However, we believe
that this notation reflects better the correspondence with
Fig.~\ref{Fig:SCC_Enc1}.}. Now, the serial concatenated code
structure under consideration can be interpreted as the parallel
concatenation of the code $\mathcal{C}_o^{''}$ and
$\mathcal{C}_i^{'}$. Therefore, the SCCC codeword weight $h$ can
be split into two contributions $j$ and $m$, corresponding to the
output weights of the codewords generated by encoder
$\mathcal{C}_o^{''}$ and by encoder $\mathcal{C}_i^{'}$,
\begin{figure}[t]
\centerline{\psfig{figure=ModifiedScheme_01.eps,width=\the\hsize}}
\caption{Modified block diagram of the serial concatenated
scheme.} \label{Fig:SCC_Enc2}
\end{figure}
respectively, such that $h=j+m$. With reference to
Fig.~\ref{Fig:SCC_Enc2}, equation (\ref{eq:Awh_1}) can then be
rewritten as
\begin{equation}\label{eq:Awh_2}
\begin{split}
A_{w,h}^{\mathcal{C}_s}=A_{w,j+m}^{\mathcal{C}_s}=\sum_{l=d_{\rm
f}^{o'}}^{N}\sum_{j=d_{\rm
f}^{o''}}^{N/R_{c}^{o''}}\left.\frac{A_{w,l,j}^{\mathcal{C}_o^{''}}\times
A_{l,m}^{\mathcal{C}_i^{'}}}{\left(%
\begin{array}{c}
N \\
l \\
\end{array}%
\right)}\right|_{j+m=h}
\end{split}
\end{equation}
where $d_\mathrm{f}^{o'}$ is the free distance of the code
$\mathcal{C}_o^{'}$ and $d_\mathrm{f}^{o''}$ is the free distance
of the code $\mathcal{C}_o^{''}$. In (\ref{eq:Awh_2}),
$R_{c}^{o''}$ is the rate of the code $\mathcal{C}_o^{''}$,
$A_{w,l,j}^{\mathcal{C}_o^{''}}$ indicates the number of codewords
of $\mathcal{C}_o^{''}$ of weight $j$ associated with a codeword
of $\mathcal{C}_o^{'}$ of weight $l$ generated from an information
word of weight $w$, and $A_{l,m}^{\mathcal{C}_i^{'}}$ indicates
the number of codewords of $\mathcal{C}_i^{'}$ of weight $m$
associated with a codeword of $\mathcal{C}_o^{'}$ of weight $l$.
$A_{w,l,j}^{\mathcal{C}_o^{''}}$ and $A_{l,m}^{\mathcal{C}_i^{'}}$
can be expressed as
\begin{equation}\label{eq:Awh_3}
\begin{split}
A_{w,l,j}^{\mathcal{C}_o^{''}}&\leqslant\sum_{n^{o''}=1}^{n_M^{o''}}\left(%
\begin{array}{c}
N/p \\
n^{o''} \\
\end{array}%
\right)A_{w,l,j,n^{o''}}^{o''}\\
A_{l,m}^{\mathcal{C}_i^{'}}&\leqslant\sum_{n^{i'}=1}^{n_M^{i'}}\left(%
\begin{array}{c}
N/p \\
n^{i'} \\
\end{array}%
\right)A_{l,m,n^{i'}}^{i'}
\end{split}
\end{equation}
where the coefficient $A_{w,l,j,n^{o''}}^{o''}$ represents the
number of code $\mathcal{C}_o^{''}$ sequences of weight $j$,
associated with a codeword of $\mathcal{C}_o^{'}$ of weight $l$
generated from an information word of weight $w$, and number of
concatenated error events $n^{o''}$. In (\ref{eq:Awh_3}),
$n_M^{o''}$ is the largest number of error events concatenated in
a codeword of the code $\mathcal{C}_o^{''}$ of output weight $j$
associated with a codeword of $\mathcal{C}_o^{'}$ of weight $l$
and an information word of weight $w$: $n_M^{o''}$ is a function
of $w$, $l$ and $j$ that depends on the encoder. Also in
(\ref{eq:Awh_3}), the coefficient $A_{l,m,n^{i'}}^{i'}$ represents
the number of code $\mathcal{C}_i^{'}$ sequences of weight $m$,
input weight $l$, and number of concatenated error events
$n^{i'}$. As for $n_M^{o''}$, $n_M^{i'}$ is the largest number of
error events concatenated in a codeword of the code
$\mathcal{C}_i^{'}$ of output weight $m$ generated from an
information word of weight $l$.
Substituting (\ref{eq:Awh_3}) in (\ref{eq:Awh_2}), the value of
the coefficients $A_{w,j+m}^{\mathcal{C}_s}$ is upper bounded as
\begin{equation}\label{eq:Awh_4}
\begin{split}
A_{w,j+m}^{\mathcal{C}_s} &\leqslant
\sum_{l=d_{\mathrm f}^{o'}}^{N}\sum_{j=d_{\mathrm f}^{o''}}^{N/R_c^{o''}}\sum_{n^{o''}=1}^{n_M^{o''}}\sum_{n^{i'}=1}^{n_M^{i'}}
\frac{\left(%
\begin{array}{c}
N/p \\
n^{o''} \\
\end{array}%
\right)\left(%
\begin{array}{c}
N/p \\
n^{i'} \\
\end{array}%
\right)}{\left(%
\begin{array}{c}
N \\
l \\
\end{array}%
\right)}
\cdot A_{w,l,j,n^{o''}}^{o''}A_{l,m,n^{i'}}^{i'}\\
&\leqslant
\sum_{l=d_{\mathrm f}^{o'}}^{N}\sum_{j=d_{\mathrm f}^{o''}}^{N/R_c^{o''}}\sum_{n^{o''}=1}^{n_M^{o''}}\sum_{n^{i'}=1}^{n_M^{i'}}
\frac{N^{n^{o''}+n^{i'}-l}l^ll!}{p^{n^{o''}+n^{i'}}n^{o''}!n^{i'}!}
\cdot A_{w,l,j,n^{o''}}^{o''}A_{l,m,n^{i'}}^{i'}
\end{split}
\end{equation}
Finally,
substituting (\ref{eq:Awh_4}) into (\ref{eq:Pbe1}), we obtain the
upper bound for the bit error probability,
\begin{equation}\label{eq:Pbe2}
\begin{split}
P_b(e) \leqslant
&\sum_{j+m=h_m}^{N/R_c^{i'}}\mathrm{e}^{-\frac{(j+m)R_{\mathrm{SCCC}}E_b}{N_0}} \\
&\cdot \sum_{w=w_m^o}^{NR_c^{o'}}\sum_{l=d_{\mathrm f}^{o'}}^{N}\sum_{j=d_{\mathrm f}^{o''}}^{N/R_c^{o''}}\sum_{n^{o''}=1}^{n_M^{o''}}\sum_{n^{i'}=1}^{n_M^{i'}}
N^{n^{o''}+n^{i'}-l-1} \frac{l^ll!}{p^{n^{o''}+n^{i'}}n^{o''}!n^{i'}!}\frac{w}{R_c^{o'}}A_{w,l,j,n^{o''}}^{o''}A_{l,m,n^{i'}}^{i'}
\end{split}
\end{equation}
Equivalently, the upper bound for the frame error probability is
given by
\begin{equation}\label{eq:Pfe2}
\begin{split}
P_f(e)\leqslant
&\sum_{j+m=h_m}^{N/R_c^{i'}}\mathrm{e}^{-\frac{(j+m)R_{\mathrm{SCCC}}E_b}{N_0}} \\
&\cdot \sum_{w=w_m^o}^{NR_c^{o'}}\sum_{l=d_{\mathrm
f}^{o'}}^{N}\sum_{j=d_{\mathrm
f}^{o''}}^{N/R_c^{o''}}\sum_{n^{o''}=1}^{n_M^{o''}}\sum_{n^{i'}=1}^{n_M^{i'}}
N^{n^{o''}+n^{i'}-l}
\frac{l^ll!}{p^{n^{o''}+n^{i'}}n^{o''}!n^{i'}!}A_{w,l,j,n^{o''}}^{o''}A_{l,m,n^{i'}}^{i'}
\end{split}
\end{equation}
For large $N$ and for a given $h=j+m$, the dominant coefficient of
the exponentials in (\ref{eq:Pbe2}) and (\ref{eq:Pfe2}) is the one for
which the exponent of $N$ is maximum \cite{Ben98}. This maximum exponent is defined as
\begin{equation}\label{eq:alfa}
\alpha(h=j+m)\triangleq \max_{w,l}\{n^{o''}+n^{i'}-l-1\}
\end{equation}
For large $E_b/N_0$, the dominating term is $\alpha(h_m)$, corresponding to the minimum value $h=h_m$,
\begin{equation}\label{eq:alfa_02}
\alpha(h_m)\leq 1-d_f^{o'}
\end{equation}
and the asymptotic bit error rate performance is given by
\begin{equation}
\lim_{E_b/N_0\longrightarrow \infty}P_b(e) \leq
BN^{1-d_f^{o'}}\mathrm{erfc}\left(\sqrt{\frac{h_mR_{\rm SCCC}E_b}{N_0}}\right)
\label{eq:BER}
\end{equation}
where $B$ is a constant that depends on the weight properties of
the encoders, and $N$ is the interleaver length.
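For instance, for an outer code with $d_\mathrm{f}^{o'}=3$ (the case
of the rate-2/3 punctured outer code used in Section V), the
exponent of $N$ in (\ref{eq:BER}) is $1-d_\mathrm{f}^{o'}=-2$, so
doubling the interleaver length lowers the asymptotic bit error
probability by a factor of four.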
On the other hand, the dominant contribution to the bit and frame
error probability for $N\longrightarrow \infty$ is the largest
exponent of $N$, defined as
\begin{equation}\label{eq:alfa_m}
\alpha_M\triangleq \max_{h}\alpha(h=j+m)=\max_{w,l,h}\{n^{o''}+n^{i'}-l-1\}
\end{equation}
We consider only the case of recursive convolutional inner
encoders. In this case, $\alpha_M$ is given by
\begin{equation}\label{eq:alfa_m1}
\alpha_M=-\left\lfloor\frac{d_\mathrm{f}^{o'}+1}{2}\right\rfloor
\end{equation}
and
\begin{equation}
\lim_{N\longrightarrow \infty}P_b(e) \leq
KN^{\alpha_M}\mathrm{erfc}\left(\sqrt{\frac{h(\alpha_M)R_{\rm SCCC}E_b}{N_0}}\right)
\label{eq:FER_bb}
\end{equation}
where again $K$ is a constant that depends on the weight
properties of the encoders and $h(\alpha_M)$ is the weight
associated to the highest exponent of $N$.
Now, denoting by $d^{i'}_\mathrm{f,eff}$ the minimum weight of
inner code $\mathcal{C}_i^{'}$ sequences generated by input
sequences of weight 2, we obtain the following results for the
weight $h(\alpha_M)$ associated to the highest exponent of $N$:
\begin{equation}\label{eq:dfo1}
\begin{split}
h(\alpha_M)&=\frac{d_\mathrm{f}^{o'}d^{i'}_\mathrm{f,eff}}{2}+d^{o''}(d_\mathrm{f}^{o'})~~~~~~~~~~~~~~~~~~~~\mathrm{if}~~d_\mathrm{f}^{o'}~~\mathrm{even}\\
h(\alpha_M)&=\frac{(d_\mathrm{f}^{o'}-3)d^{i'}_\mathrm{f,eff}}{2}+h_m^{(3)}+d^{o''}(d_\mathrm{f}^{o'})~~~~~\mathrm{if}~~d_\mathrm{f}^{o'}~~\mathrm{odd}
\end{split}
\end{equation}
where $d^{o''}(d_\mathrm{f}^{o'})$ is the minimum weight of
$\mathcal{C}_o^{''}$ code sequences corresponding to a
$\mathcal{C}_o^{'}$ code sequence of weight $d_\mathrm{f}^{o'}$
and $h_m^{(3)}$ is the minimum weight of sequences of the inner
code $\mathcal{C}_i^{'}$ generated by a weight-3 input sequence.
Finally, since $d^{o''}(d_\mathrm{f}^{o'})\geqslant
d_\mathrm{f}^{o''}$, we can also write
\begin{equation}\label{eq:dfo2}
\begin{split}
h(\alpha_M)&\geqslant\frac{d_\mathrm{f}^{o'}d^{i'}_\mathrm{f,eff}}{2}+d_\mathrm{f}^{o''}~~~~~~~~~~~~~~~~~~~~\mathrm{if}~~d_\mathrm{f}^{o'}~~\mathrm{even}\\
h(\alpha_M)&\geqslant\frac{(d_\mathrm{f}^{o'}-3)d^{i'}_\mathrm{f,eff}}{2}+h_m^{(3)}+d_\mathrm{f}^{o''}~~~~~\mathrm{if}~~d_\mathrm{f}^{o'}~~\mathrm{odd}
\end{split}
\end{equation}
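The two cases of (\ref{eq:dfo1}) are summarised by the following
helper (an illustrative sketch; the two comments reproduce the
special cases worked out in Section V):
\begin{verbatim}
def h_alpha_M(d_f_o, d_f_eff_i, h_m3, d_o2):
    # d_o2 stands for d^{o''}(d_f^{o'})
    if d_f_o % 2 == 0:        # even case of Eq. (dfo1)
        return d_f_o * d_f_eff_i / 2 + d_o2
    else:                     # odd case of Eq. (dfo1)
        return (d_f_o - 3) * d_f_eff_i / 2 + h_m3 + d_o2

# d_f^{o'} = 3: h(alpha_M) = h_m^(3) + d^{o''}(3)
# d_f^{o'} = 4: h(alpha_M) = 2*d_{f,eff}^{i'} + d^{o''}(4)
\end{verbatim}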
From (\ref{eq:FER_bb}) and (\ref{eq:dfo1}) we obtain the following
result for the (asymptotic with respect $N$) bit error
probability:
\begin{equation}\label{eq:bound_even}
P_b(e) \leq
C_{\mathrm{even}}N^{-d_\mathrm{f}^{o'}/2}\mathrm{erfc}\left(\sqrt{\left(\frac{d_\mathrm{f}^{o'}d_{\mathrm{f,eff}}^{i'}}{2}+d^{o''}(d_\mathrm{f}^{o'})\right)\frac{R_{\mathrm{SCCC}}E_b}{N_0}}\right)
\end{equation}
if $d_\mathrm{f}^{o'}$ is even, and
\begin{equation}\label{eq:bound_odd}
P_b(e) \leq
C_{\mathrm{odd}}N^{-\frac{d_\mathrm{f}^{o'}+1}{2}}\mathrm{erfc}\left(\sqrt{\left(\frac{(d_\mathrm{f}^{o'}-3)d_{\mathrm{f,eff}}^{i'}}{2}+h_m^{(3)}+d^{o''}(d_\mathrm{f}^{o'})\right)\frac{R_{\mathrm{SCCC}}E_b}{N_0}}\right)
\end{equation}
if $d_\mathrm{f}^{o'}$ is odd. Constants $C_{\mathrm{even}}$ and
$C_{\mathrm{odd}}$ can be derived as in \cite{Ben98} for SCCC.
We observe that the coefficient $h(\alpha_M)$ increases with
$d_\mathrm{{f,eff}}^{i'}$, $d^{o''}(d_\mathrm{f}^{o'})$ and also
with $h_m^{(3)}$ in the case of odd $d_\mathrm{f}^{o'}$. This
suggests that, to improve the performance, one should choose a
suitable combination of $\mathcal{C}_o^{''}$ and
$\mathcal{C}_i^{'}$ such that $h(\alpha_M)$ is maximized, and the
puncturing patterns $P_o,P'$ and $P_i^p$ (and subsequently
permeabilities $\rho_s$ and $\rho_p$) should be selected
accordingly. Moreover, such a combination depends on the value of
$d_\mathrm{f}^{o'}$. For instance, if $d_\mathrm{f}^{o'}=4$ the
term $d_\mathrm{{f,eff}}^{i'}$ appears to be dominant with respect
to $d^{o''}(d_\mathrm{f}^{o'})$, since it is multiplied by a
factor two ($d_\mathrm{f}^{o'}/2$), whereas for
$d_\mathrm{f}^{o'}=2$ both contributions are equally weighted.
Notice also that the contribution of the code $\mathcal{C}''_o$ to
$h(\alpha_M)$, given by $d^{o''}(d_\mathrm{f}^{o'})$, corresponds
to the contribution of the inner code systematic part in
Fig.~\ref{Fig:SCC_Enc1}. Therefore, since
$d^{o''}(d_\mathrm{f}^{o'})$ depends on the outer code, to
optimize the puncturing pattern $P_i^s$ ($P_i^s=\Pi[P']$) of the
inner code systematic bits, one must take into account this
dependence.
We can draw from (\ref{eq:bound_even}) and (\ref{eq:bound_odd})
some important design considerations:
\begin{itemize}
\item As for traditional SCCC, $P_o$ should be chosen to optimize
the outer code distance spectrum. \item The coefficient that
multiplies the signal to noise ratio $E_b/N_0$ increases with
$d_{\mathrm{f,eff}}^{i'}$ and $d^{o''}(d_\mathrm{f}^{o'})$. Thus,
we deduce that $P'$ and $P_i^p$ should be chosen so that
$h(\alpha_M)$ is maximized. This implies to select a suitable
combination of permeabilities $\rho_s$ and $\rho_p$. For a fixed
pair $\rho_s$ and $\rho_p$, $P_i^p$ must be optimized to yield the
best encoder $\mathcal{C}'_i$ IOWEF. Furthermore, $P'$ (i.e.
$P_i^s$) must be selected to optimize
$d^{o''}(d_\mathrm{f}^{o'})$. If we consider (\ref{eq:dfo1})
instead of (\ref{eq:dfo2}), the criterion is equivalent to
optimize the distance spectrum of $\mathcal{C}_o''$. Notice that
this is equivalent to optimize the outer code $\mathcal{C}_o$
punctured through $P_o$ and $P'$ with permeability $\rho_s$. Then,
$P_i^s$ must be set to the interleaved version of $P'$, i.e.,
$P_i^s=\Pi[P']$. Therefore, $P_i^s$ turns out to depend on the
outer code, and thus, it is also interleaver dependent. We stress
the need to optimize $P_i^s$ according to this dependence.
\end{itemize}
A complementary analysis tool for the design of concatenated
schemes would be to consider the EXIT charts or equivalent plots
\cite{Ten01,Div01}. These analysis techniques explain very well
the behavior of iterative decoding schemes in the low SNR region
(convergence region) and often lead to design rules that are in
contrast with those outlined in this section, which are more
suited for the analysis in the error floor region. Unfortunately,
EXIT chart analysis is mainly based on Monte Carlo simulations and
does not allow one to extract useful code design parameters. For
this reason we have not included this technique in the paper. The
reader, however, should be warned that for the careful design of
concatenated schemes both aspects must be considered, which implies
that comparison of the designed schemes through simulation cannot be
avoided. This also explains some differences in the simulation
results which are not evident from the uniform interleaver analysis.
A convergence analysis of this class of SCCCs will be discussed in a
forthcoming paper.
\section{Rate-compatible Serial Concatenated Convolutional Codes}
Rate-compatible serial concatenated convolutional codes are
obtained by puncturing inner code bits with the constraint that
all the code bits of a high rate code must be kept in all lower
rate codes. Depending on the puncturing pattern, the resulting
code may be systematic (none of the systematic bits are
punctured), partially systematic (a fraction of the systematic
bits are punctured) or non-systematic (all systematic bits are
punctured). In \cite{Aci00} it was argued that a systematic inner
code performs better than a partially systematic code. This result
was assumed in \cite{Kim01} and \cite{Cha01} to build
rate-compatible SCCCs limiting puncturing to inner parity bits.
This assumption, however, is not valid for all SNRs. Indeed,
keeping some systematic bits may be beneficial to speed up the
convergence of iterative decoding. Since puncturing is limited to
inner parity bits, the rate of the SCCC satisfies the constraint
$R_{\rm SCCC}\leqslant R_c^{o'}$. As already stated, in contrast
to \cite{Kim01} and \cite{Cha01} we do not restrict puncturing to
parity bits, but extend it also to systematic bits, thus allowing
$R_{\rm SCCC}$ beyond the outer code rate $R_c^{o'}$, which
provides a higher flexibility.
Assuming a fixed outer encoder puncturing pattern ($P_o$ in
Fig.~\ref{Fig:SCC_Enc1}), the design of well-performing
rate-compatible SCCCs in the form of Fig.~\ref{Fig:SCC_Enc1} reduces
to optimizing the inner code puncturing patterns for systematic and
parity bits according to the design criteria outlined in the
previous section, under the constraint of rate-compatibility.
Applying these design rules, optimal SCCC families can be found by
considering inner systematic and inner parity bits separately (a
sketch of the resulting search loop is given after the list):
\begin{itemize}
\item To find the optimum puncturing pattern for inner code parity bits,
start puncturing the inner mother code parity bits one bit at a
time, fulfilling the rate-compatibility restriction. Define as
$d_w$ the minimum weight of inner codewords generated by input
words with weight $w$, and by $N_w$ the number of nearest
neighbors (multiplicities) with weight $d_w$. Select at each step
the candidate puncturing pattern $P_i^p$ for the inner code parity
bits as the one optimizing its IOWEF, i.e., yielding the optimum
values for $(d_w,N_w)$ for $w=2,\hdots,w_{max}$ (first $d_w$ is
maximized and then $N_w$ is minimized).
\item Select the candidate puncturing pattern $P'$ as the one
yielding the best outer code (punctured through $P_o$ and $P'$)
output weight enumerating function (OWEF). Namely, to find the
optimum puncturing pattern for inner code systematic bits, start
puncturing the outer mother code output bits one bit at a time,
fulfilling the rate-compatibility restriction.
Define as $A_d$ the number of nearest neighbors (multiplicities)
with output distance $d$ of the outer code. Select at each step
the candidate puncturing pattern $P'$ as the one yielding the
optimum values for $A_d$, i.e., the one which sequentially
optimize the values $A_d$ for
$d=d_\mathrm{free},\hdots,d_\mathrm{max}$. Since also outer code
information bits are punctured, the invertibility\footnote{A code
is said to be invertible if, knowing only the parity-check symbols
of a code vector, the corresponding information symbols can be
uniquely determined \cite{lin1}.} of the outer code at each step
must be guaranteed. At the end, since the systematic bits at the
input of the inner encoder are an interleaved version of the outer
encoder output bits, take the best puncturing pattern $P'$ and
apply its interleaved version $P_i^s=\Pi[P']$ to inner code
systematic bits (see Figs.~\ref{Fig:SCC_Enc1}
and~\ref{Fig:SCC_Enc2}).
\end{itemize}
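Both searches above follow the same greedy, rate-compatible loop,
summarised by the following sketch. The scoring function is a
placeholder for the IOWEF/OWEF evaluation and the invertibility
check described in the text; this illustrates the search structure
only, and is not the code used to generate the tables.
\begin{verbatim}
def greedy_puncture(n_positions, n_steps, score):
    """Puncture one bit at a time; once punctured, a bit stays
    punctured at all higher rates (rate-compatibility)."""
    punctured = []
    for _ in range(n_steps):
        candidates = [p for p in range(n_positions) if p not in punctured]
        # score() returns a comparable figure of merit, e.g. tuples
        # (d_w, -N_w) for parity bits or the OWEF ordering for P'.
        best = max(candidates, key=lambda p: score(punctured + [p]))
        punctured.append(best)
    return punctured   # ordered positions = rate-compatible table
\end{verbatim}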
\section{Simulation Results and Comparison with Analytical Bounds}
The performance of a rate-compatible SCCC mainly depends on its
overall rate $R_{\rm SCCC}$ and on the selected combination of
$\rho_s$ and $\rho_p$. In this section, based on the considerations
drawn in Sections III and IV, we discuss how to properly select
$\rho_s$ and $\rho_p$. We evaluate through simulation several
rate-compatible puncturing schemes with different interleaver
lengths, and compare the performance of the proposed codes with the
upper bounds to the error probability.
We consider the serial concatenation of two rate-1/2, 4-states,
systematic recursive encoders, with generator polynomials $(1,
5/7)$ in octal form. The outer encoder is punctured to rate 2/3 by
applying a fixed puncturing pattern. In particular, two puncturing
patterns $P_o$ have been taken into account, namely $P_{o,1}=
\left[
\begin{array}{cc} 1 & 1\\ 1 & 0\end{array} \right]$ and $P_{o,2}=
\left[ \begin{array}{cccc} 1 & 1 & 1 & 1\\ 1 & 1 & 0 &
0\end{array} \right]$. The overall code rate is, thus, $R_{\rm
SCCC}=1/3$. Higher rates are then obtained by puncturing the inner
encoder through puncturing patterns $P_i^s$ and $P_i^p$ for
systematic and parity bits, respectively, as previously discussed.
The free distance of the outer encoder, $d_\mathrm{f}^{o'}$, when
puncturing pattern $P_{o,1}$ is applied, is odd and equal to 3,
whereas for $P_{o,2}$, $d_\mathrm{f}^{o'}$ is even and equal to 4.
Some considerations must be done at this point:
\begin{enumerate}
\item If $d_\mathrm{f}^{o'}=3$,
$\alpha_M=-\left\lfloor\frac{d_\mathrm{f}^{o'}+1}{2}\right\rfloor
= -2$. In this case, the minimum weight of inner code input
sequences that yields $\alpha_M=-2$ (since $n^{o''}=n^{i'}=1$) is
$l_{\rm min}=3$, and
$h(\alpha_M)=h_m^{(3)}+d^{o''}(d_\mathrm{f}^{o'})$. However, this
value of $\alpha_M$ is achieved also by the inner input weights
$l=4$ and $l=6$, leading to a slight modification of
(\ref{eq:dfo1}). In fact, $l=4$ yields $\alpha_M=-2$ (since
$n^{o''}=1$ and $n^{i'}=2$), and
$h(\alpha_M)=2d^i_\mathrm{f,eff}+d^{o''}(d_\mathrm{f}^{o'}+1)$.
Also $l=6$ yields $\alpha_M=-2$ (since $n^{o''}=2$ and
$n^{i'}=3$), and
$h(\alpha_M)=3d^i_\mathrm{f,eff}+2d^{o''}(d_\mathrm{f}^{o'})$.
Notice that even when $l>l_{\mathrm{min}}$ yields the maximum
value of $\alpha_M=-2$, the design rules stated in Section IV are
still valid, leading in every case to the maximization of
$h(\alpha_M)$.
\item If $d_\mathrm{f}^{o'}=4$,
$\alpha_M=-\left\lfloor\frac{d_\mathrm{f}^{o'}+1}{2}\right\rfloor
= -2$. In this case, only the minimum weight of the inner code
input sequences $l_{\rm min}=4$ yields $\alpha_M=-2$ (since
$n^{o''}=1$ and $n^{i'}=2$), and
$h(\alpha_M)=2d^{i'}_\mathrm{f,eff}+d^{o''}(d_\mathrm{f}^{o'})$.
\end{enumerate}
The algorithm to find the optimal (in the sense of the criterion
addressed in Section IV) puncturing
patterns $P_i^p$ and $P_i^s=\Pi[P']$ for inner code parity and
systematic bits, respectively, works sequentially, by puncturing
one bit at a time in the optimal position, subject to the
constraint of rate compatibility. This sequential puncturing is
performed starting from the lowest rate code (i.e., the baseline
rate-1/3 code), and ending up at the highest possible rate. In
Table \ref{Table_K200_inner_parity_punc_pos} the puncturing
pattern $P_i^p$ for inner code parity bits is shown. To find this
pattern, a frame length $K=200$ and an interleaver length
$N=K/R_c^{o'}=300$ have been assumed. The puncturing pattern has
been found by optimizing the inner code IOWEF, as explained in the
previous section. This puncturing pattern yields the optimum
values of $(d_w,N_w)$ for $w=2,\hdots,w_{max}$ and for each
puncturing position. The puncturing positions of $P_i^p$ go from 1
to the interleaver length $N$. The evolution of the values
$(d_w,N_w)$ with the number of punctured inner parity bits for
$w=2$ is reported in Fig.~\ref{d2_inner}. Notice that $d_w$,
$\forall w$ (not only for $w=2$), is a non-increasing function of
the number of punctured bits, and there are some $d_w=0$ with a
corresponding $N_w \neq 0$, which means that the corresponding
code $\mathcal{C}_i^{'}$ is not invertible. Notice also that
$N_2$, given a value of $d_2$, is an increasing function of the
number of punctured bits.
\begin{figure}[t]
\centerline{\psfig{figure=d2_d3b.eps,width=\the\hsize,angle=0}}
\caption{Inner code effective free distance $d_2$ (thick line) and
its multiplicity $N_2$ (thin line) as a function of the number of
punctured inner parity bits.} \label{d2_inner}
\end{figure}
In Table \ref{Table_K200_inner_syst_P1_punc_pos} the puncturing
pattern $P'$, the interleaved version of which, $\Pi[P']$, is
meant for inner code systematic bits, is shown, having applied the
fixed puncturing pattern $P_{o,1}$ to the outer code. This
puncturing pattern yields the best outer code (punctured through
$P_{o,1}$ and $P'$) output weight enumerating function (OWEF) for
each puncturing position. The puncturing positions go from 1 to
$2K$, $K$ being the frame length. The number of punctured bits goes
from 0 to $K/2$, i.e., the rate of the outer code punctured
through $P_{o,1}$ and $P'$ is assumed to go from $2/3$ (no
puncturing is applied to the systematic bits) to 1. The reason to
limit the rate of $\mathcal{C}_o^{''}$ up to 1 is that further
puncturing results in a significant performance degradation. The
puncturing pattern $P'$ for inner code systematic bits having
applied $P_{o,2}$ is shown in Table
\ref{Table_K200_inner_syst_P2_punc_pos}.
We have also performed an optimization of the inner code systematic
bits puncturing pattern $P_i^s=\Pi[P']$ restricting the puncturing
to outer code parity bits only, thus yielding an overall systematic
SCCC. The corresponding puncturing pattern $P'$, having applied the
fixed puncturing pattern $P_{o,1}$, is reported in
Table~\ref{Table_K200_inner_syst_P1_punc_pos_systematic}. It is
worth pointing out that the performance obtained by restricting the
puncturing to outer code parity bits is very similar to that
obtained without this restriction.
Table \ref{Table_par_P1} lists the parameters $h_m^{(3)}$,
$d^{o''}(d_\mathrm{f}^{o'})$, $h(\alpha_M)$, $h_m$ and the
multiplicity $N_{h_m}$ of the codewords at distance $h_m$, for
different values of the parity permeability $\rho_p$, for an SCCC of
overall code rate $R_{\rm SCCC}=2/3$, with the outer encoder
punctured through $P_{o,1}$ and the inner encoder punctured through
$P_i^p$, reported in Table~\ref{Table_K200_inner_parity_punc_pos},
and $P_i^s=\Pi[P']$, where $P'$ is reported in
Table~\ref{Table_K200_inner_syst_P1_punc_pos}. Notice that, since
$R_c^{o'}=2/3$ and $n=2$ in (\ref{eq:rhou}), to obtain a rate
$R_{\rm SCCC}=2/3$ code, $\rho_s$ and $\rho_p$ must be related by
\begin{equation}
\rho_s=1-\rho_p \label{eq:rhou2-3}
\end{equation}
For instance, the code with $\rho_p=20/300$ has been obtained by
applying the puncturing pattern of Table
\ref{Table_K200_inner_parity_punc_pos} to inner code parity bits,
selecting the first $280=N(1-\rho_p)$ puncturing positions in
Table \ref{Table_K200_inner_parity_punc_pos}, and applying the
interleaved version of the puncturing pattern of Table
\ref{Table_K200_inner_syst_P1_punc_pos} to inner code systematic
bits, selecting the first $20=N(1-\rho_s)$ puncturing positions in
Table \ref{Table_K200_inner_syst_P1_punc_pos}, so that
$\rho_s+\rho_p=1$ (see (\ref{eq:rhou2-3})).
The frame length selected for this example is $K=200$. The
corresponding interleaver length $N$ is given by $K/R_c^{o'}=300$.
The different values of $\rho_p$ are listed as rational numbers
with denominator $N$ (since the maximum number of inner parity
bits which are not punctured is $N$). For all permeabilities
$h_m^{(3)}=0$, thus $h({\alpha_M})$ is completely dominated by
$d^{o''}(d_\mathrm{f}^{o'})$.
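Indeed, for $d_\mathrm{f}^{o'}=3$ the odd case of (\ref{eq:dfo1})
reduces to $h(\alpha_M)=h_m^{(3)}+d^{o''}(d_\mathrm{f}^{o'})$, so
$h(\alpha_M)=d^{o''}(d_\mathrm{f}^{o'})$ whenever $h_m^{(3)}=0$.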
The union bound (\ref{eq:Pfe2}) on the residual Frame Error Rate
(FER) of the codes listed in Table \ref{Table_par_P1} is plotted
in Fig.~\ref{fi:rate23FERboundP1}. The markers used in Fig.\
\ref{fi:rate23FERboundP1} correspond to those listed in Table
\ref{Table_par_P1}. It is shown that the error floor is lowered by
increasing $\rho_p$, i.e., the proportion of surviving inner code
parity bits. The highest error floor is obtained for
$\rho_p=20/300$, whereas increasing $\rho_p$ leads to better
performance in the error-floor region. Nevertheless, it should be
stressed that a sufficient number of systematic bits should be
preserved in order to ensure a good behavior for high $E_b/N_0$
values. This can be observed for the curve $\rho_p=100/300$, which
shows a worse slope. Indeed, for asymptotic values of $E_b/N_0$,
the performance is dominated by $h_m$, the minimum weight of code
sequences. Therefore, the best performance for very high
signal-to-noise ratios $E_b/N_0$ is obtained for $\rho_p=20/300$
(curve with '$\square$' in Fig.\ \ref{fi:rate23FERboundP1}), since
the corresponding code has $h_m=3$, whereas the worst performance
is obtained for $\rho_p=100/300$ (curve with '$\circ$' in
Fig.~\ref{fi:rate23FERboundP1}), since the corresponding code has
$h_m=1$.
In Fig.~\ref{fi:Sim_01b} we compare simulation results of the
rate-2/3 SCCC of Table~\ref{Table_par_P1} with the analytical
upper bounds for several values of $\rho_p$. The curves are
obtained with a \textit{log-map} SISO algorithm and 10 decoding
iterations. These results are obtained considering a random
interleaver of length $N=3000$ and applying the puncturing
patterns of Tables \ref{Table_K200_inner_parity_punc_pos} and
\ref{Table_K200_inner_syst_P1_punc_pos} periodically. The
simulation results show a very good agreement with the analytical
bounds and confirm that lower error floors can be obtained by
increasing $\rho_p$. For example, the code $\rho_p=8/30$ shows a
gain of $1.4$ dB at FER$=10^{-5}$ w.r.t. the code $\rho_p=2/30$.
However, this gain tends to vanish for very high $E_b/N_0$, where
the term $h_m$ is predominant (note the slopes of the two curves).
On the other hand, the performance in the waterfall region can be
explained in part by looking at the cumulative function $\sum_1^d
A_h^{C_s}$ of the output distance spectrum of the serial
concatenated codes. The codes for which the cumulative function of
the average distance spectrum is minimum perform better at low
SNRs, since, in this region, the higher distance error events have
a nontrivial contribution to error performance. The cumulative
functions of the codes listed in Table~\ref{Table_par_P1} are
traced in Fig.~\ref{fi:rate23spectrumP1}. The worst performance
for low signal-to-noise ratios $E_b/N_0$ is obtained for
$\rho_p=20/300$ (curve with '$\square$' in Fig.\
\ref{fi:rate23FERboundP1}), since the corresponding code has the
maximum cumulative function of the average distance spectrum,
whereas the best performance is obtained for $\rho_p=100/300$
(curve with '$\circ$' in Fig.\ \ref{fi:rate23FERboundP1}), since
the corresponding code has the minimum cumulative function of the
average distance spectrum. This is in agreement with the
simulation results of Fig.~\ref{fi:Sim_01b}.
For comparison purposes, we also report in Fig.~\ref{fi:Sim_01b}
the performance of the rate-2/3 PCCC proposed in \cite{Bab04b} and
the rate-2/3 SCCC proposed in \cite{Kim01}. The PCCC code in
\cite{Bab04b} is a code of similar complexity to the SCCC codes
proposed here, obtained by optimally puncturing the mother code
specified in the wideband code-division multiple-access (WCDMA)
and CDMA2000 standards, consisting of the parallel concatenation
of two rate-1/2, 8-states, convolutional encoders. The SCCC code
in \cite{Kim01} is the same as our baseline code (two rate-1/2,
4-states, systematic recursive encoders), but puncturing is
limited to inner code parity bits. As it can be observed in
Fig.~\ref{fi:Sim_01b}, the proposed SCCC code shows a significant
gain in the error floor region w.r.t. the code in \cite{Bab04b}.
On the other hand, the code in \cite{Kim01} performs much worse
than our code, since all inner code systematic bits are maintained
after puncturing.
\begin{figure}[t]
\centerline{\psfig{figure=Fig_rate23FERboundP1.eps,width=\the\hsize}}
\caption{Union bound performance of the rate 2/3 $R_{\rm SCCC}$ in
terms of residual FER versus $E_b/N_0$ with $N=300$. The
performances obtained applying the different $\rho_p$ values
listed in Table \ref{Table_par_P1} are shown. The corresponding
markers are also listed in Table \ref{Table_par_P1}.}
\label{fi:rate23FERboundP1}
\end{figure}
\begin{figure}[t]
\centerline{\psfig{figure=Sim_01bb.eps,height=\the\hsize,angle=-90}}
\caption{Simulation results and performance bounds of the rate 2/3
$R_{\rm SCCC}$ with $N=3000$. The performances obtained applying
the different $\rho_p$ values listed in Table \ref{Table_par_P1}
are shown.} \label{fi:Sim_01b}
\end{figure}
Table \ref{Table_par_P2} lists the parameters
$d^{i'}_\mathrm{f,eff}$, $d^{o''}(d_\mathrm{f}^{o'})$,
$h(\alpha_M)$, $h_m$ and the multiplicity $N_{h_m}$ of the codewords
at distance $h_m$, for different values of $\rho_p$, with the outer
encoder punctured through $P_{o,2}$ and the inner encoder punctured
through $P_i^p$, reported in
Table~\ref{Table_K200_inner_parity_punc_pos}, and $P_i^s=\Pi[P']$,
where $P'$ is reported in
Table~\ref{Table_K200_inner_syst_P2_punc_pos}. The frame length
selected for this example is again $K=200$ ($N=300$).
Fig.~\ref{fi:rate23FERboundP2} gives the union bound
(\ref{eq:Pfe2}) on the residual Frame Error Rate of the codes
listed in Table \ref{Table_par_P2}. The markers used in Fig.\
\ref{fi:rate23FERboundP2} are listed in Table \ref{Table_par_P2}.
These codes (obtained applying $P_{o,2}$ and the puncturing
patterns of Tables~\ref{Table_K200_inner_parity_punc_pos}
and~\ref{Table_K200_inner_syst_P2_punc_pos}) show performance
similar to that of the codes of Fig.~\ref{fi:rate23FERboundP1}. The
bounds are consistent with the parameters reported in
Table~\ref{Table_par_P2}. All the codes with $\rho_p> 20/300$ have
$h(\alpha_M)=h_m=2$. The performance is then dominated by the
multiplicity $N_{h_m}$, which diminishes as $\rho_p$ increases,
i.e., the number of inner code parity bits which are not punctured
is increased. Therefore, to enhance performance in the error floor
region one should put more puncturing on inner code systematic
bits. In fact, the hierarchy of the curves in
Fig.~\ref{fi:rate23FERboundP2} corresponds to the hierarchy of
$N_{h_m}$ in Table~\ref{Table_par_P2}. Finally, the curve
corresponding to $\rho_p=20/300$ shows the worst performance in
the region of interest, where the multiplicity $N_{h_m}$ is the
dominant term. However, for very high $E_b/N_0$, the performance
being mainly dominated by $h_m$ (equal to three), the curve
corresponding to $\rho_p=20/300$ shows the best performance.
\begin{figure}[t]
\centerline{\psfig{figure=Fig_spettro_cumulativo_rate23_P1.eps,width=\the\hsize}}
\caption{The cumulative function $\sum_1^d A_h^{C_s}$ of the
distance spectra of the rate 2/3 $R_{\rm SCCC}$ codes obtained
applying the different $\rho_p$ values listed in Table
\ref{Table_par_P1}. The corresponding markers are also listed in
Table \ref{Table_par_P1}.} \label{fi:rate23spectrumP1}
\end{figure}
\begin{figure}[t]
\centerline{\psfig{figure=Fig_rate23FERboundP2.eps,width=\the\hsize}}
\caption{Union bound performance of the rate 2/3 $R_{\rm SCCC}$ in
terms of residual FER versus $E_b/N_0$ with $N=300$. The
performances obtained applying the different $\rho_p$ values
listed in Table \ref{Table_par_P2} are shown. The corresponding
markers are also listed in Table \ref{Table_par_P2}.}
\label{fi:rate23FERboundP2}
\end{figure}
Fig.~\ref{fi:R9_10_Min} shows the simulated performance of the
SCCCs with rate $R_{\rm SCCC}=9/10$ in terms of residual FER vs.
$R_{\rm outer}=K \rho_s$, for different values of $E_b/N_0$. The
curves show that the higher the SNR, and hence the lower the
target FER, the heavier should be the puncturing on inner
systematic bits, i.e., the lower should be $\rho_s$. On the
contrary, for higher error probabilities it is advantageous to
keep more systematic bits.
Finally, in Fig.~\ref{fi:R9_10_Sim} we compare the simulated
performance of the SCCCs with rate $R_{\rm SCCC}=9/10$ with the
analytical upper bounds for several values of $\rho_p$. The curves
show that the higher the $E_b/N_0$, the heavier should be the
puncturing on inner systematic bits, i.e., the higher should be
$\rho_p$. Nevertheless, it should be stressed that some of the
inner systematic bits must be maintained in order to allow
convergence of the decoding process. For comparison purposes, we
also report in the same figure the performance of the rate-9/10
PCCC proposed in \cite{Bab04b}. A gain of $2$ dB at FER$=10^{-5}$
is obtained for the code $\rho_p=160/2220$ w.r.t. the code in
\cite{Bab04b}.
From the analytical upper bounds and these examples we may conclude
that the performance strongly depends on the puncturing patterns,
and also on how the puncturing is spread over the inner code
systematic and parity bits. To lower the error floor, it is
advantageous to put more puncturing on inner code systematic bits,
which in general also results in faster convergence (see the curves
marked with filled circles in Fig.~\ref{fi:Sim_01b}).
\section{Conclusions}
In this paper we have proposed a method for the design of
rate-compatible serial concatenated convolutional codes (SCCC).
To obtain rate-compatible SCCCs, the puncturing has not been
limited to inner parity bits only, but has also been extended to
inner systematic bits, puncturing the inner encoder beyond the
unitary rate. A formal analysis has been provided for this new
class of SCCC by deriving the analytical upper bounds to the error
probability. Based on these bounds, we have derived suitable
design guidelines for this particular code structure to optimize
the inner code puncturing patterns. In particular, it has been
shown that the puncturing of the inner code systematic bits
depends on the outer code and, therefore, it is also interleaver
dependent. Moreover, the performance of a SCCC for a given rate
can be enhanced in the error-floor region by increasing the
proportion of surviving inner code parity bits, as long as a
sufficient number of systematic bits is preserved.
The code structure analyzed in this paper, due to its simplicity
and versatility, has been chosen for the implementation of a very
high-speed (1~Gbps) adaptive coded modulation modem for satellite
applications. The interested reader can find implementation details
in \cite{MHOMS}.
\begin{figure}[t]
\centerline{\psfig{figure=R9_10_Min.eps,height=\the\hsize,angle=-90}}
\caption{FER performance versus $R_{\rm outer}=K \rho_s$ for
several $E_b/N_0$. Rate-9/10 SCCC. N=3000.} \label{fi:R9_10_Min}
\end{figure}
\begin{figure}[t]
\centerline{\psfig{figure=Sim_9_10.eps,height=\the\hsize,angle=-90}}
\caption{Simulation results and performance bounds of the rate
9/10 $R_{\rm SCCC}$ with $N=3000$. The performances obtained
applying the different $\rho_p$ values listed in Table
\ref{Table_par_P1} are shown.} \label{fi:R9_10_Sim}
\end{figure}
|
\section{Introduction}
\label{intro}
{\it Planck} is a European Space Agency satellite designed to produce high-resolution temperature and polarisation maps of the CMB. It is equipped with detectors sensitive to a wide range of frequencies from 30 to 857 GHz, split between two instruments, HFI and LFI, the high and low frequency instruments, respectively. The full-sky coverage and frequency range provided by {\it Planck} will also enable the construction of a unique compact source catalogue. Below 100~GHz it will be a significant improvement over that of WMAP (\cite{bennett03}), in terms of sensitivity and hence in the number of sources detected, while above 100~GHz it will be the only full-sky survey for many years to come.
This paper investigates the possibility of recovering the geometric-calibration parameters, defined in Section~\ref{geoCal_param}, as part of the initial stages of the construction of this source catalogue. Previous work (\cite{harrison04}) on the reconstruction of the geometric-calibration parameters concentrated on the recovery of the boresight and roll angle parameters during the course of the mission; the recovery of the reference phase was not discussed, nor has it been discussed elsewhere. While the monitoring and evaluation of these parameters is crucial over the lifetime of the mission, it is expected that much higher accuracies are achievable with an a posteriori evaluation of these parameters using the entirety of the mission data. Hence it is expected that the construction of the final compact source catalogue, FCSC, will provide the definitive values of the geometric-calibration parameters.
An overview of the FCSC is presented in Section~\ref{overview_fcsc}. The geometric-calibration parameters are defined in Section~\ref{geoCal}, together with a discussion on their use in the pointing reconstruction of the {\it Planck} field-of-view and the accuracy requirements of the pointing reconstruction are assessed. The simulations generated to assess the performance of the geometric-calibration methods, outlined in Section~\ref{methods_section}, are discussed in Section~\ref{simul_section}. The results of this analysis are presented in Section~\ref{results_section}, where we show that it is possible to recover these parameters to the required accuracies using solely the bright extragalactic point sources, provided there is only a slow linear variation in the value of these parameters over the course of the mission.
\section{An Overview of the Final Compact Source Catalogue}
\label{overview_fcsc}
The Final Compact Source Catalogue, FCSC, should not be confused with the Early Release Compact Source Catalogue, ERCSC. The ERCSC will be released approximately 1 year after the start of the routine observations, its primary purpose being to allow rapid follow-up observations from Herschel and ground-based millimetre instruments. This catalogue will be produced using the frequency-channel maps from the data corresponding to the first full-sky coverage, which is expected to comprise the first $\sim$7 months of data. The requirement on the release date, however, curtails the focus of the ERCSC to the brightest sources. Further details of the ERCSC implementation plan may be found in \cite{ganga02}. The FCSC will comprise the full mission data for {\it Planck} and be a significant improvement over the ERCSC.
The primary goal in the construction of the FCSC is to ensure the internal consistency and accuracy of the catalogue and final frequency maps. This goal has motivated the division of the catalogue construction into four major stages.
\begin{enumerate}
\item{Stage 1a: The accumulation of the detections from bright point sources in the ring data, where a ring corresponds to a single spin axis positioning, is used to find positions for these point sources and any deviations from the current best-fit values for the geometric-calibration parameters. It is this stage which forms the main focus of this paper.}
\item{Stage 1b: Solar system objects with their highly accurate positional data also allow the recovery of the geometric-calibration parameters. Given the relative motion of these bodies and the {\it Planck} satellite, the methods employed are somewhat different from those in Stage~1a. Hence the discussion of the methods used in Stage~1b is reserved for paper~II, {\it The Geometric Calibration of the Planck satellite using Solar System Objects}.}
\item{Stages 2 \& 3: These stages involve the treatment of the intermediate-source detections on rings and the source detections in maps, respectively. It is envisaged that, for the source detections from maps, it may be necessary to return to the ring data to ensure that maximum accuracy and consistency will be attained. The discussion of the methods involved in these stages of construction is left to paper~III, on the final compact source catalogue.}
\end{enumerate}
\section{Geometric Calibration}
\label{geoCal}
{\it Planck} will be inserted into a Lissajous orbit around the second Lagrange point of the Earth-Sun system, spinning about its axis once per minute. The line of sight is almost perpendicular to the spin axis, so the detectors almost follow great circles. The spin axis will nominally be repositioned every hour, and the roughly 60 circles corresponding to a single spin axis positioning may be binned together to form a ring. The nominal spin axis passes through the centre of the solar panels and is directed away from the sun, thus maintaining the rest of the satellite in a cone of shadow produced by the solar panels. The scanning strategy is determined by the sequence of the nominal positions of the spin axis over the course of the mission.
The geometric calibration is the process whereby all the line-of-sight positions of the detectors are recovered, at any time during the observations. The relationship between the pointing and time may depend on multiple parameters, discussed in full in \cite{leeuwen02}; the star tracker will provide many of these parameters. However, as will be discussed in Section~\ref{pointingRecon}, there is an uncertainty between the line-of-sight of the star tracker and that of the focal plane, which will produce offsets in the values of a few of the geometric-calibration parameters. These offsets may only be recovered with the science data, and it is those geometric-calibration parameters requiring such calibration that we discuss here.
\subsection{The Geometric-Calibration Parameters}
\label{geoCal_param}
\begin{figure}
\begin{center}
\setlength{\unitlength}{1cm}
\begin{picture}(10,8)(0,0)
\put(-1.5,10){\special{psfile=FIGS/LOS_fig.ps vscale=55
hscale=55 angle=270 voffset=0 hoffset=0}}
\end{picture}
\end{center}
\caption[]{The figure shows the two angles which describe the position of the line-of-sight of a detector, ${\rm LOS_d}$, with respect to the spin axis position, SA, and the North Ecliptic Pole, NEP. The angle between the spin axis and the ${\rm LOS_d}$ is given by the opening angle to the detector, $\alpha_d$. The opening angle to the detector from the spin axis position describes the path followed along the sky by the ${\rm LOS_d}$. The position of the ${\rm LOS_d}$ along this path is given by the phase, $\psi_d$. The zero point for the phase may be determined by the intersection point of the great circle which connects the spin axis position and the NEP and the path of the scan circle. This great circle may be defined as the reference meridian.}
\label{los_fig}
\end{figure}
The position of the line-of-sight of a detector with respect to the spin axis position may be described in terms of two angles, which are shown in Figure~\ref{los_fig}. The first, the opening angle, is the angle between the spin axis position and the line-of-sight of the detector. The opening angle defines the path described by the detector, the scan circle. The second angle, the phase, defines the position along the scan circle from a given reference point. This reference point may be given by the intersection point of the scan circle and the great circle connecting the spin axis position with the North Ecliptic Pole, NEP. This great circle, shown in Figure~\ref{los_fig}, will be referred to as the reference meridian. Each repositioning of the spin axis defines its own reference meridian.
\begin{figure}
\begin{center}
\setlength{\unitlength}{1cm}
\begin{picture}(10,8)(0,0)
\put(-1.5,10){\special{psfile=FIGS/geoParam.ps vscale=55
hscale=55 angle=270 voffset=0 hoffset=0}}
\end{picture}
\end{center}
\caption[]{The figure shows the geometric-calibration parameters which allow a description of the position of the field of view, FOV, with respect to the spin axis position, SA. The boresight angle, $\alpha_{FRP}$ is the angle between the FRP and the spin axis. The rotation of the focal plane around the FRP with respect to the nominal scan direction is given by the roll angle, $\rho$. The reference phase, $\psi_{ref}$, is the value of the initial phase at the point at which the FRP crosses the reference point, as defined by the intersection point of the reference circle and reference meridian closest to the NEP. The initial phase is measured from the first point observed on the reference circle.}
\label{geo_fig}
\end{figure}
\begin{figure}
\begin{center}
\setlength{\unitlength}{1cm}
\begin{picture}(8,8)(0,0)
\put(8.85,0){\special{psfile=FIGS/roll.eps vscale=40
hscale=40 angle=90 voffset=0 hoffset=0}}
\end{picture}
\end{center}
\caption[]{The figure shows the positions of the line-of-sight of the {\it Planck} detectors, filled circles, relative to the FRP which is shown by the cross. The black circles are the positions for the nominal scan direction, whereas the grey circles illustrate the effects of a non-zero roll angle on the detector positions. The dotted lines enclose detectors belonging to the same frequency channel.}
\label{roll_fig}
\end{figure}
If we assume the focal-plane geometry, as defined by the relative locations of the detectors to the position of a fiducial reference point, FRP, defined somewhere central in the focal plane, remains fixed, then instead of requiring two angles per detector we may describe the positions of all the detectors in terms of just three angles. These three angles, shown in Figure~\ref{geo_fig}, are: the opening angle to the FRP, otherwise known as the boresight angle, $\alpha_{FRP}$; the phase of the FRP, $\psi_{FRP}$, determined from the reference phase, $\psi_{ref}$, which is measured from the reference point defined by the reference meridian; and the roll angle, $\rho$, given by the rotation of the focal plane around the FRP relative to a nominal scan direction defined within the focal-plane geometry, as illustrated in Figure~\ref{roll_fig}.
The position of the line-of-sight of a detector may now be described in terms of the nominal values for these angles, the phase of the $d^{th}$ detector is given by:
\begin{equation}
\label{def_detPhase_eqn}
\psi_d=\psi_{FRP}+x_d(\rho)
\end{equation} and similarly the opening angle to the detector by:
\begin{equation}
\label{openAngle_eqn}
\alpha_d=\alpha_{FRP}+y_d(\rho)
\end{equation} where $x_d$ and $y_d$ are respectively, the scan and cross-scan positions of the $d^{th}$ detector with respect to the FRP and may be determined by the focal-plane layout and the roll angle. The scan and cross-scan positions of a detector for a given roll angle, $\rho$, may be found using the position given for the detector in the focal plane, $(x_{d,0}, y_{d,0})$, which corresponds to the scan and cross-scan positions of the detector in the case of zero roll angle:
\begin{equation}
\label{rotation_eqn}
\left(\matrix{x_d \cr y_d}\right) = \left(\matrix{ \cos \rho & \sin \rho \cr -\sin \rho & \cos \rho }\right) \left(\matrix{x_{d,0} \cr y_{d,0}}\right)
\end{equation}
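For illustration, equations~\ref{def_detPhase_eqn}--\ref{rotation_eqn} translate directly into code. The following Python sketch (the function and variable names are ours, and the flat small-angle treatment of the focal plane implicit in these equations is assumed) returns the phase and opening angle of a single detector:
\begin{verbatim}
import numpy as np

def detector_angles(x_d0, y_d0, psi_frp, alpha_frp, rho):
    # (x_d0, y_d0): scan / cross-scan detector position for zero roll
    # psi_frp, alpha_frp, rho: FRP phase, boresight and roll angles (rad)
    # roll-angle rotation of the focal-plane position
    x_d = np.cos(rho) * x_d0 + np.sin(rho) * y_d0
    y_d = -np.sin(rho) * x_d0 + np.cos(rho) * y_d0
    # small-angle offsets from the FRP phase and opening angle
    return psi_frp + x_d, alpha_frp + y_d
\end{verbatim}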
The actual values of the geometric-calibration parameters, however, may deviate from their nominal values, as will be discussed in Section~\ref{pointingRecon}. It is therefore useful to express the offset in the phase, $\Delta \psi_d$, and opening angle, $\Delta \alpha_d$, to a detector in terms the differences between the nominal and actual geometric-calibration parameters using equations~\ref{def_detPhase_eqn} to~\ref{rotation_eqn}:
\begin{eqnarray}
\label{offset_dep_eqn}
\Delta \psi_d &=& \delta \psi_{FRP} + x_{d}(\rho)\left(\cos(\delta \rho)-1\right) + y_{d}(\rho)\,\sin(\delta \rho), \nonumber\\
\Delta \alpha_d &=& \delta \alpha_{FRP} - x_{d}(\rho)\,\sin(\delta \rho) + y_{d}(\rho)\left( \cos(\delta \rho)-1 \right).
\end{eqnarray}
The dependence of the recovered position of the line-of-sight of a detector may now clearly be seen in equation~\ref{offset_dep_eqn}; the position along the scan is affected by the reference phase and the roll angle, whereas the position in the cross-scan direction is affected by the boresight and roll angles. Information on the scan-phase correction $\Delta \psi_d$ may be obtained from the measurements of $\psi_d$ for point-source transits. The correction to the opening angle $\Delta \alpha_d$ may be derived from the distribution of intensities of the point-source transits. These corrections may be used to evaluate the offsets in the geometric-calibration parameters; this analysis is the subject of Section~\ref{methods_section}.
\subsection{Pointing Reconstruction}
\label{pointingRecon}
\begin{figure}
\begin{center}
\setlength{\unitlength}{1cm}
\begin{picture}(10,8)(0,0)
\put(-0.25,0){\special{psfile=FIGS/planck_SRS.ps vscale=40
hscale=40 angle=0 voffset=0 hoffset=0}}
\end{picture}
\end{center}
\caption[]{The figure shows the Satellite Reference System, SRS, which is defined in terms of the principal axes of the {\it Planck} satellite. The $x$-axis is aligned with the nominal spin axis, the line-of-sight of the FRP lies in the $xz$-plane, and the $y$-axis completes the right-handed triad.}
\label{planck_SRS_fig}
\end{figure}
The pointing reconstruction of the satellite is achieved by the recovery of the relationship between the line-of-sight of the star tracker and that of the focal plane.
It is helpful to define two reference frames for {\it Planck}, the Satellite Reference System (SRS) and the Inertial Reference System (IRS). The origin of both systems is the centre of gravity, CoG, of the satellite. The SRS is aligned with the principal axes of the satellite, and is shown in Figure~\ref{planck_SRS_fig}. The $x$-axis is defined by the nominal spin axis, which passes through the centre of the solar panels and the CoG, and is directed away from the Sun. The $z$-axis is orthogonal to the $x$-axis and lies in the plane defined by the $x$-axis and the projected line-of-sight of the FRP. The $y$-axis then completes this right-handed triad. The IRS is defined as the system in which the inertia tensor is diagonal; its $x$-axis is the actual spin axis, defined by the largest principal axis as determined by the principal moments of inertia of the satellite.
These two reference systems are connected by a time-varying rotation matrix which must be recovered in flight. As the mission progresses and consumables are depleted, the inertia tensor of the satellite, and hence the axis of rotation, changes, altering the relationship between the SRS and the IRS. The star tracker should recover this time-varying relationship. However, the exact relationship between the star tracker reference system and the SRS is not known. This lack of an exact relation between the star tracker and the SRS and IRS will produce uncertainties in the reconstruction of the actual lines-of-sight of the detectors. These uncertainties have been estimated to be $\sim$1.3 arc-minutes (\cite{chlewicki04}), and may be described in terms of offsets in the geometric-calibration parameters, which must be recovered from the science data in order to meet the accuracy requirements discussed in Section~\ref{acc_reqments}.
The relationships of the focal plane and the star tracker to the SRS, although nominally constant, may have a slow variation with time due to thermal effects. The offsets in the geometric-calibration parameters which must be established from the science data may therefore be time dependent. Additional time variation may arise if the star tracker fails to completely establish the time dependence between the SRS and the IRS. The expected magnitudes of these effects are unknown, but are thought to be less than 1 arcmin.
A more detailed discussion of the reference systems and the attitude analysis may be found in \cite{leeuwen02}.
\subsection{Pointing Accuracy Requirements}
\label{acc_reqments}
Figure~\ref{acc_fig} was created in order to assess the accuracy to which the pointing must be recovered to avoid compromising the reconstruction of the $C_\ell$ power spectrum, which describes the magnitude of the fluctuations in the CMB as a function of angular scale and is the primary science goal of the {\it Planck} satellite. It shows the effect of unknown random pointing reconstruction errors on the recovered $C_{\ell}$ values. The solid grey curves correspond to the error on individual multipoles, whereas the dashed grey curves correspond to the error on multipole bins of 50. The shape of the grey curve is determined at low multipoles, on the left of the figure, by the sample and cosmic variance. On the right, at high multipoles, the error is due to noise and the finite size of the beam. The decrease in the sensitivity of the beam to the higher multipoles may be accounted for at the cost of an exponential increase in the noise at high $\ell$ values, as the noise is undiminished by the beam size. The series of black curves in Figure~\ref{acc_fig} shows the effect of unknown random pointing reconstruction errors. The pointing uncertainties smear the beam, producing a larger effective beam and hence a reduction in the sensitivity to the higher multipoles, as seen in Figure~\ref{acc_fig}. A more detailed discussion of how this figure is obtained may be found in \cite{harrison04}. Ideally, the errors in the reconstruction of the pointing should be such that the additional errors introduced are less than the unsubtractable noise. This places an upper limit on the total pointing error of 9~arcseconds.
It should be noted that correlated errors in the pointing will not produce effects as large as those for pointing errors of the same magnitude which are random. The effect of smearing the beam predominately in one direction is to increase the sensitivity to higher $\ell$ multipoles as compared to the uniform smearing shown in Figure~\ref{acc_fig}. The pointing accuracy requirement generated by assuming unknown random pointing errors should therefore exceed any requirements found from assuming correlated errors, and will represent the tightest constraints on the pointing accuracy required.
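A rough feel for these numbers may be obtained from the following Python sketch, which compares the $C_\ell$ suppression produced by an isotropic Gaussian pointing jitter of rms $\sigma_p$ (modelled here, as an assumption, by the multiplicative factor $\exp[-\ell(\ell+1)\sigma_p^2]$ on the recovered spectrum) with the cosmic-variance error; it ignores instrument noise and the beam itself, and is illustrative only, not a reproduction of Figure~\ref{acc_fig}.
\begin{verbatim}
import numpy as np

ARCMIN = np.pi / (180.0 * 60.0)

def pointing_bias(ell, sigma_p_arcsec):
    # fractional C_ell suppression from unknown random pointing jitter
    sigma_p = (sigma_p_arcsec / 60.0) * ARCMIN
    return 1.0 - np.exp(-ell * (ell + 1) * sigma_p ** 2)

def cosmic_variance(ell, f_sky=0.7):
    # fractional cosmic/sample variance error on a single multipole
    return np.sqrt(2.0 / ((2 * ell + 1) * f_sky))

ell = np.arange(2, 3001)
for s in (5.0, 9.0, 20.0):  # rms pointing error in arcsec
    excess = pointing_bias(ell, s) > cosmic_variance(ell)
    # lowest multipole at which the bias exceeds cosmic variance, if any
    print(s, ell[excess][0] if excess.any() else "never")
\end{verbatim}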
\begin{figure}
\begin{center}
\setlength{\unitlength}{1cm}
\begin{picture}(7,7)(0,0)
\put(8.5,0){\special{psfile=FIGS/bw_cl_acc.eps vscale=37
hscale=37 angle=90 voffset=0 hoffset=0}}
\end{picture}
\end{center}
\caption[]{The figure shows the unsubtractable errors on the $C_{\ell}$s which are dominated by cosmic variance to the left and the finite beam size and noise to the right. The solid grey curves enclose the region corresponding to the error on each individual multipole, whereas the dashed grey curves correspond to the error on multipole bins of width 50. The black curves correspond to the additional errors on the reconstruction of the $C_{\ell}$s due to unknown random pointing errors. A beam of FWHM 5\arcmin, with a noise of 10.3\,$\mu$K per beam and a fractional sky coverage of 0.7 have been assumed.}
\label{acc_fig}
\end{figure}
\section{Methods for the geometric calibration}
\label{methods_section}
As discussed in Section~\ref{pointingRecon} there will be an offset between the line-of-sight, LOS, as determined by the Star Tracker and the actual LOS. It is this offset which produces the systematic offsets between the actual and nominal values of the reference phase, boresight and roll angles, which may only be recovered using the science data.
The methods for evaluating the geometric-calibration parameters therefore, need only assess the offset in the value of the parameters from their nominal values. These methods should also be able to cope with a slow variation in the values of these offsets over the course of the mission, due to thermal effects, as discussed in Section~\ref{pointingRecon}. These offsets must be recovered to accuracies which will allow the total pointing error to remain below 9 arcseconds, as discussed in Section~\ref{acc_reqments}.
As discussed in Section~\ref{geoCal_param}, offsets in the reference phase and the boresight angle produce errors in the recovered position which are orthogonal, and hence may be solved for independently. The offset in the roll angle, however, produces errors in position, in both the scan and cross-scan directions. In practice, however, the roll angle may be solved together with the reference phase independently from the boresight angle, as will be discussed in Section~\ref{methods_refPhase}. The calibration of the opening angles to the detectors and hence the boresight angle is discussed in Section~\ref{methods_openAng}.
\subsection{Reference Phase and Roll Angle}
\label{methods_refPhase}
The position of a point source in the scan direction may be constrained by the phase of the detection. Detections on multiple non-parallel scans which correspond to the same point source, may therefore be used to constrain the position of the point source on the sky. This process is illustrated in Figure~\ref{intersect_fig}, where the solid arrows represent the scan directions. The position of the point source may be found from the intersection point of lines orthogonal to the scan direction extending from the phase of each detection. Figure~\ref{intersect_fig} shows these lines for the cases of no offsets in phase, dashed lines, and unreconstructed time dependent offsets, dotted lines. As seen in Figure~\ref{intersect_fig} any unaccounted for offset in phase will affect the recovered position of the point source. Offsets in the opening angles to the detectors, however, will not affect the recovered position. Offsets in the opening angles only affect the angular separation between the recovered position and the apparent position of the scan. The offsets in the phase of the detections, therefore, allow an offset in the roll angle to be discovered independently of an offset in the boresight angle. An offset in the roll angle is equivalent to a different reference phase offset for every detector, hence an offset in the reference phase may be distinguished from an offset in the roll angle. The minimisation of the residuals between the phase observed for the detection and the phase expected for the point source based on its recovered position, therefore provides a mechanism whereby the offset in the reference phase and roll angle may be assessed. The methods presented here are similar to those employed in the sphere reconstruction for the {\it Hipparcos} satellite, \cite{esa97} and~\cite{lindegren92}.
\begin{figure}
\begin{center}
\setlength{\unitlength}{1cm}
\begin{picture}(8.5,8.5)(0,0)
\put(0,0){\special{psfile=FIGS/intersect_fig.eps vscale=60
hscale=60 angle=0 voffset=0 hoffset=0}}
\end{picture}
\end{center}
\caption[]{This figure shows the evaluation of the position of a point source, with, dashed lines, and without, dotted lines, including corrections to offsets in the reference phase or roll angle. The minimisation of the residuals between the expected phase of the source given its position and the observed phase of the detection allows an assessment of the offsets in the reference phase and roll angle as a function of time.}
\label{intersect_fig}
\end{figure}
The residual phase, $\Delta \psi_{ij}$, for the $i^{th}$ detection corresponding to the $j^{th}$ point source, may now be defined as the difference between the expected phase for the $i^{th}$ detection given the position of the $j^{th}$ point source, $\psi_{ij}$, and the observed phase of the detection, $\psi_i$.
\begin{equation}
\label{abscissa_eqn}
\Delta \psi_{ij}= \psi_{ij} - \psi_{i}
\end{equation} The expected phase, $\psi_{ij}$, may be evaluated using:
\begin{equation}
\label{refPhase_roll_dep}
\psi_{ij}= \psi_{FRP_{ij}} + x_d(\rho_i)
\end{equation} where $x_d$ is the scan phase of the detector, in which the $i^{th}$ detection occurs, with respect to the FRP and is dependent on the current value of the roll angle at the time of this detection, $\rho_i$. The expected phase of the FRP for the $i^{th}$ detection of the $j^{th}$ point source, $\psi_{FRP_{ij}}$, may be calculated using:
\begin{equation}
\label{phase_pos}
\tan (\psi_{FRP_{ij}})=\frac{\sin (\lambda_{SA_i}-\lambda_j)}{\cos \beta_{SA_i} \tan \beta_j -\sin \beta_{SA_i} \cos(\lambda_{SA_i}- \lambda_j)}
\end{equation} where $(\lambda_{SA_i}, \beta_{SA_i})$ is the spin axis position corresponding to the $i^{th}$ detection and $(\lambda_j,\beta_j)$ is the position of the $j^{th}$ point source. An initial position for the $j^{th}$ point source, $(\lambda_j,\beta_j)$, may be found by minimizing the sum of the squared residual phases over all the detections of the $j^{th}$ point source, hence solving:
\begin{equation}
\label{initial_pos_eqn}
\frac{\partial \sum_i (\delta \psi_i^2)}{\partial \lambda_{j}}=0\,\,{\rm and}\,\,
\frac{\partial \sum_i (\delta \psi_i^2)}{\partial \beta_{j}}=0
\end{equation} where the summation is over all the detections of the $j^{th}$ point source and $\delta \psi_i$ is the residual in the phase of the $i^{th}$ detection which may be found using the relationship between small changes in the phase of a detection and the resultant change in position, $(\delta \lambda_i, \delta \beta_i)$ :
\begin{eqnarray}
\label{del_phase_del_pos}
\delta \psi_i & = & \left( \frac{\sin \beta_{SA_i} -\cos \alpha_i \sin \beta_i}{\sin^2 \alpha_i} \right) \delta \lambda_i \nonumber \\
& & \mbox{} -\left( \frac{\cos \beta_{SA_i} \sin(\lambda_{SA_i}-\lambda_i)}{\sin^2 \alpha_i} \right) \delta \beta_i
\end{eqnarray} where $\alpha_i$ is the current value, at the time of the $i^{th}$ detection, of the opening angle to the detector in which the $i^{th}$ detection occurs, and $\delta \lambda_i = \lambda_{j}-\lambda_i $ and $\delta \beta_i = \beta_{j}-\beta_i $, where $(\lambda_i,\beta_i)$ is the position of the detection.
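Note that the expected phase of equation~\ref{phase_pos} is conveniently evaluated with a two-argument arctangent, which removes the quadrant ambiguity of a bare tangent (up to the sign convention adopted for the phase). A minimal Python sketch, with illustrative names:
\begin{verbatim}
import numpy as np

def expected_frp_phase(lam_sa, beta_sa, lam_src, beta_src):
    # spin axis (lam_sa, beta_sa) and source (lam_src, beta_src), radians
    num = np.sin(lam_sa - lam_src)
    den = (np.cos(beta_sa) * np.tan(beta_src)
           - np.sin(beta_sa) * np.cos(lam_sa - lam_src))
    # arctan2 selects the quadrant consistently
    return np.arctan2(num, den)
\end{verbatim}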
Once an initial position for the $j^{th}$ point source has been found, the expected phase of the detection, $\psi_{ij}$, and hence the residual phase, $\Delta \psi_{ij}$, may be evaluated. The residual phase, however, may also be related to changes in the expected phase due to changes in the position of the $j^{th}$ point source, and the residuals in the phase of the detection, $\Delta \psi_d $:
\begin{equation}
\label{refPhase_eqn}
\Delta \psi_{ij}=\delta \psi_{ij} + \Delta \psi_d .
\end{equation} Using equation~\ref{offset_dep_eqn} and under the assumption that the offset in the roll angle is small, $\Delta \psi_d $ may be expressed as:
\begin{equation}
\label{residPhase_eqn}
\Delta \psi_d = -\delta \psi_{ref}(t_i) + y_d (\rho)\, \delta \rho (t_i)
\end{equation} where the offsets in the reference phase and roll angle, may be expressed as a constant offset and a term defining the linear drift in time,
\begin{eqnarray}
\label{dif_refPhase_eqn}
\delta \psi_{ref}(t_i) &= &\left( \psi_{ref_0} +\psi_{ref_1} t_i \right), \nonumber \\
\delta \rho (t_i) &= &\left( \rho_0 +\rho_1 t_i \right).
\end{eqnarray}
Equation~\ref{refPhase_eqn} may now be written in terms of changes to the point source position and the offsets in the reference phase and roll angle, using equations~\ref{del_phase_del_pos} and equation~\ref{residPhase_eqn}.
\begin{eqnarray}
\label{lsq_eqn}
\Delta \psi_{ij} & = & \Lambda_{ij} \delta \lambda_{j} - B_{ij} \delta \beta_{j} - \left( \psi_{ref_0} +\psi_{ref_1} t_i \right) \nonumber \\
& & + y_d (\rho)\,\left( \rho_0 +\rho_1 t_i \right)\nonumber \\
{\rm where,} & & \nonumber \\
\Lambda_{ij} & = & \left( \frac{\sin \beta_{SA_i} -\cos \alpha_i \sin \beta_{j}}{\sin^2 \alpha_i} \right) \nonumber \\
B_{ij} & = & \left( \frac{\cos \beta_{SA_i} \sin(\lambda_{SA_i}-\lambda_{j})}{\sin^2 \alpha_i} \right)
\end{eqnarray} A similar equation may be written for every detection, giving one equation per detection and $2J+4$ unknowns, where $J$ is the number of point sources. The corrections to the point source positions and the values for the systematic offset and drift in the reference phase and roll angle may then be extracted by a nonlinear least squares analysis.
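A minimal dense implementation of one linearised iteration of equation~\ref{lsq_eqn}, solved by ordinary least squares, is sketched below in Python; all names are illustrative, and a real implementation would exploit the sparsity of the design matrix, since each detection involves only one source. The block structure also allows the position corrections to be eliminated analytically, leaving a small system for the four global parameters.
\begin{verbatim}
import numpy as np

def solve_calibration(res_phase, src_idx, lam_coef, beta_coef,
                      y_d, t, n_src):
    # res_phase: residual phases Delta psi_ij, one per detection
    # src_idx:   source index j for each detection
    # lam_coef, beta_coef: Lambda_ij and B_ij coefficients
    # y_d: cross-scan detector positions; t: times (mission-scaled)
    res = np.asarray(res_phase, dtype=float)
    src = np.asarray(src_idx)
    t = np.asarray(t, dtype=float)
    y = np.asarray(y_d, dtype=float)
    n_det = len(res)
    a = np.zeros((n_det, 2 * n_src + 4))
    rows = np.arange(n_det)
    a[rows, 2 * src] = np.asarray(lam_coef)        # d(lambda_j)
    a[rows, 2 * src + 1] = -np.asarray(beta_coef)  # d(beta_j)
    a[:, -4] = -1.0   # psi_ref_0
    a[:, -3] = -t     # psi_ref_1
    a[:, -2] = y      # rho_0
    a[:, -1] = y * t  # rho_1
    x, *_ = np.linalg.lstsq(a, res, rcond=None)
    return x[:-4].reshape(n_src, 2), x[-4:]
\end{verbatim}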
\subsection{The Boresight Angle}
\label{methods_openAng}
As discussed in Section~\ref{geoCal_param}, the opening angle to a detector is given by the angle between the line-of-sight of the detector and the spin axis position. The opening angles to the detectors are determined by the focal-plane layout and the boresight and roll angles, hence if the offset in the roll angle has been accounted for, and the focal-plane layout is known, any remaining offset in the opening angles will be due to an offset in the boresight angle.
Once the values of the reference phase, roll angle and the point source positions have been evaluated, as above, the positions of the point sources may be used to evaluate the value of the ordinate for each detection, where the ordinate is the angular separation, in the cross-scan direction, between the point source and the path of the LOS of the detector.
The detections corresponding to the $k^{th}$ transit of the focal plane, in the cross-scan direction, of the $j^{th}$ point source are accumulated for each of the detectors. It is then possible to find the value of the ordinates which correspond to the peak of the transits, in the cross-scan direction, for each detector in which the $k^{th}$ transit was observed. If there is no offset in the opening angle to the $d^{th}$ detector the value of the ordinate at the peak of the transit will be zero. Hence the offsets in the opening angles may be determined, at the epoch of each transit, by the analysis of the cross-scan transits of the point sources.
The offsets in the opening angle to the detectors for each transit of the focal plane may then be used to find the offset in the boresight angle at the epoch of the transit, $\delta \alpha_{FRP}(t_k)$. As with the reference phase and roll angle parameters, this offset may be expressed as a constant offset and a linear drift in time:
\begin{equation}
\label{bore_drift_eqn}
\delta \alpha_{FRP}(t_k) = \delta \alpha_{FRP_0} + \delta \alpha_{FRP_1} t_k
\end{equation} where $t_k$ is the epoch of the $k^{th}$ transit. By assessing the value of $\delta \alpha_{FRP}$ at every available epoch, that is, at every epoch at which the focal plane transited a sufficiently bright point source, the time variation of the boresight angle may be investigated, and hence the systematic offset, $\delta \alpha_{FRP_0}$, and linear drift, $\delta \alpha_{FRP_1}$, may be evaluated.
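The fit of equation~\ref{bore_drift_eqn} to the per-transit offsets is then an ordinary weighted linear regression, as in the following Python sketch (illustrative names):
\begin{verbatim}
import numpy as np

def fit_boresight_drift(t_k, dalpha_k, sigma_k):
    # t_k: transit epochs (mission-scaled); dalpha_k: per-transit
    # boresight offsets; sigma_k: their errors
    w = 1.0 / np.asarray(sigma_k, dtype=float) ** 2
    a = np.vstack([np.ones_like(np.asarray(t_k, dtype=float)),
                   np.asarray(t_k, dtype=float)]).T
    cov = np.linalg.inv(a.T @ (w[:, None] * a))
    coeff = cov @ (a.T @ (w * np.asarray(dalpha_k, dtype=float)))
    # coeff = (systematic offset, total drift); cov is their covariance
    return coeff, cov
\end{verbatim}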
\subsection{Evaluating the Geometric-Calibration Parameters}
\label{itr_section}
The evaluation of the reference phase, roll angle and point source positions is an iterative process, with the offsets found from their initial values used in the further refinement of these parameters until convergence is reached. The offsets in the boresight angle should only need to be evaluated once, as the point source positions may be found independently of any offset in the boresight angle, as discussed above in Section~ \ref{methods_refPhase}. In practice, however, any significant offsets in the geometric-calibration parameters will affect which detections are classed as belonging to a single point source. It is therefore necessary to iterate over all the parameters, and reassign the detections as improvements in the values of the geometric-calibration parameters occur.
\section{Simulations}
\label{simul_section}
In order to assess the performance of the methods developed here in the reconstruction of the geometric-calibration parameters, it is necessary to simulate a list of point source detections for the {\it Planck} mission. This requires an input point source catalogue, together with a scanning strategy and information on the beams of the {\it Planck} detectors. Throughout this paper we have made the simplifying assumption of Gaussian beams.
\subsection{The scanning strategy}
\label{scanStrat}
While it is anticipated that this method will be applicable to any scanning strategy in which the circles corresponding to a single spin axis position may be binned together as a ring, only two potential scanning strategies for {\it Planck} were investigated here, a sinusoidal and a precessional scanning strategy. The sinusoidal scanning strategy may be described by:
\begin{eqnarray}
\label{Lsin_eqn}
\lambda_k & = & \lambda_0 + k \theta \nonumber \\
\beta_k & = & A \sin(n_s\lambda_k)
\end{eqnarray} and the precessional scanning strategy by:
\begin{eqnarray}
\label{Lprec_eqn}
\nu & = & (\lambda_0 + k\theta) \nonumber \\
\sin(\beta_k) & = & -\sin A \sin( n_p\nu) \nonumber \\
\cos(\phi) & = &\frac{\cos A}{\cos(\beta_k)} \nonumber \\
\lambda_k & = & \left \{
\begin{array}{ll}
\nu+\phi &;\,\frac{\pi}{2} < n_p\nu < \frac{3\pi}{2}\\
\nu-\phi &;\,{\rm otherwise}\\ \end{array}
\right \}
\end{eqnarray} where
\begin{equation}
\lambda_0 = \lambda_\odot + \pi
\end{equation} where $\lambda_\odot$ is the position of the sun at the time the first ring, $k$ is the ring number, $\theta$ is the angular separation between subsequent spin axis positions, $n_{s,p}$ is the number of periods within $2\pi$, and $A$ is the amplitude. The values of these parameters used here are $\theta$=2.5\arcmin, $n_s$=2, $n_p=2.05$, and $A$=10$^\circ$\,. This value of $\theta$, given a repointing once per hour, keeps the spin axis directed away from the sun.
\subsection{The input point source catalogue}
\label{inputCat}
The approximate numbers of point sources visible with {\it Planck} may be predicted by using the IRAS point source catalogue (PSC, \cite{beichman88}). This is a catalogue of some 250,000 well-confirmed point sources, providing positions, flux densities at 12, 25, 60 and 100\micron, uncertainties and various cautionary flags for each source. The selection of objects from this catalogue and the extrapolation of their fluxes to {\it Planck} frequencies, is discussed in \cite{harrison04}. Also included in the input catalogue is the Wilkinson Microwave Anisotropy Probe, WMAP, point source catalogue, \cite{bennett03}. This consists of 208 extragalactic sources as seen in the WMAP maps, at {\it Planck} LFI frequencies. These sources may be extrapolated to {\it Planck} HFI frequencies using the spectral indices provided in the WMAP catalogue. The input point source catalogue constructed this way contains 5796 extragalactic point sources and 2286 galactic sources.
Table~\ref{inputCat30_table} shows the number of galactic and extragalactic sources detectable above a signal-to-noise ratio of 30 in the rings, for each of the {\it Planck} frequencies, under the assumption of the goal noise levels, shown in Table~\ref{noiseLevel_table}.
\begin{table}
\caption{The number of galactic and extragalactic sources with fluxes greater than 30 times the nominal level of the point source sensitivity in the ring data. Numbers in italics correspond to a polarised detector pair.}
\label{inputCat30_table}
\begin{tabular}{|rrrrrrr|}
\hline
Freq & \multicolumn{6}{c}{ No. with SNR $\ge$ 30 } \\
(GHz) & \multicolumn{2}{c}{Extragalactic} & \multicolumn{2}{c}{Galactic} & \multicolumn{2}{c}{Total} \\
\hline
30 & - & {\it 19} & - & {\it 0} & - & {\it 19} \\
44 & - & {\it 9} & - & {\it 0} & - & {\it 9 }\\
70 & - & {\it 2} & - & {\it 0} & - & {\it 2 }\\
100 & - & {\it 17} & - & {\it 1} & - & {\it 18} \\
143 & 24 & {\it13} & 2 & {\it 2} & 26 & {\it 15}\\
217 & 19 & {\it10} & 22 & {\it 14} & 41 & {\it 24}\\
353 & 13 & {\it5} & 99 & {\it 68} & 112 & {\it 73}\\
545 & 21 & - & 258 & - & 279 & -\\
857 & 81 & - & 654 & - & 735 & - \\
\hline
{\bf Any} & \multicolumn{2}{c}{110} & \multicolumn{2}{c}{654} & \multicolumn{2}{c}{764} \\
\hline
\end{tabular}
\end{table}
\begin{table}
\caption{The total intensity of a point source in the ring data, required for a 1 $\sigma$ detection, assuming the goal noise levels are attained.}
\label{noiseLevel_table}
\begin{tabular}{|rrr|}
\hline
Freq & \multicolumn{2}{c}{1$\sigma$ noise (ring) (mJy)} \\
(GHz) & Unpolarised detector & Polarised detector pair\\
\hline
30 & - & 147 \\
44 & - & 230 \\
70 & - & 346 \\
100 & - & 102 \\
143 & 85 & 120 \\
217 & 98 & 139 \\
353 & 184 & 261 \\
545 & 290 & - \\
857 & 332 & - \\
\hline
\end{tabular}
\end{table}
\subsection{Generating simulated data}
\label{simGen}
Instead of simulating time ordered data, TOD, we simulate directly the list of detections of source transits, as delivered by the Level2~DPC from their analysis of the TOD. This list of detections includes the position in phase and the amplitude observed for the transit, together with their respective errors. Additionally, the list includes the number of the detector which made the observation and the ring number on which it occurred.
The times at which a source is observed by a detector depend upon when the line-of-sight of the detector passes close to the position of that source. This in turn depends upon the scanning strategy employed, the focal-plane layout and the values of the geometric calibrations parameters as discussed above.
The simulations assume that the spin axis is repositioned every hour and that the nutation effects are small so that the individual scans may indeed be combined to reach the nominal sensitivity to point sources in rings, as shown in Table~\ref{noiseLevel_table}. Any scanning strategy which meets this proviso may be employed. The simulations also allow the variation in time of the values of the geometric-calibration parameters, as may be expected from the discussion in Section~\ref{pointingRecon}.
As any time variation in the parameters is expected to be slow, as also discussed in Section~\ref{pointingRecon}, the parameters will be constant over the time frame of an individual ring. Hence, the instantaneous offset in a geometric-calibration parameter, $\gamma$, may be defined on each ring by:
\begin{equation}
\label{inst_offset_eqn}
\gamma \left(\Gamma_i\right) = \gamma_0 + \gamma_1 \frac{\left( \Gamma_i-\Gamma_{max}/2 \right)}{\Gamma_{max}}
\end{equation} where $\Gamma_i$ is the current ring number, $\Gamma_{max}$ is the final ring of the mission and the ring numbers start from zero. The systematic offset in the parameter, $\gamma_0$, is hence defined as the instantaneous offset of the parameter exactly half-way though the mission and the drift in the value of the parameter, $\gamma_1$, is defined as the total drift in the value of the parameter over the course of the mission.
For every spin axis position, the instantaneous values of the geometric-calibration parameters are assessed and used to reconstruct the lines-of-sight of the focal plane. If a source is located nearby, the amplitude with which the source is observed is assessed and, if above a threshold signal-to-noise ratio, it may be included in the simulated list of detections. Errors on the amplitude of the detections are generated assuming the goal values of the noise in the ring and white noise. In order to generate realistic errors in the phase of the detection, an assessment must be made of how well the position of the transit in the scan (phase) direction may be measured. If the beams are Gaussian, the position of the peak of the transit, $\psi_t$, may be found by solving $z(\psi_t)=0$, where:
\begin{equation}
\label{minimise_eqn}
z(\psi_t)= \sum_i \frac{\psi_i-\psi_t}{\sigma^2_b} A_i
\end{equation} where $\sigma_b$ is the beam sigma and $A_i$ and $\psi_i$ are the amplitudes and phases of the $i^{th}$ sample in the transit. By simulating point source transits it is possible to establish a relationship between the peak amplitude for the transit and the error in the recovered phase for the transit. Hence, an empirical relationship may be found between the signal-to-noise ratio of the detection of the point source and the magnitude of the error in the recovered phase of the transit, $\sigma_{\psi}$:
\begin{equation}
\label{phase_err_eqn}
\frac{\sigma_{\psi}}{\sigma_{b}}=\frac{1.7}{SNR}
\end{equation} where SNR is the signal-to-noise ratio of the detection. This expression was used to determine the magnitude of the error in the phase to be included in the simulations. This phase error will dominate any errors in the phase which result from those geometric-calibration parameters determined soley by the star tracker, such as errors in the velocity-phase relation.
\section{Results}
\label{results_section}
Once a simulated list of detections has been generated, it may be analysed using the methods discussed in Section~\ref{methods_section}. The range of offsets in the geometric-calibration parameters which may be successfully recovered and the accuracies to which the recovered values may be attained may then be investigated. Unless otherwise stated, the simulated list of detections is generated using the goal noise levels in Table~\ref{noiseLevel_table} and only includes detections from extragalactic point sources above a threshold signal-to-noise ratio of 30.
Recovering a pointing offset depends upon being able to determine which detections are due to an individual point source. The largest magnitude of pointing offset which may be recovered is therefore related to the beam sizes of the 30~GHz channel, which has the largest beams. It has proved possible to successfully recover a pointing offset of $\sim 20 \arcmin$, which is greatly in excess of the expected magnitude of the offset, as discussed in Section~\ref{pointingRecon}.
\begin{figure}
\begin{center}
\setlength{\unitlength}{1cm}
\begin{picture}(7,6.25)(0,0)
\put(8.5,-1){\special{psfile=FIGS/sysRefP_vary30.ps vscale=37
hscale=37 angle=90 voffset=0 hoffset=0}}
\end{picture}
\end{center}
\caption[]{This figure shows the ability of this method to recover systematic offsets in the reference phase, $\delta \psi_{ref_0}$, in the case of the sinusoidal scanning strategy. The differences between the input and recovered values of the systematic offset in the reference phase are plotted against the input values. For each input value, one hundred noise realisations were performed and the mean of the difference plotted with its error. The grey line is the global mean of these differences and the dashed lines enclose the region within 1$\sigma$ of this global mean. The recovery of $\delta \psi_{ref_0}$ can hence be seen to be unbiased.}
\label{sysRefP_fig}
\end{figure}
Figure~\ref{sysRefP_fig} shows the difference between the value of the systematic offset in the reference phase, $\delta \psi_{ref_0}$, input to the simulations and the recovered value of $\delta \psi_{ref_0}$ against the input value, for the sinusoidal scanning strategy. This figure shows that the methods presented here have successfully recovered $\delta \psi_{ref_0}$ over a range of values which exceed those which may be expected. One hundred noise realisations per value of $\delta \psi_{ref_0}$ investigated were performed, with each mean and its associated error plotted. The grey line shows the mean value over all values of $\delta \psi_{ref_0}$ with the dashed lines representing the 1$\sigma$ errors in this global mean. Similar analyses have been performed for all the other geometric-calibration parameters for both scanning strategies. It is found that they may be successfully recovered over the range of possible offset values. The reference phase and roll angle are found to have an unbiased recovery, but the boresight angle is particularly sensitive to the errors in the recovered point source positions and so is vulnerable to small biases in the recovery of its parameters. These biases were found to be of the order of a few hundredths of an arcsecond to a tenth of an arcsecond depending on the scanning strategy and the signal-to-noise threshold used for inclusion of detections in the analysis. Using only the highest signal-to-noise detections increases the likelihood of a biased recovery of these parameters as their assessment is then limited to a very few point sources. Given the pointing requirements, discussed in Section~\ref{acc_reqments}, a potential bias in the recovery of a geometric calibration parameter at this level is not a significant cause for concern. Figure~\ref{sysBore_both_fig} shows the recovery of the systematic offset in the boresight angle, $\delta \alpha_{FRP_0}$, for the sinusoidal and precessional scanning strategies, in the upper and lower panels respectively. In the case of the sinusoidal scanning strategy the recovery of $\delta \alpha_{FRP_0}$ was found to be biased at the order of a few hundredths of an arcsecond, whereas no bias in the recovered value was found when the experiment was repeated using the precessional scanning strategy.
\begin{figure}
\begin{center}
\setlength{\unitlength}{1cm}
\begin{picture}(7,6.25)(0,0)
\put(8.5,-1){\special{psfile=FIGS/fig8.ps vscale=36
hscale=36 angle=90 voffset=0 hoffset=0}}
\end{picture}
\end{center}
\caption[]{This figure shows the difference between the input and recovered values of the systematic offset in the boresight angle, $\delta \alpha_{FRP_0}$, versus the values input to the simulations, which used the sinusoidal scanning strategy (top panel) or the precessional scanning strategy (bottom panel). For each input value, one hundred noise realisations were performed and the mean of the difference plotted with its error. The grey line is the global mean of these differences and the dashed lines enclose the region within 1$\sigma$ of this global mean. The recovery of $\delta \alpha_{FRP_0}$ is seen to be biased at the level of a few hundredths of an arcsecond in the case of the sinusoidal scanning strategy and unbiased in the case of the precessional scanning strategy.}
\label{sysBore_both_fig}
\end{figure}
The simulations may also be used as a check on whether the errors found for the recovered values of the geometric-calibration parameters are a true representation of the underlying errors in each of these parameters. The simulations reveal that the dispersion in the mean recovered value of a parameter is consistent with the calculated error in the parameter over the range of input values investigated, again with the exception of the boresight parameters in which the calculated error may be overestimated relative to the dispersion in the recovered value. The size of this overestimation is also found to have some dependence on the scanning strategy employed and the threshold signal-to-noise ratio used.
Figure~\ref{sysBoreERRs_lprec_fig} shows the dispersion in the mean recovered value of $\delta \alpha_{FRP_0}$ and the mean calculated error in the same against the value input to the simulations, which used the precessional scanning strategy. This shows that the calculated error is representative of the actual error in the recovered value of the parameter. The error in the recovered offset also has no discernible relationship with the input value of the offset. Figure~\ref{driftBoreERRs_lprec_fig} shows the overestimation of the error in the drift of the boresight angle, $\delta \alpha_{FRP_1}$, as compared to the dispersion in the recovered value of this parameter. Again the precessional scanning strategy is used and no dependence of the errors on the initial offset in the parameter is found.
\begin{figure}
\begin{center}
\setlength{\unitlength}{1cm}
\begin{picture}(7,6.25)(0,0)
\put(8.5,-1){\special{psfile=FIGS/sysBore_vary30_errors_lprec.ps vscale=37
hscale=37 angle=90 voffset=0 hoffset=0}}
\end{picture}
\end{center}
\caption[]{This figure shows the dispersion in the mean recovered value of $\delta \alpha_{FRP_0}$, black crosses, and the mean calculated error in the value of $\delta \alpha_{FRP_0}$, grey crosses, against the value of $\delta \alpha_{FRP_0}$ input to the simulations, which used the precessional scanning strategy. The dashed lines enclose the 1$\sigma$ region about the mean dispersion, and the grey line is the mean value of the calculated error. The calculated errors for $\delta \alpha_{FRP_0}$ are seen to be representative of the underlying error in the recovered value of $\delta \alpha_{FRP_0}$.}
\label{sysBoreERRs_lprec_fig}
\end{figure}
\begin{figure}
\begin{center}
\setlength{\unitlength}{1cm}
\begin{picture}(7,6.25)(0,0)
\put(8.5,-1){\special{psfile=FIGS/driftBore_vary30_errors_lprec.ps vscale=37
hscale=37 angle=90 voffset=0 hoffset=0}}
\end{picture}
\end{center}
\caption[]{This figure shows the dispersion in the mean recovered value of $\delta \alpha_{FRP_1}$, black crosses, and the mean calculated error in the value of $\delta \alpha_{FRP_1}$, grey crosses, against the value of $\delta \alpha_{FRP_1}$ input to the simulations, which used the precessional scanning strategy. The dashed lines enclose the 1$\sigma$ region about the mean dispersion, and the grey line is the mean value of the calculated error. The calculated errors for $\delta \alpha_{FRP_1}$ are seen to be overestimates of the actual error in the recovered values of $\delta \alpha_{FRP_1}$; again there is no dependence of the errors on the value of $\delta \alpha_{FRP_1}$.}
\label{driftBoreERRs_lprec_fig}
\end{figure}
The disagreement of the errors and the occasional bias in the recovered values found for the boresight parameters are both due to errors in the recovered positions of the point sources used in this analysis. Simulations in which the correct positions of the point sources are used show no bias in the recovered values and no inconsistencies between the calculated errors and the dispersions in the recovered values. This also explains the dependence on the scanning strategy used, as this affects the errors in the recovered positions of the point sources. Limiting the evaluation of the boresight parameters to the top four frequency channels, which have the smallest beams, is found to minimise the biases in the recovered values. If required, an assessment of whether the recovered values of the boresight parameters are likely to be biased may be made using the actual point sources observed and scanning employed by {\it Planck}.
\begin{table}
\caption{Comparing the errors in the recovered values of the geometric-calibration parameters in the cases of the sinusoidal and precessional scanning strategies. In both cases only extragalactic point source detections with signal-to-noise ratios of 30 or greater were used in the analysis.}
\label{meanCalERRS_precVsin_table}
\begin{tabular}{|lrr|}
\hline
Scanning Strategy: & sinusoidal & precessional \\
& (arcsec) & (arcsec) \\
\hline
$\sigma_{\psi_{ref_0}}$& 0.22 & 0.16 \\
$\sigma_{\psi_{ref_1}}$& 0.17 & 0.19 \\
$\sigma_{\alpha_{FRP_0}}$& 0.20 & 0.15 \\
$\sigma_{\alpha_{FRP_1}}$& 0.92 & 0.73 \\
$\sigma_{\rho_{0}}$& 4.2 & 3.6 \\
$\sigma_{\rho_{1}}$& 12.6 & 14.5 \\
\hline
\end{tabular}
\end{table}
Table~\ref{meanCalERRS_precVsin_table} compares the errors in the recovered values of the geometric-calibration parameters when the sinusoidal and precessional scanning strategies are used. Due to the geometry of the ring crossings, the positions of the point sources are attained to a slightly higher accuracy when the precessional scanning strategy is used, which results in the slightly lower errors seen for that strategy, especially for the boresight parameters.
\begin{table}
\caption{This table shows the errors in the recovered values of the geometric-calibration parameters for different signal-to-noise ratio thresholds for the inclusion of detections in the analysis, in the case of the sinusoidal scanning strategy.}
\label{meanCalERRS_table}
\begin{tabular}{|lrrr|}
\hline
Threshold SNR: & 40 & 30 & 20 \\
& (arcsec) & (arcsec) & (arcsec) \\
\hline
$\sigma_{\psi_{ref_0}}$& 0.25 & 0.22 & 0.21 \\
$\sigma_{\psi_{ref_1}}$& 0.19 & 0.17 & 0.16 \\
$\sigma_{\alpha_{FRP_0}}$& 0.24 & 0.20 & 0.17 \\
$\sigma_{\alpha_{FRP_1}}$& 1.10 & 0.92 & 0.87 \\
$\sigma_{\rho_{0}}$& 4.7 & 4.2 & 3.8 \\
$\sigma_{\rho_{1}}$& 14.3 & 12.6 & 11.3 \\
\hline
\end{tabular}
\end{table}
Table~\ref{meanCalERRS_table} shows the calculated errors in the recovered values of the geometric-calibration parameters for threshold signal-to-noise ratios of 40, 30 and 20. This shows that including the lower signal-to-noise ratio detections has very little impact on the accuracies to which the geometric-calibration parameters may be attained, in the case of ideal conditions. The simulations, however, may also be used to investigate the errors in the recovered values of the geometric-calibration parameters under non-ideal conditions. Table~\ref{meanCal30_table} shows the errors attained in the geometric-calibration parameters using all the detections with a signal-to-noise ratio above the threshold value of 30, for four different scenarios. The errors attained using the goal noise levels, shown in Table~\ref{noiseLevel_table}, are compared against those using double these goal noise levels, as well as those where the error in the phase of the peak of each transit was doubled. Given that the majority of the point sources detectable by {\it Planck}\, appear only in the top frequency channel, the above analysis was also performed without this channel, to investigate whether its loss would destroy our ability to achieve the pointing requirements. The errors in the geometric-calibration parameters shown in Table~\ref{meanCal30_table} may be expressed as errors in the pointing reconstruction. The largest errors in the reconstructed pointing will occur at the beginning and end of the mission, when the errors in the drift parameters make their largest contributions, as may be seen from equation~\ref{inst_offset_eqn}. Table~\ref{MAXpointingERR_table} shows the errors in the pointing reconstruction, as found for the errors in the geometric-calibration parameters shown in Table~\ref{meanCal30_table}, for the detectors in the HFI and LFI which have the largest uncertainty in their positions. This is due to their location in the focal plane with respect to the FRP, and hence their greater sensitivity to errors in the roll angle. Table~\ref{MAXpointingERR_table} shows that even in the case of doubling the goal noise levels, the pointing requirements are still easily achievable.
\begin{table}
\caption{The errors found in the recovered values of the geometric-calibration parameters using the sinusoidal scanning strategy and extragalactic source detections with signal-to-noise ratios greater than 30, where (i) uses the goal noise levels, (ii) excludes the 857~GHz frequency channel, (iii) doubles the error in the phase of the peak of the transits and (iv) uses double the goal noise levels.}
\label{meanCal30_table}
\begin{tabular}{|lrrrr|}
\hline
& (i) & (ii) & (iii) & (iv) \\
& (arcsec) & (arcsec) & (arcsec) & (arcsec) \\
\hline
$\sigma_{\psi_{ref_0}}$& 0.22 & 0.25 & 0.45 & 0.64\\
$\sigma_{\psi_{ref_1}}$& 0.17 & 0.27 & 0.36 & 0.45 \\
$\sigma_{\alpha_{FRP_0}}$& 0.20 & 0.36 & 0.36 &0.69 \\
$\sigma_{\alpha_{FRP_1}}$& 0.92 & 1.56 & 1.71 &2.96 \\
$\sigma_{\rho_{0}}$& 4.2 & 4.3 & 8.9 &10.8 \\
$\sigma_{\rho_{1}}$& 12.6 & 12.9 & 26.2 & 36.1 \\
\hline
\end{tabular}
\end{table}
\begin{table}
\caption{The maximum pointing error found for an HFI and LFI detector using the errors in the recovery of the geometric-calibration parameters shown in Table~\ref{meanCal30_table}, where (i) uses the goal noise levels, (ii) excludes the 857~GHz frequency channel, (iii) doubles the error in the phase of the peak of the transits and (iv) uses double the goal noise levels.}
\label{MAXpointingERR_table}
\begin{tabular}{lrrrr}
\hline
& (i) & (ii) & (iii) & (iv) \\
& (arcsec) & (arcsec) & (arcsec) & (arcsec) \\
\hline
HFI (545 GHz) & 0.62 & 0.95 & 1.20 & 1.93 \\
LFI (44 GHz) & 0.86 & 1.13 & 1.74 & 2.55 \\
\hline
\end{tabular}
\end{table}
\section{Discussion}
The methods presented here show that recovering the geometric-calibration parameters as part of the initial stages of the construction of the final source catalogue successfully extracts any offsets in the values of these parameters, using only the detections due to extragalactic point sources. The accuracy to which the geometric-calibration parameters may be attained far exceeds the pointing accuracy requirements discussed in section~\ref{acc_reqments}, so much so that it has proved possible to recover the geometric-calibration parameters under the non-ideal conditions of the loss of a frequency channel and of double the goal noise levels. The accuracies to which the geometric-calibration parameters may be recovered are such that the resulting errors in the pointing reconstruction are of the same level as those due to the uncertainties in the mean spin axis position recovered by the star tracker, which are expected to be of the order of 1\arcsec-2\arcsec. The achievable accuracies for the geometric-calibration parameters found above did not include errors in the remaining geometric-calibration parameters determined solely from the star tracker, such as errors in the mean spin axis position or in the velocity-phase relation. The expected level of these errors, however, has a negligible effect on the errors found above.
The focal-plane layout, relative to the FRP, was assumed to be fixed and known. If thermal variations produce offsets in the positions of the detectors relative to the FRP, these will also need to be recovered using the science data. It is anticipated that any positional offsets in the focal-plane layout will be recovered using the planetary transits of the focal plane.
The methods presented here have assumed Gaussian beams. In reality, however, the {\it Planck} beams will not be Gaussian. If the beams are not symmetric about the scan direction, then the phase corresponding to the peak amplitude of the transit will depend on the ordinate of the point source. Any dependence of the phase of the transit on the ordinate of the point source must be included in the analysis, and may lead to increases in the errors of the phases for the detections. We have, however, demonstrated that doubling the error in the phase of each transit roughly doubles the error in the pointing reconstruction due to the geometric-calibration parameters, and these resultant errors still easily meet the required pointing accuracy. There is therefore plenty of scope for increased uncertainties in the positions of the transits.
Due to the number of available bright point sources, the time resolution of the recovery of the geometric-calibration parameters is poor, and any fast evolution in a parameter will not be recoverable from the science data. Of concern is whether an unsolved variation in a parameter could bias the recovery of the parameter through an uneven distribution of detections. To investigate this possibility, we examined the effect of an unsolved drift in the parameters. An unrecovered drift was found not to affect the recovered value of the systematic offset until the pointing requirements are exceeded by the unrecovered drift itself. It is therefore likely that any unsolved-for variations will not affect the recovery of the systematic offsets and drifts while they remain small enough not to be a problem in and of themselves.
\section*{Acknowledgments}
This work was supported by PPARC at the Cambridge Planck Analysis Centre.
In mid-1988, an unusual stellar outburst in the nuclear bulge of the Andromeda
Galaxy, M31, was discovered independently by Rich et~al.\ (1989), Bryan \& Royer
(1991), and Tomaney \& Shafter (1992). Although similar in luminosity to a
classical nova, the object was cool and red throughout its eruption. Its
behavior was thus completely different from that of a classical nova, in which
an extremely hot and blue remnant is quickly revealed as the ejected envelope
expands and becomes optically thin. This remarkable object has been called the
``M31 red variable," or ``M31 RV.''
M31 RV's optical outburst light curve has been assembled by Sharov (1990, 1993) and more recently by Boschi \& Munari (2004), who combined published observations with their own analyses of archival plate material. Although the rise to
maximum was not well observed, M31~RV was brighter than 18.5 $B$ magnitude
($M_B\lesssim-6$) for at least 80 days, but then rapidly declined to
invisibility. Inspection of archival plates shows that the 1988 outburst was the
only one in the past half century (Boschi \& Munari 2004), the claim of a
previous eruption in 1968 (Sharov 1990) having subsequently been withdrawn
(Sharov 1993).
Spectroscopic observations of M31 RV near maximum and during the subsequent
decline showed a spectrum resembling that of an M0 supergiant (Rich et al.\
1989), gradually evolving toward M5 and then late M as the outburst proceeded
(Mould et al.\ 1990). At maximum brightness, M31~RV was one of the most luminous
stars in the Local Group, at a bolometric absolute magnitude of $M_{\rm
bol}\simeq-10$ (Rich et al.\ 1989). Unfortunately, M31~RV was not well observed
during its outburst, and little else is known about this unusual event.
Interest in M31 RV has revived recently because of its striking resemblance to
V838 Monocerotis. V838~Mon, a previously unknown Galactic variable star, erupted
in 2002 January and reached a peak luminosity similar to that of M31 RV (Bond et
al.\ 2003; Munari et al.\ 2005; and references therein). Its spectrum evolved
rapidly from type K to a very cool M and then L type, accompanied by formation
of a dense circumstellar dust envelope (Banerjee \& Ashok 2002; Evans et al.\
2003; Lynch et al.\ 2004; Rushton et al.\ 2005). The outburst of V838 Mon was
followed by the appearance of a spectacular light echo (Henden, Munari, \&
Schwartz 2002; Munari et al.\ 2002; Bond et al.\ 2003; Crause et al.\ 2005),
imaged extensively by the {\it Hubble Space Telescope\/} ({\it HST}\/) as well as
from the ground, which continues to evolve at the present time.
V4332 Sagittarii is a third object\footnote{Nova V1148~Sgr 1943 was reported to
have a late-type spectrum by Mayall (1949) based on three objective-prism
plates, making it a possible fourth member of the class, but nothing else is
known about this object. More speculatively, Kato (2003) has suggested that Nova
CK~Vul 1670 could have been a V838~Mon-like event, but of course nothing is
known about its outburst spectrum.} with similarities to M31~RV and V838~Mon. In
1994, V4332 Sgr had a nova-like outburst, during which it remained very cool
(Martini et al.\ 1999; Banerjee \& Ashok 2004; Tylenda et al.\ 2005 and
references therein). The absolute luminosity of V4332~Sgr is highly uncertain,
but if the star lies in the nuclear bulge of our own Galaxy, it was several
magnitudes less luminous than M31~RV and V838~Mon at maximum light.
A number of explanations have been proposed for this new class of peculiar
outburst events. These include an outburst from a compact object in a red-giant
envelope (Mould et al.\ 1990); a hydrogen shell flash on the surface of an
accreting cold white dwarf in a short-period binary (Iben \& Tutukov 1992); a
thermonuclear event in a single star (Martini et al.\ 1999; Munari et al.\
2005); the merger of two main-sequence stars (Soker \& Tylenda 2003; Tylenda et
al.\ 2005); the accretion of planets by a giant star (Retter \& Marom 2003); and
a born-again red-giant event in a binary system (Lawlor 2005). Moreover, Yaron
et al.\ (2005) hint that a complete exploration of the parameter space for
classical novae might reveal models with properties similar to these objects.
The discovery that V838~Mon has an unresolved B-type companion (Munari \&
Desidera 2002; Wagner \& Starrfield 2002), as well as belonging to a small
cluster containing several more B-type stars (Afsar \& Bond 2005), and that it
therefore may have arisen from a fairly massive progenitor star, may rule out
some of these scenarios, but a fully convincing explanation remains elusive.
The plethora of mutually exclusive scenarios shows that these objects pose a
significant challenge to our understanding of stellar physics. In an effort to
provide further information on M31~RV, we have examined archival images of the
site of the event obtained with {\it HST}\/ some 11 years after the outburst. The
aims of our investigation are to determine whether M31~RV produced a detectable
light echo, to characterize the stellar population surrounding the object, and,
if possible, to detect a remnant star.
\section{Archival \emph{HST} and Ground-Based Images}
M31~RV has never been targeted specifically by {\it HST}, but the bulge of M31 has
been imaged in several unrelated programs. We searched the archive\footnote{The
{\it HST}\/ data archive is available at http://archive.stsci.edu/hst} for images
that serendipitously cover the location of M31~RV, obtained with any of the
{\it HST}\/ cameras: Wide Field and Planetary Camera (WF/PC), Wide Field Planetary
Camera~2 (WFPC2), Space Telescope Imaging Spectrograph (STIS), Near Infrared
Camera and Multiobject Spectrograph (NICMOS), and Advanced Camera for Surveys
(ACS)\null. We found that no images of the site have been obtained with STIS or
NICMOS, nor with WF/PC (although one set of frames taken in 1992 July missed
M31~RV by only $1\farcs6$!).
With WFPC2, there are two sets of archival observations that do include the site
of M31~RV\null. One set, called hereafter the ``u58 series,'' was taken on 1999
July 23--24 in program GTO-8018 (PI: R.~F.\ Green); these WFPC2 images were taken
in parallel mode during STIS spectroscopy of the nucleus of M31, and thus the
inclusion of M31~RV in the WFPC2 field is purely fortuitous. The u58 series
consists of datasets u5850101r through u5850108r, taken through the ``$V$''
filter (F555W) (consisting of 8 dithered exposures of 600--1000~s), and datasets
u5850109m through u585020br, taken through the ``$I$'' filter (F814W)
(consisting of 13 exposures of 300--1000~s). In these images, M31~RV lies in the
high-resolution Planetary Camera (PC) chip.
An earlier set of six observations, which we call the ``u31 series,'' was taken
on 1995 December 5 in program GTO-6255 (PI: I.~King), which was also a STIS
spectroscopic program on the M31 nucleus, with the WFPC2 images again being
taken in parallel. The u31 series contains two 1300~s exposures through the
F300W ultraviolet filter (u31k0109t and u31k010bt), and four 2700~s exposures
through the F170W ultraviolet filter (u31k010ft through u31k010rt). In the u31
series, the site of M31~RV lies in one of the low-resolution Wide Field (WF)
chips.
Finally, with the ACS, there are two {\it HST}\/ observations fortuitously showing
the M31~RV site, both of them 2200~s exposures in the ``$B$'' band (F435W),
taken in program GO-10006 (PI: M.~Garcia) as part of a project on X-ray novae in
M31. Exposures were taken on 2003 December 12 (dataset j8vp02010) and 2004
October~2 (j8vp07010), the latter being unavailable to us at this writing due to
the proprietary {\it HST}\/ data policy.
M31~RV lies in an extremely dense stellar field, which is well resolved in
{\it HST}\/ images. It is therefore desirable to locate the site of the outburst
event as accurately as possible, as a preliminary to investigation of the {\it HST}\/
frames. We have done this using two sets of ground-based CCD images that
include the site of M31~RV, one of which shows the object during its outburst.
The first ground-based set was obtained by R.~Ciardullo on 1988 September 29
with the No.~1 0.9-m reflector at Kitt Peak National Observatory (KPNO), and was
kindly made available to us. These images were obtained fortuitously during the
eruption of M31~RV, as part of a search for classical novae in the bulge of M31,
and contain well-exposed images of M31~RV (which was cataloged as ``Nova~36''
in Ciardullo et al.\ 1990). There are two 900~s exposures taken through a
narrow-band H$\alpha$ filter, and one 420~s frame taken through a wider-band
continuum filter centered at 6091~\AA.
Because of the small field of view of the {\it HST}\/ images, and the relative
shallowness of the 0.9-m frames, there are too few detectable stars seen in
common in both the {\it HST}\/ and 0.9-m frames to allow them to be registered
astrometrically. We therefore also used a second, deeper set of ground-based
frames of the M31 bulge, which had been obtained by H.E.B.\ for a different
purpose with the KPNO 4-m Mayall telescope and its Mosaic camera on 1999
January~17, and which fortuitously cover the location of M31~RV\null. These
frames show the brightest stars seen in the {\it HST}\/ images, while having a large
enough field to also show several stars visible in the 0.9-m frames. We chose
for further analysis two 30~s exposures taken in good seeing through a
Kron-Cousins $I$ filter.
\section{Astrometry}
\subsection{Absolute Astrometry of M31~RV}
We first used the three KPNO 0.9-m frames that show M31~RV in eruption to derive
absolute astrometry of the object. We identified 9 nearby field stars contained
in the NOMAD-1.0 astrometric catalog (Zacharias et al.\ 2004; optical
photographic survey plates in the bright M31 bulge are generally saturated, and
most of the NOMAD positions at this location are derived from stellar
coordinates in the 2MASS infrared catalog). These field stars were used to
establish an astrometric grid on each of the three frames, from which we
obtained a position for M31~RV accurate to about $\pm$$0\farcs04$ in each
coordinate, based on the rms scatter in the astrometric solutions and the
excellent agreement among our three CCD frames.
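For illustration, the plate solution underlying these positions amounts to a six-parameter linear least-squares fit mapping CCD pixel coordinates of the reference stars onto tangent-plane sky coordinates, with the rms of the fit residuals giving the per-coordinate astrometric error. The sketch below uses synthetic placeholder star lists, not the actual NOMAD-1.0 positions.
\begin{verbatim}
import numpy as np

# Placeholder reference stars (in practice, the 9 NOMAD-1.0 stars):
# CCD pixel coordinates and tangent-plane coordinates (arcsec).
x = np.array([102.3, 541.8, 873.1, 220.5, 660.0,
              950.7, 330.2, 780.4, 450.9])
y = np.array([88.1, 143.7, 310.2, 505.6, 612.3,
              720.8, 830.5, 905.1, 401.4])
xi = 0.60 * x + 0.01 * y + 5.0      # synthetic "truth" for the demo
eta = -0.01 * x + 0.60 * y - 3.0

# Six-parameter linear plate solution: xi = a*x + b*y + c, etc.
A = np.column_stack([x, y, np.ones_like(x)])
coef_xi, *_ = np.linalg.lstsq(A, xi, rcond=None)
coef_eta, *_ = np.linalg.lstsq(A, eta, rcond=None)

# The rms of the residuals gives the per-coordinate astrometric
# error (~0 here by construction; ~0.04 arcsec for the real frames).
print(np.sqrt(np.mean((A @ coef_xi - xi) ** 2)),
      np.sqrt(np.mean((A @ coef_eta - eta) ** 2)))
\end{verbatim}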
The absolute position of M31~RV is given in Table~1 along with, for comparison,
the positions derived by Munari, Henden, \& Boschi (2003) from two photographic
plates obtained with the Asiago Schmidt telescope during the outburst. (Munari
et al.\ also list several earlier, less-precise position measurements from the
literature.) The agreement is satisfactory, given the smaller plate scale of
the Asiago material and the necessity to set up secondary astrometric standards
as part of their solutions. This position lies $4\farcm7$ to the southeast of
the nucleus of M31, corresponding to a projected linear separation of 1.0~kpc.
\subsection{Astrometric Registration of the Ground-Based and \emph{HST} Images}
We then proceeded to locate the site of M31~RV on the {\it HST}\/ frames. As noted
above, we cannot directly register the KPNO 0.9-m frames showing M31~RV with the
{\it HST}\/ frames, since there are insufficient visible stars in common.
We therefore used the KPNO 4-m frames in an intermediate step, as follows.
First, we selected the 0.9-m frame with the best seeing, and identified 9 stars
visible both on this frame and an image created by registering and combining the
two excellent 4-m frames. We then used the IRAF\footnote{IRAF is distributed by
the National Optical Astronomy Observatory, which is operated by the Association
of Universities for Research in Astronomy, Inc., under cooperative agreement
with the National Science Foundation.} routine {\it geomap\/} to compute the
geometric transformation to be applied to the 0.9-m image so as to register it
with the 4-m frame, and then the {\it geotran\/} routine to actually apply this
transformation. This allowed us to mark the location of M31~RV precisely in the
4-m frame.
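A present-day Python analogue of this {\it geomap}/{\it geotran\/} step is sketched below using scikit-image: an affine transform is fitted between the matched star lists and one frame is then resampled onto the grid of the other. The coordinates and image here are synthetic stand-ins, not the actual KPNO measurements.
\begin{verbatim}
import numpy as np
from skimage import transform

# Matched (x, y) star positions on both frames; placeholder values
# standing in for the 9 stars actually used.
stars_09m = np.array([[34.2, 51.0], [120.5, 88.3], [77.1, 140.6],
                      [160.0, 30.2], [45.8, 95.4], [130.3, 150.1],
                      [90.0, 60.7], [20.5, 130.2], [150.9, 110.8]])
stars_4m = stars_09m * 1.8 + np.array([12.0, -7.5])  # synthetic map

# Analogue of geomap: fit an affine transform, 0.9-m -> 4-m frame.
tform = transform.estimate_transform('affine', stars_09m, stars_4m)

# Analogue of geotran: resample the 0.9-m image onto the 4-m grid.
img_09m = np.random.default_rng(0).random((200, 200))
img_reg = transform.warp(img_09m, tform.inverse,
                         output_shape=(400, 400))

# Propagate a single position (e.g., M31 RV) into the 4-m frame.
print(tform(np.array([[100.0, 100.0]])))
\end{verbatim}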
Next we applied the IRAF routine {\it wmosaic\/} to {\it HST}\/ images in the u58
series so as to combine the four WFPC2 chips into single-image mosaics, and used
a similar geometric transformation from the 4-m image to locate the site of
M31~RV in the WFPC2 mosaic. Finally, we did a transformation from the WFPC2
mosaic to the PC chip to locate M31~RV in the latter.\footnote{The WFPC2
reference image chosen was u5850103r, in which the derived location of M31~RV
lies in the PC chip at pixel coordinates $(x,y)=(549.6, 675.4)$.} A formal
error-propagation calculation indicates that the location in the PC chip is
accurate to $\pm$$0\farcs18$ = $\pm$3.9 PC pixels in the $x$ coordinate, and
$\pm$$0\farcs27$ = $\pm$5.9 PC pixels in $y$.
\section{The Site of M31 RV}
\subsection{Visual Examination}
We prepared high-S/N, cosmic-ray rejected WFPC2 images of the M31 RV site by
registering and combining all of the u58 series $V$ and $I$ images. Figure~1
illustrates the location of M31~RV in the combined {\it HST}\/ images. These are
$3''\times3''$ images centered on the derived location of the outburst.
The sheet of stars belonging to the bulge of M31 is well resolved in these deep
$V$ and $I$ frames. There are no obvious stars with unusual colors or
brightnesses at the M31~RV location. Detailed stellar photometry of the field
will be presented below.
We also examined the ultraviolet (F300W and F170W) WFPC2 frames from the u31
series visually. These frames show very few stars, and there is nothing obvious
at the M31~RV site. The single ACS $B$-band image currently available in the
{\it HST}\/ archive likewise shows no stars of unusual colors at the outburst site.
\subsection{Absence of Light Echo}
Although the WFPC2 frames in Figure~1 are considerably deeper than {\it HST}\/ frames
that show the light echo around V838~Mon, no such feature is visible at the
location of M31~RV at the 1999.6 epoch.
The geometry of a light echo is simple (e.g., Bond et al.\ 2003 and references
therein): at a time $t$ after the outburst, the illuminated dust lies on the
paraboloid given by $z=x^2/2ct-ct/2$, where $x$ is the projected distance from
the star in the plane of the sky, $z$ is the distance from this plane along the
line of sight toward the Earth, and $c$ is the speed of light.
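The numbers quoted in the following paragraphs follow directly from this geometry; the short calculation below reproduces them, assuming the 2~pc outer dust radius suggested by the V838~Mon analogy and the nominal 725~kpc distance of M31.
\begin{verbatim}
import numpy as np

c = 0.30660   # speed of light in pc/yr
d = 725.0e3   # distance of M31 in pc
t = 11.0      # years since outburst (epoch of the u58 images)
R = 2.0       # assumed outer radius of the dust distribution (pc)

# Echo paraboloid: z = x**2/(2*c*t) - c*t/2. Intersecting it with
# the dust sphere x**2 + z**2 = R**2 and substituting u = x**2
# gives u**2/(4 a**2) + u/2 + a**2/4 - R**2 = 0, with a = c*t.
a = c * t
u = max(np.roots([1.0 / (4 * a**2), 0.5, a**2 / 4 - R**2]).real)
x = np.sqrt(u)                           # ~1.45 pc

theta = np.degrees(x / d) * 3600.0       # radians -> arcsec
print(2 * theta)   # echo diameter ~0.8 arcsec at the 1999.6 epoch

# Vertex of the paraboloid: only dust closer than c*t/2 in front
# of the star can still be illuminated if no echo is seen.
print(a / 2)       # ~1.7 pc
\end{verbatim}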
Figure~2 illustrates the geometry for M31~RV\null. It shows the light-echo
paraboloids at $t=2$~through 10~yr after the outburst, with a spacing of 2~yr,
and, as a darker line, the parabola at the time of the u58 series of {\it HST}\/
images, 11.0~yr after the outburst. Also shown at the top is the conversion to
angular units, at the nominal 725~kpc distance of M31. The dashed circle shows
a radius of 2~pc around the star, which is the approximate outer boundary of
illuminated dust currently being seen around V838~Mon (e.g., Bond et al.\ 2003;
Crause et al.\ 2005).
This figure demonstrates that, if M31~RV were surrounded by circumstellar dust
with an extent and density similar to that around V838~Mon, we should have seen
a light echo in the WFPC2 images taken in 1999. The approximate diameter would
have been $\sim$$0\farcs8$, which would be readily resolved in Figure~1.
The absence of a light echo has two possible explanations: (1)~there is very
little circumstellar (or interstellar\footnote{Note that some authors (Tylenda
2004; Crause et al.\ 2005) have suggested that the light echo around V838~Mon
arises from ambient interstellar dust, rather than from material ejected from
the star.}) dust around M31~RV, or (2)~if there is circumstellar dust around
M31~RV similar in density to that around V838~Mon, Figure~2 shows that it
extends less than $\sim$1.7~pc from the star. Figure~2 suggests that it might
be worthwhile to examine any existing high-resolution ground-based images of the
M31 bulge obtained around the early to mid-1990's, at which time any light echo
from circumstellar dust would have had maximum apparent radius.
\subsection{Stellar Photometry}
We performed stellar photometry on the u58 series images, using HSTphot (Dolphin
2000), a program that performs automated PSF-fitting photometry, accounts for
aperture corrections and charge-transfer effects, and transforms the
instrumental magnitudes to the Johnson-Kron-Cousins standard system. HSTphot
detected over 70,000 stellar objects in the WFPC2 field, 12,000 of which lie on
the PC chip that contains the location of M31~RV.
The quality of the photometry is excellent, with errors of less than 0.1~mag
down to $V\simeq25.5$ and $I\simeq24.7$ for isolated stars in the PC field.
However, because of the extreme crowding in this field, the total star counts
roll over at $V\simeq24$ and $I\simeq22$, with a steady decline over the next
two magnitudes.
The color-magnitude diagram (CMD) for the entire PC chip is shown in the top
plot in Figure~3. The CMD is that of an old population of red giants, in
agreement with other studies of the M31 bulge (e.g., Jablonka et al.\ 1999;
Stephens et al.\ 2003; and references therein). Apart from less than a dozen
blue objects (some of which could be background unresolved galaxies, or blue
stragglers in M31), there is no evidence for any significant young population at
this location in the M31 bulge.
The bottom plot in Figure 3 shows the CMD of stars within radii of 6 pixels
(star symbols), and radii of 6 to 18 pixels (filled circles); these radii
correspond to approximately 1$\sigma$ and 3$\sigma$ position errors, based on
the values given in \S3.2. None of these stars have unusual colors; all of
them lie on the M31 bulge red-giant branch.
\section{Discussion}
\subsection{The Stellar Population of M31 RV}
We have shown that M31~RV belongs to an old stellar population in the bulge of
M31. This is in striking contrast to V838~Mon, which has an unresolved B-type
companion, as well as belonging to a small cluster containing several additional
B-type stars.
The similarities of the light curves, spectral evolution, and peak luminosities
of M31~RV and V838~Mon strongly suggest a common outburst mechanism. If this is
the case, this mechanism must be one that can occur in stars belonging to both
young and old populations. Whether this constrains any of the scenarios
mentioned in \S1 is unclear. The lack of any young stars near M31~RV does
suggest, however, that the B-type companion of V838~Mon is a bystander that did
not play an essential role in the outburst.
\subsection{Searching for a Stellar Remnant}
All of the stars at the outburst site have the magnitudes and colors of ordinary
red giants in the M31 bulge. The absence of any conspicuous remnant star has
three possible explanations: (a)~the object had faded below {\it HST}\/ detectability
in the 11~years since outburst, either intrinsically or because of heavy dust
obscuration; (b)~the remnant is an unseen companion of (or its image is blended
with) one of the red giants in the field; or (c)~the remnant {\it is\/} one of
the red giants.
The post-outburst histories of V838 Mon and V4332~Sgr give us only modest
guidance. At the present time, V838~Mon remains enshrouded in dust, but it
continues to be luminous at long wavelengths. For example, in 2004 December it
had an apparent magnitude of $I\simeq10.5$ and an extremely red color of
$V-I\simeq4.7$ (Crause et al.\ 2005). Assuming $E(B-V)=0.9$ and a nominal
minimum distance of 6~kpc for V838~Mon (Munari, Desidera, \& Henden 2002; Bond
et al.\ 2003; Munari et al.\ 2005), and neglecting the small foreground
reddening for M31, the corresponding values if the V838~Mon of 2004 December
were located in M31 would be $I\lesssim19.4$ and $V-I\simeq3.4$. {\it Figures~1
and~3 show that there was no such bright, very red star at the location of
M31~RV in 1999.}\footnote{We also examined images from the 2MASS survey taken in
1997 October, which likewise show no red star at the M31~RV location; however,
V838~Mon as of 2004 December moved to the distance of M31 would have been
fainter than the 2MASS survey completeness limits.}
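The conversion behind this comparison is simple distance-modulus and extinction bookkeeping. The sketch below reproduces it approximately; the extinction ratios ($A_V\simeq3.1\,E(B-V)$, $A_I\simeq1.7\,E(B-V)$) are generic assumptions, since the exact coefficients adopted here are not stated.
\begin{verbatim}
import numpy as np

# V838 Mon in 2004 Dec: I = 10.5, V-I = 4.7, E(B-V) = 0.9, and a
# nominal minimum distance of 6 kpc (so the result is a limit).
I_obs, VI_obs, ebv = 10.5, 4.7, 0.9
d_old, d_new = 6.0e3, 725.0e3            # pc

A_V = 3.1 * ebv                           # generic Galactic ratio
A_I = 1.7 * ebv                           # assumed ratio

# Deredden, then shift to the distance of M31.
I_m31 = I_obs - A_I + 5.0 * np.log10(d_new / d_old)
VI_m31 = VI_obs - (A_V - A_I)
print(round(I_m31, 1), round(VI_m31, 1))  # ~19.4 and ~3.4
\end{verbatim}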
Of course, V838~Mon in 2004 December---less than 3 years after its outburst
maximum---is not the most suitable comparison object for M31~RV observed 11
years after its eruption. V4332~Sgr may provide a more apt comparison, since
more than 11 years have now elapsed since its eruption in early 1994.
CCD photometry of V4332~Sgr is collected in Table~2; in addition to previously
published photometry at maximum light and in 2003, we include observations made
by us in 2004-5 at the 1.3-m SMARTS Consortium telescope at Cerro Tololo, which
have errors of about $\pm$0.05~mag. In recent years, V4332~Sgr has been
essentially constant near an apparent magnitude of $I\simeq15.1$ and a color of
$V-I\simeq2.5$. At maximum light (Martini et al.\ 1999, their Figure~2), the $I$
magnitude was about 7. The star has therefore declined from its maximum by about
8~mag at the present time. Note, however, that pre-outburst sky-survey images
show a star at the location of V4332~Sgr of approximately the same brightness as
the object seen at present. This fact, along with the constancy of the past few
years, may suggest that the 15th-mag object is a field star that is blended with
the variable (or is possibly a physical companion). In this case, the variable
has now declined by {\it more\/} than 8~mag. We note, though, that V4332~Sgr was
imaged with {\it HST}\/ on 1997 November~3 (program SNAP-7386, PI: F.~Ringwald),
using WFPC2 with a narrow-band H$\alpha$ filter. The image, which lies in the PC
chip, shows no strong evidence for a resolved companion star, but the FWHM of
the stellar profile does appear marginally larger than for nearby field stars.
The $I$ magnitude of M31~RV at maximum was approximately 14 (see Boschi \&
Munari 2004). If it had faded by only 8~mag by 1999.6, it would have been at
$I\simeq22$. The reddening of V4332~Sgr is $E(B-V)\simeq0.32$ (Martini et al.\
1999), so its recent intrinsic color (see Table~2) is $(V-I)_0\simeq 2.1$.
Figure~3b shows that there {\it are\/} several stars with these characteristics
in the vicinity of M31~RV, although none of them are within 1$\sigma$ of the
site.
\subsection{Future Work}
We are left with the unsatisfying situation that there are several stars near
the site of M31~RV that could be the remnant, but which could also merely be
normal field red giants.
There are several types of observations that could shed additional light on the
situation. These include (a)~a new {\it HST}\/ observation, to see whether any of the
stars near the outburst site have faded since 1999; (b)~a grism observation with
{\it HST}\/ to determine whether any of the stars near the site share the remarkable
very low-excitation emission-line spectrum now exhibited by V4332~Sgr (Banerjee
\& Ashok 2004; Tylenda et al.\ 2005); and (c)~near-infrared imaging with NICMOS
to see whether any of the stars show an IR excess similar to that currently
shown by the dust-obscured V838~Mon.
\acknowledgments
The authors thank Robin Ciardullo for providing a tape containing his 1988
images of M31~RV and Andrew Dolphin for assistance with the HSTphot program.
The SMARTS observations of V4332~Sgr were made by David Gonz\'alez and Juan
Espinoza. H.E.B.\ acknowledges many interesting discussions with Sumner
Starrfield.
This research has made use of the USNOFS Image and Catalogue Archive
operated by the United States Naval Observatory, Flagstaff Station
(http://www.nofs.navy.mil/data/fchpix/).
Support for this work was provided by NASA through grant number
GO-9587 from the Space Telescope Science Institute, which is operated by
AURA, Inc., under NASA contract NAS 5-26555.
| 2024-02-18T23:39:48.099Z | 2005-10-13T20:08:21.000Z | algebraic_stack_train_0000 | 450 | 4,121 |
|
\section{Introduction}
Giant extragalactic {\ion{H}{2}} regions (GEHR) are important sites of star
formation. They are small-scale examples of extreme sites of star
formation such as local and distant starburst galaxies. Like
starburst regions, they contain several distinct star clusters
\citep[e.g.][]{meu95,hun96} that can interact with each other to
potentially enhance or slow down the star formation processes
\citep[see review by][and references therein]{tan05}. They produce
the most massive stellar types known (O, B, and Wolf-Rayet) that have the
potential to transform the morphological and chemical aspects of
galaxies through their feedback
\citep[e.g.][among others]{heck90,mar02,tre04,cal04}.
Most GEHR are recent and quasi-instantaneous events of star formation
\citep{mas91,mas99,sch99,sta99}, as also seems to be the case for
starbursts, in the sense that most of their massive stars seem to
form within less than 2-3\,Myr \citep{pel04}.
Evolutionary synthesis is a powerful tool to study stellar populations in
various environments \citep[e.g.][]{wor94,lei99,bruz03,rob03}. The main goal
of evolutionary synthesis is to deduce the global properties of spatially
unresolved stellar populations such as their averaged
age, mass, and metallicity. The development of evolutionary synthesis
codes in the past decade has considerably improved our knowledge of
galaxies \citep[e.g.][among many others]{gonz99,lei01,chan03}.
With the recent (and coming) generation of large
telescopes such as Keck, Gemini, JWST, and ALMA, this technique will be
very useful for our understanding of very distant galaxies
and of their evolution through cosmic time.
Nearby GEHR are excellent candidates to test the
accuracy of the evolutionary synthesis technique. GEHR like those found
in M\,33 are close enough to resolve individual stars and to compare their
detailed stellar content with what is deduced from synthesis of
integrated spectra. In this work, a detailed study of the massive stellar
content of several GEHR observed in the far-ultraviolet (900-1200\AA; FUV)
is presented. The study is based on the
spectral synthesis code {\tt LavalSB} and its recent empirical spectral
library in the FUV range \citep{rob03}.
The synthesis of GEHR observed in M\,33 and M\,101 will be compared, when
possible, to previous works detailing their resolved stellar content.
The following section presents a summary of the data processing.
Section~\ref{lavalsb} describes the evolutionary synthesis code
{\tt LavalSB} used in this work. The synthesis results for each GEHR
are detailed in \S\ref{syn}, and compared with previous works at various
wavelengths. A discussion of specific results is presented in section~5
and the main results are summarized in \S6.
\section{FUSE Data and Reduction}
\label{data}
FUV spectrograms of nine GEHR were obtained by the {\it {Far Ultraviolet
Spectroscopic Explorer}} (FUSE) telescope \citep{moos00} for various
projects. Most data were obtained through the largest aperture
(LWRS; 30$^{\prime\prime}$$\times$30$^{\prime\prime}$) while some
spectrograms were obtained using a smaller aperture (MDRS;
4$^{\prime\prime}$$\times$20$^{\prime\prime}$). Aperture locations
are displayed in Figure~1.
A general description of the FUSE data is reported in Table~1.
Data were gathered from the MAST\footnote{Multimission
Archive at Space Telescope Science Institute; http://archive.stsci.edu/\,.}
public archives. The data were processed with the {\tt calfuse} pipeline
v2.4.2. This version corrects for Doppler shift induced by the
heliocentric motion of Earth, event bursts, the walk problem,
grating thermal shifts, bad pixels, background noise, distortions, and
astigmatism. More information relative to {\tt calfuse} is available
electronically\footnote{http://fuse.pha.jhu.edu/analysis/calfuse.html}.
The output from {\tt calfuse} comprises eight segment spectrograms for each
exposure that correspond to the eight optical paths of the instrument.
Each segment covers a different wavelength range, with some of them
overlapping \citep[see fig.~2 of][]{sah00}. First, for each segment,
each exposure was combined with a statistical weight based on
exposure time. Then the segments that cover the same wavelength
regions (roughly 900-1000\AA, 1000-1100\AA, and 1100-1200\AA) were
averaged with weights based on their signal-to-noise ratios. Finally,
the spectrograms of each wavelength range were simply coadded to
obtain one spectrogram covering the entire 905-1187\AA\ range. The
spectrograms were then smoothed by a factor of 20 using the
IRAF\footnote{Image Reduction and Analysis Facility, supported by NOAO
and operated by AURA under cooperative agreement with the
NSF; http://iraf.noao.edu/\,.} {\it {boxcar}} task, corresponding to a
resolution of about 0.13\AA. This last step increases the signal-to-noise
ratio without affecting the stellar line profiles. The spectrograms were
corrected for redshift. Reddening correction will be
discussed in section~\ref{syn}.
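The combination scheme described above amounts to successive weighted means on a common wavelength grid. A schematic numpy version is sketched below; the random arrays are stand-ins for the calibrated segment spectra, and the weights are illustrative.
\begin{verbatim}
import numpy as np

def coadd(spectra, weights):
    """Weighted mean of spectra on a common wavelength grid."""
    spectra = np.asarray(spectra, dtype=float)
    weights = np.asarray(weights, dtype=float)
    return (weights[:, None] * spectra).sum(axis=0) / weights.sum()

rng = np.random.default_rng(0)
n_pix = 500

# Step 1: per segment, combine exposures weighted by exposure time.
exposures = rng.normal(1.0, 0.1, size=(3, n_pix))
segment = coadd(exposures, [600.0, 800.0, 1000.0])

# Step 2: average overlapping segments weighted by their S/N.
other = segment + rng.normal(0.0, 0.05, n_pix)
merged = coadd([segment, other], [20.0, 14.0])

# Step 3: the three merged wavelength ranges are concatenated.
full = np.concatenate([merged, merged, merged])  # schematic
print(full.shape)
\end{verbatim}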
\section{Stellar population modeling in the FUV}
\label{lavalsb}
A first spectral synthesis study below 1200\AA\ was carried out
by \citet{gonz97} for the {O~{\sc{vi}}}$+$Ly$\beta$$+${\ion{C}{2}} feature. The
stellar library was based on Copernicus and the Hopkins Ultraviolet
Telescope (HUT) with a spectral resolution of 0.2\AA. Their work
clearly showed that the line profile was sensitive to the age of
a stellar population.
A new FUV spectral library, based on FUSE data, has recently been added
to the spectral synthesis code {\tt LavalSB} \citep{rob03}. This code has
been proven to be very powerful for young stellar populations
\citep{pel04} and will be used in the present work to deduce the global
properties of massive stars in GEHR from their integrated FUV light.
{\tt LavalSB} is a parallel version of {\tt Starburst99} \citep{lei99}.
It uses the evolutionary tracks of the Geneva
group (Schaller et al. 1992; Schaerer et al. 1993a, 1993b;
Charbonnel et al. 1993; Meynet et al. 1994).
The stellar population follows a mass distribution based on a chosen
stellar initial mass function (IMF) and mode of star formation
(instantaneous or continuous). Individual stellar parameters are used
to assign the corresponding normalized empirical spectrogram from the FUV
library based on relations from \citet{schm82}. The normalized library
spectrograms are flux calibrated using stellar atmosphere models of
\citet{kur92} for normal stars, and of \citet{sch92} for stars with
extended envelopes.
The \citet{kur92} spectra have been fitted using a Legendre function to
remove their low resolution spectral features in order to avoid
any confusion with empirical stellar lines from the spectral library.
The FUSE stellar library covers from 1003.1 to 1182.678\AA\ with a
dispersion of 0.127\AA. The library metallicities correspond to those of
the evolutionary tracks of {\tt LavalSB}, i.e. {Z$_{\odot}$} for
Galactic stars \citep[{12$+$log[O/H]$=$8.7};][]{all01}, 0.4\,{Z$_{\odot}$}
for LMC stars \citep[{12$+$log[O/H]$=$8.3};][]{rus92}, and 0.1\,{Z$_{\odot}$}
for SMC stars \citep[{12$+$log[O/H]$=$8.0};][]{rus92}.
The most useful stellar indicators in the FUV are the {\ion{C}{3}} blend
multiplet centered at 1175.6\AA, and the {\ion{P}{5}} doublet at 1118.0 and
1128.0\AA. The profiles of these lines show strong variations with age
and metallicity of the population, depending on what spectral types
dominate in flux. Significant, but more subtle, changes
also appear with different IMF parameters. At shorter wavelengths, the
{O~{\sc{vi}}~$\lambda\lambda$1031.9, 1037.6} and the {S~{\sc{iv}}~$\lambda\lambda$1062.7,
1073.0, 1073.5} line profiles show variations with age and metallicity, and
possibly with IMF. However, the empirical stellar library used in
{\tt LavalSB} contain stars for which these diagnostic lines are
contaminated by interstellar features from Galactic H$_2$ and other
atomic transitions. Consequently, stellar lines of {O~{\sc{vi}}} and {S~{\sc{iv}}}
will not be used in the present work since {\ion{C}{3}} and {\ion{P}{5}} lines alone
will provide more accurate results. An extensive identification
of stellar and interstellar lines contained within the FUSE range can be
found in \citet{rob03} and \citet{pel02}.
To establish the characteristics of an integrated stellar population in
the FUV, the FUSE spectrogram is first normalized and the stellar indicators
of {\ion{C}{3}~$\lambda$1175.6} and {\ion{P}{5}~$\lambda\lambda$1118.0, 1128.0} are
compared to the models. The best-fit model is chosen both by eye and by
performing a $\chi^2$ fit. This first step provides information on the age,
the metallicity, and the IMF parameters of the population. A standard IMF
is defined here as having a slope $\alpha$=2.35 a mass range from 1 to
100\,{M$_{\odot}$}. Once the age and metallicity of the stellar population are
estimated
from the normalized FUV spectrogram, the extinction is then evaluated by
comparing the observed continuum slope of the flux calibrated data to
the one of the best-fit model. The theoretical law from \citet{witt00}
for a clumpy dust shell distribution with an optical depth of 1.5 in the
V band is used to derive the internal extinction E(B-V)$_i$. The Galactic
extinction is corrected using the law of \citet{sea79}. Finally, the
stellar mass involved in the system is estimated from the unreddened
flux level.
Uncertainties related to the line profile fitting are determined by
comparing the different sets of models at a given metallicity. Since
{\tt LavalSB} covers only specific values
(0.1\,{Z$_{\odot}$}, 0.4\,{Z$_{\odot}$}, {Z$_{\odot}$}, and 2\,{Z$_{\odot}$}),
it is not possible at this point to evaluate the full age range
that could fit the data. The jumps in metallicity are quite large so
the synthetic spectra from the next metallicity value do not always
reproduce the observed line profiles and cannot give
clues on the age range. Consequently, the age uncertainties given
in the present work are underestimated and do not
take into account the possibility that the data can be fitted using a
slightly different age at a slightly different metallicity.
For a given model, the primary source of error in the estimation of
stellar masses and predicted fluxes is usually the FUV flux uncertainty
from FUSE, which is usually around 10\%. However, in some cases, the
age uncertainty gives a larger error bar than
the FUSE uncertainty. In every case, the largest uncertainty is
given. Also, the IMF slope used to calculate the total stellar mass
affects the uncertainty on masses and predicted fluxes. However,
these uncertainties are not explicitly
included for the best fit model uncertainties. Where possible,
parameters of other good-fit models are given to better evaluate
the full uncertainties.
\section{Massive Stellar Content of GEHR: the FUV Point of View}
\label{syn}
\subsection{NGC\,604}
\label{n604}
NGC\,604 is a well-known GEHR within the Local Group galaxy M\,33.
Several studies found and confirmed the presence of very massive
O, B, and Wolf-Rayet (WR) stars \citep{vil88,dri93,hun96,gonz00,bru03,
maiz04}. At least four distinct star clusters have been identified in this object
\citep{hun96}.
The FUSE spectrogram of NGC\,604 obtained through the LWRS aperture is
shown in Figure~2a. This aperture corresponds to a physical size of
123$\times$123\,pc$^2$ \citep[1$^{\prime\prime}$=4.1\,pc at
840\,kpc; see also Fig.1 of][]{leb05}. The aperture includes the
Cluster~A from \citet{hun96}, but not the entire {\ion{H}{2}} region. The
spectrogram has a very good signal-to-noise ratio (S/N) of 20
between 1155 and 1165\AA\ that allows to perform a good synthesis
with details on the IMF slope. The {\ion{C}{3}} line profile shows a large
absorption feature in its blue wing, indicating the presence of evolved
late-type O~stars. The {\ion{P}{5}} doublet
also displays P\,Cygni line profiles typical of massive stars with
strong winds. The line depths of {\ion{C}{3}} and {\ion{P}{5}} suggest a sub-solar
metallicity for the stars. Continuous burst models have to be excluded
since they produce stellar lines with too faint P\,Cygni profiles. To
obtain a good fit, especially for the {\ion{C}{3}} line profile, a flatter IMF
with a slope $\alpha$=1.5-2.2 is better, while a standard IMF with
$\alpha$=2.35 could also fit. The best fits are obtained for models
having $\alpha$=1.5 and an age of 3.9$\pm$0.1\,Myr for 0.1\,{Z$_{\odot}$} and
3.3$\pm$0.1\,Myr for a 0.4\,{Z$_{\odot}$} metallicity. If $\alpha$(IMF)=2.35,
then the best-fit ages are a little lower with 3.5$\pm$0.3\,Myr for
0.1\,{Z$_{\odot}$} models and 3.0\,Myr at 0.4\,{Z$_{\odot}$}.
The solution is not unique since there is a degeneracy in the
line depth for the models at sub-solar metallicities when
the P\,Cygni profiles are well developed.
\citet{gonz00} performed a detailed study of NGC\,604 using IUE spectrograms
(9.5$^{\prime\prime}\times$22$^{\prime\prime}$ aperture), optical
ground-based data, and H$\alpha$ images from the HST, to fully describe
this GEHR. From the H Balmer and {\ion{He}{1}} absorption lines, they
deduced an age between 3 and 4\,Myr for the stellar population with a
standard IMF or flatter. A continuous burst cannot fit their
emission line ratios. Their IUE spectrograms revealed a population of
3-5\,Myr (better fit at 3\,Myr) with an IMF slope flatter than 3.3.
\citet{vil88} studied in detail the chemical abundances in M\,33 from
nebular lines. They measured an oxygen abundance 12$+$log[O/H]=8.51 for
NGC\,604. All these results are fully consistent with the FUV line profile
synthesis. The best-fit model parameters are reported in Table~2,
together with the other good-fit models. Note that hereafter,
calculations using the models at 0.4\,{Z$_{\odot}$} are favored based on
the metallicity from \citet{vil88}.
Adopting an instantaneous burst model of 3.3\,Myr at 0.4\,{Z$_{\odot}$} with
an IMF slope $\alpha$=1.5, the observed FUV continuum slope suggests no
significant internal extinction E(B-V)$_i$. No internal extinction is needed if a
Galactic correction of 0.02 is applied, and E(B-V)$_i$=0.03 is calculated
if no Galactic extinction is applied. Using an IMF truncated between 1
and 100\,{M$_{\odot}$}, the FUV flux level leads to a stellar mass of
(7$\pm$2)$\times$10$^3$\,{M$_{\odot}$} within the LWRS aperture.
Using an IMF slope of 2.35, the calculated stellar mass is
rather (1.4$\pm$0.3)$\times$10$^4$\,{M$_{\odot}$}. \citet{gonz00} obtained a
E(B$-$V)$_i$ of 0.1 based on their IUE spectrograms.
They also estimate a stellar mass of 0.1-2$\times$10$^5$\,{M$_{\odot}$}.
\citet{hun96} found, based on optical HST images, an extinction value
of 0.08 for Cluster~A contained within the LWRS aperture.
Their extinction and mass values are slightly higher than those from the
FUSE data.
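The two final steps of the procedure of \S\ref{lavalsb}, fitting E(B-V)$_i$ from the continuum slope and scaling the stellar mass from the unreddened flux level, reduce to the small sketch below. The attenuation curve here is an arbitrary placeholder standing in for the \citet{witt00} clumpy-shell law, whose tabulated values are not reproduced.
\begin{verbatim}
import numpy as np

def fit_ebv(f_obs, f_model, k):
    """E(B-V) and scale matching the observed continuum shape:
    f_obs = scale * f_model * 10**(-0.4 * k(lambda) * E(B-V)).
    In magnitude space this is a straight line in k(lambda): the
    slope is E(B-V), the intercept is -2.5 log10(scale)."""
    y = -2.5 * np.log10(f_obs / f_model)
    ebv, intercept = np.polyfit(k, y, 1)
    return ebv, 10 ** (-0.4 * intercept)

# Placeholder attenuation curve in the FUSE band (not Witt & Gordon).
wave = np.linspace(1005.0, 1180.0, 60)
k = 11.0 - 0.004 * (wave - 1000.0)

# Fake "observation": model continuum times 0.7, reddened by 0.03.
f_model = np.ones_like(wave)
f_obs = 0.7 * f_model * 10 ** (-0.4 * k * 0.03)

ebv, scale = fit_ebv(f_obs, f_model, k)
print(ebv, scale)   # recovers 0.03 and 0.7

# The stellar mass then scales linearly with the unreddened flux:
# M = M_model * scale, for a model normalized to a known mass.
\end{verbatim}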
From the stellar population described above with $\alpha$=1.5, several
physical parameters can be deduced and compared (see Table~3). First,
such a population would theoretically lead to an unreddened H$\alpha$
flux of (2$\pm$1)$\times$10$^{-11}$\,{ergs s$^{-1}$ cm$^{-2}$ \AA$^{-1}$}, and a continuum level at
5500\AA\ around (3$\pm$1)$\times$10$^{-12}$\,{ergs s$^{-1}$ cm$^{-2}$ \AA$^{-1}$}.
Changing the IMF does not change these numbers significantly.
H$\alpha$ fluxes of 4.0 and 3.3 $\times$10$^{-11}$\,{ergs s$^{-1}$ cm$^{-2}$ \AA$^{-1}$} have
been measured from HST and ground-based images by \citet{gonz00} and
\citet{bos02}, respectively. Those values are slightly above the FUV
predicted values. The differences in H$\alpha$ fluxes are consistent
with the differences in stellar masses.
According to
the H$\alpha$+UV images from HST \citep[see Fig. 2 of][]{gonz00}, several
massive stars from Cluster~A are co-spatial with the nebular emission.
These stars are good candidates for higher extinction, and it is likely
that their contribution to the FUV flux is significantly lower than at
longer wavelengths (even at $\sim$1500\AA), which partly explains
the differences observed in extinction values between the various
wavelength ranges. Furthermore, Fig.~2 of \citet{hun96} shows that
Cluster~B and C contribute significantly to the nebular emission of
NGC\,604, but they are not taken into account in the total stellar
mass derived from FUV since they are not included within the FUSE
aperture. Also, a detailed study from
\citet{maiz04} revealed an extremely complex gas/dust geometry for which
around 27\% of the ionizing photons might be missing in NGC\,604 due
to attenuation. In addition to the aperture effect, this obviously
contributes to create a discrepancy between the predicted and observed
values in the stellar mass and other fluxes parameters.
The FUV synthesis of a 3.3\,Myr population with $\alpha$=1.5 at 0.4\,{Z$_{\odot}$}
predicts that about 9 WR stars (3 WN and 6 WC) should be
present in NGC\,604. \citet{dri93} obtained ground-based and HST-WF/PC1
images and identified 12 WR or Of candidates, slightly more than
the {\tt LavalSB} predictions. More recently, Drissen et al.
(2005, in preparation) confirmed that there are at least 6 WN and 2 WC
stars among them. This WC-to-WN number ratio is not consistent with
{\tt LavalSB} (or with {\tt Starburst99} either).
To obtain WC/WN$\sim$1/3, both models propose an age around 4.5-4.7\,Myr
for the population. However, {\tt LavalSB} does not
include the effect of rotation in the evolutionary tracks.
Including rotation in the models would extend the duration of the
WR phase and considerably increase the number of WN stars, in better
agreement with the observations (G. A. V\'azquez 2005, private communication).
Also, for the population synthesized above for NGC\,604, {\tt LavalSB}
predicts that 90$^{+30}_{-10}$ O-type stars (of all subtypes still present
at this age) should be observed. \citet{hun96} estimate from
HST/WFPC2 images that about 190 stars brighter than O9.5\,V are present
in NGC\,604, which is higher than the FUV estimation. However,
the number of \citet{hun96} may include some B supergiants. If
we use an IMF slope of 2.35, the model then predicts roughly the same number
of O-type stars but no WR stars (or very few) at 3.0\,Myr, which
is in disagreement with the observations of \citet{dri93}. The comparison
between the predicted and observed number of WR stars favors the case of
an IMF slope flatter than 2.35.
The FUSE spectrogram of the inner part of NGC\,604 obtained through the
MDRS aperture is shown is Figure~2b (S/N$\sim$14). This smaller
aperture corresponds to a physical size of 16$\times$82\,pc$^2$. The
stellar line profiles are similar to those obtained with the LWRS
aperture, but not exactly the same. The {\ion{C}{3}} and {\ion{P}{5}}
line profiles cannot be reproduced as well as for the LWRS data,
especially in their blue wings. The models closer to the observed line
profiles are those of 3.9-4.1\,Myr at 0.1\,{Z$_{\odot}$} and 3.3-3.4\,Myr using
0.4\,{Z$_{\odot}$} models. Interestingly, the MDRS spectrogram of NGC\,604
corresponds better to a combination of a synthesized population and
the spectrogram of an O8\,I LMC star. The blue wings in {\ion{P}{5}} and {\ion{C}{3}}
profiles are fitted by the single star spectrogram,
while the photospheric portion cannot be fitted by the star, but by
a modeled population. This strongly suggests that
the number of massive stars within the aperture is low enough to be
subject to statistical biases on the stellar IMF, and is not well
represented anymore by an analytical IMF. Assuming a stellar
population of 3.3\,Myr at 0.4\,{Z$_{\odot}$} as found previously, the
continuum slope for the MDRS spectrogram gives
E(B-V)$_i$=0.03$\pm$0.02 if no Galactic extinction is considered.
The flux level indicates a stellar mass of about
1$\times$10$^{3}$\,{M$_{\odot}$} through the MDRS
aperture, clearly indicating that the MDRS aperture does not include the
whole GEHR.
\subsection{NGC\,595}
\label{n595}
As NGC\,604, NGC\,595 contains multiple star clusters with OB stars
\citep[e.g.][]{dri93,mas99,maiz01}. The FUSE spectrogram of NGC\,595 is
presented in Figure~2c with S/N$\sim$13. Particularly strong P\,Cygni
profiles are observed in {\ion{C}{3}} and {\ion{P}{5}}. As for a single evolved O~star,
the {\ion{C}{3}} profile of NGG\,595 does not show a blend of
photospheric$+$wind features as in an integrated population, but a
single well-developed P\,Cygni profile. In fact, it appears that a synthesized stellar
population is unable to reproduce the FUV line profiles. The FUSE
spectrograms have then been compared to those of single O stars from
the FUV stellar library of {\tt LavalSB} and it reveals that an O7\,I
LMC star is the closest match to the spectrogram of NGC\,595 (see
superimposed thick line spectrogram in Figure~2c).
It is obvious here that there are not enough hot stars in NGC\,595 to
fit an analytical IMF as used in current spectral synthesis. Only a few
stars with strong winds seem to dominate the line profiles.
According to {\tt LavalSB}, O7\,I stars appear between 2.5 and
4.0\,Myr after an instantaneous burst. At 2.5\,Myr, stars slightly
brighter than O7\,I will probably dominate the FUV flux.
Consequently, the O7\,I stars in NGC\,595 would be consistent with an age of
3.5$\pm$0.5\,Myr with a metallicity close to the LMC (0.4\,{Z$_{\odot}$}). This age
is consistent with the works of \citet{mal96} and \citet{mas99}. Assuming a
standard IMF, it is still possible to roughly estimate parameters related to the
FUV slope and flux level. Adopting a Galactic extinction of 0.04
(NED\footnote{The NASA Extragalactic Database (NED) is operated by the
Jet Propulsion Laboratory, California Institute of Technology, under
contract with the National Aeronautics and Space Administration;
http://nedwww.ipac.caltech.edu/\,.}), a very low internal extinction of
E(B-V)$_i$=0.02$\pm$0.02 is found. The stellar
mass of NGC\,595 is then estimated to be about 1$\times$10$^3$\,{M$_{\odot}$}
with very large uncertainties.
Previous works in the visible range suggested a higher extinction value of
0.3 \citep{mal96,mas99,maiz01} for this GEHR. \citet{mas99} and \citet{mal96}
also estimated a stellar mass of 5-6$\times$10$^3$\,{M$_{\odot}$}, which is also
significantly higher than the FUV result, but of the same order of magnitude.
Based on {\tt LavalSB}, the age and mass of NGC\,595 suggest that about 10
O~stars and 1 or 2 WR stars should be present in NGC\,595. However, HST
imaging reveals larger numbers of these stars. \citet{dri93} identified
11~WR/Of candidates and \citet{mal96} estimated the number
of O~stars to be $\sim$90. \citet{dri93} estimate that there are 2.5 times fewer stars
between 15 and 60\,{M$_{\odot}$} in NGC\,595 than NGC\,604, implying that NGC\,595
must be about 2.5 times less massive than NGC\,604. FUV synthesis gives a
factor of 5 between the stellar masses of the two GEHR. Recently, optical spectra
from Drissen et al. (2005, in preparation) confirmed the presence of several WR
candidates within NGC\,595 and classified them. Based on the
HST/WFPC2-F170W archival image, the WR stars produce about 30\% of
the UV luminosity. Obviously, the observed number of WR stars in this object is
inconsistent with the FUV synthesis point of view.
In an attempt to reproduce the observed FUV spectrogram, simple combinations of
individual hot stars were tested. The combinations comprise individual late
O-type stars (or synthetic models) and WR stars, with $\sim$30\% of the total FUV
flux coming from 1~WN6/7 star and 4~WN7/8 stars, as classified by Drissen et al.
(2005, in preparation). However, the resulting fits are poor: the stellar combinations
always give wind profiles that are too strong in emission, with blue absorption troughs that are too narrow.
However, the FUSE atlas of WR stars from \citet{wil04} reveals that WR
spectral line profiles change considerably from one type to
another. A closer look at this atlas shows that HDE\,269927, a WN9-type star from the
Galaxy, displays {\ion{C}{3}} and {\ion{P}{5}} line profiles similar to the stellar lines of NGC\,595. Replacing
the WN7/8 spectra used in the previous combinations by the spectra of HDE\,269927
gives surprisingly good results. In fact, the combination of spectrograms from an O7\,I star (70\%
of the flux) with 1 WN6 and 4 WN9 stars (30\% of the flux) reproduces well the FUSE data
for NGC\,595. This implies two things. First, it appears that the FUV spectra of WR stars show
line profiles that change significantly from one spectral type to another, and that probably vary
with metallicity as well. Consequently, the few WR spectrograms currently used in the {\tt LavalSB}
spectral library are probably not very representative of their spectral types. Fortunately, these
stars do not usually contribute significantly to an integrated stellar population and thus do not
really affect the synthetic spectra. Second, it seems obvious that the FUV spectrum of NGC\,595
is dominated by evolved late-type O and WN-late stars. However, one fundamental question
remains: how did NGC\,595 come to produce a stellar population enhanced in WR stars?
The FUV synthesis of NGC\,595 implies that
F(H$\alpha$)=(1.3$\pm$0.2)$\times$10$^{-12}$\,{ergs s$^{-1}$ cm$^{-2}$ \AA$^{-1}$}. Various values
are found in the literature. \citet{bos02} obtained
1.1$\times$10$^{-11}$\,{ergs s$^{-1}$ cm$^{-2}$ \AA$^{-1}$}, and \citet{ken79} measured
8.8$\times$10$^{-12}$\,{ergs s$^{-1}$ cm$^{-2}$ \AA$^{-1}$}. It is obvious that the FUV synthesis
is not accurate in this case, and possibly also that it does not include the entire
GEHR.
The FUSE spectrogram of NGC\,595 clearly reveals that a stellar population
with a stellar mass of a few 10$^3$\,{M$_{\odot}$} is too small to apply the
spectral synthesis technique, at least below 1200\AA. Obviously,
statistical fluctuations related to a small number of massive stars
are not well represented by an analytical IMF. A more detailed discussion
on this subject will be given in \S\ref{mass}.
\subsection{NGC\,592}
\label{n592}
Because of its fainter H$\alpha$ luminosity, NGC\,592 is a much less
studied GEHR, but not less interesting. The observed FUV spectrogram is
shown in Figure~2d, with a rather low S/N of 6. The FUSE aperture
contains the entire GEHR \citep{bos02,keel04}. Despite noisy stellar
lines, their profiles clearly display extended blue absorption wings
from evolved O stars. Comparing both {\ion{P}{5}~$\lambda$1128.0} and
{\ion{C}{3}~$\lambda$1175.6} lines to the models, it is possible to reproduce
their profiles with a 4.0$\pm$0.5\,Myr stellar population at {Z$_{\odot}$}
metallicity. Models at 0.4\,{Z$_{\odot}$} produce too weak P\,Cygni effects in
{\ion{C}{3}}. The spectrogram is too noisy to discriminate between various IMF
slopes. From H$\alpha$ and H$\beta$ narrow-band
images, \citet{bos02} estimated the age of NGC\,592 to be more than
4.5\,Myr, which is not really compatible with FUV line profiles displaying
relatively strong P\,Cygni features. In term of metallicity, \citet{keel04}
interpolated a value of 0.5\,{Z$_{\odot}$} in [O/H], and Drissen et al. (2005, in
preparation) estimated that 12$+$log[O/H]$\sim$8.4 (i.e. 0.5\,Z$_{\odot}$)
from [\ion{O}{3}]/H$\beta$ and [\ion{N}{2}]/H$\alpha$
line ratios. These values are consistent with the FUV synthesis
considering that {Z$_{\odot}$} models can cover relatively well a metallicity
range from 0.4-0.5 to $\sim$1.2\,{Z$_{\odot}$} \citep{pel04}.
Using a model of 4.0\,Myr at {Z$_{\odot}$} and a standard IMF, and assuming a
Galactic extinction of 0.042 (NED), an E(B-V)$_i$ of 0.07$\pm$0.02 is
deduced from the FUV continuum slope. Once the data are corrected
for extinction, the
stellar mass deduced is (1.1$\pm$0.3)$\times$10$^4$\,{M$_{\odot}$}. This mass is
similar to that estimated for NGC\,604, which is consistent with the
fact that the stellar line profiles can be reproduced with a synthesis
technique and an analytical IMF, contrary to NGC\,595. The FUV flux
level implies an unreddened H$\alpha$ flux of
(2.7$\pm$0.5)$\times$10$^{-12}$\,{ergs s$^{-1}$ cm$^{-2}$ \AA$^{-1}$}, which is exactly the
value measured by \citet{bos02}. Other predicted parameters are
reported in Table~3.
\subsection{NGC\,588}
\label{n588}
The FUSE spectrogram of NGC\,588 is presented in Figure~2e, with a good S/N
of 12. The FUSE aperture includes the entire {\ion{H}{2}} region
\citep{bos02,keel04}. Models at {Z$_{\odot}$} produce stellar lines definitely
too deep compared to the observations. With models at 0.4\,{Z$_{\odot}$}
metallicity, a good fit can be obtained for a 3.5$\pm$0.5\,Myr population
with $\alpha$(IMF)$\leq$2.35. A flatter IMF tends to give better results,
but it is hard to really distinguish between various IMF slopes because of
the relatively low S/N. Good fits can also be obtained with 0.1\,{Z$_{\odot}$}
models of 4.5$\pm$1.0\,Myr and still with $\alpha$(IMF)$\leq$2.35. In
the literature, ages of 2.8, $>$4.5, and 4.2\,Myr are reported for
NGC\,588 \citep[][respectively]{mas99,bos02,jam04}, in general
agreement with FUV line profiles. \citet{vil88} derived a precise
oxygen abundance of 12+log[O/H]=8.30 (i.e. 0.4\,{Z$_{\odot}$}), favoring
the models at 0.4\,{Z$_{\odot}$}. A flat IMF is also favored by \citet{mas99}
and \citet{jam04} obtained $\alpha$(IMF)=2.37$\pm$0.16 from a star
counting method.
Based on the best-fit model at 0.4\,{Z$_{\odot}$}, a low internal extinction
of at most 0.06$\pm$0.02 is measured, which leads to a stellar mass of
(1.3$\pm$0.6)$\times$10$^3$\,{M$_{\odot}$}. The mass is higher,
(4$\pm$1)$\times$10$^3$\,{M$_{\odot}$}, if we consider $\alpha$=2.35.
Depending on the extinction law used, E(B-V)$_i$ values between 0.11
and 0.08 are measured \citep{mas99,jam04}. These same authors obtained
stellar masses of 534 and 3000-5800\,{M$_{\odot}$}, respectively. The smallest
value was deduced from IUE data (aperture of
10$^{\prime\prime}\times$20$^{\prime\prime}$), and the largest mass
is from full field imaging data, which explains the discrepancy.
FUV data are in relatively good agreement with imaging data,
which suggests that most OB stars of NGC\,588 are within the FUSE
aperture. With such a mass, the model predicts that
F(H$\alpha$)=2.8$\times$10$^{-12}$\,{ergs s$^{-1}$ cm$^{-2}$ \AA$^{-1}$}, which is in good
agreement with the value of 2-3$\times$10$^{-12}$\,{ergs s$^{-1}$ cm$^{-2}$ \AA$^{-1}$} measured by
\citet{ken79} and \citet{bos02}. The best-fit model predicts 2 WR stars
in NGC\,588, which is the exact number found by \citet{jam04} in their
HST images with resolved stars.
\subsection{NGC\,588-NW}
A FUSE spectrogram has been obtained in the vicinity of NGC\,588
(North-West). From the {\it {Digitized Sky Survey}} image (see Fig.~1),
this region corresponds to a relatively compact and small
cluster with a faint, extended nebular ring. It was first reported by
\citet[][their object 281]{bou74} and also identified in the
work of \citet{cou87}. The ring suggests that the cluster is more evolved
than those synthesized above. The FUSE spectrogram for this cluster is shown
in Figure~2f (S/N$\sim$7). Diagnostic stellar lines do not display P\,Cygni
profiles. Synthetic models do not reproduce the line profiles well. The
best fit is obtained for a stellar population around 5-6\,Myr old at
0.4\,{Z$_{\odot}$}, but the line profiles are not properly fitted. A possible
alternative is a single star spectrum, as was the case for NGC\,595. Then,
a Galactic O9.5\,III star, also consistent with a population of 5-6\,Myr,
gives a better match than the model, but significant discrepancies still
exist. This age is consistent with the presence of the faint extended
ring seen around NGC\,588-NW in the visible range.
To push the synthesis further, a stellar population of 5.5\,Myr at
0.4\,{Z$_{\odot}$} has been considered and an extinction value around 0 and a
stellar mass of about 1$\times$10$^3$\,{M$_{\odot}$} have been roughly
estimated for this cluster. This stellar mass is similar to the one
obtained for NGC\,595. The relatively low mass of the cluster is
a logical explanation for why the synthesis technique does not work
well. Rough estimations of predicted observable parameters are
reported in Table~3.
The study of NGC\,588-NW gives some additional clues about the evolution of GEHR.
First, the FUSE spectrogram of NGC\,588-NW reveals the presence of an
important stellar population. However, because of
its slightly greater age (5-6\,Myr instead of $\sim$3.5\,Myr for NGC\,595),
the nebular emission is not as strong as for NGC\,595 and this region is
consequently much less studied. It is likely that NGC\,588-NW is
representative of what NGC\,595 may look like in $\sim$2-3\,Myr.
Second, the GEHR is still young and massive enough at this age not to
have dissolved yet into the galaxy background. It would be interesting to search
for slightly more evolved GEHR to better study their evolution, such
as the dissipation timescale of clusters. This kind of cluster (i.e.
still very young but with significantly low nebular emission) may be at
the origin of the diffuse UV light in starburst galaxies \citep{meu95}.
NGC\,588-NW is consistent with clusters of less than 10$^3$\,{M$_{\odot}$} without
O-type stars, as described by \citet{chan05} for the diffuse UV component in
starbursts. A more extensive search for this kind of object in local galaxies
could settle this issue.
\subsection{NGC\,5447}
\label{n5447}
NGC\,5447 is a GEHR in the spiral galaxy M\,101 (7.4\,Mpc) that displays
several knots of star formation \citep{bos02}. The FUSE spectrogram has an
S/N of 12 and is shown in Figure~3a. As shown in Fig.~1, the FUSE aperture
does not include all knots. The spectrogram does not show strong
wind profiles, suggesting that
most O stars have already disappeared. Models at {Z$_{\odot}$} metallicity
produce too deep stellar lines compared to the observations. Models at
0.1\,{Z$_{\odot}$} cannot reproduce both {\ion{P}{5}} and {\ion{C}{3}} features at the same
age. The best-fit model is obtained at 4.5$\pm$0.5\,Myr with an IMF slope
of 2.35 or flatter. This GEHR has not been extensively
studied and no age has been proposed so far for this object. \citet{sco92}
deduced an oxygen abundance of 8.3 in 12+log[O/H], compatible with the
line depths of {\ion{P}{5}} and {\ion{C}{3}}.
The measured FUV slope for NGC\,5447 suggests that E(B-V)$_i$=0. From
photographic plates and the Balmer decrement, \citet{smi75} estimated an
extinction of 0.37, much larger than the FUV value. The FUV flux
indicates a stellar mass of (1.2$\pm$0.2)$\times$10$^5$\,{M$_{\odot}$}.
From FUV synthesis, {\tt LavalSB} predicts that
F(H$\alpha$)=(5.7$\pm$0.9)$\times$10$^{-13}$\,{ergs s$^{-1}$ cm$^{-2}$ \AA$^{-1}$},
and an EW(H$\alpha$)=1064\AA\ for
NGC\,5447. Using photometric data \citet{bos02} measured an H$\alpha$
flux of 4.7$\times$10$^{-12}$\,{ergs s$^{-1}$ cm$^{-2}$ \AA$^{-1}$}, and \citet{ken79} obtained a
value of 1.6$\times$10$^{-12}$\,{ergs s$^{-1}$ cm$^{-2}$ \AA$^{-1}$}. Since the GEHR is much more
extended than the FUSE aperture \citep[see Fig.~5 of][]{bos02},
the factor 5-10 discrepancies can easily be explained. However, the presence of
a second generation of stars contributing to the nebular flux but not to
the FUV flux cannot be excluded (see \S\ref{2egen}). For their knot~A only,
\citet{bos02} obtained that F(H$\alpha$)= 7.5$\times$10$^{-13}$\,{ergs s$^{-1}$ cm$^{-2}$ \AA$^{-1}$},
suggesting that this knot must be the principal contributor to the FUV flux
measured with FUSE. \citet{tor89} measured a dereddened equivalent width of
1096\AA\ through a 3.8$^{\prime\prime}\times$12.4$^{\prime\prime}$ slit, in
very good agreement with the FUV predictions and the knot~A.
\subsection{NGC\,5461}
\label{n5461}
NGC\,5461 is a very large GEHR ($>$500\,pc in diameter) with multiple
components in M\,101 \citep{bos02,keel04,chen05}. The FUSE aperture contains
most of the H$\alpha$ emission and should include most of the massive
stellar content (see again Fig.~1). The FUSE spectrogram is shown in
Figure~3b, with a S/N of about 7. The {\ion{C}{3}} feature displays a wind
profile, implying the presence of giant and supergiant {O-type} stars.
Models at {Z$_{\odot}$} do not reproduce the stellar line depth. The models at
0.1\,{Z$_{\odot}$} give a good fit for a 4.0$\pm$0.2\,Myr stellar population and
an IMF slope flatter than 2.35. A good correspondence is also obtained with
0.4\,{Z$_{\odot}$} models at 3.3$\pm$0.2\,Myr, still with $\alpha$$<$2.35. A
multiwavelength study from \citet{ros94} suggests an age between 3.0 and
4.5\,Myr, compatible with FUV line profiles. \citet{lur01} deduced an age
between 2.5 and 3.5\,Myr based on EW(H$\beta$), also in general agreement
with FUV line profiles. While the age determination method using
EW(H$\beta$) is not a recommended diagnostic \citep{ter04}, it appears
that it still gives good results at such a young age.
More recently, \citet{chen05} identified about 12 candidate
stellar clusters within NGC\,5461, about half of which are less than
5\,Myr old. The other clusters are probably older and do not seem to
contribute much to the FUV flux. Abundances ranging from
8.4 to 8.6 in 12+log[O/H] are found in the literature
\citep{tor89,sco92,ros94,lur01}. Their observations favor the FUV
synthesis models at 0.4\,{Z$_{\odot}$}.
Comparing with the modeled population of 3.3\,Myr at 0.4\,{Z$_{\odot}$} and
$\alpha$=1.5, the FUV continuum slope needs no extinction correction.
The stellar mass is then (1.5$\pm$0.4)$\times$10$^{4}$\,{M$_{\odot}$}. Using
a standard IMF slope of 2.35, the calculated stellar mass is then
(5$\pm$1)$\times$10$^4$\,{M$_{\odot}$}. According to \citet{ros94}, the
extinction from the Balmer decrement is 0.23, and using an extinction
law especially designed for M\,101, they find a stellar mass of
1$\times$10$^{5}$\,{M$_{\odot}$}.
According to {\tt LavalSB}, the FUV stellar population should
produce an H$\alpha$ flux of (5$\pm$2)$\times$10$^{-13}$\,{ergs s$^{-1}$ cm$^{-2}$ \AA$^{-1}$}, while
H$\alpha$ image data give 6.5 and 3.2$\times$10$^{-12}$\,{ergs s$^{-1}$ cm$^{-2}$ \AA$^{-1}$}
\citep[][respectively]{bos02,ken79}. For this population, the unreddened
EW(H$\alpha$) should be about 1200\AA. \citet{tor89} obtained an unreddened
value of 1175\AA, in good agreement with {\tt LavalSB} predictions.
The differences between the predicted and observed extinction, nebular
flux, and stellar mass will be discussed in more detail in \S\ref{2egen}.
\subsection{NGC\,5471}
\label{n5471}
NGC\,5471 is another GEHR in M\,101 more compact than NGC\,5461 and NGC\,5447
and may contain about 19 star clusters according to \citet{chen05}.
Most of the H$\alpha$ emission of this {\ion{H}{2}} region would
have been included within the LWRS aperture of FUSE. Unfortunately, this
{\ion{H}{2}} region has been observed using the MDRS aperture
(4.0$^{\prime\prime}\times$20$^{\prime\prime}$), which implies that
some OB stars are not included in the FUV spectrogram presented
here (see Fig.~1). Also, in the FUSE data, no flux has been obtained
in detector~2, which affects the quality of the synthesis since
the LiF2A segment (which falls on the missing detector) is important
for the S/N of {\ion{P}{5}} and {\ion{C}{3}} lines \citep[see][]{sah00}.
The FUSE spectrogram is shown in Figure~3c with S/N=9. The {\ion{C}{3}} line
profile displays no obvious wind feature. The best-fit model is obtained
for a stellar population of 4.5$\pm$0.5\,Myr at 0.4\,{Z$_{\odot}$}. At
0.1\,{Z$_{\odot}$}, a modeled stellar population of 3.5-4.0\,Myr can also
reproduce the observed line profiles. Because of the noise, a standard
IMF has been assumed. \citet{mas99} deduced an age
of 2.9\,Myr for NGC\,5471, which is too young to explain the
faint P\,Cygni profiles observed in the FUV diagnostic lines. Oxygen
abundances ranging from 8.0 to 8.2
\citep[0.2-0.3\,Z$_{\odot}$;][]{tor89,ros94,mas99,bos02} are found in the
literature, which is in good agreement with FUV synthesis.
Adopting the 0.4\,{Z$_{\odot}$} best-fit model, the comparison between the
observed and modeled continuum slopes indicate a low extinction,
smaller than the uncertainties of 0.02. The FUV flux level suggests a
stellar mass of (7$\pm$1)$\times$10$^{4}$\,{M$_{\odot}$} for NGC\,5471.
\citet{mas99} obtained an extinction of 0.07 in the UV range, which is
slightly higher than the FUV extinction. The FUV stellar mass deduced is
consistent with the mass of 1.2$\times$10$^{5}$\,{M$_{\odot}$} from
\citet{mas99}, considering the smaller aperture used with FUSE.
Predictions reported in Table~3 are difficult to compare with the literature
because of large differences
between apertures. However, the FUV flux prediction is always below
the values given from larger apertures \citep[e.g.][]{ken79,bos02}.
\citet{tor89} measured a dereddened EW of 575\AA\ for H$\alpha$, consistent
with the predictions.
\subsection{NGC\,5458}
\label{n5458}
NGC\,5458 is an {\ion{H}{2}} region smaller and fainter than the previous ones
in M\,101 and not much studied except for its X-ray source
\citep{wan99,pen01,col04}. The FUSE spectrogram is presented in Figure~3d,
and shows a S/N$\sim$10. The spectrogram displays photospheric profiles
without evident signs of winds in both {\ion{P}{5}} and {\ion{C}{3}} features.
Sub-solar metallicity
models produce stellar line depths too weak compared to the observations.
The best-fit model is obtained for a 5.5-6.0\,Myr old stellar population
at {Z$_{\odot}$}. A standard IMF has been assumed since the line profiles are
less sensitive to the IMF when evolved O stars have disappeared.
The continuum slope indicates a low extinction, below the uncertainties
of 0.02. The flux level leads to a stellar mass of (1.1$\pm$0.4)$\times$10$^5$\,{M$_{\odot}$}.
Other predicted observable parameters for NGC\,5458 are reported in Table~3.
\section{Discussion}
The massive stellar contents of several GEHR have been studied in detail
using the FUV spectral synthesis. The section below
focuses on the global characteristics of the whole sample to better
understand the physics of GEHR in general as well as the synthesis
technique in the FUV.
\subsection{FUV Synthesis of Small Stellar Populations}
\label{mass}
Spectral synthesis is a powerful technique to obtain a good estimate
of the general characteristics of young integrated stellar populations.
However, this technique usually assumes that the stars follow
an analytical IMF, and that the stars properly fill each bin of the mass
function. But how high does the mass of the population must be in order to
be accurately described by an analytical IMF? The FUV is a good wavelength
range to estimate this minimal mass for young systems. The FUV is
especially sensitive to IMF statistical fluctuations at high masses
since only O and B\,stars produce many photons below 1200\,\AA.
Also, GEHR are very young systems and the disappearance of the most
massive stars does not significantly affect the total stellar mass of
the system.
From FUV synthesis of GEHR in M\,33 and M\,101 (\S\ref{syn}), it appears
that a stellar mass greater than 1$\times$10$^3$\,{M$_{\odot}$} is needed to
properly fill
the IMF bins. As shown by NGC\,592, NGC\,604 (LWRS), and GEHR in M\,101,
a stellar mass of $\sim$1$\times$10$^4$\,{M$_{\odot}$} does not seem to suffer
much from statistical bias. However, the FUV synthesis of NGC\,604
(MDRS), NGC\,595, and NGC\,588-NW reveals that a stellar mass closer to
$\sim$1$\times$10$^3$\,{M$_{\odot}$} becomes too low to obtain reliable
values of the age and mass of the star cluster because the
stellar line profiles are not those of a standard modeled population,
but those of a mix of a limited number of bright stars.
Note that the mass limit needs to be higher for younger systems, where
the dominant stars are of earlier spectral types than those found in a
slightly older population. This is because a younger population needs
to better fill the IMF higher mass bins and a more massive total stellar
population is thus required.
\citet{cer04} studied this problem from a theoretical point of view.
The lower mass limit of a few 10$^3$\,{M$_{\odot}$} found here for a
synthesized population is fully consistent with their results, which
suggest that the minimal initial cluster mass
needed for synthesis modeling in the U-band is about
8$\times$10$^3$\,{M$_{\odot}$} for a 5\,Myr population at
0.4\,{Z$_{\odot}$}. Following their calculation, this minimal mass can
be slightly lower at shorter wavelengths like the FUV range.
Using HST images where the stars of NGC\,588 were resolved, \citet{jam04}
obtained a standard IMF slope of 2.37$\pm$0.16 for NGC\,588 by using a
star counting technique and estimated a stellar mass of
(5.8$\pm$0.5)$\times$10$^{3}$\,{M$_{\odot}$}, consistent with \citet{cer04} and
FUV synthesis. FUSE spectral synthesis of GEHR has clearly shown that their
calculation not only applies to color bands, but also to stellar
line profiles.
\subsection{The Flat IMF slope of GEHR}
The stellar IMF has been a matter of debate since the work of \citet{sal55}.
The generally
accepted slope\footnote{A slope of 2.35 is traditionally called a
Salpeter slope. However, this terminology is not appropriate for
stellar masses covered by FUSE since the work of \citet{sal55}
applies to a lower mass range.}
for the massive OB star regime at all metallicities in every kind of
environment (starbursts as well as star clusters) is $\alpha$=2.35
\citep[e.g.][]{mas98,sch00,gre04,pis04}.
However, the IMFs of GEHR derived from FUV line profiles seem to
favor a relatively flat slope (see Table~2). Since FUV stellar flux is produced
only by O and B stars, a small change in their relative numbers can affect
the derived IMF slope. This result cannot be attributed to
a bias of the FUV synthesis, since several larger young populations
have been studied with the same technique and did not show such a
flat slope \citep{pel04}.
Some hypotheses could physically explain a flat IMF in the FUV range.
One hypothesis is that B-type stars could still be more
extinguished by dust than earlier type stars. If so, it would then
be more difficult to see them in the FUV, producing an artificially
flatter IMF. However, the extinction values of individual stars in
NGC\,604 obtained by \citet{bru03} do not show a significant correlation
with the spectral type, suggesting that B stars are not systematically
more extinguished than O stars.
Another more plausible possibility is that
the massive stars fill the IMF high mass bins relatively well,
but not perfectly. If some spectral types have slightly deviant
numbers from the analytical IMF, it will slightly change the
integrated stellar line profiles in the same direction as NGC\,595 or
NGC\,604-MDRS, i.e. by accentuating the integrated wind profiles.
Since a flatter IMF also produces more pronounced P\,Cygni profiles, it
would be hard to differentiate the two cases. Consequently, even if the
population synthesis gives reliable and precise results on most physical
parameters of the population (age, mass, metallicity, colors, fluxes)
for a $>$1$\times$10$^3$\,{M$_{\odot}$} population (\S\ref{mass}), it appears
that the stellar IMF slope derived from the FUV line profiles is a sign of a
non-perfect filling of the IMF high mass bins.
This last possibility is supported by the IMF obtained from the star
counting technique of \citet{jam04} on NGC\,588. They derived a standard
IMF slope, but their IMF histogram shows that some mass bins,
especially at higher masses, clearly deviate from the analytical slope.
\subsection{A second generation of stars in NGC\,5461}
\label{2egen}
The spectral synthesis of FUSE data on NGC\,5461 has predicted much lower
values for the H$\alpha$ flux (factor of 10), the stellar mass (factor of
2 to 10), and the extinction than those reported in the literature.
These discrepancies are hard to explain since most of the H$\alpha$
emission is included within the FUSE aperture. One plausible explanation
is the presence of a second generation of stars in NGC\,5461, like the
one observed in the LMC Cluster N11 \citep{wal92}.
In the case of the star-forming region N11,
the central region is composed of a 3.5\,Myr stellar
population which dominates the UV flux. A surrounding nebulae is
excited by a younger generation of stars which is not observed at
short wavelengths because it is heavily reddened \citep{wal92}.
The presence of a second generation of stars in NGC\,5461, younger
and consequently more extinguished than the first one, could explain the larger
extinction deduced at longer wavelengths, the stellar mass discrepancy as
well as the excess in nebular emission. The second generation cluster must
then be relatively massive to explain the large differences in flux and mass.
It is not excluded that younger stars from different clusters are present rather
than a single second generation.
Although there is no proof of such a population within NGC\,5461,
this {\ion{H}{2}} region is a good candidate to host very massive stars,
younger than those actually detected with FUSE.
It is also possible that younger stars are present within other GEHR
studied here. Unfortunately, because the FUSE aperture does not always
include the whole system, it is impossible to confirm here if the
difference between the predicted and observed H$\alpha$ fluxes comes
from a second generation or not, as is the case for NGC\,604, for
example. Considering the detailed work of \citet{maiz04} on the
attenuation maps of NGC\,604, the differences in H$\alpha$ fluxes
and stellar masses in GEHR, including NGC\,5461, might also be due,
at least partly, to the complexity of the gas and dust spatial distribution.
\section{Summary}
The evolutionary spectral synthesis technique in the FUV has been used to
study the massive stellar content of nine GEHR in M\,33 and M\,101.
Stellar masses, internal extinctions, and ages have been obtained
for most of them. The comparison of the FUV synthesis results with values
obtained from previous available works in various wavelength ranges has
shown that the technique is reliable in most cases.
The comparison of the GEHR with each other has confirmed
observationally that the synthesis technique must be applied to stellar
populations of at least a few 10$^3$\,{M$_{\odot}$} in the FUV to avoid
statistical fluctuations of the high mass end of the stellar IMF.
It has also revealed that a flat IMF slope is apparently favored for
GEHR in the FUV, which is likely the first apparent effect of
statistical fluctuations of the IMF for low mass populations.
FUV data suggest that giant {\ion{H}{2}} regions reach their maximum
nebular luminosity around 3.0-3.5\,Myr, coincident with the WR phase.
Finally, the {\ion{H}{2}} region NGC\,5461 in M\,101 is a good candidate to host
a second generation of stars more extinguished than, and formed after, the
cluster actually detected with FUSE.
\acknowledgments
The author warmly thanks N. R. Walborn and L. Drissen for very
helpful comments that considerably improved the scientific content.
This work was supported by NASA Long-Term Space Astrophysics grant NAG5-9173.
\section{Introduction}
Strongly interacting two-component Fermi gases provide a unique
testing ground for the theories of exotic systems in nature.
In atomic Fermi gases, tunable strong interactions are produced
using the Feshbach resonance \cite{houb,stwa,ties}.
By sweeping the magnetic field in the Feshbach resonance experiments,
the magnitude and nature of the two-body interaction strength change
from repulsive to attractive.
Across the resonance the $s$-wave scattering length $a$ goes from large
positive to large negative values.
The fermionic system becomes a molecular Bose-Einstein condensate (BEC)
for strong repulsive interaction and
transforms into the Bardeen-Cooper-Schrieffer (BCS) superfluid when the
interaction is attractive.
The first observations of BEC of molecules consisting of loosely bound
fermionic atoms \cite{greiner,jochim, zw} initiated a series of
explorations \cite{hara,regal,barten,bourdel,chin} of the crossover between
BEC and BCS superfluid.
The size of fermion pair condensates smoothly increases from the BEC to
the BCS-side of the resonance.
Near the resonance, the zero energy $s$-wave scattering length $a$
exceeds the interparticle spacing and the interparticle interactions
are unitarity limited and universal.
Recent experiments have entered the crossover regime and yielded results
for the interaction strength by measuring the cloud size and expansion.
As in the case of bosonic clouds, the frequencies of collective
modes of Fermi gases can be measured to high accuracy, so it is
of major interest to investigate their dependence on the equation
of state along the crossover.
It was pointed out \cite{stringari} that the
collective frequencies of a superfluid Fermi gas at $T=0 $,
trapped in a harmonic potential, approach well defined values
in the BEC and the unitarity limit regimes, where the density
dependence of the chemical potential can be inferred from general
arguments. In the intermediate region, various investigations, based
on the hydrodynamic theory of superfluids and suitable parameterizations of
the equation of state, have appeared recently
\cite{hui,heiselberg,bulgac,kim,manini,combescot,astra1}. The first
experimental results on the collective frequencies of the lowest axial
and radial breathing modes on ultra cold gases of ${}^6$Li across the Feshbach
resonance have also become available \cite{kinast,bar}.
Since the BCS and the unitarity limits are characterized by the
same collective excitation frequencies, there is a growing
interest in studying the sound velocity \cite{ho,heisel,tosi} to make a clear
identification of these two regimes and to better characterize the two kinds of
superfluid.
The axial excitations of ultra cold gases in a cigar shaped
trap can be divided into two regimes:
i) long wavelength excitations, where the wavelength is equal to or larger
than the axial size, and ii) short wavelength excitations, where the
wavelength is much smaller than the axial size. In the former case,
the axial excitations are discrete and the lowest breathing mode frequency
has been measured \cite{kinast,bar}. In the latter case, the axial excitations can be
described by a continuous wave vector $k$. However, the
finite transverse size of the system also produces a discreteness
in the radial spectrum. The short wavelength axial phonons with different numbers
of discrete radial nodes give rise to the multi branch Bogoliubov
spectrum (MBS) \cite{zaremba}.
The inhomogeneous density in the radial plane determines the
curvature of the mode spectrum. The effect of the inhomogeneous density
in the radial plane decreases (since the radial
size increases) as we go from the molecular BEC side
to the weak-coupling BCS side for fixed number of atoms and the trapping potential.
We would expect the MBS to differ between the regimes,
so that it can be used to distinguish the superfluid regimes along
the BEC-BCS crossover.
It should be noted that the axial excitations are coupled with the
discrete radial modes within a given angular momentum symmetry.
For example, when we excite the system to study the
sound propagation along the symmetry axis, this perturbation inherently
excites all other low energy transverse modes having zero angular momentum.
Similarly, the above arguments are also applicable to other low energy
mode spectra,
{\em e.g.} the spectrum of the breathing mode.
To determine the various mode spectra, we must take into account
the mode coupling between the axial quasiparticle states and
the transverse modes.
In this work, we calculate the sound velocity in an inhomogeneous as well
as homogeneous Fermi superfluid along the BEC-BCS crossover.
We also study the low energy MBS of a cigar shaped superfluid Fermi gas along the
BEC-BCS crossover by including the mode coupling. It is important to study
such spectra in view of the recent Bragg scattering experiment \cite{davidson}
on the MBS of an elongated cloud of a weakly interacting BEC.
This paper is organized as follows. In Sec. II, we calculate the
transverse eigenfrequencies and the corresponding eigenfunctions of
an elongated Fermi superfluid along the BEC-BCS crossover.
In Sec. III, we discuss the equation of state of the Fermi
superfluid. The sound velocity and the phonon and
monopole mode spectra are presented in Sec. IV. We give a brief
summary and conclusions in Sec. V.
\section{Hydrodynamic equations and eigenfrequencies}
We consider a two-component Fermi gas in a long cigar shaped harmonic trap potential
$ V(r,z) = (M/2)(\omega_r^2 r^2 + \omega_z^2 z^2) $ at
zero temperature. Here, $ \omega_z \ll \omega_r $.
We assume that the system behaves hydrodynamically throughout all regimes.
If the system is a BCS superfluid, then as long as the oscillation frequency
is below the gap frequency needed to break up a Cooper pair, this condition
is expected to be fulfilled.
The system can be described by the following Schr\"odinger equation \cite{kim}
\begin{equation}
i \hbar \frac{\partial \psi}{\partial t} =
[-\frac{\hbar^2}{2M} \nabla^2 + V(r) + \mu(n)] \psi,
\end{equation}
where $M$ is the mass of the Fermi particles and $ \mu(n) $ is
the equation of state which depends on the magnitude and
nature of the interaction strength.
Using the Madelung transformation
$\psi = \sqrt{n} e^{i \theta} $ and neglecting the quantum pressure
term, we obtain the
hydrodynamic equations of motion for the Fermi superfluid which are given by the
continuity and the Euler equations, respectively,
\begin{equation}
\frac{\partial n}{\partial t} = - {\bf \nabla} \cdot [n {\bf v}],
\end{equation}
and
\begin{equation}
M \frac{\partial {\bf v}}{\partial t} =
- \nabla[ \mu(n) + V(r) + \frac{1}{2} M {\bf v}^2].
\end{equation}
Here, $ n({\bf r},t) $ and
$ {\bf v}({\bf r},t) = (\hbar/M) \nabla \theta $ are the local
density and superfluid velocity, respectively.
We have also assumed that
$ \omega_r \gg \omega_z $, so that the trap is a long cigar.
The equation of state enters through the density-dependent
chemical potential.
We assume a power-law form of the equation of state,
$ \mu(n) = C n^{\gamma} $, as in Refs. \cite{hui,heiselberg,bulgac,manini,astra1}.
At equilibrium, the density profile takes the form
$ n_0(r) = (\mu/C)^{1/\gamma}( 1- \tilde r^2)^{1/\gamma} $,
where $ \tilde r = r/R $ and
$ R = \sqrt{2 \mu/M \omega_r^2} $.
Linearizing around equilibrium, $ n = n_0 + \delta n $, $ {\bf v} =
\delta {\bf v} $ and
$ \mu(n) = \mu(n_0) + (\partial \mu/\partial n)|_{n =n_0} \delta n $.
The equations of motion for the density and velocity fluctuations are
\begin{equation} \label{den}
\frac{\partial \delta n}{\partial t} = - \nabla \cdot [n_0(r) \delta {\bf v}],
\end{equation}
\begin{equation} \label{vel}
M \frac{\partial \delta {\bf v}}{\partial t} =
- \nabla [\frac{\partial \mu(n) }{\partial n}|_{n=n_0} \delta n].
\end{equation}
Taking the first-order time derivative of Eq. (\ref{den}) and using Eq. (\ref{vel}),
the second-order equation of motion for the density fluctuation is
given by
\begin{equation} \label{den0}
\frac{\partial^2 \delta n}{\partial t^2} =
\nabla \cdot [n_0(r) \nabla \frac{\partial \mu(n) }{\partial n}|_{n =n_0}
\delta n ].
\end{equation}
In the long cigar shaped trap, we assume the normal mode solution of
the density fluctuation which can be written as
\begin{equation} \label{plane}
\delta n(r,z,t) = \delta n(r) e^{i [\omega(k) t - k z]}.
\end{equation}
Substituting Eq. (\ref{plane}) into Eq. (\ref{den0}), one obtains
\begin{eqnarray} \label{den1}
- \tilde \omega_{\alpha}^2(k) \delta n(r) & = & \frac{\gamma}{2} \nabla_{\tilde r}
\cdot [(1- \tilde r^2)^{1/\gamma}
\nabla_{\tilde r}(1- \tilde r^2)^{1-1/\gamma} \delta n(r)] \nonumber \\
& - & \frac{\gamma}{2} \tilde k^2 (1-\tilde r^2) \delta n(r),
\end{eqnarray}
where $ \tilde \omega = \omega/\omega_r $ and $ \tilde k = k R $.
Here, $\alpha$ is a set of two quantum numbers: the radial quantum number
$n_r$ and the angular quantum number $m$.
For $ k = 0 $, it reduces to a two-dimensional eigenvalue problem whose solutions
can be obtained analytically. The energy spectrum is
given by
\begin{equation}
\tilde \omega_{\alpha}^2 = |m| + 2 n_r [\gamma(n_r + |m|) + 1].
\end{equation}
The corresponding normalized eigenfunction is given by
\begin{equation}
\delta n_{\alpha} = A (1-\tilde r^2)^{1/\gamma -1} \tilde r^{|m|}
P_{n_r}^{(1/\gamma -1, |m|)} (2\tilde r^2 -1) e^{im\phi},
\end{equation}
where $ P_{n}^{(a,b)}(x) $ is a Jacobi polynomial of order $n$ and $\phi $ is
the polar angle. Also, the normalization constant $ A \equiv A_{\alpha} $ is given by
\begin{equation}
A^2 = \frac{2^{2-2/\gamma}}{\sqrt{\pi} R^2}
\frac{[\Gamma(n_r+1)]^2 \Gamma(1/\gamma)\Gamma(2/\gamma + 2 n_r + |m|)}
{\Gamma(1/\gamma-1/2)[\Gamma(1/\gamma + n_r)]^2 \Gamma(2n_r + |m| +1)}.
\end{equation}
For $ \gamma = 1 $, the above energy spectrum and the corresponding eigenfunctions
exactly match the results of Ref. \cite{graham}.
Note that the modes with $n_r =0 $ and $m \neq 0 $ do not depend
on the equation of state. This is because the flow in these modes
is incompressible and the internal energy does not change
during the oscillation period.
The radial breathing mode frequency is $ \omega_1 = \sqrt{2(\gamma + 1)} \omega_r $,
which
exactly matches the result of Ref. \cite{heiselberg}.
The experimental results for the radial breathing mode \cite{kinast,bar}
are well described \cite{heiselberg} by this analytic spectrum.
The solution of Eq. (\ref{den1}) can be obtained for an arbitrary value of $k$ by
numerical diagonalization.
For $ k \neq 0 $, we expand the density fluctuation as
\begin{equation}
\delta n = \sum_{\alpha} b_{\alpha} \delta n_{\alpha} (r,\phi).
\end{equation}
Substituting the above expansion into Eq. (\ref{den1}), we obtain,
\begin{eqnarray} \label{density2}
0 & = & [\tilde \omega_{\alpha}^2 - [|m| + 2 n_r ( \gamma (n_r +|m|) +1)]
\nonumber \\ & - &
\frac{\gamma}{2} \tilde k^2] b_{\alpha} +
\frac{\gamma}{2} \tilde k^2 \sum_{\alpha^{\prime}} M_{\alpha \alpha^{\prime}}
b_{\alpha^{\prime}}.
\end{eqnarray}
Here, the matrix element $ M_{\alpha \alpha^{\prime}} $ is given by
\begin{eqnarray} \label{matrix}
M_{\alpha \alpha^{\prime}} & = & A_{\alpha} A_{\alpha^{\prime}} \int d^2 \tilde r
(1-\tilde r^2)^{2\gamma_0} \tilde r^{2+|m| + |m^{\prime}|} e^{i(m - m^{\prime}) \phi}
\nonumber \\ & \times & P_{n_r^{\prime}}^{(\gamma_0,|m^{\prime}|)}(2 \tilde r^2-1)
P_{n_r}^{(\gamma_0,|m|)}(2 \tilde r^2-1),
\end{eqnarray}
where $ \gamma_0 = 1/\gamma -1 $.
The above eigenvalue problem (Eq. (\ref{density2})) is block diagonal, with no overlap
between the subspaces of different angular momentum, so
that the solutions to Eq. (\ref{density2}) can be obtained separately in
each angular momentum subspace. From Eq. (\ref{density2}), which is our
main result, we can obtain all low-energy
multibranch Bogoliubov spectra on both sides of the Feshbach resonance,
including the unitarity limit.
Equations (\ref{density2}) and (\ref{matrix}) show that the spectrum depends
on the average over the radial coordinate and the coupling between
the axial mode and transverse modes within a given angular momentum symmetry.
In particular, the coupling is important for large values of $k$.
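To make the numerical diagonalization concrete, the short sketch below builds
one $m$ block of Eq. (\ref{density2}), evaluating the matrix elements of
Eq. (\ref{matrix}) by Gauss-Jacobi quadrature in the variable
$x = \tilde r^2$. It is written in Python with standard {\tt numpy} and
{\tt scipy} routines; the function name and its arguments are our own
illustrative choices, not part of any published code, and
Eqs. (\ref{density2}) and (\ref{matrix}) are taken at face value:
\begin{verbatim}
import numpy as np
from scipy.special import roots_jacobi, eval_jacobi

def mbs_block(gamma, ktil, m=0, nmax=8, nquad=64):
    # Frequencies omega/omega_r of one angular-momentum block, truncated
    # at nmax radial basis functions; x = rtil^2, t = 2x - 1.
    g0 = 1.0 / gamma - 1.0
    t, w = roots_jacobi(nquad, 2.0 * g0, abs(m))   # weight (1-t)^(2g0)(1+t)^|m|
    P = np.array([eval_jacobi(n, g0, abs(m), t) for n in range(nmax)])
    x = 0.5 * (1.0 + t)
    norm = np.sqrt((P * P * w).sum(axis=1))        # normalizations A_alpha
    M = (P * (w * x)) @ P.T / np.outer(norm, norm) # matrix elements M
    n_r = np.arange(nmax)
    D = abs(m) + 2.0 * n_r * (gamma * (n_r + abs(m)) + 1.0)  # k = 0 spectrum
    A = np.diag(D) + 0.5 * gamma * ktil**2 * (np.eye(nmax) - M)
    return np.sqrt(np.linalg.eigvalsh(A))          # sorted branch frequencies

print(mbs_block(1.0, 0.1)[0] / 0.1)  # ~0.5, i.e. u_1 = sqrt(mu/2M) at gamma=1
\end{verbatim}
For $\gamma = 1$ the lowest branch reproduces the weakly interacting BEC
slope, and for any $\gamma$ the $n_r = 1$ branch starts at
$\sqrt{2(\gamma+1)}\,\omega_r$, the breathing frequency found above.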
\section{Equation of state}
To calculate the sound velocity and the MBS,
we need to know how the adiabatic index $ \gamma $ depends on the two-body
interaction strength.
At zero temperature, the energy per particle of a dilute Fermi system can be
written as
\begin{equation}
\epsilon = \frac{3}{5} E_F \epsilon (y),
\end{equation}
where $ E_F = \hbar^2 k_F^2/2M $ is the free particle Fermi energy and
$ \epsilon (y) $ is a function of the interaction parameter $y = 1/k_F a $.
In the unitarity limit ($y \rightarrow 0^{\pm} $) one expects that the
energy per particle is proportional to that of a noninteracting Fermi gas.
The fixed-node diffusion Monte Carlo calculation of Astrakharchik {\em et al}.
\cite{astra} finds $ \epsilon (y \rightarrow 0) = 0.42 \pm 0.01 $.
An analogous calculation of Carlson {\em et al}. \cite{carlson} gave
$ \epsilon (y \rightarrow 0) = 0.44 \pm 0.01 $.
The calculation of Astrakharchik {\em et al}.
\cite{astra} is quite complete and gives the behavior of the energy
of the system across the unitarity limit. On the
basis of the data of Carlson {\em et al}. \cite{carlson}, Bulgac and Bertsch
\cite{bulgac} proposed the following behavior of $ \epsilon(y)$ near the
unitarity limit:
\begin{equation}
\epsilon (y) = \xi - \zeta y - \frac{5}{3} y^2 + O (y^3),
\end{equation}
where $ \xi \sim 0.44 $ and $ \zeta = 1 $ for both positive and
negative values of $y$. However, the data of Ref. \cite{astra}
give a continuous but not differentiable behavior of
$ \epsilon (y) $ near $ y=0$, suggesting $\zeta = \zeta_- = 1 $ in
the BCS regime and $\zeta = \zeta_+ = 1/3 $ in the BEC regime.
On the basis of the data of Astrakharchik {\em et al}.
\cite{astra}, Manini and Salasnich \cite{manini} proposed the
following analytical fitting formula of $ \epsilon(y)$
for all regimes in the BEC-BCS crossover including the unitarity limit:
\begin{equation} \label{fit}
\epsilon (y) = \alpha_1 - \alpha_2
\tan^{-1} [\alpha_3 y \frac{\beta_1 + |y|}{\beta_2 + |y|}].
\end{equation}
This analytical expression fits the data of Ref.
\cite{astra} well for a wide range of $y$ on both sides of the resonance.
We shall use Eq. (\ref{fit}) for further studies in this work.
Two different sets of parameters are considered in Ref. \cite{manini}: one
set in the BCS regime ($y<0$) and an another set in the
BEC regime ($y>0$). In the BCS limit, the values of the parameters \cite{manini}
are $ \alpha_1 = 0.42 $, $ \alpha_2 = 0.3692 $, $\alpha_3 = 1.044 $,
$\beta_1 = 1.4328 $ and $ \beta_2 = 0.5523 $. In the BEC limit,
the values of the parameters \cite{manini} are
$ \alpha_1 = 0.42 $, $ \alpha_2 = 0.2674 $, $\alpha_3 = 5.04 $,
$\beta_1 = 0.1126 $ and $ \beta_2 = 0.4552 $.
The advantage of a functional parameterization of $ \epsilon(y)$ is that
it allows straightforward analytical calculations of several physical
properties. The chemical potential $\mu $ is given by \cite{manini}
\begin{equation} \label{chemical}
\mu = \epsilon(n) + n \frac{d\epsilon (n)}{d n} =
E_F [\epsilon (y) - \frac{y}{5} \epsilon^{\prime}(y)],
\end{equation}
where $ \epsilon^{\prime}(y) = \frac{\partial \epsilon (y) }{\partial y} $.
One can extract an effective adiabatic index $\gamma $ and its dependence
on the scattering length $a$ by defining the logarithmic derivative as
\cite{manini}
\begin{equation} \label{gamma}
\gamma \equiv \bar \gamma = \frac{n}{\mu}\frac{d\mu}{dn}
= \frac{\frac{2}{3}\epsilon (y) - \frac{2y}{5} \epsilon^{\prime}(y) +
\frac{y^2}{15} \epsilon^{\prime \prime}(y)}{\epsilon (y) -
\frac{y}{5}\epsilon^{\prime}(y)}.
\end{equation}
The radial size of the Fermi system in all the regimes of the
BEC-BCS crossover can be obtained from the relation:
$ R = \sqrt{2 \mu/M \omega_r^2} $. From Eq. (\ref{chemical}),
one can obtain the radial size, which is given by
\begin{equation} \label{radial}
R = r_0 \sqrt{\epsilon (y) - \frac{y}{5} \epsilon^{\prime}(y)},
\end{equation}
where $ r_0 = a_{\rm av} (24N)^{1/6} $ is the radial size of the free Fermi gas
in a harmonic trap potential \cite{butts},
$a_{\rm av} = \sqrt{\hbar/M \omega_{\rm av}} $ and
$ \omega_{\rm av} = (\omega_r^2 \omega_z)^{1/3}$ is the average
oscillator frequency of the trap potential.
In the weak-coupling BCS limit, the ground state energy per particle is
$ \epsilon_{\rm bcs}(n) = (3/5) E_F $ and the chemical potential is
$ \mu_{\rm bcs} = E_F $. The corresponding radius is
$ R_{\rm bcs} = a_{\rm av} (24N)^{1/6} = r_0 $.
In the unitarity limit, the ground state energy per particle is
$ \epsilon_{\rm uni}(n) = (3/5) E_F \xi $ and the chemical potential is
$ \mu_{\rm uni} = E_F \xi $. The corresponding radius is
$ R_{\rm uni} = a_{\rm av} (24N \xi^3)^{1/6} = r_0 \sqrt{\xi} $.
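Since Eq. (\ref{fit}) and its derivatives enter all the quantities that
follow, a small numerical implementation is useful. The sketch below
(Python; the function names and the finite-difference step size are our own
choices, not taken from Ref. \cite{manini}) evaluates $\epsilon(y)$, the
effective adiabatic index of Eq. (\ref{gamma}), and the radius of
Eq. (\ref{radial}):
\begin{verbatim}
import numpy as np

# the two parameter sets quoted above
PAR = {'BCS': (0.42, 0.3692, 1.044, 1.4328, 0.5523),   # y < 0
       'BEC': (0.42, 0.2674, 5.04, 0.1126, 0.4552)}    # y > 0

def eps(y):
    a1, a2, a3, b1, b2 = PAR['BEC' if y > 0 else 'BCS']
    return a1 - a2 * np.arctan(a3 * y * (b1 + abs(y)) / (b2 + abs(y)))

def derivs(y, h=1e-4):   # finite differences; eps is not smooth at y = 0
    return ((eps(y + h) - eps(y - h)) / (2 * h),
            (eps(y + h) - 2 * eps(y) + eps(y - h)) / h ** 2)

def gamma_bar(y):        # effective adiabatic index
    e, (e1, e2) = eps(y), derivs(y)
    return (2 * e / 3 - 2 * y * e1 / 5 + y ** 2 * e2 / 15) / (e - y * e1 / 5)

def radius(y):           # R / r_0
    e, (e1, _) = eps(y), derivs(y)
    return np.sqrt(e - y * e1 / 5)

print(gamma_bar(0.0), radius(0.0))   # unitarity: 2/3 and sqrt(0.42) ~ 0.65
\end{verbatim}
At $y = 0$ the derivative terms drop out, and the printed values reproduce
the unitarity-limit results just quoted.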
\section{Sound velocity and multibranch Bogoliubov spectrum}
\subsection{Sound velocity}
Before presenting the exact numerical results, we make an approximation
that allows a simple quantitative discussion. If we neglect the couplings among the
modes in the $m=0$ sector by keeping only the diagonal elements
$ M_{\alpha \alpha} $ with $ \alpha = (n_r, 0) $ in Eqs. (\ref{density2}) and (\ref{matrix}), one
can easily get the following spectrum:
\begin{equation} \label{per}
\tilde \omega_{n_r}^2 = 2n_r (\gamma n_r + 1) +
\frac{\gamma}{2}(1-M_{n_r,n_r}) \tilde k^2.
\end{equation}
In the limit of long wavelength, the $ n_r = 0 $ mode is phonon-like
with a sound velocity
\begin{equation} \label{sin}
u_1 = \sqrt{\frac{(2- \gamma)\gamma}{2} \frac{\mu}{M}}.
\end{equation}
For $ \gamma = 1 $, it exactly reproduces the weakly interacting
BEC results \cite{zaremba}.
This sound velocity differs from the result obtained in
Ref. \cite{tosi}. The reason for the difference is the following.
In Ref. \cite{tosi}, the sound velocity is obtained by simply integrating
Eq. (\ref{den1}) over the radial coordinates. In this work, we multiply
both sides of Eq. (\ref{den1}) by the complex conjugate of $ \delta n $
and then integrate over the radial coordinates. Since the density fluctuation
of the lowest energy state is a function of the radial coordinate, the two
averaging procedures give different results. Note that the
correct averaging procedure is the one used in this work.
For the homogeneous Fermi system, the sound velocity can be obtained from
Eq. (\ref{per}) by neglecting $ M_{n_r,n_r} $, and it is
given by
\begin{equation} \label{sho}
u_1 = \sqrt{\frac{\gamma \mu}{M}}.
\end{equation}
The sound velocity in the inhomogeneous system is smaller by a factor
of $ \sqrt{1-\gamma /2 } $ with
respect to the sound velocity in a homogeneous Fermi system. This is
due to the averaging over the radial variable, as can be seen
from Eqs. (\ref{density2}) and (\ref{matrix}).
Using Eqs. (\ref{chemical}), (\ref{gamma}) and (\ref{sin}),
the sound velocity in the inhomogeneous Fermi superfluid
along the BEC-BCS crossover including the unitarity limit is given by
\begin{equation} \label{soundin}
u_1 = v_F \sqrt{\frac{[\frac{1}{3}\epsilon (y) - \frac{y}{5} \epsilon^{\prime}(y) +
\frac{y^2}{30} \epsilon^{\prime \prime}(y)][\frac{2}{3}\epsilon (y)
- \frac{y^2}{30} \epsilon^{\prime \prime}(y)]}{[\epsilon (y) - \frac{y}{5}
\epsilon^{\prime}(y)]}},
\end{equation}
where $ v_F = \sqrt{ 2 E_F/M} $ is the Fermi velocity.
Similarly, by using Eqs. (\ref{chemical}), (\ref{gamma}) and (\ref{sho}),
the sound velocity in the
homogeneous Fermi superfluid along the BEC-BCS crossover including the
unitarity limit is given by
\begin{equation} \label{soundho}
u_1 = v_F \sqrt{ [\frac{1}{3} \epsilon (y) - \frac{y}{5} \epsilon^{\prime}(y)
+ \frac{y^2}{30} \epsilon^{\prime \prime}(y)]}.
\end{equation}
Equation (\ref{soundho}) exactly agrees with the result of Ref. \cite{manini}.
In the molecular BEC limit, the sound velocity in the inhomogeneous
bosonic systems can be written as $ u_m = \sqrt{\mu_m/2M_m} $, where
$ \mu_m $ is the chemical potential of the molecular BEC
and $ M_m = 2 M $ is the mass of a molecule.
The chemical potential $ \mu_m $ can be written as
$ \mu_m = 4 \pi a_m \hbar^2 n_m/M_m $, where
$ n_m = k_F^3/6 \pi^2 $ is the molecular density and
$ k_F $ is the Fermi wave vector.
Here, $ a_m = 0.6 a $ is the two-body scattering length
between two bound molecules \cite{petrov}.
A simple expression for
the sound velocity in the molecular BEC limit can be written as
\begin{equation} \label{soundmol}
u_m = v_F \sqrt{\frac{0.6}{12\pi} \frac{1}{y}}.
\end{equation}
Using Eqs. (\ref{soundin}), (\ref{soundho}), and (\ref{soundmol}),
we plot the sound velocity along the BEC-BCS crossover in Fig. 1.
\begin{figure}[ht]
\includegraphics[width=9.1cm]{sound.eps}
\caption{Plots of the sound velocity along the BEC-BCS crossover including
the unitarity limit. The solid and dashed lines are corresponding to
the sound velocity in inhomogeneous
and homogeneous Fermi superfluid, respectively.
The dot-dashed line corresponds to Eq. (\ref{soundmol}).}
\end{figure}
There is a small kink at the unitarity limit
$y=0$ due to $ \zeta_- \neq \zeta_+ $. Otherwise, Fig. 1 shows that
there is a smooth crossover
of the sound velocity from the molecular BEC side to the BCS side through
the unitarity limit $ y = 0 $.
Fig. 1 also shows that Eq. (\ref{soundmol}) agrees very well with
Eq. (\ref{soundin}) for large values of $y$.
For homogeneous Fermi systems the sound velocity in the two
limiting cases can be obtained from Eq. (\ref{soundho}) and
these are given by $ u_1 = 0.37 v_F $ in the unitarity limit and by
$ u_1 = 0.57 v_F $ in the weak-coupling BCS limit. These results
exactly match the previous results \cite{ho,heisel}.
Similarly, the sound velocity for the inhomogeneous Fermi systems
in the two limiting cases can be obtained from Eq. (\ref{soundin})
and these are given by $ u_1 = 0.30 v_F $ in the unitarity limit and
$ u_1 = 0.45 v_F $ in the dilute BCS limit.
The sound velocity in the inhomogeneous Fermi system
is less than that in a homogeneous Fermi system whose
density equals the density at the center of the trap. Moreover,
this difference is larger on the BCS side than on the BEC side.
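These numbers are easy to verify numerically. Building on the sketch at the
end of Sec. III ({\tt eps} and {\tt derivs} are the functions defined there;
the remaining names are again our own), Eqs. (\ref{soundin})-(\ref{soundmol})
become:
\begin{verbatim}
def u1_over_vF(y, homogeneous=False):
    e, (e1, e2) = eps(y), derivs(y)
    if homogeneous:      # homogeneous Fermi superfluid
        return np.sqrt(e / 3 - y * e1 / 5 + y ** 2 * e2 / 30)
    num = (e / 3 - y * e1 / 5 + y ** 2 * e2 / 30) \
        * (2 * e / 3 - y ** 2 * e2 / 30)
    return np.sqrt(num / (e - y * e1 / 5))   # trapped (inhomogeneous) case

def um_over_vF(y):       # molecular BEC expression, valid for large y
    return np.sqrt(0.6 / (12.0 * np.pi * y))

print(u1_over_vF(0.0), u1_over_vF(0.0, homogeneous=True))  # ~0.3 and ~0.37
\end{verbatim}
The printed values reproduce the unitarity-limit numbers quoted above, and
sweeping $y$ regenerates the curves of Fig. 1.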
The sound velocity of the inhomogeneous Fermi superfluid can be measured
by observing the propagation of the sound pulses along
the symmetry axis, as was done for a weakly interacting BEC \cite{andrew}.
\subsection{Phonon mode spectrum}
In Fig. 2 we plot the phonon mode spectrum in the weak-coupling BCS limit
($ y \ll 0 $), the unitarity limit ($ y = 0 $), and the BEC side of the
unitarity limit ($ y = 0.25 $) by solving Eq. (\ref{density2}).
These spectra have the usual phonon form $ \omega = u_1 k $ at low momenta, where
the sound velocity $ u_1 $ is given in Eq. (\ref{soundin}).
It is seen from Fig. 2 that the behavior of the phonon mode spectrum is different for
different regimes characterizing each superfluid phase.
For example, the slope of the phonon spectrum in the BCS limit is larger than
in the unitarity and BEC limits, as expected.
\begin{figure}[ht]
\includegraphics[width=9.1cm]{phonon.eps}
\caption{Plots of the phonon mode spectrum in
the BCS-limit (dot-dashed), unitarity limit (dashed) and BEC side of the
unitarity limit with $ y = 0.25 $ (solid line).}
\end{figure}
\subsection{Monopole mode spectrum}
In Fig. 3, we plot the monopole mode spectrum in three different regimes
by solving Eq. (\ref{density2}).
In the long wavelength limit, the monopole mode has the free-particle
dispersion relation with some effective mass $m_b $ and a gap
$ \Delta_b = \sqrt{2 (\gamma + 1)} \omega_r $. In this limit,
the breathing mode spectrum can be calculated from Eq. (\ref{density2})
by using first-order perturbation theory. The spectrum of the monopole
mode at long wavelengths is given by
$ \omega_1(k) = \sqrt{2 (\gamma + 1)} \omega_r + \hbar k^2/2 m_b
+ O (k^4)$,
where the effective mass of the breathing mode $ m_b $ is
\begin{equation}
m_b = M \frac{\hbar \omega_r}{\mu}
\sqrt{\frac{8}{\gamma^2} \frac{(2+\gamma)(\gamma+1)}{(2-\gamma)^2}}.
\end{equation}
Note that $ \gamma = 2/3 $ in the BCS and unitarity limits.
Therefore, the monopole mode frequencies are the same at the BCS and the
unitarity limits. However, the behavior of the spectrum in the two
regimes is completely different. For example, the effective mass
of the monopole mode spectrum in the
BCS limit is smaller than that in the unitarity limit.
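For orientation, the gap and the effective mass are easily tabulated
(Python again; {\tt monopole} is our own illustrative function, and the
value $\bar\gamma(0.25)\approx 0.83$ is our rough evaluation of
Eq. (\ref{gamma}) with the fit of Sec. III):
\begin{verbatim}
import numpy as np

def monopole(gamma):
    gap = np.sqrt(2.0 * (gamma + 1.0))                  # Delta_b / omega_r
    mb = np.sqrt(8.0 / gamma ** 2 * (2.0 + gamma)
                 * (gamma + 1.0) / (2.0 - gamma) ** 2)  # m_b mu/(M hbar omega_r)
    return gap, mb

print(monopole(2.0 / 3.0))   # BCS and unitarity limits: gap = sqrt(10/3)
print(monopole(0.83))        # BEC side, y = 0.25 (gamma_bar ~ 0.83)
\end{verbatim}
Note that, although $\bar\gamma$, and hence the dimensionless factor, is the
same in the BCS and unitarity limits, $m_b$ still differs between the two
through the prefactor $\hbar\omega_r/\mu$.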
\begin{figure}[ht]
\includegraphics[width=9.1cm]{mono1.eps}
\caption{Plots of the monopole mode spectrum in the BCS-limit (dot-dashed),
unitarity limit (dashed) and BEC side of the
unitarity limit with $ y = 0.25 $ (solid line).}
\end{figure}
\section{Summary and conclusions}
In this work, we have calculated the sound velocity in the
homogeneous as well as inhomogeneous Fermi superfluid along
the BEC-BCS crossover.
The sound velocity in the inhomogeneous Fermi superfluid
can be measured by observing the sound pulse propagation
along the symmetry axis, similar to the experiment by
Andrews {\em et al}. \cite{andrew} for a weakly interacting BEC.
The hydrodynamic description presented in this
work enables us to correctly produce all the low-energy multibranch
Bogoliubov spectra by including the coupling of the axial mode
with the radial modes within the same angular momentum sector.
An analytic expression for the effective mass of the breathing
mode spectrum is obtained.
Due to the axial symmetry, the modes having zero angular momentum can
be excited in the Bragg scattering experiment.
In particular, the spectra of the phonon and monopole modes in the different
regimes can be observed in Bragg scattering experiments, as
such spectra were observed in Ref. \cite{davidson} for a weakly interacting BEC.
By measuring the sound velocity in the pulse propagation experiment and
by observing the low energy Bogoliubov spectrum in the Bragg spectroscopy,
one can make a clear identification of various superfluid regimes along
the BEC-BCS crossover.
\begin{acknowledgments}
The work of TKG was supported by a grant (Grant No. P04311) of the
Japan Society for the Promotion of Science.
\end{acknowledgments}
\section{Introduction}
A phaseless auxiliary field (AF) quantum Monte Carlo (QMC) method was
recently introduced \cite{zhang_krakauer} to study correlation effects
in real materials, which has yielded results for a variety of
$sp$-bonded materials in good agreement with experiment and comparable
to those obtained using the standard diffusion Monte Carlo method
(DMC) \cite{QMC_rmp}. In this paper we present the first application
of the phaseless AF QMC method to the more highly-correlated
transition metal oxide systems.
Because of their complexity (the presence of both localized and
itinerant characters in the
electronic degrees of freedom, strong electron-ion pseudopotentials,
and the presence of many highly correlated electrons),
there have been relatively few
QMC calculations of any type for transition metal systems
\cite{sokolova,wagner_mitas,lee_mitas_wagner,needs}.
There are many important applications based on the magnetic,
ferroelectric, and superconducting properties of transition metal
oxides. These effects arise from the presence of $d$-shell electrons
whose interactions are often highly correlated. The generally
successful {\em ab initio\/} density functional theory (DFT) approach
\cite{kohn_nobel} has had limited success in describing these
properties, often predicting incorrect ground states ({\it e.g.\/}
metallic instead of insulating). Even in cases where correlation
effects are less pronounced and the method is qualitatively correct,
the results are sometimes not of sufficient accuracy. For example in
ferroelectrics such as PbTiO$_3$, which have essentially no occupied
$d$-states, the relatively small and usually acceptable DFT errors
($\sim$3\%) in predicted equilibrium volumes can lead to suppression
of the ferroelectric ground state. There is thus a great need for
better theoretical modeling of transition metal systems.
{\em Ab initio\/} quantum Monte Carlo methods are an attractive means
to treat {\em explicitly\/} the interacting many fermion system.
These methods in principle scale algebraically as a low power with
system size. However, except for a few special cases, QMC methods are
plagued by the fermion sign problem \cite{schmidt84,loh90}, which, if
uncontrolled, results in exponential scaling. No formal solution has
been found for this problem, but approximate methods have been
developed that control it. The most established QMC method is the
real space fixed-node diffusion Monte Carlo \cite{ceperley_adler},
which has been applied to calculate many properties of solids and
molecules \cite{QMC_rmp}. Recent DMC studies have addressed transition
metal systems such as the TiC molecule \cite{sokolova}, TiO and MnO
molecules \cite{wagner_mitas}, solid MnO \cite{lee_mitas_wagner}, and
solid NiO \cite{needs}.
The new phaseless AF QMC approach \cite{zhang_krakauer} is an
alternative that has several appealing features. For example, it is
formulated in a Hilbert space spanned by some fixed one-particle
basis, and the freedom to choose {\em any\/} one-particle basis
suitable for a given problem could be advantageous. Moreover, the AF
QMC methodology can take full advantage of well-established techniques
used by independent-particle methods with the same basis. With a
planewave basis, for example, algorithms based on fast Fourier
transforms (FFT) and separable non-local pseudopotentials can be
carried over from DFT planewave codes. Given the remarkable
development and success of the latter \cite{DFT}, it is clearly
desirable to have a QMC method that can use exactly the same machinery
and systematically include correlation effects by simply building
stochastic ensembles of the independent-particle solutions.
The central idea in standard AF QMC methods \cite{BSS,Koonin} is the
mapping of the interacting many-body problem into a linear combination
of non-interacting problems in external auxiliary fields. Averaging
over different AF configurations is then performed by Monte Carlo (MC)
techniques. However, except for special cases (e.g., the Hubbard
model with on-site interactions), the two-body interactions will
require auxiliary fields that are {\em complex\/}. As a result, the
single-particle orbitals become complex, and the MC averaging over AF
configurations becomes an integration over complex variables in many
dimensions, and a phase problem occurs.
The phaseless AF QMC method \cite{zhang_krakauer} used in this paper
controls the phase/sign problem in an approximate manner using a trial
wave function. As in fixed-node DMC, the calculated results approach
the exact ones as the trial wave function is improved. The
ground-state energy in the phaseless method, however, is not a
variational upper bound \cite{zhang_krakauer,Carlson99}. Previous
results on $sp$-bonded systems \cite{zhang_krakauer,CPC05} and our
current results suggest that the calculated energy is quite
insensitive to the trial wave function. Accurate ground-state
energies have been obtained with simple trial wave functions, namely
single Slater determinants from DFT or Hartree-Fock calculations.
In this paper, we study the transition metal oxide molecules TiO and
MnO, using the phaseless AF QMC method with planewaves and
pseudopotentials. This represents the first application of AF-based
QMC to transition metal oxides. As in regular DFT calculations,
molecules can be studied with planewaves by placing them in large
cells (supercells) and using periodic boundary conditions. This is
somewhat disadvantageous because one has to insure that the supercells
are large enough to eliminate the spurious interactions between the
images of the molecule. Consequently the computational cost for
isolated atoms and molecules is higher than with a localized basis.
However, the main motivation of the present study is to test the
phaseless AF QMC method for strongly correlated systems such as
transition metal oxides, using the same methodology as previously used
for $sp$-bonded materials. In addition, a converged planewave basis,
which is straightforward to achieve aside from the computational cost,
gives an unbiased representation of the Hamiltonian, and facilitates
direct comparison with experiment.
The remainder of the paper is organized as follows. The phaseless AF
QMC method is briefly reviewed in Sec.~II. The specific formulation
using a single-particle planewave basis with non-local
pseudopotentials is then discussed in Sec.~III. Finally, in Sec.~IV we present
results of our calculations for the binding energies of TiO and MnO,
which are in excellent agreement with experiment.
\section{Formalism}
The Hamiltonian for a many-fermion system with two-body interactions
can be written in any one-particle basis in the general form
\begin{equation}
{\hat H} ={\hat H_1} + {\hat H_2}
= \sum_{i,j}^M {T_{ij} c_i^\dagger c_j}
+ {1 \over 2}
\sum_{i,j,k,l}^M {V_{ijkl} c_i^\dagger c_j^\dagger c_k c_l},
\label{eq:H}
\end{equation}
where $M$ is the size of the chosen one-particle basis, and
$c_i^\dagger$ and $c_i$ are the corresponding creation and
annihilation operators. Both the one-body ($T_{ij}$) and two-body
matrix elements ($V_{ijkl}$) are known.
As in other QMC methods, the auxiliary field quantum
Monte Carlo obtains the ground state $\left| \Psi_G \right\rangle$ of
${\hat H}$ by projecting from a trial wave function $\left| \Psi_T
\right\rangle$, using the imaginary-time propagator $e^{-\tau {\hat
H}}$:
\begin{equation}
\left| \Psi_G \right\rangle \propto \lim_{n
\to \infty} (e^{-\tau {\hat H}})^n \left| \Psi_T \right\rangle .
\end{equation}
The trial wave function $\left| \Psi_T \right\rangle$, which should
have a non-zero overlap with the exact ground state, is assumed to be
in the form of a single Slater determinant or a linear combination of
Slater determinants.
Using a second-order Trotter breakup, we write
the propagator as:
\begin{equation}
e^{-\tau {\hat H}} = e^{-\tau {\hat H_1}/2} e^{-\tau {\hat H_2}}
e^{-\tau {\hat H_1}/2} + {\cal{O}}(\tau^3).\label{eq:expH}
\end{equation}
The two-body part of the propagator
can be written as an integral of
one-body operators by a Hubbard-Stratonovich transformation \cite{HS}:
\begin{equation}
e^{-\tau{\hat H_2}}
= \prod_\alpha \Bigg({1\over \sqrt{2\pi}}\int_{-\infty}^\infty
e^{-\frac{1}{2} \sigma_\alpha^2}
e^{\sqrt{\tau}\,\sigma_\alpha\,
\sqrt{\lambda_\alpha}\,{\hat v_\alpha}} d\sigma_\alpha\Bigg),
\label{eq:HStrans}
\end{equation}
after ${\hat H_2}$ is turned into a sum of squares of one-body operators:
${\hat H_2} = - {1\over 2}\sum_\alpha
\lambda_\alpha {\hat v_\alpha}^2$, with $\lambda_\alpha$ a real
number.
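The decomposition of ${\hat H_2}$ into squares is not unique. One common
route for a general basis (a sketch only, assuming real matrix elements for
brevity; it is not necessarily the decomposition used in the planewave
formulation of Sec.~III) is to reorder the two-body term into products of
one-body density operators and to diagonalize the resulting
$M^2 \times M^2$ supermatrix:
\begin{verbatim}
import numpy as np

def quadratic_form(V):
    # Rewrite H_2 as -1/2 sum_a lambda_a v_a^2: reorder c+ c+ c c into
    # products of one-body densities rho_il = c_i^+ c_l (the one-body
    # correction this generates is absorbed into H_1, omitted here) and
    # diagonalize the supermatrix W_(il),(jk) = V_ijkl.
    M = V.shape[0]
    W = V.transpose(0, 3, 1, 2).reshape(M * M, M * M)
    W = 0.5 * (W + W.T)                  # symmetric for physical V_ijkl
    evals, X = np.linalg.eigh(W)         # H_2' = 1/2 sum_a evals_a (X_a.rho)^2
    lam = -evals                         # sign convention of the text
    v = [X[:, a].reshape(M, M) for a in range(M * M)]
    return lam, v                        # sqrt(lam_a) imaginary if evals_a > 0
\end{verbatim}
Eigenvalues with ${\tt evals}_a > 0$ give $\lambda_\alpha < 0$, so that
$\sqrt{\lambda_\alpha}$ in Eq.~(\ref{eq:HStrans}) is imaginary; this is the
origin of the complex auxiliary fields discussed below.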
The propagator of Eq.~(\ref{eq:expH}) can now be expressed
in the general form:
\begin{equation}
e^{-\tau{\hat H}} =\int P(\sigma) {\cal{B}}(\sigma)\,d\sigma,
\label{eq:prop}
\end{equation}
where we have introduced the vector representation $\sigma\equiv \{\sigma_1,\sigma_2,
\cdots\}$, $P(\sigma)$ is the normal distribution with mean $0$ and standard deviation $1$, and
\begin{equation}
{\cal{B}}(\sigma)\equiv
e^{-\tau {\hat H_1}/2}
\,e^{\sqrt{\tau} \sigma\cdot {\hat{\mathbf v}}}
\,e^{-\tau {\hat H_1}/2},
\label{eq:Bdef}
\end{equation}
with ${\hat{\mathbf v}} \equiv\{ \sqrt{\lambda_1}\,{\hat v_1},
\sqrt{\lambda_2}\,{\hat v_2}, \cdots\}$.
Monte Carlo methods can be used to evaluate the multi-dimensional
integral of Eq.~(\ref{eq:prop}) efficiently. We follow the
procedure \cite{Zhang,Zhang_review,zhang_krakauer}
of turning the MC process into an open-ended random walk (instead
of Metropolis sampling of entire paths along imaginary time \cite{BSS,Koonin}),
because it facilitates the imposition of local constraints
to deal with the sign/phase problem \cite{Zhang_review}.
Each step in the random walk takes
a Slater determinant $|\phi\rangle$ to a new determinant
$|\phi^\prime\rangle$:
\begin{equation} |\phi^\prime (\sigma) \rangle =
{\cal{B}}(\sigma) |\phi \rangle , \label{eq:proj}
\end{equation}
where $\sigma$ is sampled from $P(\sigma)$. Given sufficient
propagation time one obtains a MC representation of the ground state:
$|\Psi_G\rangle \doteq \sum_{\phi} |\phi \rangle $.
This straightforward approach, however, will generally lead to
an exponential increase in the statistical fluctuations with the
propagation time. One can easily understand the source of this by
realizing that the one-body operators ${\hat{\mathbf v}}$ are generally complex,
since $\lambda_\alpha$ usually cannot all be made
positive in Eq.~(\ref{eq:HStrans}). As a result, the orbitals in
$|\phi\rangle$ will become complex as the propagation proceeds. This
is the phase problem referred to earlier. It is of the same origin as
the sign problem that occurs when ${\cal B}(\sigma)$ is real. The phase
problem is more severe, however, because for each $|\phi\rangle$,
instead of a $+|\phi\rangle$ and $-|\phi\rangle$ symmetry
\cite{Zhang}, there is now an infinite set $\{ e^{i\theta}
|\phi\rangle\}$ ($\theta \in [0,2\pi)$) from which the MC
sampling cannot distinguish. At large propagation time, the phase of each
$|\phi\rangle$ becomes random, and the MC representation of
$|\Psi_G\rangle$ becomes dominated by noise.
In Ref.~\cite{zhang_krakauer} the phaseless auxiliary field QMC method
was presented to control the phase problem. The first ingredient
of this method is an importance-sampling transformation using a {\em complex}
importance function, $\langle \Psi_T|\phi\rangle$, where
$| \Psi_T \rangle$ is a trial wave function. In the resulting
random walk, a walker $|\phi\rangle$
is propagated to a new position
$|\phi^\prime\rangle$ in each step by
\begin{equation}
|\phi^\prime(\sigma)\rangle={\cal B}(\sigma- \bar{\sigma} ) |\phi\rangle.
\label{eq:prop_imp}
\end{equation}
As in Eq.~(\ref{eq:proj}), $\sigma$ is sampled from $P(\sigma)$, but the
propagator is modified to include a force bias, or
shift \cite{shiftcont_rom97}:
\begin{equation}
\bar{\sigma} =
- \sqrt{\tau}
{\langle\Psi_T|\v_op|\phi\rangle \over \langle\Psi_T | \phi\rangle}.
\label{eq:FB}
\end{equation}
A walker carries a weight
$w_{\phi}$ which is updated according to
\begin{equation}
w_{\phi^\prime}=W(\phi)\,w_\phi,
\label{eq:wt_imp}
\end{equation}
where $W(\phi)$ can
be expressed in terms of the so-called local energy, $E_L$:
\begin{equation}
W(\phi) \doteq
\exp\bigg[-\tau
{\langle\Psi_T|\hat{H}|\phi\rangle \over \langle\Psi_T | \phi\rangle}\bigg]
\equiv \exp[-\tau E_L(\phi)].
\label{eq:El}
\end{equation}
In the limit of an exact $|\Psi_T\rangle$, $E_L$ is a real
constant, the weight of each walker remains real, and the
mixed estimate for the energy is phaseless:
\begin{equation}
E_G =
{\langle\Psi_T|\hat{H}|\Psi_G\rangle \over \langle\Psi_T | \Psi_G\rangle}
\doteq
{\sum_{\phi^\prime} w_{\phi^\prime} E_L({\phi^\prime})
\over
\sum_{\phi^\prime} w_{\phi^\prime}}.
\label{eq:mixed_w_EL}
\end{equation}
With a general $|\Psi_T\rangle$ which is not exact, a natural
approximation is to replace $E_L$ in Eqs.~(\ref{eq:El}) and
(\ref{eq:mixed_w_EL}) by its real part, ${\rm Re}\, E_L$,
leading to a phaseless formalism for the random walk, with real and
positive weights.
The second ingredient in the phaseless method involves a projection:
the modified random walk is still ``rotationally invariant'' in the
complex plane defined by $\langle\Psi_T|\phi\rangle$. With the
propagation, the walkers will populate the complex plane symmetrically
independent of their initial positions. In particular, a finite
density of walkers will develop at the origin where the local energy
$E_L(\phi)$ diverges, and this causes diverging fluctuations in the
weights of walkers.
This problem, which is inherent in the ``two-dimensional'' nature of
the random walk in the complex plane, can be controlled with an
additional approximation, in which the random walk is projected to
``one-dimension.'' This is done, e.g., by multiplying the weight of
each walker in each step by $\max\{0,\cos(\Delta\theta)\}$, where
$\Delta\theta$ is the phase of
$\langle\Psi_T|\phi^\prime\rangle/\langle\Psi_T |\phi\rangle$. The
projection ensures that the density of walkers vanishes at the origin.
Note that the projection has no effect
when ${\hat{\mathbf v}}$ is real. This additional approximation and the
importance-sampling procedures of Eqs.~(\ref{eq:prop_imp}) through
(\ref{eq:El}) form the basis of the new phaseless AF QMC method.
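To make the update cycle concrete, a minimal sketch of a single walker
step, combining Eqs.~(\ref{eq:prop_imp})--(\ref{eq:El}) with the cosine
projection, is given below. The code is illustrative only; the helper
routines (\texttt{overlap}, \texttt{force\_bias}, \texttt{propagate},
\texttt{local\_energy}) are hypothetical placeholders for whatever the
host program supplies, not part of any published implementation:
\begin{verbatim}
import numpy as np

def phaseless_step(phi, w, tau, overlap, force_bias,
                   propagate, local_energy):
    # One phaseless AF QMC update of a walker (phi, w).
    sigma_bar = force_bias(phi)                     # force-bias shift
    sigma = np.random.normal(size=sigma_bar.shape)  # sampled from P(sigma)
    phi_new = propagate(phi, sigma - sigma_bar)     # B(sigma-sigma_bar)|phi>

    # local-energy form of the weight, with E_L replaced by Re E_L
    w_new = w * np.exp(-tau * local_energy(phi).real)

    # projection to "one dimension" in the complex <Psi_T|phi> plane
    dtheta = np.angle(overlap(phi_new) / overlap(phi))
    w_new *= max(0.0, np.cos(dtheta))
    return phi_new, w_new
\end{verbatim}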
\section{Implementation with Planewaves}
The calculations reported in this paper were carried out in supercells using a
planewave basis and periodic boundary conditions (PBC).
Pseudopotentials are used as in DFT calculations to
represent the electron-ion interaction, eliminating the core electrons
from the Hamiltonian. The basis set consists of the $M$
planewaves with kinetic energy $|\k|^2/2 < E_{{\rm{cut}}}$, where the
parameter $E_{{\rm{cut}}}$ is a cutoff energy.
In a planewave basis, the one-body operator $\hat H_1$ of Eq.~(\ref{eq:H})
is the sum of the kinetic energy and the electron-ion
interaction, and $\hat H_2$ represents the electron-electron
interaction. These can be expressed as:
\begin{eqnarray}
\hat H_1&=& -\frac{\hbar^2}{2m}\sum_{\k,s} |\k|^2
c^{\dag}_{\k,s} c_{\k,s}
+ \sum_{\k,{\bf{k'}},s} V_{L}(\k-{\bf{k'}}) c^{\dag}_{\k,s} c_{{\bf{k'}},s} \nonumber \\
&\,\,\,\,\, +& \sum_{\k,{\bf{k'}},s} V_{NL}(\k,{\bf{k'}}) c^{\dag}_{\k,s}
c_{{\bf{k'}},s} \nonumber \\
\hat H_2 &=& \frac{1}{2 \Omega} \sum_{\k,{\bf{k'}},s,s'}
\sum_{{\bf {q}} \neq 0}
\frac{4 \,\pi\, e^2}{|{\bf {q}}|^2} \, c^{\dag}_{\k+{\bf {q}},s}
c^{\dag}_{{\bf{k'}}-{\bf {q}},s'} c_{{\bf{k'}},s'} c_{\k,s}. \nonumber\\
\end{eqnarray}
Here $c^{\dag}_{\k,s}$ and $c_{\k,s}$ are the creation and
annihilation operators of an electron with momentum $\k$ and spin
$s$.
$V_L(\k-\k')$ and $V_{NL}(\k,\k')$ are the local and non-local parts of the
pseudopotential, respectively.
$\Omega$ is the super-cell volume, $\k$ and ${\bf{k'}}$ are planewaves
within the cutoff radius, and the ${\bf {q}}$-vectors
satisfy $|\k+{\bf {q}}|^2/2 < E_{\rm{cut}}$.
A Hubbard-Stratonovich transformation is applied to decouple the
electron-electron interaction $\hat H_2$ into a linear combination of
one-body operators. The resulting one-body operators consist of
density operators of the form $\hat \rho({\bf {q}})=\sum_{\k,s}
c^{\dag}_{\k+{\bf {q}},s} c_{\k,s}$. The number of auxiliary fields is
proportional to the number of unique ${\bf {q}}$ vectors that the basis
allows, i.e., roughly eight times the number of planewaves in the
basis.
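The scaling of the basis and of the number of auxiliary fields with the
cell can be illustrated with a short counting script (a rough sketch of
our own for orthorhombic cells; production codes enumerate the actual
reciprocal lattice):
\begin{verbatim}
import numpy as np

def count_pw_and_fields(cell_au, ecut_ha):
    # cell_au: cell edges in bohr; ecut_ha: cutoff in hartree
    kcut = np.sqrt(2.0 * ecut_ha)
    b = 2.0 * np.pi / np.asarray(cell_au)   # reciprocal-lattice steps
    nmax = np.ceil(2.0 * kcut / b).astype(int)
    axes = [np.arange(-m, m + 1) * db for m, db in zip(nmax, b)]
    kx, ky, kz = np.meshgrid(*axes, indexing="ij")
    k2 = kx**2 + ky**2 + kz**2
    n_pw = int(np.count_nonzero(k2 < kcut**2))     # basis size M
    # q = k - k' lies in a sphere of radius 2*kcut: roughly 8 M fields
    n_q = int(np.count_nonzero((k2 > 0) & (k2 < 4.0 * kcut**2)))
    return n_pw, n_q

# e.g., 64 Ry = 32 hartree in an 11 x 12 x 15 bohr^3 cell
print(count_pw_and_fields((11.0, 12.0, 15.0), 32.0))
\end{verbatim}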
\begin{table}
\caption{A summary of the binding energy BE (in eV), equilibrium bond
length $R_{e}$ (in a.u.) and harmonic vibrational frequency $\omega$
(in cm$^{-1}$) of the TiO molecule with two different
pseudopotentials. The first, with $E_{\rm cut}=50\,$Ry ($50\,$Ry
psp), was used in all ensuing DFT and QMC calculations. The second
has a $64\,$Ry cut-off. The corresponding values of the cut-off
radius, $r_c$, are listed in the footnotes (in units of a.u.). DFT
results from both Perdew-Burke-Ernzerhof (PBE) GGA \cite{gga} and
Perdew-Wang 92 LDA \cite{lda} functionals are shown, together with
experimental values.}
\begin{tabularx}{2.88 in }{r c c c }
\hline
\hline
& BE & $R_{e}$ & $\omega$ \\
\hline
Experiment\cite{TiO_ref_exp1,TiO_ref_exp2} \qquad \qquad & 6.87 or 6.98 & 3.06 & 1009 \\
\hline
$50\,$Ry psp\footnote{O $r_{c}$: $1.45$ ($s$), $1.55$ ($p$);
Ti $r_{c}$: $1.40$ ($s$), $1.40$ ($p$), $1.80$ ($d$).}
\ \ \ GGA \qquad \qquad & 8.00 & 3.02 & 1005 \\
LDA \qquad \qquad & 9.11 & 2.99 & 1040 \\
\hline
$64\,$Ry psp\footnote{O $r_{c}$: $1.30$ ($s$), $1.39$ ($p$);
Ti $r_{c}$: $1.35$ ($s$), $1.35$ ($p$), $1.52$ ($d$).}
\ \ \ GGA \qquad \qquad & 7.96 & 3.04 & 1008 \\
LDA \qquad \qquad & 9.05 & 3.02 & 1044 \\
\hline
\hline
\end{tabularx}
\label{table_dft_TiO}
\end{table}
Non-local pseudopotentials can be treated {\em exactly} within the
present AF QMC formalism, and the use of separable forms leads to the
same speed-up achieved in planewave DFT calculations
\cite{zhang_krakauer}. This is to be compared with standard real-space
DMC calculations where an additional locality approximation
\cite{Mitas91} is used for non-local pseudopotentials that depends on
the overall quality of the trial wave function $|\Psi_T\rangle$. (In
contrast, the fixed-node approximation in DMC only depends on the
position of the nodal surface of $|\Psi_T\rangle$.) In order to
minimize errors due to the locality approximation, small
pseudopotential cut-off radii $r_c$ tend to be used. This could
result in harder pseudopotentials than otherwise required by
transferability considerations. In the AF QMC, the use of non-local
pseudopotentials with larger values of $r_c$ (determined only by
transferability requirements) does not pose any additional difficulty.
\section{Results}
\begin{table}
\caption{A summary of the binding energy BE (in eV), equilibrium
bond length $R_{e}$ (in a.u.), and harmonic vibrational frequency $\omega$
(in cm$^{-1}$) of the MnO molecule with three different
pseudopotentials. The first, with $E_{\rm cut}=64\,$Ry and
created from the design non-local (DNL) procedure, was used in all
ensuing DFT and QMC calculations. Two
other sets are also tested here, with $64\,$Ry and $82\,$Ry cut-off values
and without DNL. The corresponding $r_c$ values (in a.u.)
are listed in the footnotes.
Calculated results are from DFT GGA.}
\begin{tabularx}{2.4 in }{ p{1.5 in} c c c }
\hline
\hline
& BE & $R_{e}$ & $\omega$ \\
\hline
Experiment~\cite{TiO_ref_exp2} & 3.70 & 3.11 & 832 \\
\hline
$64\,$Ry DNL psp\footnote{O $r_{c}$: $1.45$ ($s$), $1.55$ ($p$);
Mn $r_c$: $1.40$ ($s$),
$1.40$ ($p$), $1.65$ ($d$).
} & 5.11 & 3.11 & 822 \\
$64\,$Ry psp\footnote{O $r_{c}$: $1.45$ ($s$), $1.55$ ($p$);
Mn $r_c$: $1.40$ ($s$),
$1.40$ ($p$), $1.65$ ($d$).}& 4.90 & 3.07 & 878 \\
$82\,$Ry psp\footnote{O $r_{c}$:
$1.05$ ($s$), $1.02$ ($p$);
Mn $r_c$: $1.25$ ($s$),
$1.25$ ($p$), $1.50$ ($d$).} & 4.99 & 3.09 & 845 \\
\hline\hline
\end{tabularx}
\label{table_dft_MnO}
\end{table}
In this paper, we apply the phaseless AF QMC method to calculate the
binding energies of the transition metal oxide molecules TiO and MnO.
Norm-conserving pseudopotentials are used, and the non-local part of
the pseudopotential $V_{NL}$ is represented using the separable
Kleinman-Bylander (KB) form \cite{KB-separable}.
To obtain the trial wave function $|\Psi_T\rangle$ for each QMC
calculation, a DFT calculation with the generalized gradient
approximation (GGA) is carried out with the ABINIT \cite{Abinit}
program, using the same pseudopotentials and planewave basis.
$|\Psi_T\rangle$ is then taken as the single Slater determinant formed
from the occupied single-particle orbitals obtained from this DFT
calculation, with {\em no further optimization}. The random walkers
are all initialized to $|\Psi_T\rangle$, so the many-body ground-state
projection initiates from the GGA state. In addition, this
$|\Psi_T\rangle$ is used in the QMC calculations to control the
sign/phase problem as described in Section~II.
The pseudopotentials were generated by the OPIUM program \cite{rappe}
using Ti$^{++}$, Mn$^{++}$, and neutral oxygen as reference
configurations. The titanium and manganese semicore states
(3s$^2$3p$^6$) were included as valence states, so the Ti and Mn atoms
contribute 12 and 15 valence electrons, respectively, while the O atom
contributes 6 electrons.
Well-converged planewave cutoffs were $50\,$Ry for oxygen and
titanium, and $64\,$Ry for manganese. These $E_{\rm cut}$'s were
chosen such that the resulting cut-off errors, systematically analyzed
using DFT calculations, were much smaller than the expected QMC
statistical errors. In addition, we have carried out QMC calculations
on a $1\times 1 \times 1$ TiO solid supercell with 50~Ry and 60~Ry
cutoffs, respectively. The calculated energies are the same within
statistical error bars ($\approx 0.1$~eV), confirming basis
convergence at the correlated level. The Mn pseudopotential is
created using the design non-local pseudopotential procedure
\cite{rappe2}. This enhances the pseudopotential transferability by
exploiting the flexibility contained in the separable KB form of the
nonlocal pseudopotential.
The accuracy of the pseudopotentials was examined with DFT
calculations of binding energies, as well as the equilibrium bond
length and harmonic vibrational frequencies. In Tables
\ref{table_dft_TiO} and \ref{table_dft_MnO}, we summarize our
calculations of these properties for different OPIUM pseudopotentials.
In both cases, increasing the hardness of our pseudopotentials did not
lead to significant changes in the calculated properties. We have
also done some of these calculations using
Troullier-Martins~\cite{trouillier_martin} pseudopotentials with the
same cutoff radii, and little difference was found. Moreover, our LDA
results for the bond lengths of TiO and MnO, $R_{\rm{e}}=
2.99$~a.u. and $3.05$~a.u., are in reasonable agreement with the
all-electron LDA values ($R_{\rm{e}}= 3.020$~a.u. and
$3.032$~a.u.) \cite{Hartwigsen} and those obtained with the
Hartwigsen-Goedecker-Hutter pseudopotentials \cite{Hartwigsen}.
The TiO results of
$R_e$ and $\omega$ also compare favorably with the calculations of
Ref. \cite{albaret}.
\begin{table}
\caption{ A comparison between LAPW and pseudopotential calculations
for the non-spin-polarized TiO molecule in a $7 \times 7 \times 14$~a.u.$^3$
supercell. We show the
equilibrium bond length $R_{e}$ (in a.u.) and harmonic vibrational frequency $\omega$ (in
cm$^{-1}$) from DFT, using both GGA and LDA. The two OPIUM pseudopotentials
are the same as those in Table \ref{table_dft_TiO}.
}
\begin{tabularx}{2.4 in }{l p {0.9 in} c c }
\hline
\hline
& & $R_{e}$ & $\omega$ \\
\hline
LAPW & GGA \qquad\qquad & 3.01 & 1057 \\
& LDA & 2.97 & 1097 \\
\hline
$50\,$Ry psp \quad \quad & GGA
& 2.96 & 1060 \\
& LDA & 2.94 & 1095 \\
\hline
$64\,$Ry psp & GGA
& 2.99 & 1058 \\
& LDA & 2.97 & 1091 \\
\hline\hline
\end{tabularx}
\label{table_LAPW_TiO}
\end{table}
As a further check on the pseudopotentials, we have carried out a
comparison between pseudopotential and all-electron
LAPW calculations. The latter is computationally more costly, so we
limited the comparison to a $7 \times 7 \times 14$~a.u.$^3$
supercell for the non-spin-polarized TiO molecule. Our results for the
calculated equilibrium bond length and angular frequency of vibration
are summarized in Table~\ref{table_LAPW_TiO}. The close agreement
between the LAPW and the pseudopotential results gives further
evidence on the reliability of the pseudopotentials.
Clearly these tests on the quality of the pseudopotentials are far
from perfect. Our pseudopotentials are all DFT-based, and the tests
are with DFT calculations. For $sp$-bonded systems, we have done
plane-wave Hartree-Fock (HF) calculations using OPIUM DFT pseudopotentials,
and compared with all-electron HF results. In general, these tend to
be quite consistent with the DFT tests, and often good agreement at
the HF level is found when good test results have been obtained from
DFT calculations. Of course, the
suitability of a DFT or HF pseudopotential (i.e., derived from
independent-particle procedures) for many-body calculations is a
separate issue, which our tests do not address.
Empirically, such pseudopotentials have been widely used in many-body
calculations and have been quite successful.
\begin{table}
\caption{ A summary of the calculated binding energy of the molecule
TiO for different supercells. Supercell dimensions are given in
a.u.~and binding energies are in eV. The QMC statistical errors are
in the last two digits, and are indicated in parentheses. At the
DFT GGA level, the binding energy converges to 8.00\,eV.
}
\begin{tabularx}{3 in}{p {1.9 in} c c}
\hline
\hline
& GGA & QMC \\
\hline
10$\times$11$\times$17 & 7.46 & 6.59(20) \\
12$\times$12$\times$15 & 7.77 & 6.98(21) \\
14$\times$14$\times$15 & 7.94 & 7.08(21) \\
$\infty$ & 8.00 & \\
\hline\hline
\end{tabularx}
\label{table_BE_TiO}
\end{table}
\begin{table}
\caption{The calculated total ground-state energy of Mn for different
supercells. Supercell dimensions are in a.u.~and energies are in eV.
The QMC statistical errors are in the last digit, and are shown in
parentheses.}
\begin{tabularx}{3.1 in}{p {1.6 in} c c }
\hline
\hline
& GGA & QMC \\
\hline
11$\times$12$\times$15 \,\,\,\, & -2766.66 \,\,\,\, & -2766.40(5) \\
12.55$\times$13.69$\times$17.11 \,\,\,\, & -2766.38 \,\,\,\, & -2765.66(4) \\
14$\times$14$\times$15 \,\,\,\, & -2766.32 \,\,\,\, & -2765.89(9) \\
15.4$\times$15.4$\times$16.5 \,\,\,\, & -2766.25 \,\,\,\, & -2765.74(8) \\
$\infty$ \,\,\,\, & -2766.20 \,\,\,\, & \\
\hline\hline
\end{tabularx}
\label{table_Mn_cells}
\end{table}
The use of PBC with a planewave basis requires supercells that
are large enough to control spurious interactions between the periodic
images of the system under study. We studied convergence with respect to such
size effects using both ABINIT and QMC calculations.
Representative results are
shown in Tables~\ref{table_BE_TiO} and \ref{table_Mn_cells}.
Estimating the size-effects in the AF QMC calculations is complicated
by the presence of finite Trotter time-step ($\tau$) errors. The QMC
values shown in Tables~\ref{table_BE_TiO} and \ref{table_Mn_cells} are
final values after extrapolations in $\tau$, as discussed further
below. The range of supercells shown in Table~\ref{table_BE_TiO}
corresponds to about 12,000-17,000 planewaves in our basis. For the Ti
atom, the largest two supercells resulted in a degeneracy of the
highest-lying occupied $d$-orbitals in the density functional
calculations. To break the degeneracy, these supercells were modified
to $11.6 \times 12\times 15$~a.u.$^3$ and $13.5 \times 14 \times 15$~
a.u.$^3$, respectively. The fully converged value of the DFT GGA TiO
binding energy is 8.00\,eV, as shown. For the AF QMC calculations,
the binding energies for the larger sizes are converged to well within
the statistical errors.
Table~\ref{table_Mn_cells} shows the energy of the Mn atom
for different supercell sizes. The corresponding number of planewaves
is between 17,000 and 34,000.
As can be seen, the QMC energy is converged to less than the
statistical error for the $14\times 14\times 15$ supercell, although
for the smaller supercells, the finite-size errors are significant both
in GGA and QMC. The MnO molecule, on the
other hand, exhibits a much smaller size effect, with QMC
energies of $-3195.50(11)$\,eV and $-3195.58(7)$\,eV
for the $11\times12\times15$ and $14\times14\times15$ supercells,
respectively.
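As a consistency check on these basis sizes, the standard planewave
count for a supercell of volume $\Omega$ (in atomic units) is
\begin{equation}
M \simeq \frac{\Omega\, k_{\rm cut}^3}{6\pi^2},
\qquad k_{\rm cut}=\sqrt{2E_{\rm cut}},
\end{equation}
which for the $11\times12\times15$~a.u.$^3$ cell at $E_{\rm cut}=64$\,Ry
($k_{\rm cut}=8$~a.u.$^{-1}$) gives $M\simeq 1.7\times10^4$, and
$M\simeq 3.4\times10^4$ for the largest cell, in agreement with the
quoted range.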
The QMC Trotter errors were examined by studying the individual
time-step dependence for the atoms and the molecule using a particular
supercell size. Figure~\ref{fig_TiO}, for example, shows the Trotter
extrapolation for the TiO molecule, done with a $10\times11\times 17$
supercell. The Trotter behavior obtained from this procedure was then
used to extrapolate the QMC data of other supercell sizes, for which
calculations were performed with the time step fixed at
$\tau=0.025~\rm{Ry}^{-1}$. The final extrapolated results are those
shown (e.g., in Table~\ref{table_BE_TiO}). Figure~\ref{fig_MnO}
shows the time-step dependence of MnO, which exhibits a
quadratic behavior compared to the more linear dependence in
Fig.~\ref{fig_TiO} for TiO. The Mn and O atoms exhibit much smaller
finite-$\tau$ errors, as is also the case with the Ti atom (not shown).
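The $\tau\rightarrow 0$ extrapolation itself is a simple weighted
least-squares fit. A minimal sketch follows (the numbers below are
invented, purely to illustrate the procedure; the actual data are those
plotted in Figs.~\ref{fig_TiO} and \ref{fig_MnO}):
\begin{verbatim}
import numpy as np

tau = np.array([0.008, 0.0125, 0.025, 0.05])                 # Ry^-1
energy = np.array([-2007.60, -2007.48, -2007.20, -2006.62])  # eV
sigma = np.array([0.25, 0.20, 0.17, 0.15])                   # MC errors

# weighted linear fit E(tau) = a*tau + E0; a quadratic (deg=2) fit
# would be used for the MnO-like behavior
coeff, cov = np.polyfit(tau, energy, deg=1, w=1.0/sigma, cov=True)
e0, e0_err = coeff[1], np.sqrt(cov[1, 1])
print(f"E(tau -> 0) = {e0:.2f} +/- {e0_err:.2f} eV")
\end{verbatim}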
\begin{figure}
\includegraphics[width=0.45\textwidth]{TiO_final.eps}
\caption{QMC time-step $\tau$ dependence of the total energy of the
TiO molecule. A $10\times 11\times17$~a.u.$^3$ supercell was used. The
solid line is a linear fit to the calculated QMC energies (solid
squares). The final extrapolated energy $E=-2007.72(17)$\,eV is shown
as a star. }
\label{fig_TiO}
\end{figure}
\begin{figure}[htb]
\centering
\includegraphics[width=0.5\textwidth]{MnO_all.eps}
\caption{QMC time-step $\tau$ dependence of MnO, Mn, and O.
An $11\times 12\times15$\,a.u.$^3$ supercell was used for MnO and Mn,
and a $10\times
11\times17$\,a.u.$^3$ supercell for oxygen. The solid line is a least-squares
fit to
the QMC energies (solid squares). The final extrapolated values
are shown as a star. MC statistical error bars are indicated.
}
\label{fig_MnO}
\end{figure}
\begin{table}
\caption{A summary of the binding energies of the molecules TiO, MnO,
and O$_2$. Calculated results from the present QMC method and
diffusion Monte Carlo (TiO and MnO from Ref.~\cite{wagner_mitas},
and O$_2$ from Ref.~\cite{grossman}) are shown, together with
experimental values (TiO from
Refs.~\cite{TiO_ref_exp1,TiO_ref_exp2}, MnO from
Ref.~\cite{TiO_ref_exp1}, and O$_2$ from Ref.~\cite{grossman}).
Equilibrium experimental bond lengths were used in the molecule
calculations. Our QMC used as trial wave function a single Slater
determinant from DFT GGA. The trial wave functions used in DMC are
indicated in the footnotes. All energies are in eV, and the
experimental zero point energy is added to each molecule. }
\begin{tabularx}{3 in }{ p {1.3 in} l l l }
\hline
\hline
& TiO & MnO & O$_2$ \\
\hline
Experiment & 6.98 &
3.70 & 5.1152(9) \\
& 6.87 & & \\
\hline
DMC (HF)\footnote{Trial w.f. is a (HF 1-determinant)$\times$Jastrow.} & 6.3(1) & 2.9(1) & 4.84(2)\\
DMC (B3LYP)\footnote{Trial w.f. is a (DFT B3LYP 1-determinant)$\times$Jastrow.} & 6.9(1) & 3.4(2) & \\
DMC (MCSCF)\footnote{Trial w.f. is a
(MCSCF multi-determinant)$\times$Jastrow.} & 6.7(2) & 3.4(2) &
\\
\hline
Present QMC & 7.02(21) & 3.79(34) & 5.12(10)\\
\hline\hline
\end{tabularx}
\label{table_final}
\end{table}
Table~\ref{table_final} summarizes the results for the molecular
binding energies. For comparison we also include results from a recent
diffusion Monte Carlo study by Wagner and Mitas \cite{wagner_mitas}.
As mentioned, our AF QMC calculations use a single-determinant trial
wave function obtained from a DFT GGA calculation, without a Jastrow
factor or any further optimization to the determinant. We see that
the calculated binding energies from AF QMC and those from DMC
\cite{wagner_mitas} with trial wave functions containing either an
optimized hybrid B3LYP determinant or multiple determinants from MCSCF
are in good agreement with each other and with experiment. DMC with a
trial wave function containing the Jastrow and a single Slater
determinant from HF, on the other hand, gives somewhat worse agreement
with experiment. We have not carried out AF QMC calculations using an
HF trial wave function for these molecules. In several $sp$-bonded
molecules, DFT and HF-generated trial wave functions showed little
difference in the calculated energies in AF QMC.
We have also included in Table~\ref{table_final} the results for the
binding energy of the O$_2$ molecule. Because of the short bond
length of this molecule (R$_e$=2.281~a.u.), a harder pseudopotential
was used, with a higher $E_{\rm cut}$ of $82$\,Ry and smaller values
for $r_c$ (last entry in Table \ref{table_dft_MnO}). At the DFT GGA
level the binding energy is $5.72$~eV. Our QMC results shown in
Table~\ref{table_final} were obtained using a supercell of size $8
\times 9 \times 11$~a.u.$^3$. Additional QMC and DFT calculations with
a larger supercell of $11 \times 12 \times 13$~a.u.$^3$ have verified
that the finite-size effects are within our statistical error bars
($\approx 0.1$~eV). Again, we see that the agreement with experiment
is very good.
Finally, we comment briefly on the computational cost. As mentioned,
the use of planewaves for isolated molecules is somewhat
disadvantageous even at the density-functional level, because of the
need for large supercell sizes to reduce the spurious interactions
between the images of the molecule. The number of planewaves, $M$, is
proportional to the supercell volume, and the computational cost
scales with $M$ as $ M \ln M $. (In addition, it scales quadratically
with the number of electrons.) As a result, these planewave AF QMC
calculations are computationally rather demanding, especially with
transition metal oxides. For instance, the ground-state energy of the
MnO molecule in Fig.~\ref{fig_MnO} at a single Trotter time step of $\tau =
0.008\,\rm{Ry}^{-1}$ (with an error bar of $0.35$\,eV) was obtained
from running on an Intel XEON cluster (3.2 GHz) for about 150 hours
using 72 processors.
In summary, we have presented the first study of transition metal
oxide molecules by AF QMC. We have shown that the binding energies of
TiO and MnO calculated with the new phaseless AF QMC method
\cite{zhang_krakauer} are in good agreement with experiment, and are
comparable to the best results obtained from diffusion Monte Carlo
\cite{wagner_mitas}. It is encouraging that a trial wave function of
only DFT single Slater determinants was sufficient for the phaseless
QMC method to reach this accuracy. Together with previous results for
$sp$-bonded systems \cite{zhang_krakauer,CPC05}, the present study
indicates that the phaseless method is a robust QMC
method. Complementary to standard DMC, it offers a promising approach
for the computation of correlation effects in real materials.
\section{Acknowledgments}
We would like to thank E.~J.~Walter for many useful discussions, and
for carrying out the LAPW calculations. This work was supported by
ONR Grants N000149710049 and N000140110365, NSF, and DOE's
Computational Materials Science Network.
Computations were carried out in part at the Center for Piezoelectrics by
Design (CPD), the SciClone Cluster at the College of William and Mary,
the National Center for Supercomputing Applications (NCSA), and the San
Diego Supercomputer Center (SDSC).
\section{Introduction}
A survey of galaxies at high redshift is a direct approach to understanding
the formation and evolution of galaxies.
Over the past decade, the color selection technique utilizing
the blue UV continuum slope and the Lyman break
of star-forming galaxies has uncovered the so-called Lyman Break
Galaxies (LBGs; e.g., \cite{Ste03})
at $z\sim3$ ($\sim$2 Gyr after the big bang).
Various studies have since revealed their detailed properties,
such as the UV luminosity function (UVLF) \cite{Ste99},
UV spectroscopic features (e.g., \cite{Shap03}), and
stellar populations (e.g., \cite{Saw, Pap01}),
opening a new era in the understanding of galaxies at high redshift.
As a next step, a survey for LBGs at higher redshift is required.
We focused on redshift 5 because it is
$\sim$1 Gyr earlier than $z\sim3$ (comparable to the maximum age
estimated for LBGs at $z\sim3$), and it is
the highest redshift at which the secure two-color Lyman Break
selection using standard optical bands can be applied.
We have obtained deep and wide $V$-,
$I_C$-, and $z^{\prime}$-band images with Subaru \cite{Iye04} and
Suprime-Cam \cite{Miya} in/around the GOODS-N field and the J0053+1234 field
(\cite{Iwa03}; Iwata et al. 2005 in prep.)
and successfully constructed one of the largest samples of LBG
candidates at $z\sim5$ ($\sim$1000 galaxies with $z'<26.5$) among
similar surveys (\cite{Lehn03, Ouchi04a, Dick04}).
Now we investigate properties of these LBGs at $z\sim5$.
Their statistical and photometric properties, such as the UVLF and
rest-frame UV-to-optical color, are presented and discussed in Iwata's
contribution in these proceedings (see also \cite{Iwa03}).
In this paper, we report the results of spectroscopic observations of
our LBGs at $z\sim5$ and discuss their UV spectroscopic features.
Throughout this paper, we adopt flat $\Lambda$ cosmology, $\Omega_M=0.3$,
$\Omega_{\Lambda}=0.7$, and $H_0=70$[km s$^{-1}$Mpc$^{-1}$].
The magnitude system is based on AB magnitude.
\section{Observations}
We performed optical spectroscopy of a subset of our LBG sample in the GOODS-N
field and
the J0053+1234 field using the multi-object spectroscopy (MOS) mode of the Faint
Object Camera and Spectrograph (FOCAS; \cite{kashi}) attached to the
Subaru Telescope.
Spectroscopic targets were selected from the photometric catalog of our
survey for LBGs at $z\sim5$.
Details of the imaging observations and color selection are described in
\cite{Iwa03} and Iwata et al.\ (2005, in prep.).
The main spectroscopic targets are our LBG candidates brighter than
$z'=25.0$ mag.
Since one mask of FOCAS MOS covers a field of view 6$^{\prime}$ in
diameter, we designed the MOS masks to contain as many main targets as
possible in each MOS field.
So far, we have observed four MOS fields: three masks in
the GOODS-N field and one mask in the J0053+1234 field, which together
contain 24 bright targets.
We also included as many faint LBG candidates ($z' \geq 25.0$ mag)
as possible in each mask.
Spectroscopic observations were made in 2003 and 2004 under clear conditions.
We used the grism of 300 lines/mm blazed at 7500\AA\ and
the SO58 order cut filter.
This setting gave wavelength coverage from 5800\AA\ to 10000\AA,
depending on the slit position on the mask.
The MOS slit widths were
fixed to be 0.$^{''}$8, giving a spectral resolution of R$\sim$700,
as measured from night-sky emission lines.
The exposure time of each frame was 0.5 hours, and the total effective
exposure time was $5-6$ hours, set so as to detect the continuum of the
main targets.
The seeing during the observing runs was
$\sim$0.$^{\prime\prime}$6 -- 0.$^{\prime\prime}$8.
\section{Results}
Among the 24 main targets, we identified 9 objects as LBGs at $z\sim5$.
Examples of the resulting spectra are shown in Figure 1.
\begin{figure}
\vspace*{1.25cm}
\begin{center}
\epsfig{figure=f1a.eps,width=6.5cm}
\epsfig{figure=f1b.eps,width=6.5cm}
\end{center}
\vspace*{0.25cm}
\caption{
Examples of spectra of LBGs at $z\sim5$. The flux scale is $F_{\lambda}$,
normalized to the continuum level. The sky spectrum is shown in the lower
panel of each figure, and atmospheric absorption bands are shown as vertical
hatched regions.
}
\end{figure}
The redshifts of these objects were confirmed from the continuum
depression shortward of redshifted Ly$\alpha$ and from line features such as
Ly$\alpha$ emission and low-ionization
interstellar (LIS) metal absorption lines, which are
characteristic features of nearby starburst galaxies
(e.g., \cite{Hek98}) and LBGs (e.g., \cite{Shap03, Ste96a,frye}).
Intriguingly, these bright LBGs generally show
no or only weak Ly$\alpha$ emission and
relatively strong LIS absorption lines, though the sample size is still small.
Figure 2 presents the composite spectra of bright LBGs at $z\sim5$
(thick line: \cite{Ando04}) and LBGs at $z\sim3$ (thin line:
\cite{Shap03}) for comparison.
\begin{figure}
\vspace*{1.25cm}
\begin{center}
\rotatebox{270}{\epsfig{figure=f2.ps,width=6.5cm}}
\end{center}
\vspace*{0.25cm}
\caption{Composite spectrum of LBGs at $z\sim5$ (thick line;
\cite{Ando04}) and that of LBGs at $z\sim3$ (thin line; \cite{Shap03}).
Main line features are shown as vertical dashed lines.
}
\end{figure}
The average rest-frame equivalent widths of Ly$\alpha$ and three LIS
absorption lines (SiII $\lambda$1260,
OI+SiII $\lambda$1303, and CII $\lambda$1334) of
these bright LBGs at $z\sim5$ are $5.9$\AA\ and $-2.6$\AA, respectively.
This Ly$\alpha$ EW is small
considering that Ly$\alpha$ emission is often seen in LBGs at $z\sim3$
and that more than 1/4 of them show strong
(EW$_{\rm rest}>$20\AA) Ly$\alpha$ emission.
The average EW of the three LIS absorption lines is stronger than
that of LBGs at $z\sim3$ ($-1.8$\AA; \cite{Shap03}).
Assuming the local relation between LIS absorption strength and
metallicity of \cite{Hek98}, we estimate a metallicity of
12$+$log(O/H)$\sim8.0$ (1/5 solar).
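For reference, the observed equivalent widths are converted to the rest
frame via the standard relation
\begin{equation}
{\rm EW}_{\rm rest} = \frac{{\rm EW}_{\rm obs}}{1+z},
\end{equation}
so at $z\sim5$ the observed widths are a factor of $\sim6$ larger than
the rest-frame values quoted above.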
We also found a velocity offset of the peak of the Ly$\alpha$ emission
with respect to the average of the LIS lines: for five objects the
Ly$\alpha$ peaks are redshifted by $300 - 700$ km s$^{-1}$ relative to
the LIS absorption.
Similar velocity offsets have also been reported in LBGs at $z=3\sim4$
(e.g., \cite{Shap03, frye}) and may be related to a large-scale outflow.
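The quoted offsets follow from the measured line redshifts in the usual
way,
\begin{equation}
\Delta v \simeq c\,\frac{z_{{\rm Ly}\alpha}-z_{\rm LIS}}{1+z},
\end{equation}
where $z$ is the systemic redshift of the galaxy.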
The remainder of the main spectroscopic sample could not be identified
because of low S/N, but we found 2 objects among them to be
possible elliptical galaxies at a foreground redshift ($z\sim1$) from
continuum features such as the 4000\AA\ break.
Besides the main targets, we confirmed 2 objects among the faint bonus
targets with $z'>25.0$ to be LBGs at $z\sim5$ from their strong and
asymmetric Ly$\alpha$ emission lines, though LIS absorption was not seen
owing to the low continuum S/N.
In contrast to the result for the main, bright targets described above,
the Ly$\alpha$ emission of these
faint LBGs is quite strong (average EW$_{\rm rest}\sim$ 58.5\AA),
and such objects would be detected as Ly$\alpha$ emitters (LAEs).
This result suggests that the Ly$\alpha$ emission EW of LBGs at $z\sim5$
depends on UV luminosity.
Figure 3 shows the positions of identified objects in the
$V - I_C$ versus $I_C - z'$ two-color diagram.
Filled circles show LBGs at $z\sim5$ (7 bright objects as large
circles and 2 faint objects as small ones), and crosses show foreground
objects.
In order to examine our color selection criteria, we also observed
some objects located outside of, but close to, the selection boundaries.
As a result, four objects were identified as Galactic M stars, which
are also plotted as crosses in Figure 3.
These results suggest that our selection criteria for LBGs at $z\sim5$
are reasonable.
\begin{figure}
\vspace*{1.25cm}
\begin{center}
\rotatebox{270}{\epsfig{figure=f3.ps,width=6.5cm}}
\end{center}
\vspace*{0.25cm}
\caption{
Positions of identified objects in the two-color diagram. Our color
selection criteria \cite{Iwa03} for LBGs at $z\sim5$
are indicated by thick lines.
Filled circles represent the objects confirmed to be at $z\sim5$
(7 bright objects as big circles and 2 faint objects as small ones).
Crosses show objects identified to be foreground objects
(Galactic M stars and possible ellipticals at $z\sim1$).
A dashed (a dot-dashed) line represents a color track of a model LBG
spectrum with the $E(B-V)=0.0$ mag ($E(B-V)=0.4$ mag) from
\cite{Iwa03}. A dotted line refers to a color track of an elliptical
galaxy \cite{cww}. Small open pentagons indicate the colors of A0 -- M9
stars calculated based on the library by \cite{Pick}.
}
\end{figure}
\section{Discussions}
We found a sign of luminosity dependence of the Ly$\alpha$ emission EW in
LBGs at $z\sim5$: the lack of strong (EW$_{\rm rest}>$20\AA) Ly$\alpha$
emission in bright LBGs at $z\sim5$.
In order to examine this trend, we compiled past results of
spectroscopy of galaxies at similar redshifts.
Figure 4 shows the EW$_{\rm rest}$
of Ly$\alpha$ emission against the rest-frame UV absolute magnitude.
Filled circles show our results and
filled squares are the results of spectroscopies of galaxies at $z=4.4-5.9$
\cite{Lehn03, Spi99, Wad99,Daw02, Dey98}.
We also show the SFR estimated from UV absolute
magnitude using the relation by \cite{Ken98}\footnote{In order to
derive UV absolute magnitudes and the SFR,
we assumed a continuum slope $\beta$ of $-1$, which is a typical value for
LBGs at $z\sim3$.}.
This figure clearly shows that there are no UV-luminous LBGs at $z\sim5$ with
strong (EW$_{\rm rest}>$20\AA) Ly$\alpha$ emission, while UV-faint
ones tend to have strong Ly$\alpha$ emission.
In addition, there seems to be a UV magnitude threshold for LBGs with strong
Ly$\alpha$ emission around
$M_{1400}\sim-$21.5 mag which is almost the same as the $M_{\ast}$
magnitude of UV luminosity function of our $z\sim5$ LBG sample
\cite{Iwa03}.
In Figure 4, crosses show Lyman $\alpha$ emitters (LAEs) at $z\sim5.8$
from narrow-band imaging data \cite{Aji03}.
The EW distribution of the LAEs is similar to that of faint LBGs with strong
Ly$\alpha$ emission, suggesting that the fraction of LAEs to LBGs changes
with UV luminosity in the $z\sim5$ universe.
This is consistent with the past result of \cite{Ouchi03} that the number
ratio of LAEs to LBGs at $z\sim5$ decreases with increasing UV luminosity.
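The conversion behind Figure 4 from apparent magnitude to $M_{1400}$ and
SFR can be sketched as follows (our own illustrative code, not that used
in the original analysis; for simplicity a flat $f_\nu$ spectrum is
assumed for the $k$-correction instead of the $\beta=-1$ slope adopted
above):
\begin{verbatim}
import numpy as np
import astropy.units as u
from astropy.cosmology import FlatLambdaCDM

cosmo = FlatLambdaCDM(H0=70, Om0=0.3)     # adopted cosmology

def m1400_and_sfr(m_app, z):
    # absolute AB magnitude, with k-correction -2.5 log10(1+z)
    m_abs = m_app - cosmo.distmod(z).value + 2.5 * np.log10(1.0 + z)
    # AB zero point: f_nu = 10**(-0.4 (M + 48.6)) erg/s/cm^2/Hz at 10 pc
    area = 4.0 * np.pi * (10.0 * u.pc).to(u.cm).value ** 2
    l_nu = area * 10.0 ** (-0.4 * (m_abs + 48.6))    # erg/s/Hz
    sfr = 1.4e-28 * l_nu     # Kennicutt (1998) UV calibration
    return m_abs, sfr

# a 24.5 mag LBG at z=5: M1400 ~ -21.9, SFR ~ 35 Msun/yr
print(m1400_and_sfr(24.5, 5.0))
\end{verbatim}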
\begin{figure}
\vspace*{1.25cm}
\begin{center}
\rotatebox{270}{\epsfig{figure=f4.ps,width=6.5cm}}
\end{center}
\vspace*{0.25cm}
\caption{
Rest-frame EWs of Ly$\alpha$ emission vs. absolute
magnitude at rest-frame 1400\AA\ for galaxies at $z\sim5$.
Filled circles show our spectroscopic results and
filled squares show results from \cite{Lehn03}
and serendipitously discovered objects at
$z\sim5$ \cite{Spi99, Wad99, Daw02, Dey98}.
Crosses represent LAEs at $z=5.8$ \cite{Aji03}
obtained from narrow-band imaging.
SFRs estimated from the UV absolute magnitude with the relation of
\cite{Ken98} are also shown.
}
\end{figure}
If the absence of strong Ly$\alpha$ emission is due to dust extinction,
luminous LBGs at $z\sim5$ may have chemically evolved to some extent.
The presence of strong LIS absorption and the estimated metallicity
($\sim$1/5 solar) also support this idea.
It seems that luminous LBGs at $z\sim5$ started star formation
relatively earlier than faint ones.
Further, results of clustering analyses of LBGs at $z\sim5$ show that bright
LBGs have a larger correlation length than faint ones, suggesting that
bright LBGs reside in more massive dark halos (\cite{Ouchi04b}; see also the
contribution by Iwata et al. in these proceedings).
This fact also implies that bright LBGs at $z\sim5$ have experienced
star formation earlier than faint ones, i.e., biased star formation in
the early universe.
Of course, the velocity structure and distribution
of HI gas (including dust) in/around a galaxy also affect the strength
of the Ly$\alpha$ emission and its profile.
We found asymmetric Ly$\alpha$ emission lines and velocity
offsets between the Ly$\alpha$ emission and LIS
absorption lines in part of our sample,
which implies the presence of a large-scale motion of
the neutral gas in/around LBGs at $z\sim5$.
Thus we cannot rule out the possibility that gas geometry
and kinematics affect the EW and profile of the Ly$\alpha$ emission.
In any case, a larger spectroscopic sample of LBGs at $z\sim5$
is needed to study their spectroscopic features and to examine the
luminosity dependence presented in this paper,
which would give clues to understanding the evolution of galaxies in the
early universe.
\acknowledgements{
This work is based on data collected at the Subaru Telescope, which is
operated by the National Astronomical Observatory of Japan.
We are grateful to the FOCAS team, especially support astronomer
Youichi Ohyama, and all the staff of the Subaru Telescope
for their dedicated support.
MAs are supported by a Research Fellowship of the Japan Society for the
Promotion of Science for Young Scientists.
}
\section{Introduction}
\label{sec:intro}
Among the most dramatic structures in the interstellar medium (ISM) of
disk galaxies are large shells and supershells. These objects are
generally observed as voids in the neutral hydrogen (\HI) distribution
surrounded by swept-up walls. In some nearby galaxies, like the Large
and Small Magellanic Clouds, the \HI\ structure of the disk is
dominated by shells and supershells
\citep{kim99,hatzidimitriou05}. The ISM of the Milky Way is also
riddled with tens, if not hundreds, of \HI\ shells
\citep[e.g.][]{heiles79,heiles84,mcgriff02a,ehlerova05}. It is thought
that most \HI\ shells are formed by stellar winds or supernovae, or
the combined effects of both. This explanation is particularly
convincing for smaller shells, with diameters of a few tens of parsecs
and formation energies on the order of $10^{52}$ ergs. However, the
larger shells, or supershells, are enigmatic. They seem to require
unreasonably large ($>10^{53}$ ergs) formation energies in order to
maintain expansion velocities of $\sim 20$ ${\rm km~s^{-1}}$\ at radii in excess of
a few hundred parsecs \citep{heiles84}. The stellar wind and
supernova from a given massive star are capable of injecting $\sim
10^{51}$ ergs of energy into the ISM, suggesting that many hundreds
and even thousands of massive stars are required to power the most
energetic shells. In this case, it is expected that multiple
generations of star formation are required, and although numerous
studies have searched for evidence of triggered star formation
associated with \HI\ shells there are few examples \citep{oey05}.
\HI\ supershells play a significant role in the energy budget of the
ISM and can also play a role in the exchange of matter between the
disk and halo. \HI\ supershells can grow large enough to exceed the
scale height of the \HI\ disk. In this case, the shell expands
rapidly along the density gradient away from the disk until it becomes
unstable and breaks out, venting its hot internal gas to the halo.
This ``chimney'' process may provide a mechanism for distributing hot
gas and metals away from the disk \citep*{dove00}. \HI\ shell
blow-outs are predicted for, and indeed observed in, a number of dwarf galaxies
where the gravitational potential of the disk is smaller than in large
spirals \citep{maclow99,marlowe95}. Occasionally chimneys are
observed in large spiral galaxies, with good examples being
NGC 891, NGC 253 and NGC 6946 \citep{rand90,howk97,boomsma05}. In the
Milky Way, very little is known about the impact of chimneys on the
formation, structure and dynamics of the halo. In fact, only a
handful of relatively small chimneys are known in the Milky Way,
e.g. the Stockert chimney \citep{muller87}, the W4 chimney
\citep{normandeau96,reynolds01}, the Scutum supershell
\citep{callaway00} and GSH 277+00+36 \citep{mcgriff00}. Together, the
known chimneys are not capable of providing the thermal energy
required to support the halo.
One of the largest \HI\ supershells in the Milky Way is GSH 242-03+37,
discovered by \citet{heiles79} in his seminal work on Galactic shells.
GSH 242-03+37 is located at $l=242\arcdeg$, $b=-3\arcdeg$, which is in
the direction of the so-called ``Puppis window'', an area of very low
visual extinction \citep{fitzgerald68}. The shell has an angular
diameter of $15\arcdeg$. Its kinematic distance is 3.6 kpc, implying
a physical diameter of $\sim 1$ kpc. \citet{heiles79} suggested that
GSH 242-03+37 is still expanding with an expansion velocity of $v_{\rm
exp} \approx 20$ ${\rm km~s^{-1}}$\ and from that he estimated an expansion
energy of $E_E \sim 1.6 \times 10^{54}$ ergs. Despite the impressive
size and implied energetics of this shell, there has been very little
follow-up work. \citet{stacy82} made a thorough study of the \HI\ in
the region $239\arcdeg \leq l \leq 251\arcdeg$ with the aim of
correlating \HI\ features with optical spiral tracers. The survey
concentrated mainly on smaller scale \HI\ features, and while it
mentioned GSH 242-03+37, no detailed images were available or
discussed.
Here we use new data from the Galactic All-Sky Survey (GASS) to study
the \HI\ supershell GSH 242-03+37. We present the highest angular
resolution ($\sim 15\arcmin$) and most sensitive images of the shell
published so far. We show that GSH 242-03+37 has in fact broken out of
both sides of the Galactic plane through three large channels. We
show that these chimney openings are capped at high Galactic latitude
with very narrow, low surface brightness filaments. We discuss the
chimney caps and their long-term fate in \S \ref{sec:chimneys}. In
\S\ref{subsec:otherwavelengths} we examine archival X-ray (ROSAT) and
H-alpha (SHASSA) data to explore the various gas phases associated
with the chimney. In \S \ref{sec:stars} we discuss the stellar
content of the shell and in \S \ref{sec:minivoid} we explore the
possible association of a small shell that appears inside GSH
242-03+37.
\section{Observations and Analysis}
\label{sec:obs}
The \HI\ data presented here are from the first pass of the Parkes
Galactic All-Sky Survey (GASS; McClure-Griffiths et al., 2005, in
prep.). GASS is a project to image \HI\ at Galactic velocities
($-400~{\rm km~s^{-1}} \leq v_{\rm LSR} \leq +450~{\rm km~s^{-1}}$) for
the entire sky south of declination $0\arcdeg$. GASS uses the
Parkes multibeam to produce a fully sampled atlas of \HI\ with an
angular resolution of $15\arcmin$, spectral resolution of $0.8~{\rm
km~s^{-1}}$, and to an rms sensitivity of 80-90 mK. The survey will be
corrected for stray radiation effects, according to the method
described in \citet{kalberla05}, to ensure high reliability of the
\HI\ spectra. Observations for the survey began in January 2005 and
will continue through to 2007. When complete, GASS will be the first
fully sampled all-sky survey of \HI\ on sub-degree scales. The full
survey details, including its scientific goals, will be described in a
future paper. Here we briefly describe the observations and data
reduction techniques to allow assessment of the data presented.
GASS is conducted as an on-the-fly mapping survey, with each point in
the sky scanned twice. We use the Parkes multibeam
\citep{staveley-smith96}, which is a thirteen beam receiver package
mounted at prime focus on the Parkes Radiotelescope near Parkes NSW,
Australia. The thirteen beams of the multibeam are packed in a
hexagonal configuration with a beam separation on the sky of
$29\farcm1$ for the inner beams. On-the-fly mapping is performed by
scanning the telescope at a rate of 1~deg~min$^{-1}$, recording
spectra every 5 seconds. While scanning, the receiver package is
rotated by $19\fdg1$ with respect to the scan direction to ensure that
the inner seven independent beams make parallel tracks equally spaced
by $9\farcm5$ on the sky. Scans in both right ascension and
declination will be made for the full survey, although only a few RA
scans have been included in this paper. The declination scans are made at a
constant RA and are 8 deg long in declination. After a scan the
receiver package is offset in RA to perform an interleaved scan,
reducing the spacing between adjacent beam tracks to $4\farcm7$.
Spectra are recorded in a special correlator mode that allows 2048
channels across an 8 MHz bandwidth on all thirteen beams. In-band
frequency switching is used to allow for robust bandpass correction.
We switch every 5 seconds between center frequencies of 1418.8345 MHz
and 1421.9685 MHz. Bandpass calibration is done in near real-time
using the {\em Livedata} package, which is part of the ATNF subset of
the {\em aips++} distribution. The bandpass correction algorithm
employed was designed expressly for the GASS frequency-switched data.
It works on each beam, polarization and IF independently, performing a
robust polynomial fit to the quotient spectrum (one frequency divided
by the second frequency) after masking the emission by examining the
spectrum both spectrally and spatially. {\em Livedata} also performs the
Doppler correction to shift the spectra to the Local Standard of Rest
(LSR). Absolute brightness temperature calibration was performed from daily
observations of the IAU standard line calibration regions S8 and S9
\citep{williams73}.
Calibrated spectra are gridded into datacubes using the {\em
Gridzilla} package, also part of the ATNF subset of the {\em aips++}
distribution. The gridding algorithm used in {\em Gridzilla} is
described in detail in Barnes et al.\ (2001)\nocite{barnes01}. GASS
spectra were imaged using a weighted median technique with a cell size
of $4\arcmin$, a Gaussian smoothing kernel with a full width half max
of 12\arcmin, and a cutoff radius of $8\arcmin$. The effective
resolution of the gridded data is $\sim 15\arcmin$. The per channel
rms of the resulting image cubes near the Galactic plane is $\sim 120$
mK. These data are not corrected for stray radiation and may
therefore contain some low-level spurious features. For the data
presented here we have compared our images with the low resolution
stray radiation corrected Leiden/Argentine/Bonn survey
\citep{kalberla05,bajaja05} to verify features.
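The weighted-median gridding can be summarized schematically as follows
(a simplified sketch of our own, not the actual {\em Gridzilla}
implementation, which is described in detail by Barnes et al.\ 2001):
\begin{verbatim}
import numpy as np

def weighted_median(x, w):
    order = np.argsort(x)
    cw = np.cumsum(w[order])
    return x[order][np.searchsorted(cw, 0.5 * cw[-1])]

def grid_spectra(positions, spectra, cell_centers,
                 fwhm_arcmin=12.0, r_cut_arcmin=8.0):
    # For each output cell, take the weighted median of all input
    # spectra within the cutoff radius, with Gaussian weights.
    sig = fwhm_arcmin / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    cube = np.zeros((len(cell_centers), spectra.shape[1]))
    for i, c in enumerate(cell_centers):
        r = np.hypot(*(positions - c).T)
        sel = r < r_cut_arcmin
        w = np.exp(-0.5 * (r[sel] / sig) ** 2)
        for ch in range(spectra.shape[1]):
            cube[i, ch] = weighted_median(spectra[sel, ch], w)
    return cube
\end{verbatim}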
\section{Results}
\label{sec:results}
\citet{heiles79} cataloged GSH 242-03+37 with a low confidence
rating, suggesting uncertainty about the shell's veracity. With the
improved angular and spectral resolution of the GASS data, as well as
the availability of improved data visualization tools, we are
confident that this shell meets the three criteria for shell
identification given in \citet{mcgriff02a}, i.e.\ that the void is
well-defined over more than three consecutive velocity channels with
an interior to exterior brightness contrast of 5 or more, that the
void changes shape with velocity and that a velocity profile
through the shell shows a well-defined dip flanked by peaks.
In Figure \ref{fig:hishell} we show velocity channel images of the
shell as multiple panels. The shell is visible as the large void in
the center of the images, between LSR\footnote{All velocities are
quoted with respect to the kinematic Local Standard of Rest.} velocities
$v\approx 30$ ${\rm km~s^{-1}}$\ and $v\approx50$ ${\rm km~s^{-1}}$. Every fourth velocity
channel is displayed here to give an impression of the dynamic
structure in the shell. The first and last panels of
Fig.~\ref{fig:hishell} show the approximate front and back caps of the
shell. The shell extends over approximately 18 degrees in longitude
and 10 degrees in latitude. However, there are clear breaks on the
top and bottom of the shell, as seen in the velocity channel images.
These are indicative of chimney openings and will be discussed
thoroughly below. We find that the center of the shell is at a
slightly different location than given in \citet{heiles79}. We define
the center as the velocity of least emission in the spectral profile
through the shell center and the geometric center of the shell at that
velocity. Using these criteria, the center of the shell is at
$l=243\arcdeg$, $b=-1.6\arcdeg$, $v=+42$ ${\rm km~s^{-1}}$, notably different than
the coordinates implied by its name.
A velocity profile through the shell center is shown in the top panel
of Figure \ref{fig:profile}. The shell is the clearly defined dip in
the profile at $v\approx 40$ ${\rm km~s^{-1}}$. The front and back caps of the
shell are marked on Fig.\ \ref{fig:profile} and are apparent as the
bumps in the velocity profile at $v\approx 27$ ${\rm km~s^{-1}}$\ and $v\approx 57$
${\rm km~s^{-1}}$\ on both sides of the void. The shell is located at a Galactic
longitude where the rotation curve is relatively simple, allowing us
to translate radial velocity approximately into distance. The lower
panel of Fig.\ \ref{fig:profile} plots the velocity-distance relation
at $l=244\arcdeg$ from the \citet*{brand93} rotation curve, assuming
the IAU recommended values for the Galactic center distance, $R_0=8.5$
kpc, and LSR velocity, $\Theta_0 = 220$ ${\rm km~s^{-1}}$. From this relationship
the kinematic distance of the shell is 3.6 kpc, as was also found by
\citet{heiles79}. The shell is at a Galactocentric radius of $R_{\rm
g} = 10.7$ kpc. The error on the kinematic distance is on the order
of 10\% because of uncertainties in determining the central velocity
of the shell, random cloud-to-cloud motions in the ISM and errors in
the rotation curve. The radius of the shell along the plane is
$R_{\rm sh} = 565 \pm 65$ pc. At a distance of 3.6 kpc our resolution
is approximately 16 pc. Because of the large size of the shell it may
be elongated because of differential rotation in the Galactic plane
\citep{tenorio88}. This will distort the shell and affect its
lifetime as discussed below.
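The kinematic distance calculation can be illustrated with a flat
rotation curve, $\Theta(R)=\Theta_0$, which is a simplification of the
\citet{brand93} curve (so the numbers differ slightly from those quoted
above):
\begin{verbatim}
import numpy as np

R0, TH0 = 8.5, 220.0     # kpc, km/s (IAU values)

def kinematic_distance(glon_deg, vlsr):
    l = np.radians(glon_deg)
    # v_lsr = R0 sin(l) [Theta(R)/R - Theta_0/R0] with Theta(R) = Theta_0
    R = R0 * TH0 * np.sin(l) / (vlsr + TH0 * np.sin(l))
    # only the '+' root is physical in the outer Galaxy
    d = R0 * np.cos(l) + np.sqrt(R**2 - (R0 * np.sin(l))**2)
    return R, d

# gives R = 10.8 kpc and d = 3.9 kpc for GSH 242-03+37, close to the
# 10.7 kpc and 3.6 kpc obtained from the full rotation curve
print(kinematic_distance(244.0, 42.0))
\end{verbatim}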
The interior of the shell has extremely low brightness temperature
values when compared with the rest of the Galactic plane; the mean
brightness temperature in the shell interior is only $T_{\rm b} = 4$ K
with a standard deviation on the mean of $\sigma_{\rm T} = 1.5$ K.
Assuming the gas is optically thin, this implies a mean column density
of $N_{\rm H} = 1.3 \pm 0.6 \times 10^{20}~{\rm cm^{-2}}$ and a mean
internal \HI\ number density of $n_{\rm H}\sim 0.07~{\rm cm^{-3}}$ if
the shell is spherical.
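These values follow from the standard optically thin conversion,
\begin{equation}
N_{\rm H} = 1.823\times10^{18}\int T_{\rm b}\,dv~~{\rm cm^{-2}},
\end{equation}
with $T_{\rm b}$ in K and $v$ in ${\rm km~s^{-1}}$; taking $T_{\rm b}
\approx 4$ K over an assumed effective width of $\sim 18$ ${\rm km~s^{-1}}$\ for
the shell interior gives $N_{\rm H} \approx 1.3\times10^{20}~{\rm
cm^{-2}}$.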
\subsection{GSH 242-03+37 Physical Properties}
The physical properties of GSH 242-03+37, such as radius, mass,
expansion velocity, and expansion energy were estimated by
\citet{heiles79}. Our values are only slightly different from those.
Both our values and Heiles' values, where different, are given in
Table \ref{tab:params}.
Expansion velocities for shells are usually estimated as half of the
total measured velocity width, $\Delta v$, of the shell. The full
velocity width of GSH 242-03+37, through the center of the shell, is
approximately $\Delta v=25$ ${\rm km~s^{-1}}$. Because of the relationship between
distance and radial velocity, there is a complicated coupling of the
expansion velocity, $v_{\rm exp}$, and the velocity width due to the
line-of-sight physical dimension of the shell, $v_p$. A simplistic
way of de-coupling the expansion velocity and velocity width due to
physical size is to use the velocity gradient, $dv/dr$, to estimate
the contribution of the physical size to the total velocity width.
Again using the \citet*{brand93} rotation curve, we find that at
$v=42$ ${\rm km~s^{-1}}$\ along this line of sight $dv/dr \sim 10~{\rm
km~s^{-1}~kpc^{-1}}$. If the diameter of the shell along the
line-of-sight is comparable to its diameter in the plane of the sky,
then $v_p \sim 10~{\rm km~s^{-1}}$. We then make the simplifying
assumption that the total velocity width is $\Delta v \approx 2v_{\rm exp}
+ v_p = 25$ ${\rm km~s^{-1}}$, implying $v_{\rm exp} \approx 7$ ${\rm km~s^{-1}}$.
The expansion energy, $E_E$, of a shell is defined by \citet{heiles79}
to be the equivalent energy that would have been deposited at the
center of the shell to account for the observed radius and expansion.
The expansion energy, based on the calculations of \citet{chevalier74}
for supernova expansion, is $E_E = 5.3 \times
10^{43}\,n_0^{1.12}\,R_{\rm sh}^{3.12}\,v_{\rm exp}^{1.4}$, where
$n_0$ is the ambient density measured in ${\rm cm^{-3}}$, $R_{\rm sh}$
is in parsecs, $v_{\rm exp}$ is in ${\rm km~s^{-1}}$. This equation makes the
extreme simplifying assumption that the ambient medium, $n_0$, into
which the shell is expanding, is homogeneous with constant density. We
know that this cannot be true on small scales, but for very large
shells the density variations largely average out and the equation
provides a reasonable standard energy estimate with which to compare
shells. For GSH 242-03+37, assuming $n_0 \approx 1~{\rm cm^{-3}}$,
the expansion energy is $E_E\sim 3.1\times 10^{53}$ ergs. Another
limitation of the expansion energy equation is that it does not
account for energy lost to high latitudes by shell break-out.
Therefore, for GSH 242-03+37 the expansion energy is only a lower
limit. We note that our expansion energy estimate is about a factor
of 10 lower than the value quoted in \citet{heiles79}. The difference
between our value and Heiles' can be accounted for by our assumption
of an ambient density of $n_0 \approx 1~{\rm cm^{-3}}$, whereas
\citet{heiles79} uses $n_0 \approx 2~{\rm cm^{-3}}$, and from the
lower expansion velocity estimated here. If the average energy output
of a single O or B star via its stellar winds and supernovae is $\sim
10^{51}$ ergs, then more than 300 massive stars are required to expand
the shell to its current size. There are no known coeval stellar
clusters of that size in the Milky Way, which suggests that GSH
242-03+37 was formed through the effects of multiple generations of
massive stars.
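For reference, inserting the adopted values into the expansion energy
relation gives
\begin{equation}
E_E = 5.3\times10^{43}\,(1)^{1.12}\,(565)^{3.12}\,(7)^{1.4}
\approx 3\times10^{53}~{\rm ergs}.
\end{equation}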
Age estimates for \HI\ shells are also fraught with large
uncertainties. Unless a powering source, such as an OB association,
that can be aged independently is associated with the shell it is
often impossible to accurately estimate the age of a shell. We can,
however, estimate a shell's dynamic age based on models of the
evolution of supernova remnants in the late radiative phase. In this
case the dynamic age, $t_6$ in units of Myr for a shell of radius,
$R_{\rm sh}$, given in pc and $v_{\rm exp}$ given in units of ${\rm km~s^{-1}}$, is
given by $t_6 = 0.29 \,R_{\rm sh}/v_{\rm exp}$ \citep{cioffi88}. For GSH
242-03+37, the dynamic age is $t\sim 21$ Myr. Comparing with other
known Galactic shells, GSH 242-03+37 is relatively old
\citep{heiles79,heiles84,mcgriff02a}. Ultimately the lifetime of
\HI\ shells is limited by the development of instabilities along
their walls and the onset of deformation and shear due to differential
rotation in the Galactic plane. Both effects become significant at
around 20 Myr \citep{dove00,tenorio88}. Differential rotation will
distort the shell so that it no longer appears spherical. At a
Galactocentric radius of 10.3 kpc this is a moderate effect; over 20
Myr a static shell of radius 1 kpc will distort to have an axial ratio
in the plane of $\sim $1.5:1.
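The dynamic age and the shear distortion quoted above can be
reproduced with the sketch below; the flat rotation curve with
$\Theta = 220$ ${\rm km~s^{-1}}$\ is our illustrative assumption, not a
value taken from \citet{brand93}:
\begin{verbatim}
# Dynamic age (Cioffi et al. 1988): t_6 = 0.29 R_sh / v_exp [Myr].
R_sh, v_exp = 565.0, 7.5                # pc; km/s (un-rounded v_exp)
print("t ~ %.0f Myr" % (0.29 * R_sh / v_exp))   # ~22 Myr, cf. ~21 in text

# Shear of a static 1-kpc-radius shell over 20 Myr, for a flat
# rotation curve (Theta = 220 km/s, assumed) at R_gal = 10.3 kpc.
Theta, R_gal = 220.0, 10.3              # km/s; kpc
dv = (Theta / R_gal) * 2.0              # velocity shear across 2 kpc [km/s]
t = 20e6 * 3.156e7                      # 20 Myr [s]
stretch = dv * 1e5 * t / 3.086e21       # azimuthal stretch [kpc]
print("axial ratio ~ %.1f:1" % ((2.0 + stretch) / 2.0))  # ~1.4-1.5:1
\end{verbatim}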
\subsection{GSH 242-03+37 Morphology}
\label{sec:morphology}
The three-dimensional morphology of GSH 242-03+37 is extremely
interesting. One of the noteworthy aspects of the morphology is that
the shell is not spherical, but has some ``scalloped'' structure along
the edges as can be seen in Figures \ref{fig:hishell} and
\ref{fig:annotate}. The bases of these scallops are separated by
several degrees, or $\sim 400-500$ pc. In three locations,
two at the bottom and one at the top, these arches are weaker or
absent presenting extensions from the Galactic plane towards the halo.
The three break-outs are at: $(l,b)=(245\fdg2,+3\fdg8)$, $(243\fdg0,
-8\fdg3)$ and $(236\fdg5,-8\fdg2)$. These break-outs or ``chimneys''
have some vertical structures that clearly separate them from the
ambient medium. At very low brightness temperatures ($\sim 1.5$ K)
the chimney structures are capped, each about 1.6 kpc from the center
of the shell. These caps are marked on Figure \ref{fig:annotate},
which is a channel image at $v=+45$ ${\rm km~s^{-1}}$. They are also visible in
the channel images shown in Figure \ref{fig:hishell}. Like the shell,
the caps are visible over $\sim 20$ ${\rm km~s^{-1}}$\ of velocity space, which
suggests that they are not only associated with the shell, but also
physically extended. The structure and nature of these caps will be
discussed in \S \ref{sec:chimneys}.
Figure \ref{fig:walls} shows a slice across the shell in the
longitudinal direction. The slice is taken at $v=39.4$ ${\rm km~s^{-1}}$,
$b=0\fdg20$. The shell is clearly empty and the walls
of the shell are very sharp. The walls show a brightness temperature
contrast of 10 to 20 from the shell interior to the shell wall over
one to two resolution elements, or $\sim 16 - 32$ pc. Referring to
similarly strong shell walls in GSH 277+00+36, \citet{mcgriff03b}
suggested that the sharpness of the walls is indicative of compression
as associated with a shock.
\citet{stacy82} noted one cloud in the shell interior at $(l,b,v)$ =
$(242\fdg2, -4\fdg6,36~{\rm km~s^{-1}})$, pointing out that it was
unusual as one of the only clouds within a largely evacuated area.
With the sensitivity of GASS it is clear that there is quite a lot of
structure inside the shell, though in general it is only at the $T_b
\sim 5$ K level. There are a number of interlocking rings throughout
the shell interior. Most of these are relatively circular and
noticeable over several velocity channels. The cloud cataloged by
\citet{stacy82} appears to belong to a thin ring structure near the
center of GSH 242-03+37, as shown in Figure \ref{fig:minishell}. This
ring is distinguished from the rest of the internal structure because
it encloses an interesting region that is even more evacuated than the
rest of the shell, appearing as a shell within the shell. This
``mini-shell'' is centered at $(l,b,v)$ = $(242\fdg9,-2\fdg3,45~{\rm
km~s^{-1}})$ and has an angular diameter of $\sim 3\fdg8$.
The interior has lower brightness temperatures than the main void; the
brightness temperatures associated with mini-shell are $\sim 2$ K
inside the void, about a factor of two lower than in the main void,
and $\sim 7 - 10$ K along the ``walls''. There are few regions in the
Galactic plane that show such low brightness temperatures and most are
associated with known \HI\ shells.
Another particularly noticeable feature is the compact cloud at
$(l,b,v)$ = $(240\fdg9,+4\fdg9,45~{\rm km~s^{-1}})$, which can be seen in
several of the channel images presented in Figure~\ref{fig:minishell}.
The unresolved cloud is surrounded by a ring of emission with a
diameter of 2\arcdeg. Also centered on this position, about 3 degrees
away, is an arc of emission. These features, though noticeable, have
no obvious physical explanation.
\subsection{Comparison with other wavelengths}
\label{subsec:otherwavelengths}
We have obtained publicly available data from the ROSAT all-sky survey
maps of the diffuse X-ray background \citep{snowden97}. We compared
the $1/4$ keV, $3/4$ keV and 1.5 keV emission with the
\HI\ distribution. There is clear evidence for $1/4$ keV
excess emission in the shell interior. It would be surprising,
however, if this excess were physically associated with the shell. At
$1/4$ keV, unity optical depth corresponds to \HI\ column densities of
about $1\times 10^{20}~{\rm cm^{-2}}$ \citep{snowden97}, giving a mean
free path for $1/4$ keV X-rays of only $\sim 65$ pc. Even though the
\HI\ column density through the Puppis window is low, at the distance
of GSH 242-03+37 the foreground \HI\ column density is significant at
$\sim 2 \times 10^{21}~{\rm cm^{-2}}$. Examining the \HI\ data cube,
we find that the morphology of the X-ray excess agrees very well with
the foreground \HI\ shell, GSH 242-04-05 \citep{heiles79}. It is
therefore likely that the $1/4$ keV X-ray excess emission traces hot
gas in the foreground object, not in GSH 242-03+37.
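For reference, the $\sim 65$ pc mean free path follows directly from
the unity-optical-depth column; the mean ambient density of
$0.5~{\rm cm^{-3}}$ used in the sketch below is our illustrative
assumption:
\begin{verbatim}
# Mean free path of 1/4 keV X-rays: tau = 1 at N_HI ~ 1e20 cm^-2
# (Snowden et al. 1997), divided by an assumed mean HI density.
N_tau1, n, pc = 1e20, 0.5, 3.086e18     # cm^-2; cm^-3 (assumed); cm
print("mfp ~ %.0f pc" % (N_tau1 / n / pc))   # ~65 pc
\end{verbatim}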
The mean-free path for X-rays in the higher energy bands is much
longer; 1.5 keV photons have a mean free path of $\sim 3$ kpc
\citep{snowden97}. Despite the potential detectability of higher
energy X-rays, we find no correlation between the \HI\ distribution
and the $3/4$ or 1.5 keV emission. The lack of harder X-ray
features is not unexpected, however, because $3/4$ and 1.5 keV
emission suggests very hot gas with temperatures on the order of
$10^7$ K. Old large shells, like GSH 242-03+37 are not expected to
have gas temperatures much higher than about $3.5 \times 10^6$ K
\citep{maclow88}.
We have also compared the \HI\ distribution to the velocity integrated
H$\alpha$ emission from the SHASSA survey of the Southern sky
\citep{gaustad01}. Because of the low extinction towards this shell
it should be possible to detect H$\alpha$ emission to the 3.6 kpc
distance of the shell. Unfortunately, because there is no velocity
discrimination in the SHASSA data it is very difficult to distinguish
foreground H$\alpha$ features from features at the distance of the
shell. We find very few obvious correlations between the \HI\ at
$v=35-50$ ${\rm km~s^{-1}}$\ and H$\alpha$ over the entire field of the shell. The
only potential candidate for agreement is a faint H$\alpha$ filament
that lies just inside the \HI\ mini-shell at $l=243\arcdeg$, $b=-2\fdg3$.
This feature, however, is in a confused region and no definitive
association can be made.
Finally, we have searched the FUSE catalog of O{\sc vi} detections
(Wakker et al. 2003) to determine if there is hot, high-latitude gas
at the same velocity as GSH 242-03+37. Two O{\sc vi} detections below
the plane of the Galaxy are near GSH 242-03+37: the first towards PKS
0558-504 at 11 ${\rm km~s^{-1}}$\ and the second towards NGC 1705 at 29
${\rm km~s^{-1}}$. However, neither of these pointings lie within the capped areas
of the chimney outflows and the O{\sc vi} is observed to be at lower
velocities than that of the shell. Although it is likely that the
shell is filled with hot gas, no connections between the FUSE O{\sc
vi} detections and the morphology of GSH 242-03+37 can be made at
this time.
\section{Discussion}
\label{sec:discussion}
GSH 242-03+37 is a remarkable structure. Its size, age, and
morphology are all at the extreme range of observed Galactic shell
parameters. The only known Galactic supershell to display similar
properties is GSH 277+00+36 \citep{mcgriff03b}. The similarities
between these two shells are intriguing. Both shells have radii of
350 - 500 pc and expansion energies on the order of $10^{53}$ ergs.
Both are located far from the Galactic center at Galactocentric
distances of $\sim 10$ kpc. The morphology of the two shells,
particularly their walls and extended $z$ structure, are strikingly
similar \citep[c.f.][Figure 1]{mcgriff03b}. Although GSH 277+00+36 is
a much less confusing structure than GSH 242-03+37, it exhibits
similar chimney break-outs above and below the plane, as well as the
same scalloped structure along the walls. The similar morphology of
these two objects is intriguing and leads us to speculate that their
large scale features may be dominated by common global Galactic
phenomena, for example the \HI\ scale height of the Galactic disk
(R. Sutherland 2005, private communication), rather than local
phenomena, such as the distribution of powering stars. It should be
possible to determine which effects dominate with high resolution MHD
simulations of supershell evolution. GSH 242-03+37, because it is
closer, offers some advantages over GSH 277+00+36. In GSH 242-03+37
we can observe very weak chimney caps and explore the stellar content.
Here we discuss some of the characteristics that are specific to GSH
242-03+37.
\subsection{Stellar content}
\label{sec:stars}
The stellar content of Galactic \HI\ supershells is largely unknown.
Most shells lie in the Galactic plane behind several magnitudes of
visual extinction, precluding most optical stellar surveys. Infrared
surveys, such as 2MASS, can probe to much greater distances than in
the optical but the line-of-sight confusion of stars makes it
difficult to associate specific stars with supershells. As it is, most of the
known OB associations in the Galactic plane are at distances of less
than 3 kpc. By contrast, most \HI\ shells are at distances larger
than 2 kpc because the confusion of gas at local velocities
makes detecting shells difficult. Fortunately GSH 242-03+37 is
located in the ``Puppis Window'' and benefits from numerous stellar
studies. \citet{kaltcheva00} recently compiled and updated the
$uvby\beta$ photometry of luminous OB stars in the Puppis-Vela region,
providing a nearly complete table of the stellar types and distances.
We have extracted from their table all stars with projected distances within 1
kpc of the center of the shell. For the region $234 \leq l \leq
252\arcdeg$, $-7 \leq b \leq +7\arcdeg$ there are 22 OB stars, of
which the earliest is an O9 type star. These stars are plotted as
crosses on Figure \ref{fig:stars}.
As one might expect for an old shell, there are very few massive stars
in the center of the object, with the notable exception of a small
cluster near $(l,b)=(242\arcdeg,-5\arcdeg)$. These stars belong to a
cluster of O and B-type stars, which appear to lie along the rim of
the internal mini-shell. The remaining stars are located near the
shell walls. Along the right wall the stars appear to lie near the
foci of the loops that characterize the scalloped structure of GSH
242-03+37. This coincidence seems to suggest that the stars near the
edge of the shell are contributing to the continued expansion of the
shell and determining the morphology of the walls. In addition, the
stars trace the left-hand edge of the upper chimney wall, extending to
$b=6\fdg5$ or $z\approx 400$ pc at a distance of 3.6 kpc. The mean
height for Galactic OB stars is only $90$ pc \citep{miller79}, so OB
stars at a height of 400 pc are unusual. One obvious explanation for
their position is that they were formed out of dense
material raised to high $z$ by the shell.
A potential tracer of the past population of massive stars is the
current population of pulsars. \citet{perna04} examined the number of
pulsars within several of the largest supershells in the Milky Way,
including GSH 242-03+37. They compared the numbers of known pulsars
with Monte Carlo simulations of the pulsar population to predict how
many pulsars should be associated with a shell, assuming a multiple
supernovae formation scenario for the shell. Although there are only
2 known pulsars within GSH 242-03+37, based on the current sensitivity
of pulsar searches they predicted that there should be 7 pulsars
within the shell. The known pulsars therefore provide very few
constraints on the progenitor stars that may have formed the
supershell.
\subsection{The Mini-shell}
\label{sec:minivoid}
Figure \ref{fig:minishell} shows the central region of GSH 242-03+37.
The mini-shell (described in \S \ref{sec:morphology}) is apparent
at $(l,b,v)$ = $(242\fdg9,-2\fdg3,45~{\rm km~s^{-1}})$. If it, too,
is at a distance of 3.6 kpc then the mini-shell has a radius of
$R_{\rm sh} \approx 120$ pc. There is no clear indication that the
structure changes size with velocity although it is persistent over at
least 7 ${\rm km~s^{-1}}$\ of velocity width. The shell is likely a stationary
structure or one with a very small expansion velocity. From our data
we can only estimate an upper limit to the expansion velocity of
$v_{\rm exp}\leq \Delta v /2 \sim 3$ ${\rm km~s^{-1}}$, which is less than the
turbulent velocity of warm \HI\ clouds, typically $\sim 7$
${\rm km~s^{-1}}$\ \citep{belfort84}. It is therefore impossible to distinguish
the shell's expansion from random cloud motions in the ISM. For a
stationary shell a rough estimate of the age of the shell can be made
from the sound crossing time for a 120 pc radius. Assuming $c_s \sim
10$ ${\rm km~s^{-1}}$\ for $T\sim 8000$ K \HI, the age is $t \sim R_{\rm sh}/c_s = 12$
Myr.
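A minimal sketch of this sound-crossing estimate, assuming a mean
molecular weight $\mu = 1.27$ for the neutral gas:
\begin{verbatim}
import math
k, mH = 1.38e-16, 1.67e-24              # erg/K; g
T, mu = 8000.0, 1.27                    # K; mean mol. weight (assumed)
cs = math.sqrt(5.0/3.0 * k * T / (mu * mH))
print("c_s ~ %.0f km/s" % (cs / 1e5))   # ~9 km/s, i.e. ~10 km/s
R, Myr = 120 * 3.086e18, 3.156e13       # cm; s
print("t ~ %.0f Myr" % (R / 1e6 / Myr)) # ~12 Myr for c_s = 10 km/s
\end{verbatim}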
How does the mini-shell relate to GSH 242-03+37? From the
\HI\ velocity channel images it appears as if the mini-shell is
located within GSH 242-03+37. It is difficult to understand how a
cool neutral shell could form within a mostly evacuated supershell.
We have compared our estimates of the internal density of the
supershell with the swept-up mass of the mini-shell showing that there
is not enough gas in the supershell to form the mini-shell. From the
measured column density along the mini-shell walls we estimate that
the typical \HI\ density in the walls is only $n_H \sim 0.4~{\rm
cm^{-3}}$, a factor of a few lower than typical ISM values
\citep{dickey90}. This gives a swept-up mass for the shell of $\sim 7
\times 10^4~{\rm M_{\odot}}$. If the shell were formed near the
center of the main void, where we estimated that the typical
\HI\ density is only $n_H \sim 0.07~{\rm cm^{-3}}$, then the total
amount of mass enclosed in a sphere of radius 120 pc is only $\sim 1.2
\times 10^4~{\rm M_{\odot}}$, a factor of six less than the swept-up
mass of the shell. If the excess mass in the mini-shell came from
swept-up ionized gas that had since recombined and cooled it would
imply an ionized density of $\sim 0.3~{\rm cm^{-3}}$. This is much
higher than the typical ionized densities in shell interiors, which
are usually on the order of $\sim 5 \times 10^{-3}~{\rm cm^{-3}}$
\citep{maclow88}. An alternative explanation is that the mini-shell
is a very old structure that was formed through the stellar winds of
the stars whose supernovae shocks eventually contributed to the large
shell. In this case, the cool walls of the mini-shell might have been
overtaken by the supernovae shocks, which subsequently expanded to
much larger radii. The size of the mini-shell is approximately
consistent with a late stage stellar wind bubble. A final suggestion
is that the mini-shell formed not near the center of the shell, as it
seems in the projected image, but at the edge or outside of the
supershell where the gas densities should be higher. In order to be
fully outside the main shell the mini-shell would require a systematic
velocity that is $\sim 5 - 10$ ${\rm km~s^{-1}}$\ different from the LSR motion at
its position. Given the dynamical nature of a large shell, that kind
of motion is not unreasonable. Although it seems unlikely that the
mini-shell formed in the interior of a swept-out supershell it is not
possible with our data to distinguish between the latter two
scenarios.
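The mass-budget comparison made above can be checked directly; the
densities are those estimated in the text:
\begin{verbatim}
import math
pc, mH, Msun = 3.086e18, 1.67e-24, 1.989e33
V = 4.0/3.0 * math.pi * (120 * pc)**3        # mini-shell volume [cm^3]
M_swept = 0.4 * V * mH / Msun                # wall density 0.4 cm^-3
M_avail = 0.07 * V * mH / Msun               # interior density 0.07 cm^-3
print("swept-up ~ %.0e Msun, available ~ %.0e Msun" % (M_swept, M_avail))
# Ionized density needed to make up the difference:
n_ion = (M_swept - M_avail) * Msun / (mH * V)
print("n_ion ~ %.1f cm^-3" % n_ion)          # ~0.3 cm^-3
\end{verbatim}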
Along the lower wall of the mini-shell lies a small cluster of six O
\& B type stars \citep*{kaltcheva00,kaltcheva01}. The cluster is at a
distance of $3.2\pm0.2$ kpc and contains the stars: LS 538 (B0II), 514
(B1II), 507 (B0.5 III), 534 (B3), 528 (O9III), and 511 (B2)
\citep{kaltcheva00}. These stars are coincident with the brightest
part of the mini-shell. Of these stars, only the later-type B2 and B3
stars are still on the main-sequence. If we assume coeval star
formation, the age of the cluster must be between $\sim 7$ and 15 Myr
\citep{schaller92}, which is comparable to the age estimate for the
mini-shell. There are two possible scenarios to explain the presence
of a stellar cluster on the edge of this internal shell. The first is
that the mini-shell formed from the stellar winds or supernovae of a
single very massive star that was part of the stellar cluster. This
seems unlikely if the stars are coeval, as the age of the stellar
cluster is comparable to or less than the age of the
shell. Alternately, the cluster may have formed out of gas compressed
on the edge of the expanding mini-shell, causing a new generation of
stars along the walls of the mini-shell. As far as we can
determine the ages of the shell and the cluster, this scenario seems most
likely.
Many studies have searched for evidence of triggered star formation in
supershells. In a recent study of the W3/W4 complex \citet{oey05}
found evidence for three generations of star formation. They point
out that, statistically, a hierarchical system of three or more
generations of star formation is more suggestive of a causal
relationship between the generations than a two generation system. In
the GSH 242-03+37 system there is evidence of multiple epochs of star
formation, associated respectively with the $\sim 21$ Myr old
supershell, the OB stars near the edges of the supershell, the
mini-shell and the stellar cluster along the edge of the mini-shell.
Whether these multiple epochs of star formation represent multiple
generations or simply continuing star formation is not clear.
Unfortunately, the ambiguous position of the mini-shell and the
uncertainties in the shell ages make it extremely difficult to
identify three unique generations of star formation in the system. We
therefore state that the system is {\em suggestive} of triggered star
formation, but that the evidence is not conclusive.
\subsection{Chimneys and Halo Clumps}
\label{sec:chimneys}
GSH 242-03+37 has three chimney break-outs with the dominant one
towards positive latitudes. All three chimneys are capped
approximately 1.6 kpc above the center of the shell. The morphology
of the positive latitude chimney in particular is reminiscent of the
models of chimney formation, such as
\citet*{maclow89,tomisaka86}. \citet{maclow89} showed that for a
superbubble expanding in a Galactic disk with an exponential
atmosphere the shell will extend far beyond the Galactic mid-plane,
developing a polar cap at $z\sim 1500$ pc. In the late stages of
shell evolution gravitational acceleration dominates the dynamics of
the slowly expanding shell and the polar cap should become
Rayleigh-Taylor unstable and fragment. It is this fragmentation that
allows hot gas filling the shell cavity to escape to the halo
\citep{dove00}. As seen in Fig.~\ref{fig:annotate} the upper cap of
GSH 242-03+37 shows some evidence for fragmentation. There are a
number of dense concentrations along the general arc of the cap, as
well as some regions where the arc appears absent. We note however,
that even with the sensitivity of GASS, the brightest features along
this arc are only a few Kelvin in brightness temperature, so very faint
portions of the arc may not be detectable.
If, as these observations indicate, the polar caps of expanding
supershells can reach heights of $z \sim 1500$ pc before fragmenting
it raises questions about the ultimate fate of the fragmented caps and
the expected size distribution for forming clouds. These questions
have been addressed in a general sense in a variety of numerical
simulations, \citep[e.g.][]{avillez00,avillez01}. \citet{avillez00},
for example, predicts that expanding shells should produce
condensations of size 5 - 100 pc on timescales of tens of millions of
years. The simulations explain the cloudlets in terms of expelled
chimney gas that has cooled and recombined. GSH 242-03+37, on the
other hand, seems to be producing cool clouds from the fragmented
shell, a process that presumably takes place well before expelled
chimney gas can cool.
The cool shell of GSH 242-03+37 appears to be fragmenting into
cloudlets with sizes of a few tens of parsecs. The linewidths of
these clumps are on the order of $\sim 10~{\rm km~s^{-1}}$, indicating
thermal temperatures of $T\sim 10^3$ K or lower. We measure column
densities (assuming they are optically thin) for these clumps of $N_H
\sim$ few $\times 10^{19}~{\rm cm^{-2}}$. If the clumps are roughly
spherical, then their average \HI\ number density is $n \sim 1~{\rm
cm^{-3}}$ and their \HI\ mass is $\sim 100~{\rm M_{\odot}}$. The
\HI\ mass of the clumps is well below the dynamical mass limit to be
gravitationally bound, which is $\sim 2 \times 10^5~{\rm M_{\odot}}$
for a 10 pc cloud with a 10 ${\rm km~s^{-1}}$\ linewidth.
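These clump parameters are mutually consistent, as the following
sketch shows; we adopt $N_H = 6\times10^{19}~{\rm cm^{-2}}$ for ``a
few'' $\times 10^{19}$ and a 10 pc radius:
\begin{verbatim}
import math
pc, mH, Msun = 3.086e18, 1.67e-24, 1.989e33
k, G = 1.38e-16, 6.67e-8
R, N_H = 10 * pc, 6e19                  # clump radius; column density
n = N_H / (2 * R)                       # mean density [cm^-3]
M = 4.0/3.0 * math.pi * R**3 * n * mH / Msun
print("n ~ %.1f cm^-3, M ~ %.0f Msun" % (n, M))   # ~1; ~100
# Thermal temperature upper limit from a 10 km/s FWHM linewidth:
dv = 1e6                                # cm/s
print("T <= %.0f K" % (mH * dv**2 / (8 * math.log(2) * k)))  # ~2e3 K
# Virial mass for a 10 pc cloud with a 10 km/s linewidth:
sigma = dv / 2.355
print("M_vir ~ %.0e Msun" % (5 * sigma**2 * R / (G * Msun)))  # ~2e5
\end{verbatim}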
Are these clumps in pressure equilibrium with their surroundings? The
thermal pressure of these clumps is $nT \sim 10^3~{\rm cm^{-3}~K}$.
The thermal pressure of the lower halo at a $z$-height of 1.6 kpc is
uncertain. By mass and volume the dominant component of the ISM at
$z\sim 1.6$ kpc is warm ionized gas \citep{ferriere01}, with a density
of $\sim 4\times 10^{-3}~{\rm cm^{-3}}$ and a temperature of $\sim
8000$ K \citep{reynolds91}. We also know from observations of the
soft X-ray background \citep{snowden98} and O{\sc vi} absorption
\citep{savage03}, among others, that there is a significant diffuse,
hot ($T\sim 10^{5-6}$ K) component to the lower halo gas. This
component is difficult to observe directly but from FUSE O{\sc vi}
measurements \citet{savage03} suggest that it is distributed as a
patchy, plane-parallel exponential with a scale height of $\sim 2.3$
kpc. The contribution of the hot, ionized medium to the thermal
pressure of the lower halo is a matter for debate with estimates
ranging from $\sim 10~{\rm K~cm^{-3}}$ \citep{boulares90} to $\sim
10^3~{\rm K~cm^{-3}}$ \citep{shull94}. It is generally agreed,
however, that the presence of this medium, as well as its patchy
nature are probably due to exhausting hot gas from supershells like
GSH 242-03+37. Therefore, the best estimate for the thermal pressure
of the ambient medium most likely comes from estimates of the thermal
pressure in the interior of an evolved supershell.
It is not trivial to estimate the thermal pressure in the interior of
GSH 242-03+37 because the shell has begun to break-out, releasing its
pressure and also because the shell is sufficiently evolved that
radiative cooling is important in the interior. We can, however,
roughly estimate the maximum internal thermal pressure before
break-out, assuming an adiabatic interior where the internal density
is dominated by mass evaporated from the cold dense shell
\citep{maclow88}. The internal thermal pressure for an evolved
spherical shell of age, $t_7 = t /10^7$ yr, formed with an energy
deposition rate of $L_{38} \approx E_E/t /(10^{38}~{\rm erg~s^{-1}})$
is $nT = (1.4 \times 10^4~{\rm K~
cm^{-3}})\,L_{38}^{14/35}n_0^{21/35}t_7^{-28/35}$ \citep{maclow88}.
If, once again, we assume that the ambient density is $n_0 \sim 1~{\rm
cm^{-3}}$, then the internal pressure is $\sim 1.4 \times 10^4~{\rm
K~cm^{-3}}$. Given that the shell has begun to break apart and that
its $z$-height far exceeds its dimension along the Galactic plane,
this pressure estimate is almost certainly too large but it provides a
useful limit to the pressure around the clumps. We may therefore
conclude that the clumps are likely in equilibrium or moderately
pressure-supported by the hot gas from the shell.
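Plugging the shell parameters derived earlier into this expression (a
sketch, with $L_{38}$ built from $E_E \sim 3.1\times10^{53}$ ergs and
$t \sim 21$ Myr):
\begin{verbatim}
E_E = 3.1e53                     # expansion energy [erg]
t = 21e6 * 3.156e7               # shell age [s]
L38 = E_E / t / 1e38             # energy deposition rate / 1e38 erg/s
n0, t7 = 1.0, 2.1                # ambient density; age in 1e7 yr
nT = 1.4e4 * L38**(14./35.) * n0**(21./35.) * t7**(-28./35.)
print("nT ~ %.1e K cm^-3" % nT)  # ~1.4e4 K cm^-3
\end{verbatim}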
We would like to know whether and for how long clouds created from the
fragmentation of a shell could survive against evaporation due to heat
flux from the ambient medium. If the ambient medium is indeed
dominated by warm ionized gas, with $T\sim 8000$ K, then the classical
thermal evaporation time is extremely long when compared to the 20 - 30 Myr
lifetime of the shell \citep{cowie77}. Even if the ambient medium is
dominated by the hot, diffuse gas of the shell interior, the
evaporation time is $\sim 35$ Myr. We can expand on this estimation
somewhat by following \citet{mckee77b} who consider the simple
scenario of cool, spherical clouds embedded in a hot medium where the
fate of the cloud is controlled by the effects of both radiative
losses and incoming heat flux. Those authors provide analytic
solutions to determine the critical radius, $R_{cr}$ below which
clouds evaporate and above which they condense material from the
surrounding medium. The critical radius is determined solely by the
pressure in the ambient medium. From Figure 2 in \citet{mckee77b} we
estimate that the critical radius for $T\sim 8000$ K gas with a
density of $\sim 4\times10^{-3}~{\rm cm^{-3}}$ is $\sim 3$ pc. The
clouds, however, are also affected by the very hot, tenuous medium
interior to the shell. Even if this gas is at $\sim 10^6$ K with
densities of $10^{-2}~{\rm cm^{-3}}$ the critical radius will be on
the order of 15 pc and even lower for smaller ambient densities.
Given these very crude assumptions it seems that the clouds should be
near or above the critical radius for all expected temperatures in
their environs. These clouds should therefore be relatively
long-lived and may even condense matter onto themselves.
Another interesting question is what size scales should we expect to
see represented from fragmenting caps. This of course depends on
the instability process responsible for the fragmentation. If the
shell cap fragments through classical Rayleigh-Taylor instabilities
all size scales are expected to be represented, but the growth
timescale of the instability is proportional to the square-root of the
size scale so we should expect to see the smaller scales first. In
addition, in the presence of a magnetic field a lower limit is applied
to the size of growing modes and also a fastest growing mode is
established \citep{mcgriff03b}. In that case, the size scales
observed in supershells may provide probes of the ambient medium.
In GSH 242-03+37 we observe large polar caps that appear to be
breaking into clumps with radii on the order of tens of parsecs. The
size, density, linewidths and $z$-height of these clumps are very
similar to halo cloudlets detected by \citet{lockman02a} and those
found in a recent study of the GASS pilot region (A.\ Ford et
al.\ 2005a, in prep.). The origin of the \citet{lockman02a} clouds is
still quite uncertain. If the polar caps of supershells like GSH
242-03+37 can break into small clumps with parsec or tens-of-parsec
size scales these clumps should be much longer lived than the shell
itself. Although these ideas are still rather speculative, the
similarity of properties suggests that it would be worth pursuing
the fragmenting shell model further. Some important questions to
answer will be: how long can the clouds survive? what is their $z$
distribution? and, given that they are massive compared to their
surroundings, how long before they will drop back to the Galactic
plane? We will address these questions in a future paper comparing
the properties of halo cloudlets with simulations of the long term
evolution of supershells (A.\ Ford et al.\ 2005b, in prep.).
\section{Conclusions}
\label{sec:conclusions}
We have presented new \HI\ images of the Galactic supershell GSH
242-03+37 from the Galactic All-Sky Survey (GASS). GSH 242-03+37 is
one of the largest shells in the Galaxy with a radius of $R_{sh} =
565\pm 65$ pc. We show that the supershell is broken at the edge of
the disk, both above and below the plane. The resultant structure has
three ``chimney'' openings that are capped with very narrow filaments
all situated $\sim 1.6$ kpc above the disk midplane. These ``caps''
are extremely reminiscent of the caps seen on expanding supershells in
simulations, such as those by \citet{maclow89} and \citet{tomisaka98}.
In supershell evolutionary theories these shells should become
Rayleigh-Taylor unstable and the polar caps break into clumps. The
caps of GSH 242-03+37 appear to show clump structures with sizes on
the order of 20 pc, which may indicate the onset of break-out. We
estimate that clouds formed through this break-out may survive longer
than the parent shell. The size, temperature and $z$-heights of these
clouds are similar to the halo cloudlets detected near the disk in the
inner Galaxy \citep{lockman02a}. We suggest that the Lockman
cloudlets may be formed through the fragmentation of high-$z$
supershell caps. All-sky surveys like GASS will provide an extremely
valuable database for testing this idea. GASS will have the sky
coverage, resolution and sensitivity necessary to detect and study the
relationship of small-scale structures in the halo to structures in the
Galactic disk.
We have searched catalogs of OB stars for massive stars in the
vicinity of GSH 242-03+37. We find very few stars at the center of
the shell, but there are 22 OB stars that lie near the internal edges
of the shell, of which the earliest is an O9 type star. There are six
OB stars with ages between 7 and 13 Myr \citep{kaltcheva01} that lie
along a small ``mini-shell'' that looks as if it is inside GSH
242-03+37. It is difficult to understand how a neutral shell could
form in the evacuated cavity of GSH 242-03+37. We therefore suggest
that it lies at the edge of the shell and that the OB stars were
formed in material compressed along the walls of the mini-shell. The
agreement between the main shell structure, the identification of 22
OB stars near the shell walls, the mini-shell and its corresponding
cluster of young OB stars are suggestive, but not conclusive evidence
for triggered star formation.
\acknowledgements The Parkes Radio Telescope is part of the Australia
Telescope which is funded by the Commonwealth of Australia for
operation as a National Facility managed by CSIRO. This research was
performed while D.J.P.\ held a National Research Council Research
Associateship Award at the Naval Research Laboratory. Basic research
in astronomy at the Naval Research Laboratory is funded by the Office
of Naval Research. D.J.P. also acknowledges generous support from NSF
MPS Distinguished International Research Fellowship grant
AST0104439. B.K.G. acknowledges the financial support of the Australian
Research Council through its Discovery Project program. We are
extremely grateful to Warwick Wilson, March Leach, Brett Preisig, Tim
Ruckley and John Reynolds for their efforts in enabling the GASS
correlator mode in time for our first observations.
The Earth is the only known example of a life-hosting world,
even though terrestrial exoplanets have been searched for since
the discovery of the first Earth-mass exoplanets by Wolszczan
\& Frail (1992). However, planets similar to the Earth, Venus
or Mars, in size, density or orbital parameters are still
beyond the reach of the present capabilities for planet
detection around normal stars.
Until now, mostly giant exoplanets have been discovered.
Remarkable progresses have been made recently with the
discovery of planets in the mass range of 14 to 21~Earth masses
(14 to 21~M$_\oplus$, see McArthur et al.\ 2004; Santos et al.\
2004), and most recently a $\sim$7.5~M$_\oplus$ planet orbiting
\object{GJ~876} (Rivera et al.\ 2005). We may speculate, then,
that smaller planets with sizes down to that of the Earth might
be observed in a near future. Among the 161
planets\footnote{From J.~Schneider's Extrasolar Planets
Encyclop\ae dia at
\texttt{vo.obspm.fr/exoplanetes/encyclo/encycl.html}. See also
the web page of the IAU Working Group on Extrasolar Planets at
\texttt{www.ciw.edu/boss/IAU/div3/wgesp}.} detected so far,
eight have been discovered or re-discovered as they were
transiting their parent star, producing a photometric
occultation. The most recently identified transiting planet is a
Saturn-mass planet orbiting \object{HD~149\,026}, a bright
$V=8.15$ G0\,{\sc iv} star (Sato et al.\ 2005). The first
discovered transiting giant exoplanet, HD~209\,458b (Henry et
al.\ 2000; Charbonneau et al.\ 2000; Mazeh et al.\ 2000), is
the object of intense investigations dedicated to
characterizing its hot atmosphere.
Probing planetary atmospheres by stellar occultations is an
effective method used for a lot of planets and their satellites
in the Solar System, from Venus to Charon (see, e.g., Elliot \&
Olkin 1996). With this technique, we can observe the thin
atmospheric ring surrounding the optically thick disk of the
planet: the limb. In the case of giant exoplanets, though, the
star is only partially occulted (1.6\% for the transiting
planet \object{HD~209\,458b}). The spectrum of the star light
transmitted and filtered by the lower and thick giant exoplanet
atmosphere consequently presents extremely weak absorption
features (from $10^{-3}$ to $10^{-4}$, see Seager \& Sasselov
2000; Hubbard et al.\ 2001; Brown 2001).
Despite the difficulties, such dim signatures were detected:
Charbonneau et al.\ (2002) measured the lower atmosphere of
\object{HD~209\,458b} as they detected a \mbox{$(2.32 \pm 0.57)
\cdot 10^{-4}$} photometric diminution in the sodium doublet
line of the parent star at 589.3~nm. However its upper
atmosphere, which extends up to several planet radii, shows
even larger signatures. Vidal-Madjar et al.\ (2003, 2004)
observed a \mbox{$15 \pm 4 \%$} absorption in the
Lyman~$\alpha$ (Ly$_\alpha$) emission line of
\object{HD~209\,458} at 121.57~nm as well as absorptions from
atomic carbon (\mbox{$7.5 \pm 3.5 \%$}) and oxygen (\mbox{$13
\pm 4.5 \%$}) in the upper atmosphere. In this work, we will
discuss the possibility to detect and to characterize the lower
atmospheres of exoplanets using signatures comparable in origin
to the one detected by Charbonneau et al.\ (2002).
The idea is to extend the use of transmission spectroscopy to
hypothetical Earth-size planets. We estimate that these
exoplanets present at least two orders of magnitude less signal
than gaseous giants, as the transit of the planet itself would
have a dimming of $\sim$10$^{-4}$ (the transit depth, $\Delta F
/ F$, where $F$ is the stellar flux, can be expressed as $(R_P
/ R_\star)^2$, with $R_P$ and $R_\star$ standing for the radii
of the planet and the star, respectively). The atmospheres of
Earth-size exoplanets should span over $\sim$100-km height
without considering potential upper atmospheres. Depending on
their transparency -- which would give an equivalent optically
thick layer of $\sim$10~km -- the expected occultations caused
by atmospheric absorptions should be $\sim$10$^{-7}$ to
$\sim$10$^{-6}$.
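These orders of magnitude follow from simple geometry, as in the
sketch below (Sun-like star, Earth-size planet, and a $\sim$10-km
equivalent optically thick layer, all illustrative values):
\begin{verbatim}
R_planet, R_star = 6.371e8, 6.96e10     # cm (Earth, Sun)
h = 10e5                                # opaque layer height [cm]
print("planet disk: %.1e" % ((R_planet / R_star)**2))        # ~8e-5
print("atmosphere : %.1e" % (2 * R_planet * h / R_star**2))  # ~3e-7
\end{verbatim}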
Earth-size planets are probably the most challenging objects to
detect with transmission spectroscopy. The orders of magnitude
given above, in fact, raise many questions: is it realistic to
seek for possible features that dim, with an instrumentation
that might or might not be available in a near future? What are
the strongest signatures we should expect? What kind of planet
could be the best candidate to look at?
We have developed a one-dimensional model of transmission at
the limb to give quantitative answers to these questions. Since
we use the stellar light to explore the planetary atmospheres,
we chose to focus on the wavelength range where the largest
number of photons is available, i.e., between 200 and
2\,000~nm. The model is described in Sect.~\ref{sec:model}. The
detectability of the selected atmospheres depends on the
signal-to-noise ratio (S/N) achievable with a space telescope
spectrograph. The constraints on idealized observations and the
method we used to calculate their S/N, are described in
Sect.~\ref{sec:S/N}. Finally, the results for the specified
cases are given and discussed in Sect.~\ref{sec:results}.
\section{Model description}
\label{sec:model}
\subsection{Geometric description of the model}
\label{sec:geometry}
The general geometry of a transiting system is described by
Brown (2001). In the present work we consider a stationary,
mid-transit occultation for the `in transit' phase, with a null phase
configuration (configuration~2 in Brown's Fig.~1), that is, the
planet is centered in the line of sight with respect to the
star. This configuration both maximizes the area of the
atmosphere that is filtering the stellar light and minimizes
any effects linked to the stellar limb darkening (Seager \&
Sasselov 2000).
The stellar light is filtered through the atmospheric limb of
the planet, as sketched in
Fig.~\ref{fig:transmission_geometry}. In the following we
detail the integration of the atmospheric opacity along a
stellar light path (or chord) through the limb of the planet.
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics{Fig1.ps}}
\caption{Sketch of the transmission of the stellar light
through the planetary limb. The planet itself, i.e. the `solid'
disk (in grey) is optically thick at all wavelengths. The
quantity $\mathrm{d}l$ is the elemental length along the line
of sight. In the calculation, we prefer to use the height $h$
instead of $l$. The scale in the figure has been distorted for
clarity.} \label{fig:transmission_geometry}
\end{figure}
\subsubsection{Opacity along the line of sight}
We calculate the total opacity of the model atmosphere,
$\tau_{\lambda}$, along a chord, parallel to the line of sight,
as the sum of the opacity of each species $i$ present in the
atmosphere, $\tau_{\lambda} = \sum_{i}\tau_{\lambda,i}$. We can
calculate the opacity along the chord as a function of its
impact parameter, $b$:
\begin{equation} \label{eq:opacity}
\tau_{\lambda,i}(b) = 2 \int_{0}^{+\infty} A_{\lambda,i}
\rho_i(h) \mathrm{d} l,
\end{equation}
where $A_{\lambda,i}$ is the absorption coefficient for the
species $i$ at the wavelength $\lambda$, expressed in
cm$^2$\,g$^{-1}$, and $\rho_i(h)$ is the mass density in
g\,cm$^{-3}$ of the species $i$ at an altitude $h$ in the
atmosphere.
Now, re-expressing Eq.~\ref{eq:opacity} as a function of the
height $z = h + R_P$, with $R_P$ being the planet radius, we
obtain:
\begin{equation}
\tau_{\lambda,i}(b) = 2 \int_{b}^{b_\mathrm{max}} A_{\lambda,i}
\rho_i(z-R_P)
\frac{z \mathrm{d} z}{\sqrt{z^2-b^2}},
\end{equation}
where $b_\mathrm{max}$ is the radial distance of the highest
atmospheric level we are considering. The method to estimate
$b_\mathrm{max}$ is presented in Sect.~\ref{sec:b_max}.
\subsubsection{Spectrum ratio}
\label{sec:spectrum_ratio}
Consider the stellar flux received by the observer during the
planetary transit to be $F_{\mathrm{in}}$, and the flux
received when the planet is not occulting the star to be
$F_{\mathrm{out}}$. Brown (2001) defined $\Re$ to be the ratio
between those two quantities, and $\Re'$ (the so-called
spectrum ratio) as $\Re'=\Re - 1$. Here, $\Re'$ is the sum of
two distinct types of occultations:
\begin{itemize}
\item The occultation by the `solid' surface of the planet,
optically thick at all wavelengths. Projected along the line of
sight, this is a disk of radius $R_P$ and the occultation depth
is simply $(R_P/R_\star)^2$.
\item The wavelength-dependent occultation by the thin ring of gaseous components
that surrounds the planetary disk, which can be expressed as $\Sigma_\lambda /
(\pi R_\star^2)$. The area, $\Sigma_\lambda$, is the
atmospheric equivalent surface of absorption and may be calculated as:
\begin{equation}
\Sigma_\lambda = \int_{R_P}^{b_\mathrm{max}} 2 \pi b \mathrm{d}
b \left[1 - \mathrm{e}^{-\tau_\lambda(b)}\right].
\end{equation}
\end{itemize}
The resulting spectrum ratio is:
\begin{equation} \label{eq:spectrum_ratio}
\Re'(\lambda) = - \frac{\Sigma_\lambda + \pi R_P^2}{\pi
R_\star^2}.
\end{equation}
Note that $\Re' < 0$.
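A minimal numerical sketch of Eqs.~(2)--(4) is given below, for an
isothermal toy atmosphere with a single, gray absorber; all numerical
values are illustrative and are not those of the models described
later:
\begin{verbatim}
import math
R_P, R_star = 6.371e8, 6.96e10      # cm
H, h_top = 8.0e5, 100e5             # scale height; top of atmosphere [cm]
b_max = R_P + h_top
A, rho0 = 1.0e2, 1.0e-8             # cm^2/g; g/cm^3 at h = 0 (toy values)

def tau(b, n=2000):
    # Opacity along a chord of impact parameter b (Eq. 2), midpoint rule.
    s, dz = 0.0, (b_max - b) / n
    for i in range(n):
        z = b + (i + 0.5) * dz
        rho = rho0 * math.exp(-(z - R_P) / H)
        s += A * rho * z / math.sqrt(z*z - b*b) * dz
    return 2.0 * s

# Equivalent absorbing area Sigma and spectrum ratio R' (Eq. 4).
nb, Sigma = 400, 0.0
db = (b_max - R_P) / nb
for i in range(nb):
    b = R_P + (i + 0.5) * db
    Sigma += 2.0 * math.pi * b * (1.0 - math.exp(-tau(b))) * db
print("R' = %.3e" % (-(Sigma + math.pi * R_P**2) / (math.pi * R_star**2)))
\end{verbatim}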
\subsection{Description of the atmospheric profiles}
Along a single cord, stellar photons are crossing several
levels of the spherically stratified atmosphere. We generate an
atmospheric model using the vertical profiles from Tinetti et
al.\ (2005a, 2005b) and Fishbein et al.\ (2003) for the Earth
and from the Venus International Reference Atmosphere (VIRA,
Kliore et al.~1985) for Venus. These atmospheric data include
the profiles of pressure, $p$, temperature, $T$, and various
mixing ratios, $Y$. The atmospheres are initially sampled in 50
levels, ranging from the ground level to an altitude of about
80~km for the Earth and about 50~km for Venus. Both profiles
stop below the homopause, so we assume hydrostatic equilibrium
for the vertical pressure gradient.
A useful quantity to describe atmospheres in hydrostatic
equilibrium is the scale height, $H$, i.e. the height above
which the pressure decreases by a factor $e$. The scale height
explicitly depends on the temperature, as $H = k \mathcal{N}_A
T / (\mu g)$, where $k$ and $\mathcal{N}_A$ are the Boltzmann's
and Avogadro's constants while $\mu$ is the mean molar mass of
the atmospheric gas. Since $g$ is the acceleration due to
gravity, $H$ also implicitly depends on the radius and the
density of the planet\footnote{To avoid confusion between the
density of the atmosphere and the mean density of the planet,
the latter is denoted $\rho_P$}. Consequently, less dense
objects are likely to have more extensive atmospheres, hence
they are easier to detect (Brown 2001).
Density and size of planets are therefore key parameters for
the present work. In order to estimate their influence, we test
a set of different planetary types ranging from the Titan-like
giant planet's satellite ($\rho_P \approx 2$~g\,cm$^{-3}$, $R_P
\approx 0.5$~Earth radius -- 0.5~R$_\oplus$) to the
`super-Earth' object ($\rho_P \approx 6$~g\,cm$^{-3}$, $R_P
\approx 2$~R$_\oplus$). For the physical properties of
plausible, theoretically predicted planets such as a
`super-Earth', we use the mass-radius relation model from
Dubois (2004) and from Sotin et al.\ (2005). Our atmospheric
model allows the re-scaling of vertical profiles depending on
the acceleration due to gravity of the planet and the
atmospheric pressure at the reference level.
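The dependence of $H$ on planetary density and radius can be made
explicit with a short sketch (homogeneous planets; a common $T =
288$~K, $\mu = 28.8$~g\,mol$^{-1}$ atmosphere is assumed purely for
illustration):
\begin{verbatim}
import math
k, NA, G = 1.38e-16, 6.022e23, 6.67e-8
R_E = 6.371e8                                  # Earth radius [cm]

def H_km(T, mu, rho, R):
    g = 4.0/3.0 * math.pi * G * rho * R        # surface gravity [cm/s^2]
    return k * NA * T / (mu * g) / 1e5         # scale height [km]

for name, rho, R in [("Titan-like", 2.0, 0.5*R_E),
                     ("Earth", 5.5, 1.0*R_E),
                     ("super-Earth", 6.0, 2.0*R_E)]:
    print("%s: H ~ %.0f km" % (name, H_km(288.0, 28.8, rho, R)))
# Titan-like: ~47 km; Earth: ~8 km; super-Earth: ~4 km.
\end{verbatim}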
\subsubsection{Molecular composition of the atmosphere}
Our simplified atmospheric profiles contain only the species
that may produce interesting spectral signatures in the chosen
wavelength range (0.2 to 2~$\mu$m), viz., water vapor (H$_2$O),
carbon dioxide (CO$_2$), ozone (O$_3$) and molecular oxygen
(O$_2$). Molecular nitrogen (N$_2$) has also been considered,
though lacking marked electronic transitions from the UV to the
near IR. Nevertheless, it is a major species in Earth's
atmosphere and it has a detectable signature via Rayleigh
scattering at short wavelengths.
We consider three types of atmospheres: (A) N$_2$/O$_2$-rich,
(B) CO$_2$-rich and (C) N$_2$/H$_2$O-rich cases. The first two
types can be associated with existing planetary atmospheres,
respectively Earth and Venus. The last type (C) could
correspond to the atmosphere of an Earth-mass volatile-rich
planet such as an `ocean-planet' described by L\'eger et al.\
(2004). The basis for building a `toy model' of an H$_2$O-rich
atmosphere is found in L\'eger et al.\ (2004) and Ehrenreich
et al.\ (2005b, see Sect.~\ref{sec:H2O-rich_atmo}).
Vertical gradients in the chemical composition and temperature
of each of these atmospheres are plotted in
Fig.~\ref{fig:A_profile} (N$_2$/O$_2$-rich),
Fig.~\ref{fig:B_profile} (CO$_2$-rich) and
Fig.~\ref{fig:C_profile} (N$_2$/H$_2$O-rich).
Table~\ref{tab:composition} summarizes the mean chemical
compositions of these model atmospheres.
\begin{table*}
\centering
\begin{tabular}{*{8}{c}}
\hline \hline
Type & $\mu$ (g\,mol$^{-1}$) & $Y_{\mathrm{N}_2} (\%)$ & $Y_{\mathrm{H}_2\mathrm{O}}$ (\%) & $Y_{\mathrm{CO}_2}$ (\%) & $Y_{\mathrm{O}_2}$ (\%) & $Y_{\mathrm{O}_3}$ (\%) & Used for models \\
\hline
N$_2$/O$_2$-rich & 28.8 & 78 & 0.3 & 0.03 & 21 & ${<10^{-3}}^*$ & A1, A2, A3 \\
CO$_2$-rich & 43.3 & 4 & $3\cdot10^{-4}$ & 95 & 0 & 0 & B1, B2, B3 \\
N$_2$/H$_2$O-rich & 28.7 & 80 & 10 & 10 & 0 & 0 & C1, C2, C3 \\
\hline
\end{tabular}
\caption{Mean volume mixing ratio of atmospheric absorbers for
the different types of model atmospheres considered.
\newline
(*) Ozone is only present in model~A1.}
\label{tab:composition}
\end{table*}
\subsubsection{Temperature profiles}
\label{sec:temperature_profile}
As mentioned above, we use the Earth and Venus vertical temperature
profiles as prototypes for N$_2$/O$_2$-rich and CO$_2$-rich
atmospheres (see Sect.~\ref{sec:choice}). Moreover we assume an
isothermal profile in the thermosphere, instead of the real
one. This is an arbitrary, but conservative choice, since the
temperature should on the contrary rise in the thermosphere
enhancing the atmosphere's detectability (see
Sect.~\ref{sec:temperature_effect}).
\subsubsection{Upper limit of the atmosphere}
\label{sec:b_max}
We set the profiles to extend up to a critical height
$b_\mathrm{max}$ from the centre of the planet, or
$h_\mathrm{max}$ from the surface (\mbox{$b_\mathrm{max} = R_P
+ h_\mathrm{max}$}). This limit corresponds to the altitude
above which the molecular species we considered (H$_2$O, O$_3$,
CO$_2$, O$_2$) are likely to be destroyed or modified either by
photo-dissociating or ionizing radiation, such as Ly$_\alpha$
or extreme-UV (EUV).
Therefore, the critical height corresponds to the mesopause on
Earth (at $\approx 85$~km). The column density of the
terrestrial atmosphere above that altitude, $\mathcal{N}_{\geq
85\mathrm{\,km}}$, is sufficient to absorb all Ly$_\alpha$
flux. In fact, as the number density of the atmospheric gas,
$n(h)$, decreases exponentially with height, we can simply
consider \mbox{$\mathcal{N}_{\geq 85\mathrm{\,km}} \propto
n_{85 \mathrm{\,km}} \cdot H_{85 \mathrm{\,km}}$}, where $n_{85
\mathrm{\,km}}$ and $H_{85\mathrm{~km}}$ are the density and
the scale height of the terrestrial atmosphere at 85~km,
respectively.
Similarly, we set the upper limit of a given atmosphere,
$h_\mathrm{max}$, to the altitude below which the
photo-dissociating photons are absorbed. We assume that
$h_\mathrm{max}$ is the altitude where the column density
equals that of the terrestrial atmosphere at 85~km, that is
\mbox{$n_{h_\mathrm{max}} \cdot H_{h_\mathrm{max}} = (n_{85
\mathrm{\,km}})_\oplus \cdot (H_{85 \mathrm{\,km}})_\oplus$}.
We determine $h_\mathrm{max}$ by scaling this equation.
Values of $h_\mathrm{max}$ for the different models are given
in Table~\ref{tab:models}. Similarly to neutral elements
absorbing light below $h_\mathrm{max}$, it is likely that
ionized elements are absorbing light above this limit, though
we do not include this effect in the model.
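For an isothermal, exponentially stratified atmosphere this scaling
condition can be solved in closed form; the numbers below (sea-level
density, Earth-like scale height, and the 85-km reference column) are
approximate and purely illustrative:
\begin{verbatim}
import math
# n(h) = n0 exp(-h/H); solve n(h_max) * H = (n * H) at 85 km on Earth.
n0, H = 2.5e19, 8.0e5              # cm^-3; cm (isothermal approximation)
target = 1.7e14 * 5.5e5            # approximate Earth column at 85 km
h_max = H * math.log(n0 * H / target)
print("h_max ~ %.0f km" % (h_max / 1e5))  # ~100 km for this toy profile
\end{verbatim}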
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics{Fig2.ps}}
\caption{Atmospheric profiles, A1. The plot shows the total
number density profile (thin solid line) of the atmosphere in
cm$^{-3}$, and that of the five species included in our model,
namely, N$_2$ (dotted line), O$_2$ (dash-dot-dot-dotted line),
H$_2$O (dashed line), CO$_2$ (long-dashed line) and O$_3$
(dash-dotted line). Temperature (thick line up to 80~km) and
mixing ratios of the different species are those of Earth.
Temperature is assumed to be constant above that height. The
thickest horizontal line shows the position of the cloud
layer.} \label{fig:A_profile}
\end{figure}
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics{Fig3.ps}}
\caption{Atmospheric profiles, B1. The legend is identical to
that in Fig.~\ref{fig:A_profile}. The temperature profile and
mixing ratios are that of Venus. The temperature is considered
to be constant above 50~km. Carbon dioxide is barely visible
because it is by far the major constituent so its line is
superimposed with that of the total density.}
\label{fig:B_profile}
\end{figure}
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics{Fig4.ps}}
\caption{Atmospheric profiles, C1. Same legend as in
Fig.~\ref{fig:A_profile} and Fig.~\ref{fig:B_profile}. The
temperature profile follows a dry adiabat in the first 10~km of
the atmosphere, until the point where $e \geq e_\mathrm{sat}$.
Next, it follows a steeper saturated adiabat up to 20~km high.
The temperature gradient is arbitrarily set to be isothermal
above this point. The cloud top (thickest line) is one scale
height above the higher point where $e \geq e_\mathrm{sat}$.
For reasons detailed in the text (see
Sect.~\ref{sec:H2O-rich_atmo}), this point corresponds to the
level where the temperature gradient becomes isothermal.}
\label{fig:C_profile}
\end{figure}
\subsubsection{Presence of clouds}
\label{sec:clouds}
In the wavelength range of interest, the surface of Venus is
almost completely hidden by clouds. Therefore, it seems
reasonable to model these types of clouds to a first order
approximation by assuming that they act as an optically thick
layer at a given altitude. As a result, clouds effectively
increase the apparent radius of the planet and the transiting
spectrum gives information only about atmospheric components
existing above the cloud layer. The top of the cloud layer is a
free parameter for N$_2$/O$_2$- and CO$_2$-rich atmospheres
(set to 10 and 30~km, taken from the Earth and Venus,
respectively). We treat the case of the N$_2$/H$_2$O-rich
atmosphere separately because H$_2$O is a highly condensable
species.
\subsubsection{Composition, vertical structure and location of the clouds
in a N$_2$/H$_2$O-rich atmosphere}
\label{sec:H2O-rich_atmo}
The temperature gradient of an atmosphere containing
non-negligible amount of condensable species, like H$_2$O,
significantly departs from the case where no condensation
occurs. A correct estimation of the temperature profile is
crucial to determine the scale height, hence the detectability
of that atmosphere. In an H$_2$O-rich atmosphere, the evolution
of the adiabatic temperature gradient is driven by the ratio of
the partial pressure of water vapor, $e$, to the saturating
vapor pressure, $e_\mathrm{sat}$. This ratio should also
determine the levels at which the water vapor is in excess in
the air and condenses (for $e / e_\mathrm{sat}
> 1$), i.e.\ the levels where clouds may form.
Our initial conditions at the $z=0$ level ($z^0$) are the
temperature $T^0$ and pressure $p^0$. With these quantities we
can estimate $e_\mathrm{sat}$, which depends only on the
temperature, using the Clausius-Clapeyron equation:
\begin{equation} \label{eq:Clausius-Clapeyron}
e_\mathrm{sat}(T) = p^*
\exp{\left[\frac{\mu_{\mathrm{H}_2\mathrm{O}}
L_v}{\mathcal{N}_A k} \left( \frac{1}{T^*} - \frac{1}{T}
\right) \right]}
\end{equation}
where $p^*$ and $T^*$ are the reference pressure
($1.013\cdot10^{5}$~Pa) and temperature (373~K),
$\mu_{\mathrm{H}_2\mathrm{O}}$ is the molar mass of water and
$L_v$ is the latent heat of vaporization for water
($2.26\cdot10^{10}$~erg\,g$^{-1}$). Assuming that the planet is
covered with liquid water (e.g., an ocean-planet; see L\'eger
et al.\ 2004) and that $T^0$ is `tropical' (e.g. 340~K), the
humidity at the surface is high, so that the value of $e^0$ must
be a significant fraction of $e_\mathrm{sat}(T^0)$. We set $e^0$
to half the value of $e_\mathrm{sat}(T^0)$. The volume mixing
ratio of water can be expressed as $Y_{\mathrm{H}_2\mathrm{O}}
= e / p$, and we can calculate it at the surface of the planet.
The atmosphere of an ocean-planet may also contain a
significant quantity of CO$_2$. We arbitrarily set this
quantity constant to $Y_{\mathrm{CO}_2} = 0.1$ (L\'eger et al.\
2004; Ehrenreich et al.\ 2005b). Molecular nitrogen is the
major constituent of the atmosphere of the Earth and the second
most abundant species in the atmosphere of Venus, and therefore
we chose to include it to complete the chemical composition of
this atmosphere. The mixing ratio of N$_2$ was set to be
$Y_{\mathrm{N}_2} = 1 - Y_{\mathrm{CO}_2} -
Y_{\mathrm{H}_2\mathrm{O}}$ at any level. Assuming the
atmosphere contains only N$_2$, H$_2$O and CO$_2$, we can
obtain the mean molar mass of the atmospheric gas ($\mu^0 =
\sum_i Y_i^0 \mu_i$) and that of the dry atmospheric gas
($\mu_\mathrm{d}^0 = \mu^0 - Y_{\mathrm{H}_2\mathrm{O}}^0
\mu_{\mathrm{H}_2\mathrm{O}}$), the mean specific heat of dry
air ($C_p^0 = \sum {C_p}_i Y_i^0 \mu_i / \mu_\mathrm{d}^0$) and
the scale height $H_0$ (all at the level $z^0$).
For the $z^{j+1}$ level, we need to evaluate the temperature
gradient between $z^j$ and $z^{j+1}$. There are two cases
(Triplet \& Roche 1986):
\begin{itemize}
\item $e^j < e_\mathrm{sat}^j$; in this case the temperature
follows a dry adiabatic gradient,
\begin{equation} \label{eq:dry_gradient}
{\Delta T}_\mathrm{dry} = \frac{-g}{C_p^j}.
\end{equation}
\item $e^j = e_\mathrm{sat}^j$; in this case the gradient is
saturated,
\begin{equation}
\label{eq:sat_gradient} {\Delta T}_\mathrm{sat} = {\Delta
T}_\mathrm{dry} \frac{\left( 1 + r_\mathrm{sat}^j \right)
\left[ 1 + L_v r_\mathrm{sat}^j / (R_\mathrm{dry}^j T^j)
\right]}{1 + \frac{r_\mathrm{sat}^j}{C_p^j}
\left[{C_p}_{\mathrm{H}_2\mathrm{O}} + L_v^2 \frac{1 +
r_\mathrm{sat}^j
R_{\mathrm{H}_2\mathrm{O}}/R_\mathrm{dry}^j}{R_{\mathrm{H}_2\mathrm{O}}
(T^j)^2} \right]}
\end{equation}
where $r_\mathrm{sat}^j = (\mu_{\mathrm{H}_2\mathrm{O}}
e_\mathrm{sat}^j) / [\mu_\mathrm{d}^j (p^j - e_\mathrm{sat}^j)
]$ is the mixing ratio of saturated air, while $R_\mathrm{dry}^j =
\mathcal{N}_A k / \mu_\mathrm{dry}^j$ and
$R_{\mathrm{H}_2\mathrm{O}} = \mathcal{N}_A k /
\mu_{\mathrm{H}_2\mathrm{O}}$ are the specific gas constants of dry
air at the level $z^j$ and of water, respectively.
\end{itemize}
If $z^{j+1} < 20$~km, we select the appropriate gradient
according to the value of $e / e_\mathrm{sat}$, and get the
value of the temperature $T^{j+1}$. Above 20~km, we assume the
temperature profile becomes isothermal ($T^{j+1} = T^j$).
The assumption of an isothermal atmosphere, already discussed
in Sect.~\ref{sec:temperature_profile}, is somewhat arbitrary
but is motivated by an analogy with the atmosphere of the
Earth, where the temperature gradient becomes positive from
about 20 to 50~km. Taking an isothermal temperature gradient
will conservatively mimic the presence of a stratosphere.
However, it has important consequences since it allows H$_2$O
to be significantly present above the cloud top. In fact, above
20~km, the temperature stops decreasing, preventing
condensation from occurring (the saturation vapor pressure
depends only on temperature). Our assumption consequently fixes
the height of the cloud deck to the point where the temperature
profile is isothermal (actually, one scale height above that
point). If we set this point higher, we would increase the
amount of clouds hence reducing the detectable portion of
atmosphere. In addition, the cloud formation would certainly
take the corresponding latent heat of condensation out of the
atmospheric gas, contributing, as a consequence, to cool the
atmosphere at the level of the cloud layer.
We calculate $H^{j+1}$, $p^{j+1} = p^j \cdot
\exp{\left[(z^j - z^{j+1})/H^{j+1}\right]}$, $e_\mathrm{sat}^{j+1}$ (from
Eq.~\ref{eq:Clausius-Clapeyron}) and either $e^{j+1} = e^j
\cdot \exp{\left[(z^j - z^{j+1}) / H^{j+1}\right]}$, if the
atmosphere is not saturated or $e^{j+1} =
e_\mathrm{sat}^{j+1}$, if the atmosphere is saturated. We
finally find all $Y_i^{j+1}$, $\mu_\mathrm{dry}^{j+1}$ and
${C_p}_\mathrm{dry}^{j+1}$ and then iterate the process for all
atmospheric levels.
The higher and the lower pressure levels where $e =
e_\mathrm{sat}$ indicate respectively the bottom and the top of
the region where clouds are forming. We assume the cloud layer
does not extend over one scale height above the top of the
cloud forming region. However, we can still have $e \leq
e_\mathrm{sat}$ higher in the atmosphere, and thus H$_2$O can
be present above the clouds.
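The level-by-level construction described above can be summarized in
the following sketch; the thermodynamic constants are indicative
values only, and the surface conditions are those of the `tropical'
case ($T^0 = 340$~K, $e^0 = 0.5\,e_\mathrm{sat}$, $Y_{\mathrm{CO}_2}
= 0.1$):
\begin{verbatim}
import math
NA, k = 6.022e23, 1.38e-16
Lv = 2.26e10                            # erg/g
mu_w, mu_n2, mu_co2 = 18.0, 28.0, 44.0
cp_w, cp_n2, cp_co2 = 1.85e7, 1.04e7, 0.85e7  # erg/g/K (indicative)
g = 981.0                               # cm/s^2 (Earth-like, assumed)

def e_sat(T):                           # Clausius-Clapeyron (Eq. 5)
    return 1.013e6 * math.exp(mu_w * Lv / (NA * k) * (1/373.0 - 1/T))

T, p = 340.0, 1.013e6                   # surface conditions
e, Y_co2 = 0.5 * e_sat(T), 0.1
dz = 1e4                                # 100 m layers
for _ in range(200):                    # integrate up to 20 km
    Y_w = e / p
    Y_n2 = 1.0 - Y_co2 - Y_w
    mu = Y_n2*mu_n2 + Y_co2*mu_co2 + Y_w*mu_w
    mu_d = mu - Y_w*mu_w
    cp = (cp_n2*Y_n2*mu_n2 + cp_co2*Y_co2*mu_co2) / mu_d
    dT_dry = -g / cp                    # dry gradient (Eq. 6)
    if e < e_sat(T):
        dTdz = dT_dry
    else:                               # saturated gradient (Eq. 7)
        R_d, R_v = NA*k/mu_d, NA*k/mu_w
        r = mu_w * e_sat(T) / (mu_d * (p - e_sat(T)))
        num = (1 + r) * (1 + Lv*r/(R_d*T))
        den = 1 + (r/cp) * (cp_w + Lv**2 * (1 + r*R_v/R_d) / (R_v*T**2))
        dTdz = dT_dry * num / den
    H = k * NA * T / (mu * g)
    T += dTdz * dz
    p *= math.exp(-dz / H)
    e = min(e * math.exp(-dz / H), e_sat(T))
print("T(20 km) ~ %.0f K, Y_H2O ~ %.3f" % (T, e / p))
\end{verbatim}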
\subsection{Description of atmospheric absorptions}
\subsubsection{Chemical species}
We used the program \texttt{LBLABC} (Meadows \& Crisp 1996), a
line-by-line model that generates monochromatic gas absorption
coefficients from molecular line lists, for each of the gases,
except ozone, present in the atmosphere. The line lists are
extracted from the HITRAN~2000 databank (Rothman et al.\ 2003).
We calculated the absorption coefficients for O$_2$, H$_2$O and
CO$_2$ in our wavelength range of interest (i.e., from 200 to
2\,000~nm).
The absorption coefficients of these species depend on
pressure and temperature. We verified that those variations do
not significantly affect the results obtained (see
Sect.~\ref{sec:results}) and we decided to use the absorption
coefficients calculated at the pressure and temperature of the
cloud layer, i.e., 10~km in models~A1, A2 \&~A3, 30~km in
models~B1, B2 \&~B3 and from 25 to 70~km in models~C1 to~C3. We
then assumed these absorption coefficients to be constant along
the $z$-axis. This is a fairly good approximation since
molecules at that atmospheric level contribute more
substantially to the transmitted spectrum than molecules at the
bottom of the atmosphere. Absorption coefficients for H$_2$O,
CO$_2$, O$_3$ and O$_2$ are compared in Fig.~\ref{fig:abc}.
The spectrum of O$_3$ is unavailable in HITRAN at wavelengths
lower than 2.4~$\mu$m. However, it has strong absorption in the
Hartley (200--350~nm) and Chappuis (400--750~nm) bands. Thus we
took the photo-absorption cross-sections, $\sigma$ (in
cm$^{2}$), from the GEISA/cross-sectional databank
(Jacquinet-Husson et al.\ 1999) and converted them into
absorption coefficients, $A$ (in cm$^{2}$\,g$^{-1}$), such that
$A = \sigma \mathcal{N}_A / \mu$, where $\mu$ is the molar mass
of the component.
As shown in Fig.~\ref{fig:abc_variation}, pressure and
temperature variations do not have a significant influence on
the cross sections/absorption coefficients of O$_3$. We
therefore used the values given for $p = 1$~atm\footnote{1~atm
= 1\,013~hPa.} and $T = 300$~K, and set them constant along the
$z$-axis.
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics{Fig5.ps}}
\caption{Absorption coefficients of atmospheric absorbers (in
cm$^2$\,g$^{-1}$), as a function of the wavelength. The
photo-absorption coefficients corresponding to H$_2$O, O$_2$,
O$_3$ and CO$_2$ (solid lines) are plotted against their
respective Rayleigh scattering coefficient (dotted line),
except O$_3$, plotted against the Rayleigh scattering
coefficient of N$_2$.} \label{fig:abc}
\end{figure}
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics{Fig6.ps}}
\caption{Dependence of the absorption coefficient of O$_3$ on
pressure and temperature. For clarity, each line has been
shifted down by $5 \cdot 10^4$~cm$^2$\,g$^{-1}$ with respect to
the previous one.} \label{fig:abc_variation}
\end{figure}
\subsubsection{Rayleigh scattering}
Light is preferentially scattered toward short wavelengths by
atmospheric molecules, whose dimensions are small compared to
$\lambda$. Rayleigh scattering could be an important indicator
of the most abundant atmospheric species. Molecular nitrogen,
for instance, does not present any noticeable spectroscopic
lines between 0.2 and 2~$\mu$m. With a transit observation, the
presence of a gas without spectroscopic lines, like nitrogen in
the atmosphere of the Earth, can be indirectly inferred from
the wavelength-dependence of the spectrum ratio continuum.
Since the Rayleigh scattering cross section of CO$_2$ is high,
Venus-like atmospheric signatures should also present an
important Rayleigh scattering contribution.
We have therefore estimated these different contributions. The
Rayleigh scattering cross section, $\sigma_R$, can be expressed
in cgs units as (Bates 1984; Naus \& Ubachs 1999; Sneep \&
Ubachs 2004)
\begin{equation} \label{eq:rayleigh_xsc}
\sigma_R(\bar{\nu}) = \frac{24 \pi^3 \bar{\nu}^4}{n^2} \left(
\frac{r(\bar{\nu})^2 - 1}{r(\bar{\nu})^2 + 2} \right)^2,
\end{equation}
where $\bar{\nu} = 1 / \lambda$, $n$ is the number density
(cm$^{-3}$) and $r$ is the refractive index of the gas. The
total Rayleigh scattering includes weighted contributions from
N$_2$, O$_2$, CO$_2$ and H$_2$O (i.e., $\sigma_R = \sum_{i} Y_i
{\sigma_R}_i$), and so we need all the corresponding refractive
indexes. These are found in Bates (1984) and Sneep \& Ubachs
(2004) for N$_2$, O$_2$ and CO$_2$.\footnote{We noted a
typographical error in the CO$_2$ refractive index formula
(Eq.~13) in Sneep \& Ubachs (2004): in order to yield the
correct values, results from this expression should be divided
by $10^3$ (M.~Sneep, personal communication).} The refractive
index for H$_2$O comes from Schiebener et al.\ (1990). Tests
have shown that the different refractive indexes do not
significantly change with temperature and pressure. We have
therefore calculated the indexes for standard conditions
($15\degr$C and 1\,013~hPa).
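As an illustrative numerical check of Eq.~\ref{eq:rayleigh_xsc},
the short sketch below evaluates $\sigma_R$ for N$_2$ near
550~nm; the refractive index is an indicative value at standard
conditions rather than the full Bates (1984) dispersion formula.

\begin{verbatim}
import numpy as np

n = 2.546899e19          # number density (cm^-3) at 15 C, 1013 hPa
r = 1.000298             # indicative refractive index of N2 near 550 nm
nu_bar = 1.0 / 550.0e-7  # wavenumber (cm^-1)

sigma_R = (24.0 * np.pi**3 * nu_bar**4 / n**2
           * ((r**2 - 1.0) / (r**2 + 2.0))**2)
print(f"sigma_R(N2, 550 nm) ~ {sigma_R:.1e} cm^2")  # of order 5e-27 cm^2
\end{verbatim}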
\subsubsection{Refraction}
Depending on the wavelength, the refraction may bring into the
line of sight rays coming from different parts of the star. To
quantify the importance of that effect, we calculate the
maximum deviation, $\Delta\theta$, due to the wavelength
dependence of the refraction index, using the formula given by
Seager \& Sasselov (2000) and the refractive index at the
surface ($h = 0$) between 0.2 and 2~$\mu$m. We obtain
\mbox{$\Delta\theta \approx 0.3\arcmin$}. This represents about
1.5\%, 1\% and 0.5\% of the angular diameter of the star (F-,
G- and K-type star, respectively) as seen from the planet. We
can therefore consider this effect negligible as long as there
are no important variations of the stellar flux on scales lower
than the surface corresponding to these numbers.
\subsection{Choice of test models}
\label{sec:choice}
We chose 9 cases, divided into 3 categories:
1~R$_\oplus$-planets (models~A1, B1 and~C1),
0.5~R$_\oplus$-planets (A2, B2 and~C2) and 2~R$_\oplus$-planets
(A3, B3 and~C3). The parameters for each model are summarized
in Table~\ref{tab:models}. For these ranges of planetary radii,
the depth of the occultation by the tested planets will differ
by a factor of $\sim$16 at most during their transit. Notice
that a better detection of the transit itself does not always
imply a better detection for the atmosphere of the transiting
planet. On the contrary, in some cases, the fainter the transit
is, the more detectable the atmosphere will be!
In any case, we naturally need to secure the detection of the planet
itself before looking for an atmosphere.
The choice of studying planets with a variety of sizes gives us
the possibility to explore a large range of planet
characteristics, in mass, radius and density. The Earth density
is 5.5~g\,cm$^{-3}$. A planet having the internal composition
of the Earth and twice its radius would weigh $\sim$10 times
more, while a planet half as large would weigh $\sim$10 times
less (Sotin et al.~2005). This gives densities of 6.1 and
4~g\,cm$^{-3}$, respectively. We thus have 3 cases, each of
which can be coupled with a plausible atmosphere. We chose a
N$_2$/O$_2$-rich atmosphere (similar to that of the Earth) for
models~A1, A2 and~A3, and a Cytherean (i.e.,
Venus-like\footnote{Cythera
(\emph{K}$\acute{\upsilon}\theta\eta\rho\alpha$) is an Ionian
island where, according to the Greek mythology, the goddess
Aphrodite/Venus first set foot. See
\texttt{http://en.wikipedia.org/wiki/Cytherean}.}) CO$_2$-rich
atmosphere for models~B1, B2 and~B3.
Note that the atmospheric pressure profiles are scaled from the
1~R$_\oplus$ cases (A1 and B1) to the 0.5 and 2~R$_\oplus$
models. In doing so, we did not include any species that showed
a peak of concentration in altitude, such as the O$_3$ layer in
model~A1. In fact, the O$_3$ peak does not depend only on the
hydrostatic equilibrium, but also on the photochemical
equilibrium at the tropopause of the Earth. For that reason
O$_3$ is absent in models~A2 and~A3.
L\'eger et al.\ (2004) suggested the existence of
`ocean-planets', whose internal content in volatiles (H$_2$O)
might be as high as 50\% in mass. Such planets would be much
less dense than telluric ones. We are particularly interested
in those ocean-planets since the lower the density of the
planet is, the higher the atmosphere extends above the surface.
These objects could have densities of 1.8, 2.8 and
4.1~g\,cm$^{-3}$ for radii of 0.5, 1 and 2~R$_\oplus$ (Sotin et
al.~2005), which are relatively small, but reasonable if
compared with Titan's density (1.88~g\,cm$^{-3}$). The huge
quantity of water on the surface of an ocean-planet could
produce a substantial amount of water vapor in its atmosphere,
if the temperature is high enough. A non-negligible
concentration of CO$_2$ might be present as well in those
atmospheres (Ehrenreich et al.~2005b). Using this information
on ocean-planets, we can simulate three extra cases, namely~C1,
C2 and~C3 (Table~\ref{tab:models}).
\subsection{Choice of different stellar types}
\label{sec:distance} In this work, we consider planets orbiting
in the habitable zone (HZ) of their parent star. Our
atmospheric models are not in fact a good description for
planets orbiting too close to their parent star. For instance,
the heating of the atmosphere by an extremely close star could
trigger effects like evaporation, invalidating the hydrostatic
equilibrium we assumed (see, for instance, Lecavelier des
Etangs et al.\ 2004; Tian et al.\ 2005). The reduced semi-major
axis $a_r$ of the orbit of all planets we have considered is
defined as:
\begin{equation} \label{eq:a_r}
a_r = a \cdot (L_\star / L_\odot)^{-0.5}.
\end{equation}
We set $a_r = 1$~astronomical unit (AU), so that the planet is
in the HZ of its star.
Here we focus on Earth-size planets orbiting around different
main sequence stars, such as K-, G- and F-type stars, since the
repartition of stellar photons in the spectrum is different
from one spectral type to another. Planets in the HZ of K, G
and F stars, with $a_r = 1$~AU, should have a real semi-major
axis of 0.5, 1 and 2~AU, respectively.
\section{Signal-to-noise ratio for ideal observations}
\label{sec:S/N} Prior to studying the atmospheres, we need to
detect the planets themselves with a dedicated survey, such as
the one proposed by Catala et al.\ (2005). The transmission
spectroscopy we theoretically study here requires the use of a
large space telescope. Hence, we need to quantify the S/N of such
observations to determine the detectability of the atmospheric
signatures for a transiting Earth-size exoplanet. The S/N will
depend on both instrumental and astrophysical parameters.
\subsection{Instrumental requirements}
\label{sec:S/N_instru} The first relevant parameter relative to
the instrumentation is the effective area of the telescope
collecting mirror, $S$, which can be expressed as $S=(\epsilon
D)^2 \pi / 4$. The coefficient $\epsilon^2$ accounts for the
instrumental efficiency and $\epsilon D$ is thus the `effective
diameter' of the mirror. Up to the present, all exoplanetary
atmospheric signatures have been detected with the Space
Telescope Imaging Spectrograph (STIS) on board the \emph{Hubble
Space Telescope (HST)}. This instrument, now no longer
operative, was very versatile\footnote{STIS was used for
imagery, spectro-imagery, coronography and low and high
resolution spectroscopy.} and consequently not planned to have
high efficiency. It had a throughput $\epsilon^2 \approx 2\%$
from 200 to 300~nm, and $\epsilon^2 \approx 10\%$ from 350 to
1\,000~nm. As the majority of the photons we are interested in
are available in the range from 350 to 1\,000~nm, we reasonably
assume that a modern spectrograph has a mean $\epsilon^2$
significantly greater than 10\% from 200 to 2\,000~nm. The most
efficient present-day spectrographs have $\epsilon^2 \approx
25\%$ in the visible, so it seems reasonable to imagine that
next-generation spectrographs, specifically designed to achieve
high-sensitivity observations, could have a throughput of
$\epsilon^2 \approx 25\%$, or $\epsilon = 50\%$.
Another parameter linked to the instrument is the spectral
resolution, $\mathcal{R}$. In the following, $\mathcal{R}$ will
be assumed to be about 200, i.e., a 10~nm-wide spectral feature
can be resolved.
Finally, it is legitimate to question the ability of the
instrument detectors to discriminate the tenuous ($\sim$
10$^{-6}$) absorption features in the transmitted spectra of
Earth-size planets. In the recent past, sodium was detected at a
precision of 50~parts-per-million (ppm) on a line as thin as
about 1~nm by Charbonneau et al.\ (2002) using STIS. According
to our results (see Sect.~\ref{sec:results}), some absorption
features from Earth-size planet atmospheres show a $\sim$1~ppm
dimming over $\sim$100~nm: the technological improvement
required to fill the gap should not be unachievable. Besides,
since we deal with \emph{relative} measurements -- the
in-transit signal being compared to the out-of-transit one --
there is no need to have detectors with a perfect absolute
calibration. Only a highly stable response over periods of
several hours is required. Nevertheless, instrumental precision
remains a challenging issue whose proper assessment will
require further, detailed studies.
\subsection{Physical constraints on the observation}
\label{sec:S/N_physics} The number of photons detected as a
function of wavelength depends on the spectral type of the
star, while the total number of photons received in an exposure
of duration $t$ depends on the apparent magnitude of the star,
$V$. The stellar spectra $F_{\star}^{V=0}(\lambda)$ are from
\object{$\rho$~Capricorni} (F2\,{\sc iv}), \object{HD~154\,760}
(G2\,{\sc v}) and \object{HD~199\,580} (K2\,{\sc iv}) and are
taken from the Bruzual-Persson-Gunn-Stryker (BPGS)
spectrophotometry atlas\footnote{Available on
\texttt{ftp.stsci.edu/cdbs/cdbs2/grid/bpgs/}.}. The fluxes
(erg\,cm$^{-2}$\,s$^{-1}$\,\AA$^{-1}$) are given at a null
apparent magnitude, so we re-scaled them for any apparent
magnitude $V$, \mbox{$F_\star = F_{\star}^{V=0} \cdot 10^{-0.4
V}$}. The three corresponding spectra are plotted for a default
magnitude $V=8$ in Fig.~\ref{fig:stars}.
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics{Fig7.ps}}
\caption{Spectra of K2 (dashed line), G2 (solid line), and
F2-type (dotted line) stars between 0.2 and 2~$\mu$m. The
fluxes are scaled to an apparent magnitude $V=8$.}
\label{fig:stars}
\end{figure}
The stellar type determines the radius and the mass of the
star, so the transit duration (and thus the maximum time of
exposure during the transit) is different depending on the star
we consider. The transit duration is also a function of the
semi-major axis of the planet orbit. Since we chose a constant
reduced distance ($a_r = 1$~AU) for all planetary models (see
Sect.~\ref{sec:distance}), the duration of transit depends on
the stellar luminosity as well. From Zombeck (1990), we obtain
the radii of F and K stars relative to that of the Sun
($R_\mathrm{F}/R_\odot \approx 1.25$ and $R_\mathrm{K}/R_\odot
\approx 0.75$), the mass ratios ($M_\mathrm{F}/M_\odot \approx
1.75$ and $M_\mathrm{K}/M_\odot \approx 0.5$), and the
luminosity ratios ($L_\mathrm{F}/L_\odot \approx 4$ and
$L_\mathrm{K}/L_\odot \approx 0.25$). Using Eq.~\ref{eq:a_r},
the duration of the transit is:
\begin{equation}
\label{eq:time_transit} \tau \approx \frac{13 \pi}{4}~\rm{h}
\cdot \frac{R_\star}{R_\odot}
\left(\frac{M_\star}{M_\odot}\right)^{-0.5}
\left(\frac{L_\star}{L_\odot} \right)^{0.25},
\end{equation}
where $13\pi/4$~h is the mean transit duration of a planet at
1~AU across a G star, averaged over all possible impact
parameters of the transit. From Eq.~\ref{eq:time_transit} we
obtain mean transit durations of 7.6, 10.2 and 13.6~h for K-,
G- and F-type stars, respectively. In the following, we set $t =
\tau$.
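These durations follow directly from Eq.~\ref{eq:time_transit}
and the stellar parameters quoted above; a minimal sketch
reproducing them:

\begin{verbatim}
import numpy as np

# (R/Rsun, M/Msun, L/Lsun) for the K, G and F dwarfs adopted in the text
stars = {"K": (0.75, 0.5, 0.25), "G": (1.0, 1.0, 1.0), "F": (1.25, 1.75, 4.0)}
for name, (R, M, L) in stars.items():
    tau = 13.0 * np.pi / 4.0 * R * M**-0.5 * L**0.25  # hours
    print(f"{name}-type: tau = {tau:.1f} h")          # 7.6, 10.2, 13.6 h
\end{verbatim}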
Ideally, our observations are limited only by the stellar
photon noise -- the detection of sodium at a precision of
$\sim$50~ppm in the atmosphere of \object{HD~209\,458b} by
Charbonneau et al. (2002) was in fact limited by the stellar
photon noise. However, at the low signal levels we are
searching for, the intrinsic stellar noise might need to be
considered as well. Stellar activity, as well as convective
motions, will cause variations in both intensity and color in
the target stars, on a large variety of timescales. The impact
of stellar micro-variability on the detectability of
photometric transits has been addressed by a number of studies
(see, e.g., Moutou et al., 2005; Aigrain et al., 2004 --
especially their Fig.~8; Lanza et al., 2004), all pointing
towards photometric variability levels in the range of
$\sim$100--1\,000~ppm for durations of a few days. This is to
be compared to the strength and duration of the atmospheric
signatures we want to look at: they are $\sim$1~ppm variations
lasting a few hours. While the different temporal and spectral
content of these signatures versus the stellar noise will
hopefully allow us to discriminate between the two, the impact
of stellar micro-variability on such faint signals is likely to
be significant, and may limit the ability to detect an
atmosphere in a transiting planet. For instance, Aigrain et
al.\ (2004) suggested that K stars are better suited than G or
F stars for the detection of terrestrial planets against
stellar micro-variability. However, note that the observation
of several transits for each planet considered will confirm the
signal detected in the first transit. For instance, at
$a_r=1$~AU around a K star, a planet has a period of $\approx
0.3$~yr, allowing to schedule several transit observations
within a short period of time. Finally, the usual technique to
detect a spectral signature from a transit is to compare
in-transit and out-of-transit observations (Vidal-Madjar et
al.\ 2003, 2004). For all these reasons, we will assume in the
following to be able to discriminate a transit signal from the
stellar activity and consequently the photon-noise to be the
limiting factor. Nevertheless, further and detailed analysis is
certainly needed to quantify the effect of stellar
micro-variability, as a function of the stellar type, but this
is outside the scope of this paper.
\subsection{Calculation of the signal-to-noise ratio}
Now let $\varphi_\star$ be the maximum number of photons per
element of resolution that can be received during $\tau$:
\mbox{$\varphi_\star = F_\star(\lambda) \cdot \lambda / (h_P c)
\cdot \mathcal{R} \cdot S \cdot \tau$}, where $h_P$ is Planck's
constant and $c$ the speed of light. Some photons are blocked
or absorbed by the planet; therefore, the actual number of
photons received during the transit is \mbox{$\varphi =
\varphi_\star (1 + \Re')$} per element of resolution.
From the observations, it is possible to obtain $\tilde{R}_P$,
an estimate of the radius of the transiting planet $R_P$ (e.g.,
by using the integrated light curve or a fit to the observed
spectrum ratio). This value corresponds to the flat spectrum
ratio (i.e., a planet without atmosphere) that best fits the
data. The corresponding number of photons received during an
observation per element of resolution is therefore expressed
as: \mbox{$\tilde{\varphi} = \varphi_\star \left[1 -
(\tilde{R}_P / R_\star)^2\right]$}.
The weighted difference between $\varphi$ and $\tilde{\varphi}$
can reveal the presence or the absence of a planetary
atmosphere. We express the $\chi^2$ of this difference over all
the elements of resolution $k$ as \mbox{$\sum_{k} \left[
\left(\varphi_k - \tilde{\varphi}_k \right) /
\sigma_{\varphi_k} \right]^2$}. Here, the uncertainty of the
number of photons received is considered to be dominated by the
stellar photon noise (see Sect.~\ref{sec:S/N_physics}), that is
$\sigma_\varphi = \sqrt{\varphi}$. We thus have:
\begin{equation} \label{eq:chi2}
\chi^2 = \sum_{k} \left( \frac{{\varphi_\star}_k}{1 + \Re'_k }
\left[ \Re'_k + \left(\tilde{R}_P / R_\star \right)^2 \right]^2
\right).
\end{equation}
Given the $\chi^2$, the S/N can be directly calculated by
taking its square root. The best estimate can be obtained by
minimizing the $\chi^2$ with respect to the radius
$\tilde{R}_P$, i.e., \mbox{$\partial \chi^2 /
\partial \tilde{R}_P = 0$}. From this condition we can calculate
the estimated radius:
\begin{equation} \label{eq:R_estimate}
\tilde{R}_P = R_\star \sqrt{- \frac{\sum_{k}\left[{\varphi_\star}_k
\Re'_k / \left(1+\Re'_k \right) \right]}{\sum_{k} \left[{\varphi_\star}_k
/ \left( 1 + \Re'_k \right) \right]}}.
\end{equation}
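The estimator of Eqs.~\ref{eq:chi2} and~\ref{eq:R_estimate} is
straightforward to implement. The sketch below uses a synthetic
spectrum ratio (a flat solid-disk dimming plus $\sim$1~ppm
pseudo-random features) as a placeholder for the model outputs,
so the resulting S/N has no physical meaning here.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
phi_star = np.full(180, 1.0e9)              # photons per resolution element
Rp = -(8.4e-5 + 1.0e-6 * rng.random(180))   # synthetic spectrum ratio R'

Rtilde = np.sqrt(-np.sum(phi_star * Rp / (1 + Rp))
                 / np.sum(phi_star / (1 + Rp)))       # = R_P~ / R_*
chi2 = np.sum(phi_star / (1 + Rp) * (Rp + Rtilde**2)**2)
print(f"R_P~/R_* = {Rtilde:.3e}, S/N = {np.sqrt(chi2):.2f}")
\end{verbatim}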
Once we have determined whether an atmosphere is observable or
not (depending on the S/N ratio), we can use a similar approach
to quantify the detectability of each single atmospheric
absorber contributing to the total signal $\varphi$. Let
\mbox{$\hat{\varphi}_i = {\varphi_\star} (1 + \hat{\Re}'_i)$}
be the signal obtained by filtering the stellar light through
all atmospheric absorbers except the $i^\mathrm{th}$, and let
$\tilde{\left(\hat{\varphi}_i\right)}$ be its estimate. Here,
$\hat{\Re}'_i$ is the spectrum ratio calculated when the
species $i$ is not present in the atmosphere. Further, since
$\tilde{\left(\hat{\varphi}_i\right)} \approx \alpha_i
\hat{\varphi}_i$, we can deduce the presence of absorber $i$ in
the atmosphere, by simply comparing the fit we made assuming
its absence ($\alpha_i \hat{\varphi}_i$) with the measured
signal ($\varphi$):
\begin{equation}
\chi^2_i = \sum_{k} \left( \frac{{\varphi_\star}_k}{1 + \Re'_k}
\left[ \left(1 + \Re'_k \right) - \alpha_i \left(1 +
\hat{\Re}'_{ik} \right) \right]^2 \right),
\end{equation}
where
\begin{equation}
\alpha_i = \frac{\sum_k \left[ {\varphi_\star}_k \left( 1 +
\hat{\Re}'_{ik} \right) \right]}{\sum_{k} \left[
{\varphi_\star}_k \left( 1 + \hat{\Re}'_{ik} \right)^2 / \left(
1 + \Re'_k \right) \right]}.
\end{equation}
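Continuing the sketch above (same \texttt{phi\_star},
\texttt{Rp} and \texttt{rng}), the detectability of a single
absorber follows the same pattern, with a hypothetical
`no-species-$i$' ratio standing in for $\hat{\Re}'_i$:

\begin{verbatim}
# Hypothetical spectrum ratio with species i removed (placeholder)
Rp_hat = -(8.4e-5 + 0.5e-6 * rng.random(180))

alpha_i = (np.sum(phi_star * (1 + Rp_hat))
           / np.sum(phi_star * (1 + Rp_hat)**2 / (1 + Rp)))
chi2_i = np.sum(phi_star / (1 + Rp)
                * ((1 + Rp) - alpha_i * (1 + Rp_hat))**2)
print(f"S/N for species i = {np.sqrt(chi2_i):.2f}")
\end{verbatim}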
\section{Results and discussion}
\label{sec:results}
The results of our computations are displayed in
Tables~\ref{tab:results1} \&~\ref{tab:results2} and plotted as
spectrum ratios in Figs.~\ref{fig:ABC_ratios},
\ref{fig:DEF_ratios} \&~\ref{fig:GHI_ratios}.
\subsection{Spectral features of interest}
Here we summarize the contributions of each atmospheric
absorber to the spectrum ratio for various models. The spectral
resolution of the plots presented here is 10~nm. The most
prominent spectral signatures, when present, are those of O$_3$
and H$_2$O. Carbon dioxide is hard to distinguish from H$_2$O
bands and/or its own Rayleigh scattering. Molecular oxygen
transitions are too narrow to significantly contribute to the
spectrum ratio.
\subsubsection{Ozone}
In the spectral domain studied here, the Hartley (200--350~nm)
and Chappuis (420--830~nm) bands of O$_3$ appear to be the best
indicators of an Earth-like atmosphere. These bands are large
(respectively 150 and 600~nm) and lie at the blue edge of the
spectrum, where spectral features from other species are
missing. There is noticeably no contamination by H$_2$O, and
the strong O$_2$ transitions are narrow and could be easily
separated. Ozone bands significantly emerge from Rayleigh
scattering and they correspond to very strong transitions,
despite the small amount of O$_3$ present in the model~A1
atmosphere (\mbox{$Y_{\mathrm{O}_3} < 10^{-5}$}). When present,
ozone is more detectable in an atmosphere similar to model~A2.
\subsubsection{Water}
The signature of H$_2$O is visible in a transit spectrum only
if H$_2$O is substantially abundant above the clouds. This is
not the case for models of Earth-like atmosphere like A1, A2
and~A3. On the contrary, the models of the ocean-planets (C1,
C2 and~C3) show a major contribution from this molecule, in the
form of four large bands that dominate the red part of the
spectrum (at $\lambda \ga 950$~nm). For these three cases,
H$_2$O can be significantly abundant above the clouds.
\subsubsection{Carbon dioxide}
The lines of CO$_2$ emerge from the `continuum' about as
strongly as the H$_2$O ones, but they often overlap with these
lines. The transitions around 1\,600~nm and the ones around
1\,950~nm are the easiest to identify; other bands are
not observable if water is present. Rayleigh scattering and
photo-absorption cross sections of CO$_2$ are comparable at
most wavelengths below 1.8~$\mu$m (see Fig.~\ref{fig:abc}),
except for a few $\sim$10-nm wide bands. In fact, the more
CO$_2$ is present in the atmosphere, the more opaque the
atmosphere becomes. This implies it would be impossible for an
observer on the surface of Venus to see the Sun. Carbon dioxide
may be more detectable farther in the infrared, making further
investigations up to 2.5~$\mu$m desirable.
\subsubsection{Molecular oxygen}
Molecular oxygen does not appear in the plots: its bands at
620, 700, 760 and 1\,260~nm are too thin to appear with only
10~nm resolution. Besides, its Rayleigh scattering cross
section almost completely masks its absorption features (see
Fig.~\ref{fig:abc}) so that no large bands of O$_2$ can be used
as an indicator of its presence. However, note that the
presence of O$_3$ indirectly indicates the presence of O$_2$,
as pointed out by L\'eger et al.\ (1993) and others.
\subsubsection{Rayleigh scattering}
When Hartley and Chappuis bands of O$_3$ are absent (all cases
but A1), the Rayleigh scattering signature is clearly visible
in the blue part of the spectrum ratio. On one side it masks
the presence of some transitions, like those of O$_2$ and some
of CO$_2$, but on the other side it can provide two important
pieces of information: (i) even if the spectral features cannot be
distinguished because they are too thin or faint, the
characteristic rising `continuum' as $\lambda^{-4}$ for short
wavelengths is a clear indication that the planet has an
atmosphere, and (ii) it indirectly indicates the presence of
the most abundant species of the atmosphere, such as CO$_2$ and
N$_2$, even if N$_2$ shows no spectral signature in the
observed domain. As a consequence, Rayleigh scattering can be
considered a way to detect N$_2$, provided clouds and/or
aerosols do not in turn mask the Rayleigh scattering signature.
To summarize, it is possible to detect the presence of the
atmosphere of a transiting exoplanet thanks to the Rayleigh
scattering, whatever the composition of the atmosphere is.
Moreover, it is theoretically possible to discriminate between
an O$_2$-rich atmosphere, where O$_3$ is expected to be present
(L\'eger et al.~1993; Sagan et al.~1993) and a H$_2$O-rich
atmosphere, as the O$_3$ lifetime is supposed to be extremely
brief in a water-rich environment. In other words, we should be
able to distinguish telluric Earth-like planets with low
volatile content from volatile-rich planets. On the other hand,
high spectral resolution is needed to discriminate between
H$_2$O-rich planets and Cytherean worlds (B1, B2, B3).
\subsection{Parameters influencing the signal-to-noise ratio}
\subsubsection{Influence of the star}
\label{sec:influence_star}
From Table~\ref{tab:results1} it is clear that the best targets
are K-type stars, rather than G- or F-type stars, the former
allowing much better S/N than the latter. Two factors determine
the role of the star in the capability of detecting an
exoplanet atmosphere: (i) the size $R_\star$ of the star, which
directly influences the S/N (see Eq.~\ref{eq:chi2}) and the
duration of transit (Eq.~\ref{eq:time_transit}), and (ii) the
semi-major axis of the planet's orbit, which influences both
the duration of transit and the probability of observing the
transit from Earth (see
below). These factors can explain the discrepancies between the
S/N values obtained for different kinds of stars in
Table~\ref{tab:results1}.
The probability, $\alpha$, that a planet transiting its parent
star might be seen from the Earth is defined as $\alpha \equiv
P\{{\rm transit}\} = R_\star / a $, with $R_\star$ being the
radius of the star and $a$ the semi-major axis of the planet's
orbit. This probability is about 10\% for `hot Jupiters', while
it is 0.3\%, 0.5\% and 0.7\% for planets orbiting in the HZ of
a F, G or K star, respectively.
In addition, K stars are more numerous than other types of
stars. From the CDS database, we find there are approximately
a total of $10\,000 \cdot 10^{0.6(V-8)}$ main sequence stars
brighter than a given magnitude $V$ on the whole
sky.\footnote{We consider mostly bright stars, for which the
distribution is essentially isotropic.} About $3/5$ of these
are K type stars, against only $1/10$ for G stars. Let us now
define $\beta$ to be the number of planet(s) per star, and
$\gamma$ to be the fraction of the sky that is considered for a
transit detection survey (in other words the efficiency of
surveys to find the targets). We list in
Table~\ref{tab:results2} the number of potential targets for
each model. This number, $N$, corresponds to the number of
targets detected with a telescope mirror effective size of
10~m and with a S/N greater than or equal to 5. It is given by:
\begin{equation}
\label{eq:N_computed} N_{\mathrm{S/N} \geq 5,~\epsilon D =
10\mathrm{\,m}} = N_0 \cdot \alpha \cdot \beta \cdot \gamma
\cdot \left(\frac{\mathrm{S/N}_{V=8,\,\epsilon
D=10\mathrm{\,m}}}{5}\right)^3,
\end{equation}
where $N_0$ is about 6\,000, 1\,000 and 3\,000 for K, G and F
stars, respectively, i.e., the number of stars brighter than
magnitude $V = 8$, and S/N$_{V=8,\,\epsilon D=10\,\mathrm{m}}$
is the expected S/N ratio computed for a given atmosphere of a
planet orbiting a $V=8$ star with a telescope having a mirror
effective size of 10~m (this value is given in the last column
of Table~\ref{tab:results1}). Since no Earth-size planet has been
discovered so far, we have no real estimate of $\beta$. In the
following, when it is not a free parameter we consider
$\beta=1$\footnote{Actually, $\beta = 2$ in the Solar System
because there are two Earth-size planets with atmospheres,
namely Venus and the Earth.}. Catala et al.\ (2005) propose a
$30\degr \times 30\degr$ survey dedicated to finding planets
around stars brighter than $11^\mathrm{th}$ magnitude, i.e.,
$\gamma \approx$ 2--3\% for such a project.
Let $N_{\mathrm{S/N},~\epsilon D}$ be the number of potential
targets reaching a minimum S/N ratio for a given mirror
effective size $\epsilon D$; it scales from the value
calculated using Eq.~\ref{eq:N_computed}, $N_{\mathrm{S/N} \geq
5, \epsilon D = 10\mathrm{\,m}}$, in the following way:
\begin{equation}
\label{eq:N_scaled} N_{\mathrm{S/N},~\epsilon D} =
N_{\mathrm{S/N}\geq 5,~\epsilon D=10\mathrm{\,m}} \cdot
\left(\frac{\mathrm{S/N}}{5}\right)^{-3} \cdot
\left(\frac{\epsilon D}{10\mathrm{~m}}\right)^3.
\end{equation}
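As a worked example of Eqs.~\ref{eq:N_computed}
and~\ref{eq:N_scaled}, consider model~C1 around K stars
(S/N~$= 39$ with clouds, Table~\ref{tab:results1}); the transit
probability and $\beta \cdot \gamma$ below are the fiducial
values quoted in the text.

\begin{verbatim}
N0, alpha, beta_gamma = 6000, 0.007, 1.0  # K stars with V <= 8; P{transit}
sn = 39.0                                 # S/N(V=8, eps*D = 10 m), model C1
N = N0 * alpha * beta_gamma * (sn / 5.0)**3
print(f"N(S/N >= 5, eps*D = 10 m) ~ {N:.0f}")   # ~2e4, close to the table
# Rescaling to D = 30 m, eps = 50% (eps*D = 15 m) and beta*gamma = 3%:
print(f"N(30 m, 3%) ~ {N * (15.0 / 10.0)**3 * 0.03:.0f}")  # ~2e3
\end{verbatim}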
The values obtained for atmospheric detection are strongly in
favor of small, late-type stars. Note that this is also true
for the detection of the planetary transit itself.
\subsubsection{Effect of the atmospheric temperature gradient}
\label{sec:temperature_effect}
The thick CO$_2$ Venus-like atmospheres (B1, B2 and~B3; see
Tables~\ref{tab:results1} \&~\ref{tab:results2}) are more
difficult to detect than the other cases. Even if we set the top of
the clouds at 10~km height, the detection remains more
challenging than for model~A1. That is somewhat surprising,
partly because CO$_2$ has strong transitions, particularly in
the near infrared, and partly because of the larger scale
height at the surface of the planet (14.3~km for model~B1,
8.8~km for model~A1). As a consequence, the atmosphere in
model~B1 should have a larger vertical extent than in model~A1.
In reality, the difficulty of characterizing the atmospheres of
models~B1, B2 and~B3 is related to the temperature profiles we
chose (see Figs.~\ref{fig:A_profile} and~\ref{fig:B_profile}):
at 50~km of altitude, the temperature of model~B1 is roughly
60~K colder than that of model~A1, which in fact benefits
from the positive stratospheric temperature gradient of the
Earth. Moreover, the atmosphere for model~B1
($\mu_\mathrm{B1}=43$~g\,mol$^{-1}$) is heavier than the one
for model~A1 ($\mu_\mathrm{A1}=29$~g\,mol$^{-1}$). Therefore,
at high altitude, the scale height is larger in model~A1 than
in model~B1 (respectively 7.6~km and 3.9~km at an altitude of
50~km).
\subsubsection{Effect of atmospheric pressure}
\label{sec:pressure_effect}
Note that the thickness of the atmosphere in model~B1 is almost
half that in A1, despite the intense surface pressure
(100~atm), which should help to increase the upper level of the
atmosphere, limited by the UV photo-dissociation
($h_\mathrm{max}$). The exponential decrease of pressure in
fact prevents $p_0$ from playing a key role: in order to
counterbalance the effect of the negative temperature gradient,
the surface pressure should have been $>10^6$~atm to obtain
absorptions similar to the case of the Earth (model~A1).
\subsubsection{Effect of the planet gravity and density}
The atmospheric absorption is, to first order, proportional
to $H \cdot R_P$. At a given temperature and for a given
atmospheric composition, the scale height $H$ is proportional
to the inverse of the gravity acceleration, $g^{-1}$, or
equivalently to $R_P^2/M_P$, where $M_P$ is the mass of the
planet. As a result, the absorption is expected to be roughly
inversely proportional to the bulk density of the planet,
$\rho_P$, independently of the planet size.
This effect is illustrated by the following examples:
models~C1, C2 and~C3 all benefit from very extended
atmospheres, given the low value of $g$ in the three cases.
For a planet as dense as the Earth (such that $g_\mathrm{C1} =
g_\mathrm{A1}$), the results for the N$_2$/H$_2$O-rich
atmosphere in models~C are close to the ones obtained for
models~A. Both models~C and~A present typical spectral
features. In model~A1, ozone, whose concentration peaks
at the tropopause, gives a prominent signature in the blue edge
of the spectral domain (the Hartley and Chappuis bands, as seen
in Fig.~\ref{fig:ABC_ratios}, top panel). On the contrary, the
saturated atmosphere of model~C1, which sustains H$_2$O up to
high altitudes, yields strong bands around 1.4 and 1.9~$\mu$m
(Fig.~\ref{fig:ABC_ratios}, bottom panel). The role played by
$g$ can be better understood by comparing model~A3 or~B3
($g=24.5$~m\,s$^{-2}$) to model~A2 or~B2 ($g=3.9$~m\,s$^{-2}$),
and model~C3 ($g=14.7$~m\,s$^{-2}$) to model~C2
($g=2$~m\,s$^{-2}$). Using absorption spectroscopy, it is clear
that the atmospheres of small and light planets (i.e., with low
surface gravity) are easier to detect than the ones of large
and dense planets (i.e., with high surface gravity).
Small and light exoplanets, however, may not be able to retain
a thick atmosphere. In fact, thermal agitation causes some
atmospheric particles to have velocities in the tail of the
Maxwellian distribution, allowing them to escape into space
(i.e., Jeans escape). It is therefore questionable whether
planets the size of Titan can have a dense atmosphere at
1~AU from their star. Models~A2, B2 and~C2 enter that category.
This problem concerns both small planets and giant exoplanet
satellites.
According to Williams et al.\ (1997), a planet having the
density of Mars could retain N and O over more than 4.5~Gyr if
it has a mass greater than 0.07~M$_\oplus$. Model planets~A2
and B2 have masses of 0.1~M$_\oplus$ and a density equivalent
to that of Mars ($\approx$4~g\,cm$^{-3}$) so they would be able
to retain an atmosphere (though they may not be able to have a
1~atm atmosphere, as for Mars). The ocean-planet model~C2 has a
mass of 0.05~M$_\oplus$ for a density of 2.8~g\,cm$^{-3}$, and
according to Williams et al.\ (1997), its atmosphere should
consequently escape. However, such a planet, although at 1~AU
from its star, also has a huge reservoir of volatile elements.
This reservoir should help to `refill' the escaping atmosphere.
Note that a hydrodynamically escaping atmosphere should be
easier to detect than a stable one, since it can bring heavier
elements into the hot upper atmosphere. This effect is
illustrated by the absorptions seen by Vidal-Madjar et al.\
(2003, 2004) in the spectrum of \object{HD~209\,458}, which
originate in the hydrodynamically escaping atmosphere of its
transiting giant planet. A model of an `escaping ocean' is studied
by Jura (2004). This process would give interesting absorption
signatures in the H$_2$O bands from the lower atmosphere and in
the signatures of the photo-dissociation products of H$_2$O
from the upper atmosphere (such as an absorption of
Lyman~$\alpha$ photons by the hydrogen atom). See detailed
discussion in Jura (2004).
\begin{table*}
\centering
\begin{tabular}{*{10}{c}}
\hline \hline
Model & Description & Atm. type & $R_P$ & $M_P$ & $\rho_P$ & $g$ & $p_0$ & $H_0$ & $h_\mathrm{max}$ \\
& & & (R$_\oplus$) & (M$_\oplus$) & (g\,cm$^{-3}$) & (m\,s$^{-2}$) & (atm) & (km) & (km)\\
\hline
A1 & ($\approx$)Earth & N$_2$/O$_2$-rich & 1 & 1 & 5.5 & 9.8 & 1 & 8.8 & 85 \\
B1 & ($\approx$)Venus & CO$_2$-rich & 1 & 1 & 5.5 & 9.8 & 100 & 14.3 & 50 \\
C1 & medium ocean-planet & N$_2$/H$_2$O-rich & 1 & 0.5 & 2.8 & 4.9 & 1 & 20.0 & 260 \\
A2 & small Earth & N$_2$/O$_2$-rich & 0.5 & 0.1 & 4.0 & 3.9 & 1 & 24.7 & 260 \\
B2 & small Venus & CO$_2$-rich & 0.5 & 0.1 & 4.0 & 3.9 & 1 & 40.0 & 99 \\
C2 & small ocean-planet & N$_2$/H$_2$O-rich & 0.5 & 0.05 & 1.8 & 2.0 & 1 & 61.4 & 499 \\
A3 & `super-Earth' & N$_2$/O$_2$-rich & 2 & 9 & 6.1 & 24.5 & 1 & 3.9 & 30 \\
B3 & `super-Venus' & CO$_2$-rich & 2 & 6 & 6.1 & 24.5 & 100 & 6.4 & 30 \\
C3 & big ocean-planet & N$_2$/H$_2$O-rich & 2 & 9 & 4.1 & 14.7 & 1 & 6.7 & 60 \\
\hline
\end{tabular}
\caption{Summary of test models.}
\label{tab:models}
\end{table*}
\begin{table*}
\centering
\begin{tabular}{*{9}{c}}
\hline \hline
Model & Description & Star & \multicolumn{6}{c}{Signal-to-noise ratio} \\
& & & \multicolumn{6}{c}{(S/N)$_{V=8,\,\epsilon D=10\rm{~m}}$} \\
& & & w/o cloud & w/ clouds & H$_2$O & CO$_2$ & O$_3$ & O$_2$ \\
\hline
& & K & 5.2 & 3.5 & 1.7 & 1.1 & 1.9 & 0.2 \\
A1 & ($\approx$)Earth & G & 3.2 & 2.3 & 0.8 & 0.5 & 1.2 & 0.2 \\
& & F & 2.3 & 1.7 & 0.5 & 0.3 & 0.9 & 0.1 \\
\hline
& & K & 4.0 & 2.3 & 0.0 & 2.3 & - & - \\
B1 & ($\approx$)Venus & G & 2.1 & 1.2 & 0.0 & 1.2 & - & - \\
& & F & 1.3 & 0.7 & 0.0 & 0.7 & - & - \\
\hline
& medium & K & 41 & 39 & 39 & 11 & - & - \\
C1 & ocean- & G & 22 & 20 & 20 & 5.4 & - & - \\
& planet & F & 14 & 13 & 13 & 3.3 & - & - \\
\hline
& & K & 6.9 & 6.3 & 3.8 & 2.8 & - & 0.7 \\
A2 & small Earth & G & 4.3 & 4.0 & 1.8 & 1.4 & - & 0.5 \\
& & F & 3.2 & 3.0 & 1.1 & 0.8 & - & 0.3 \\
\hline
& & K & 5.8 & 3.3 & 0.0 & 3.3 & - & - \\
B2 & small Venus & G & 3.0 & 1.6 & 0.0 & 1.7 & - & - \\
& & F & 1.9 & 1.0 & 0.0 & 1.0 & - & - \\
\hline
& small & K & 47 & 46 & 46 & 17 & - & - \\
C2 & ocean- & G & 26 & 25 & 25 & 8.6 & - & - \\
& planet & F & 17 & 16 & 16 & 5.2 & - & - \\
\hline
& & K & 4.6 & 1.1 & 0.9 & 0.5 & - & 0.1 \\
A3 & super-Earth & G & 2.5 & 0.6 & 0.4 & 0.2 & - & 0.1 \\
& & F & 1.7 & 0.4 & 0.3 & 0.1 & - & 0.0 \\
\hline
& & K & 5.6 & 0 & 0 & 0 & - & - \\
B3 & super-Venus & G & 2.9 & 0 & 0 & 0 & - & - \\
& & F & 1.9 & 0 & 0 & 0 & - & - \\
\hline
& big & K & 20 & 13 & 12 & 3.2 & - & - \\
C3 & ocean- & G & 10 & 6.5 & 6.3 & 1.5 & - & - \\
& planet & F & 6.7 & 4.1 & 4.0 & 0.9 & - & - \\
\hline
\end{tabular}
\caption{Summary of results: signal-to-noise ratios obtainable with
a telescope mirror effective size of $\epsilon D = 10$~m pointing at a $V=8$ star.
To get the S/N ratios for a different effective size $\epsilon D$, exposure time during transit, $t$,
and/or apparent magnitude of the star, $V$, the result
scales with \mbox{$(\epsilon D / 10\mathrm{~m}) \cdot (t/\tau)^{0.5} \cdot 10^{-0.2 (V - 8)}$}
where $\tau$ is defined by Eq.~\ref{eq:time_transit}. The S/N by species are calculated for the models with clouds.}
\label{tab:results1}
\end{table*}
\begin{table*}
\centering
\begin{tabular}{*{9}{c}}
\hline \hline
Model & Description & Star & Mirror & Limiting & Number & \multicolumn{3}{c}{Number~of~targets} \\
& & & eff. size (m) & magnitude & of stars & \multicolumn{3}{c}{for models w/ clouds} \\
& & & $(\epsilon D)_{\mathrm{S/N}\geq5,\,V=8}$ & $(V_\mathrm{Lim})_{\mathrm{S/N}\geq5,\,\epsilon D=10\mathrm{\,m}}$ & & \multicolumn{3}{c}{$(N)_{\mathrm{S/N}\geq5}$, $\epsilon=50\%$} \\
& & & w/ clouds & w/ clouds & & $\beta \cdot \gamma = 1$ & $\beta \cdot \gamma = 3\%$ & $\beta \cdot \gamma = 10\%$ \\
& & & & & & $D = 20$~m & D = $30$~m & $D = 30$~m \\
\hline
& & & (a) & (b) & (c) & \multicolumn{3}{c}{(d)} \\
\hline
& & K & 14 & 7.22 & 2\,042 & 14 & 1 & 4 \\
A1 & ($\approx$)Earth & G & 22 & 6.31 & 96 & $<1$ (0.4) & $\ll 1$ & $<1$ (0.1) \\
& & F & 29 & 5.66 & 118 & $<1$ (0.3) & $\ll 1$ & $<1$ (0.1) \\
\hline
& & K & 21 & 6.31 & 580 & 4 & $<1$ (0.4) & 1 \\
B1 & ($\approx$)Venus & G & 43 & 4.90 & 13 & $\ll 1$ & $\ll 1$ & $\ll 1$ \\
& & F & 68 & 3.73 & 8 & $\ll 1$ & $\ll 1$ & $\ll 1$ \\
\hline
& medium & K & 1.3 & 12.5 & $>3\cdot10^6$ & 19\,602 & 1\,984 & 6\,615 \\
C1 & ocean- & G & 2.5 & 11.0 & 63\,095 & 321 & 32 & 108 \\
& planet & F & 3.9 & 10.1 & 54\,591 & 157 & 15 & 52 \\
\hline
& & K & 8 & 8.50 & 11\,971 & 84 & 8 & 28 \\
A2 & small Earth & G & 13 & 7.51 & 508 & 2 & $<1$ (0.2) & $<1$ (0.6) \\
& & F & 17 & 6.90 & 656 & 1 & $<1$ (0.1) & $<1$ (0.3) \\
\hline
& & K & 15 & 7.10 & 1\,730 & 12 & 1 & 4 \\
B2 & small Venus & G & 31 & 5.52 & 32 & $<1 (0.1)$ & $\ll 1$ & $\ll 1$ \\
& & F & 50 & 4.50 & 23 & $\ll 1$ & $\ll 1$ & $\ll 1$ \\
\hline
& small & K & 1.1 & 12.8 & $>4\cdot10^6$ & 33\,569 & 3\,398 & 11\,329 \\
C2 & ocean- & G & 2.0 & 11.5 & 125\,892 & 600 & 60 & 202 \\
& planet & F & 3.1 & 10.5 & 94\,868 & 307 & 31 & 103 \\
\hline
& & K & 45 & 4.71 & 63 & $<1$ (0.4) & $\ll 1$ & $<1$ (0.1) \\
A3 & super-Earth & G & 86 & 3.39 & 1 & $\ll 1$ & $\ll 1$ & $\ll 1$ \\
& & F & 121 & 2.51 & 1 & $\ll 1$ & $\ll 1$ & $\ll 1$ \\
\hline
& & K & $> 10^3$ & - & 0 & $\ll 1$ & $\ll 1$ & $\ll 1$ \\
B3 & super-Venus & G & $> 10^3$ & - & 0 & $\ll 1$ & $\ll 1$ & $\ll 1$ \\
& & F & $> 10^3$ & - & 0 & $\ll 1$ & $\ll 1$ & $\ll 1$ \\
\hline
& big & K & 4.0 & 10.1 & 109\,182 & 682 & 69 & 230 \\
C3 & ocean- & G & 7.7 & 8.57 & 2\,197 & 10 & 1 & 3 \\
& planet & F & 13 & 7.57 & 1\,656 & 4 & $<1$ (0.4) & 1 \\
\hline
\end{tabular}
\caption{Summary of results: mirror effective size and number of targets. \newline
(a) Effective size $(\epsilon D)_{\mathrm{S/N}\geq5,\,V=8}$ of the telescope mirror required to
obtain $\mathrm{S/N}=5$ for a $V=8$ star, based on the
numbers displayed for the models with clouds (see
Table~\ref{tab:results1}).\newline
(b) The limiting magnitude at which the number
of targets in the last column is given. This can be expressed as
\mbox{$(V_\mathrm{Lim})_{\mathrm{S/N}\geq5,\,\epsilon D=10\mathrm{\,m}} = 5 \cdot \log_{10} \left[ \left(\mathrm{S/N}_{V=8,\,\epsilon D = 10}\right) / 5 \cdot (\epsilon D) / 10\mathrm{~m}
\right]+8$}. \newline
(c) Total number of given spectral-type stars brighter than the limiting
magnitude.\newline
(d) Number of potential targets calculated with
Eq.~\ref{eq:N_computed}, using the S/N value of the
models with clouds and assuming various $\beta \cdot \gamma$
values.
The coefficients $\beta$ and $\gamma$ are defined
in the text. When the number of potential targets is
slightly less than 1, the value is given between
parenthesis. Use Eq.~\ref{eq:N_scaled} to scale the
value displayed in the column to any mirror
effective size $\epsilon D$ and minimum S/N.}
\label{tab:results2}
\end{table*}
\begin{figure}[htbp!]
\resizebox{\hsize}{!}{\includegraphics{Fig8a.ps}}
\resizebox{\hsize}{!}{\includegraphics{Fig8b.ps}}
\resizebox{\hsize}{!}{\includegraphics{Fig8c.ps}}
\caption{Spectrum ratios for models~A1 (a), B1 (b) and~C1 (c). The
spectrum ratios have been respectively shifted by the values in
parentheses so that the absorption by the `solid disk' of
the planet is 0~ppm. In the case of models with clouds, the
`solid disk' is artificially increased by the cloud layer. The
dashed line indicates the best fit estimation of the radius
of the planet, $\tilde{R}_P$ (see Sect.~\ref{sec:S/N}) if we
suppose there is no atmosphere.}
\label{fig:ABC_ratios}
\end{figure}
\begin{figure}[htbp!]
\resizebox{\hsize}{!}{\includegraphics{Fig9a.ps}}
\resizebox{\hsize}{!}{\includegraphics{Fig9b.ps}}
\resizebox{\hsize}{!}{\includegraphics{Fig9c.ps}}
\caption{Spectrum ratios for models~A2 (a), B2 (b) and~C2 (c).
The `saturation effect' in H$_2$O lines, for model~C2, is a consequence of the
atmosphere being optically thick at the upper atmospheric level,
$h_\mathrm{max}$. In fact, if one considers that there is no
more water above this level due to photo-dissociation (see
Sect.~\ref{sec:b_max}), such transmitted spectrum plots
allow one to determine the level where H$_2$O photo-dissociation
occurs in an exoplanet atmosphere.}
\label{fig:DEF_ratios}
\end{figure}
\begin{figure}[htbp!]
\resizebox{\hsize}{!}{\includegraphics{Fig10a.ps}}
\resizebox{\hsize}{!}{\includegraphics{Fig10b.ps}}
\resizebox{\hsize}{!}{\includegraphics{Fig10c.ps}}
\caption{Spectrum ratios for models~A3 (a), B3 (b) and~C3 (c).}
\label{fig:GHI_ratios}
\end{figure}
\section{Conclusion}
The vertical extent of the atmosphere is of extreme importance
for the detectability of a remote atmosphere by absorption
spectroscopy. This tends to favor less dense objects, like
giant exoplanet satellites (as would be an `exo-Titan') or
volatile-rich planets (such as ocean-planets, theoretically
possible but not yet observed). Cytherean atmospheres are the
most challenging to detect. Surface parameters, such as surface
pressure and temperature, are not crucial. A temperature
gradient that becomes positive at a few tens of kilometers in
height (for instance owing to photochemistry) might help the
detection. Our results show that
late-type stars are better for detecting and characterizing the
atmospheres of planets in transit, since they are smaller, more
numerous and present a better probability of being transited by
a planet.
The strongest signatures of the atmosphere of a transiting
Earth-size planet could be those of H$_2$O (6~ppm in the case
of hypothetical ocean-planets), O$_3$ ($\sim$1--2~ppm) and
CO$_2$ (1~ppm), considering our spectral study from the UV to
the NIR (i.e., from 0.2 to 2~$\mu$m). The presence of an
atmosphere around hundreds of hypothetical `ocean-planets'
(models C) could be detected with a 10--20~m telescope. The
atmospheres of tens of giant exoplanet satellites (model A2)
could be within reach of a 20--30~m instrument. A 30--40~m
telescope would be required to probe Earth-like atmospheres
around Earth-like planets (model A1). These numbers suppose
that Earth-size planets are frequent and are efficiently
detected by surveys.
Finally, planets with an extended upper atmosphere, like the
ones described by Jura (2004), hosting an `evaporating ocean',
or planets in a `hydrodynamic blow-off state', are the
natural link between the planets we have modelled here and the
observed `hot Jupiters'.
\begin{acknowledgement}
We warmly thank Chris Parkinson for careful reading and
comments that noticeably improved the manuscript, David Crisp
for the code \texttt{LBLABC} and the anonymous referee for
thorough reading and useful comments on the manuscript. G.
Tinetti is supported by NASA Astrobiology Institute -- National
Research Council.
\end{acknowledgement}
\section{Introduction}
Prompt and afterglow emission in GRBs show different temporal and
spectral properties and are usually attributed to different mechanisms,
more specifically, internal and external shocks, respectively.
During the prompt emission, the spectrum is hard and shows
strong spectral evolution from hard to soft. However, the
afterglow emission is much softer, and its spectral index remains
roughly constant during the whole emission from early to
late observations \citep{frontera00,piro02}. The
transition from one regime to the other takes place from a few
tens to a few thousand seconds after the burst. In this phase, a
variety of temporal behaviour is observed in different bursts,
likely due to the contribution of both prompt and afterglow emission.
Most intriguing is the presence of X-ray flares, such as that observed
in \object{XRF 011030}.
Indeed, several bursts observed by BeppoSAX showed the
presence of X--ray flares from tens of seconds (e.g., \object{GRB
970228} \citep{frontera} and \object{GRB 980613} \citep{soffitta})
to several minutes (e.g., \object{GRB 011121} and X-Ray Rich \object{XRR 011211}
\citep{piro05}) after the burst. These flares have a soft spectrum
consistent with that of the late afterglow. Furthermore, they
connect with the late afterglow emission with a power law $F
\propto (t-t_0)^{-\alpha}$. For early ($<$ 1 minute) X--ray flares,
the origin of the time $t_0$ is consistent with the onset of the
prompt emission. For later ($\gtrsim$ 100 s) X--ray flares, $t_0$
needs to be shifted to the onset of the flare \citep{piro05}.
More recently, X--ray flares taking place in a similar time
period have been observed by Swift \citep{swift} in a larger
number of bursts, about one half of the sample \citep{obrien}
\footnote{We notice that flares and/or re-brightenings are
also taking place on longer time scales. In the present paper,
we focus on flares from times of minutes up to $\sim$ 1000 s,
i.e., on a time scale similar to that observed in \object{XRF 011030}.}.
Swift observations are providing a major advance in our understanding
of this phenomenon and its possible relationship with the central engine.
In particular, the discovery in \object{GRB 050502B} \citep{burrows}
of a giant flare, $\sim$ 700 s after the trigger, with an energy
comparable to that of the burst itself, suggests that the central
engine is undergoing long periods of strong activity.
Swift observations confirm that X-ray flares have a spectrum
that is globally softer than the prompt emission, i.e., the
peak energy $E_{peak}$ is of the order of a few keV \citep{falcone05}.
In some cases (\object{GRB 050126}, \object{GRB 050219a}, and
\object{GRB 050904}), no significant evidence of spectral evolution
is detected, and the spectrum of the flare is consistent with that
observed in the late afterglow \citep{tagliaferri05,goad05,burrows_rew}.
The light curve connecting the flare with the late afterglow can be
reasonably well-fitted by the $(t-t_0)^{-\alpha}$ power law
\citep{tagliaferri05}. However, in other cases (\object{XRF
050406} \citep{romano06}, \object{GRB 050502B}
\citep{burrows,falcone05}, \object{GRB 050421}, \object{GRB
050607}, \object{GRB 050730}, and \object{GRB 050724}
\citep{burrows_rew}), hard-to-soft spectral evolution
was observed during the flare.
Several scenarios were proposed to explain the X-ray flare
phenomenon \citep{zhang05}.
\citet{burrows} propose that the central engine releases energy
for a long time, and internal shocks then produce a long duration
prompt emission. In the framework of the forward-reverse shock
scenario, \citet{fan} have shown that, adopting different values
for the forward and the reverse shock parameters, the reverse
shock synchrotron radiation can dominate in the X--ray band
producing a flare. In another scenario, late X--ray flares
mark the beginning of the afterglow emission, and they are produced
by a thick shell fireball (long duration engine activity) through
an external shock \citep{piro05}. Very recently, \citet{wu05} have
shown that X--ray flares can be produced in the context of both
late internal and late external shocks. They assume that the central
engine releases energy in two episodes (i.e., an early and a late shell are ejected).
They applied their model to four Swift GRBs and found
that the flares of \object{XRF 050406} and \object{GRB 050607}
can be explained with either late internal or late external shocks.
In this paper, we present a complete analysis of \object{XRF 011030}
observed with BeppoSAX. It shows outburst activities, with
the last detected flare occurring about 1300 s after the burst.
We investigate the origin of this late X--ray flare. As mentioned
above, several models could explain this phenomenon. Here we have
carried out a detailed analysis of the model in which the flare
is produced by the interaction of the fireball with the external
medium. We check if the model can consistently account for
the broadband data -- from radio to X-rays. We then derive the
main parameters of the fireball, including the density profile of
the surrounding medium. We have also tested this model for the
X--ray flare occurring in \object{GRB 011121}.
We describe the observations of \object{XRF 011030} in Sect.
\ref{data} and perform its temporal and spectral analysis in
Sect. \ref{reduction} and \ref{spectra}, respectively. In
Sect. \ref{interpretation} we discuss late flares in the context
of different variants of the external shock model. In Sect.
\ref{modelapplication} we apply a long duration engine activity (thick
shell) model to the late flare appearing in \object{XRF 011030} and in \object{GRB 011121}, and we
explain it as the onset of the afterglow emission. In particular, in
Sect. \ref{break}, we study the \object{XRF 011030} late afterglow emission taking
into account the presence of a break occurring between $10^4$ and
$10^6$ s after the burst.
In Sect. \ref{optical} we use information on \object{XRF 011030} from the
optical and the radio band to further constrain the model. Our
results and conclusions are summarized in Sect. \ref{conclusions}.
\section{Observations and data reduction}
\label{data}
The X-ray flash \object{XRF 011030} was detected by the BeppoSAX Wide
Field Camera (WFC) no. 1 on October 30th, 2001, at 06:28:02 UT, without
any counterpart in the Gamma-Ray Burst Monitor (GRBM) \citep{gand}.
The peak flux is $7.5\times10^{-9}$~erg\,s$^{-1}$\,cm$^{-2}$, and the total
fluence of the source between 2 and 26 keV is $1.2\times10^{-6}$~erg\,cm$^{-2}$,
consistent with the typical value observed in the same
range for normal GRBs \citep{amati}.
The X-ray afterglow of \object{XRF 011030} was identified by Chandra in a 47 ks
exposure beginning on 2001 November 9.73 UT and in a second
one of 20 ks performed on 2001 November 29.44 UT \citep{harrison}.
The localization of the X--ray afterglow was consistent with
the position of a radio transient \citep{taylor}. The radio source
was detected on 2001 November 8.80 UT near the centre of the
WFC error circle at (epoch 2000) R.A.=20:43:32.3, Dec.=+77:17:18.9
with an error of $\pm1 \arcsec$. In this paper, we have used the results
of the analysis of the Chandra data performed by \citet{vale}. The
spectrum between 2 and 10 keV is fitted by a power law with a photon
index $\Gamma=1.72^{+0.19}_{-0.20}$ (Table \ref{analysis}).
Several optical observations were carried out, but none of them succeeded
in associating an optical counterpart with \object{XRF 011030}. The tightest
upper limits are R$>$21 at 0.3 days after the burst \citep{vijay} and
R$>$23.6 at 2.7 days after the burst \citep{rhoads01}.
The precise Chandra localization allowed the association of the
burst with a faint irregular blue galaxy observed by the Hubble
Space Telescope and the Keck. The photometric observations of
this galaxy suggest a redshift smaller than $z \sim$3.5, while
the low brightness of the galaxy suggests that $z >$ 0.6 \citep{bloom}.
Since the observations allow us to establish only a wide range of
redshift values, in our analysis we assume $z=1$.
\subsection{Temporal analysis of XRF 011030}
\label{reduction}
We produced background-subtracted light curves of \object{XRF 011030} normalized
to the detector effective area exposed to the source.
The source remained in the field of view (f.o.v.) of the WFC for
about 1 day. A significant source flux above the background level
is detected until $\sim$ 1600 s (see Fig. \ref{reburst50}).
A main pulse, lasting $\sim$ 400 s, starts $\sim$ 300 s after a
fainter preceding event. The main event is also followed by an
X--ray flaring activity, 200 s long, which appears $\sim$ 1300
s after the first pulse. This X--ray flare has duration and flux
similar to the pulse preceding the main event.
\begin{figure*}[!htb]
\centering
\includegraphics[height=14cm,width=10.2cm,angle=-90]{4448fig1.ps}
\caption{Light curve of \object{XRF 011030} in the BeppoSAX WFC (2--26 keV) with a temporal
resolution of 50 s. The main pulse (300--700 s) is preceded by a fainter event (0--300 s)
and is followed by a late X--ray flare (1300--1550 s).}
\label{reburst50}
\end{figure*}
As the event is an X--ray flash, its spectrum is soft by definition.
Thus, we cannot expect to find such substantial spectral differences
as those found in GRBs between the phases of precursor, prompt emission,
and late X--ray flare, i.e., with a precursor and late flare markedly
softer than the prompt emission \citep{piro05}.
In any case, due to the similarities of the bursting activities that preceded and
followed the main pulse in \object{XRF 011030} to those observed in \object{GRB 011121} \citep{piro05},
in the following we refer to these two pulses as precursor and flare, respectively.
After about 1600 s, no signal is detected and we can only estimate upper limits on the flux.
The light curve of \object{XRF 011030} with the upper limits between 1600 s and $10^5$ s together with the
late afterglow emission detected by Chandra is shown in Fig. \ref{afterglow}.
\begin{figure}[!htb]
\centering
\includegraphics[width=6.3cm,height=8.75cm,angle=-90]{4448fig2.ps}
\caption{\object{XRF 011030} light curve with the BeppoSAX upper limits on the flux fixed at 3$\sigma$
above the background fluctuations. The late afterglow emission detected by Chandra is
shown in red squares. [See the electronic edition for a colour version of this figure.]}
\label{afterglow}
\end{figure}
\subsection{Spectral analysis}
\label{spectra}
We extracted the spectrum between 2--26 keV from the WFC data. In
the spectral fitting, we tested a simple power law model (with and
without photoelectric absorption), a broken power law model, and a
black-body model. The results of our spectral analysis are
summarized in Table \ref{analysis}. All errors are quoted at 1
$\sigma$ (68\% confidence level).
The whole spectrum of \object{XRF 011030}, integrated from 0 s to
1550 s, can be described by a simple power law (Fig. \ref{fit}) with a
photon index $\Gamma$=$1.84^{+0.17}_{-0.16}$, consistent with the
simple power law photon index $\Gamma=1.9\pm0.1$ determined by
\citet{heise}. The fit with this model gives $\chi^2_{\nu}=0.83$;
25 degrees of freedom (d.o.f.).
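This fit can be reproduced schematically on synthetic data (a minimal
Python sketch: the real analysis folds the model through the WFC response
matrix, which is ignored here, and the bin number and normalization are
arbitrary choices of ours):
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def powerlaw(E, K, Gamma):
    return K * E**(-Gamma)          # photons cm^-2 s^-1 keV^-1

E = np.geomspace(2.0, 26.0, 27)     # 27 bins -> 25 d.o.f. for 2 parameters
rng = np.random.default_rng(0)
truth = powerlaw(E, 0.1, 1.84)      # synthetic spectrum with the best-fit slope
err = 0.1 * truth
flux = truth + rng.normal(0.0, err)

popt, pcov = curve_fit(powerlaw, E, flux, sigma=err,
                       absolute_sigma=True, p0=[0.1, 1.8])
chi2_nu = np.sum(((flux - powerlaw(E, *popt)) / err)**2) / (len(E) - 2)
print(f"Gamma = {popt[1]:.2f} +/- {np.sqrt(pcov[1, 1]):.2f}, "
      f"chi2_nu = {chi2_nu:.2f}")
\end{verbatim}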
\begin{figure}[!htb]
\centering
\includegraphics[height=8.8cm,width=6.3cm,angle=-90]{4448fig3.ps}
\caption{ $\nu$F$_{\nu}$ spectrum of the total event.
The solid line represents the fit of \object{XRF 011030} with a simple power law model.}
\label{fit}
\end{figure}
A fit of the spectrum with a broken power law having the photon index
$\Gamma_1$ free to vary and the photon index $\Gamma_2$
fixed to the typical value 2.5 \citep{amati} did not bring a
significant improvement of $\chi^2$. Also, the fit with a power law
with a photoelectric absorption did not bring a significant
improvement of $\chi^2$, and led to an upper limit on the absorption
column density $N_H<1.5\times10^{23}cm^{-2}$ at z=1. Finally, the fit
with a black-body model gives a $\chi^2_{\nu}$ value greater
than 2, and it can thus be rejected.
We studied the spectral evolution of \object{XRF 011030} by dividing the data into
four intervals: the precursor (from 35 s to 280 s), the first segment
of the prompt emission (from 280 s to 500 s), the second segment of
the prompt emission (from 500 s to 1200 s), and the flare (from 1300 s
to 1550 s). We observed only marginal spectral variability.
The precursor spectrum is well-described by a simple power law with
a photon index $\Gamma$=$2.61^{+0.76}_{-0.61}$, marginally steeper
than the spectrum of the main event. This fit gives $\chi^2_{\nu}$=0.81;
25 d.o.f. The precursor can also be described by a black-body model
with a temperature $kT$=$0.90^{+0.19}_{-0.15}~keV$ (see Fig. \ref{bbody})
and $\chi^2_{\nu}$=0.96; 25 d.o.f. This result is interesting because
there is only one burst, observed by GINGA, whose spectrum is consistent
with a black body \citep{murakami}. However, we cannot discriminate between
these two models.
\begin{figure}[!htb]
\centering
\includegraphics[height=8.8cm,width=6.3cm, angle=-90]{4448fig4.ps}
\caption{$\nu$F$_{\nu}$ spectrum of the precursor. The solid line is
the fit of the precursor with a black-body model.}
\label{bbody}
\end{figure}
The first and the second parts of the prompt emission are
both fitted by a simple power law; for the first part, the
photon index is $\Gamma$=$1.78^{+0.17}_{-0.16}$ and
$\chi^2_{\nu}$=1.24; 25 d.o.f., and for the second one,
$\Gamma$=$1.63^{+0.33}_{-0.30}$ and $\chi^2_{\nu}$=1.48; 25 d.o.f.
Finally, the late X--ray flare is also fitted by a simple power
law. Its spectrum is marginally steeper than those of the main
event, as we also found in the case of the precursor:
$\Gamma$=$2.10^{+0.83}_{-0.64}$ with $\chi^2_{\nu}$=1.51; 25
d.o.f.
\begin{table*}[!htb]
\caption{ Results of the spectral analysis of \object{XRF 011030}. The models used are: Power Law (PL),
BroKeN power law (BKN), Power Law plus photoelectric ABSorption (PL+ABS) and Black Body (BB).}
\label{analysis}
\begin{tabular}{c c c c c c c c c c}
\hline
name & interval & model & photon & $N_H$ & $E_b$ & kT & $Flux_{2-26~keV}$ & $\chi^2_{\nu}$ & d.o.f. \\
 & (s) & & index $\Gamma$ & ($cm^{-2}$, z=1)& (keV) & (keV) & [$erg \cdot cm^{-2}\cdot s^{-1}$] & & \\
\hline \hline
total & 0-1550 & PL & $1.84^{+0.17}_{-0.16}$ & --- & --- & --- & $1.3\cdot10^{-9}$ & 0.83 & 25 \\
event & --- & BKN & $1.77^{+0.19}_{-0.23}$ & --- & $<11$ & --- & $1.3\cdot10^{-9}$ & 0.81 & 24 \\
& --- & PL+ABS & $1.88^{+0.27}_{-0.14}$ & $<1.5\cdot10^{23}$ & --- & --- & $1.2\cdot10^{-9}$ & 0.86 & 24 \\
\hline
precursor & 35-280 & PL & $2.61^{+0.76}_{-0.61}$ & --- & --- & --- & $5.8\cdot10^{-10}$ & 0.81 & 25 \\
& --- & PL+ABS & $2.44^{+2.06}_{-0.44}$ & $<7\cdot10^{22}$ & --- & --- & $5.5\cdot10^{-10}$ & 0.87 & 24 \\
& --- & BB & --- & --- & --- & $0.90^{+0.19}_{-0.15}$ & $3.7\cdot10^{-10}$ & 0.96 & 25 \\
\hline
burst & 280-500 & PL & $1.78^{+0.17}_{-0.16}$ & --- & --- & --- & $2.5\cdot10^{-9}$ & 1.24 & 25 \\
part 1 & --- & BKN & $1.59^{+0.23}_{-0.27}$ & --- & $9.1^{+4.0}_{-2.1}$ & --- & $2.3\cdot10^{-9}$ & 1.08 & 24 \\
& --- & PL+ABS & $2.23^{+0.36}_{-0.31}$ & $<4.4\cdot10^{23}$ & --- & --- & $2.1\cdot10^{-9}$ & 1.16 & 24 \\
\hline
burst & 500-1200 & PL & $1.63^{+0.33}_{-0.30}$ & --- & --- & --- & $8.9\cdot10^{-10}$ & 1.48 & 25 \\
part 2 & --- & BKN & $0.39^{+1.49}_{-0.31}$ & --- & $<4.4$ & --- & $7.8\cdot10^{-10}$ & 1.6 & 24 \\
& --- & PL+ABS & $1.93^{+0.74}_{-0.53}$ & $<8.6\cdot10^{22}$ & --- & --- & $8.1\cdot10^{-10}$ & 1.65 & 24 \\
\hline
X-ray late & 1300-1550 & PL & $2.10^{+0.83}_{-0.64}$ & --- & --- & --- & $4.9\cdot10^{-10}$ & 1.51 & 25 \\
flare & --- & BKN & $4.18\pm10$ & --- & $<3.0$ & --- & $5.8\cdot10^{-10}$ & 1.7 & 24 \\
& --- & PL+ABS & $2.10^{+3.93}_{-0.64}$ & $<1.3\cdot10^{23}$ & --- & --- & $4.9\cdot10^{-10}$ & 1.72 & 24 \\
\hline
afterglow$^1$ & $(9.24-9.71)\cdot10^5$ & PL+ABS & $1.72\pm0.20$ & $2.96^{+0.60}_{-0.65}\cdot10^{21}$ & --- & --- & $5.8\cdot10^{-14}$ & 0.76 & 9 \\
\hline
\end{tabular}
$^1$From \citet{vale}; the flux is in the 2-10 keV range.
\end{table*}
\section{The late X--ray flare in the context of external shock models}
\label{interpretation}
Among the different models proposed for X-ray flares
\citep{zhang05}, we choose to analyse in detail some of the models
based upon an external shock origin, motivated by the spectral similarity
observed in the flare and afterglow phases, straightforwardly accounted
for in this scenario. A detailed analysis is carried out to check the
capability of the model to account for the whole set of broadband data.
In what follows, we first try to explain the late flare of \object{XRF 011030} as being due to
external shock in a ``standard'' fireball model (i.e., thin shell case, \citet{sari}),
with a continuous or discontinuous density profile. Since the flare cannot be
described by this model, considering the similarity of \object{XRF 011030} with \object{GRB 011121},
we finally explain it by shifting the origin of time $t_0$ to the instant of the
flare, which corresponds to a thick shell fireball.
\subsection{The Fireball Model: the standard ``thin'' shell case}
\label{standard}
In a ``standard'' approach, the Fireball Model assumes a thin
shell \citep{sari} that expands with spherical symmetry either in
a constant density medium or in a wind profile environment. In
this framework, the emitted flux reaches its maximum at the
deceleration radius $r_0$ and then starts to decrease. However, it
does this too slowly to be consistent with the onset and decay of
the flare. This appears clearly in Fig. \ref{standardism}, where
the calculated light curve for a thin shell fireball expanding in
an ISM is shown.
\begin{figure}[!htb]
\centering
\includegraphics[height=8.75cm,width=6.3cm,angle=-90]{4448fig5.ps}
\caption{X--ray light curve of \object{XRF 011030} for a standard thin
shell fireball expanding in an ISM with $E_{53}=0.03$, $\Gamma_0=45$,
$n=1$, $\varepsilon_e=0.3$, $\varepsilon_B=0.05$, and $p=2.1$.}
\label{standardism}
\end{figure}
Moreover, we note that a thin shell fireball also fails to explain the
emission at about 1000 s. In such a case, the deceleration time of the fireball
$t_{dec}$ is greater than the duration of the central engine activity $t_{eng}$,
thus prompt and afterglow emission are well separated, and one would expect no
emission between these two phases.
\subsection{Model with a discontinuous density profile}
\label{discontinuity}
A discontinuous density profile can be produced by a variable
activity of wind emission or by interaction of the wind bubble
with the external uniform medium.
When the fireball expands in a medium characterized by a sudden increase
of density, one could expect that a larger number of photons is produced
and the flux increases quickly. This is true only when the electron
cooling frequency $\nu_c$ is greater than the observational frequency
$\nu_{obs}$. On the contrary, when $\nu_{c}<\nu_{obs}$, the
emitted flux is independent of the density profile both during the
slow cooling and the fast cooling regimes \citep{panaitescu00}.
Let us then verify if the cooling frequency could be located above
the X-ray range at the time of the flare.
\begin{flushleft}
A fireball expanding in an ISM decelerates at a time $t_{dec}$ \citep{panaitescu00}:
\begin{equation}\label{eq:tempoism}
t_{dec}\simeq46.7E_{51}^{1/3}n^{-1/3}\Gamma_{0,2}^{-8/3}~s.
\end{equation}
During the deceleration phase (for $t>t_{dec}$), the cooling
frequency $\nu_c$ is given by \citep{panaitescu00}:
\begin{equation}\label{eq:cool}
\nu_c\simeq3.4\times10^{16}E_{51}^{-1/2}n^{-1}\varepsilon_{B,-2}^{-3/2}t_3^{-1/2}~Hz.
\end{equation}
\end{flushleft}
Except for very low values of the Lorentz factor, $\Gamma_0<30$,
the flare occurs during the deceleration phase and the cooling
frequency $\nu_c$ is given by Eq. \ref{eq:cool}. This
equation indicates that, for typical XRF energies and parameter
values, at the time of the flare, $t_3\simeq1$, the cooling
frequency $\nu_c$ could be higher than the observational frequency
$\nu_{obs}$ (that is, in the X--ray band) only for small values
of density, $n<0.07$. During the deceleration phase, $\nu_c$
decreases with time and will pass below the X-ray range at
later times.
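Both statements can be verified numerically (a simple Python sketch
adopting the ``typical'' values $E_{51}=1$ and $\varepsilon_B=10^{-2}$
assumed in the text; 2 keV corresponds to $\nu_{obs}\simeq4.8\times10^{17}$ Hz):
\begin{verbatim}
def t_dec_ism(E51, n, Gamma0):      # deceleration time of Eq. above, [s]
    return 46.7 * E51**(1/3) * n**(-1/3) * (Gamma0/100.0)**(-8/3)

def nu_c_ism(E51, n, eps_B, t):     # cooling frequency of Eq. above, [Hz]
    return 3.4e16 * E51**(-0.5) / n * (eps_B/1e-2)**(-1.5) * (t/1e3)**(-0.5)

nu_obs = 4.8e17                     # 2 keV
print(t_dec_ism(1.0, 1.0, 30.0))    # ~1200 s: only Gamma0 <~ 30 would delay
                                    # the deceleration beyond the flare time
print(nu_c_ism(1.0, 0.07, 1e-2, 1e3) / nu_obs)  # ~1: nu_c > nu_obs needs n <~ 0.07
\end{verbatim}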
\begin{flushleft}
If the fireball starts its expansion in a wind density profile,
the deceleration time $t_{dec}$ is \citep{panaitescu00}:
\begin{equation}\label{eq:tempowind}
t_{dec}\simeq6.67E_{51}A_{*,-2}^{-1}\Gamma_{0,2}^{-4}~s
\end{equation}
and during the deceleration regime
\begin{equation}\label{eq:coolwind}
\nu_c\simeq3.77\times10^{16}E_{51}^{1/2}A_{*,-2}^{-2}\varepsilon_{B,-2}^{-3/2}t_3^{1/2}~Hz.
\end{equation}
\end{flushleft}
Also in this case, for $A_{*}\lesssim 10^{-3}$, $\nu_c$ can be
higher than $\nu_{obs}$ at the flare time. Moreover, now $\nu_c$
increases with time, and therefore the X-ray afterglow emission
will remain sensitive to density variations.
Thus, for a suitable range of parameters, at the
time of the flare the X-ray emission can be sensitive to density.
However, the duration and amplitude of the flare are not consistent
with the kinematic upper limit recently established by \citet{ioka}
on the flares produced by the interaction of the fireball with density
discontinuities. In particular, if we assume that the burst is observed
on-axis, this upper limit is \citep{ioka}:
\begin{equation}\label{eq:upperlimit}
\frac{\Delta F_\nu}{F_\nu}\leq \frac{4}{5}f_c^{-1}\frac{F}{\nu
F_\nu}\frac{\Delta t}{t-t_0},
\end{equation}
where $f_c^{-1}\sim (\nu_i/\nu_c)^{(p-2)/2}$ is the fraction of cooling
energy and $F/\nu F_{\nu}\sim(\nu_{obs}/\nu_c)^{(p-2)/2}$ for $\nu_c<\nu_{obs}$.
The flare duration is $\Delta t \simeq 200$ s in \object{XRF 011030},
and the time of the flare occurrence is $(t-t_0)\simeq 1300$ s, where the
time is counted from $t_0$, i.e., in the case of a thin shell, from the
initial trigger. Thus $\Delta t/t \sim 0.15$, and with typical parameter
values, Eq. \ref{eq:upperlimit} implies $\Delta F_{\nu}/F_{\nu} \leq$ 0.25,
while from the X--ray data of \object{XRF 011030} we derive $\Delta F_{\nu}/F_{\nu} \sim$ 3.6.
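This estimate can be evaluated explicitly (a sketch; the two order-unity
spectral correction factors are assumed values, chosen so that their
product reproduces the $\simeq0.25$ quoted above):
\begin{verbatim}
dt, t_since_t0 = 200.0, 1300.0  # flare duration and occurrence time [s]
fc_inv = 1.3                    # (nu_i/nu_c)^((p-2)/2), order unity for p ~ 2.1 (assumed)
F_over_nuFnu = 1.6              # (nu_obs/nu_c)^((p-2)/2), order unity (assumed)
bound = (4.0/5.0) * fc_inv * F_over_nuFnu * dt / t_since_t0
print(bound)                    # ~0.26, versus the observed dF/F ~ 3.6
\end{verbatim}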
We point out that the case discussed above applies only to
a density discontinuity with a shell geometry. \citet{dermer99}
have shown that a clumpy medium would be able to produce highly
variable light curves through external shock if the radii of the clouds
are very small in comparison to their distance from the central
engine. This process can explain X-ray flares up to thousands of
seconds \citep{dermer05}.
\subsection{Long duration engine activity: the thick shell model }
\label{reburst011030}
In the following, we show how we can describe the flare in the
context of the external shock scenario by shifting the origin of
the time $t_0$ to the onset of the flare.
From a theoretical point of view, the onset of the external shock
depends on the dynamical regime of the fireball that is strictly
related to the ``thickness'' of the shell \citep{sari}, i.e., to
the duration of the engine activity. In fact, a shell is defined
as being thin or thick depending on its thickness $\Delta=ct_{eng}$, where
$t_{eng}$ is the duration of the engine activity, and also on its
initial Lorentz factor $\Gamma_0$. The shell is defined to be
thick if $(E/nm_pc^2)^{1/3}\Gamma_0^{-8/3}<\Delta$ \citep{sari}.
For our purpose, we rewrite the above equation substituting the
deceleration time given in Eq. (3) of \citet{piro05}, obtaining
$t_{eng} \gtrsim t_{dec}$ for the thick shell condition. Most of
the energy is transferred to the surrounding material at $t_{dec}$
for thin shells or at $t_{eng}$ for thick shells. In the latter
case, the peak of the afterglow emission therefore coincides with
$t_{eng}$. Also, the afterglow decay will be described by a
power-law only if the time is measured starting from the time at
which the inner engine turns off, i.e., $t_0\simeq t_{eng}$.
According to \citet{lazzati}, this should happen when the
central engine releases most of the energy during the last phase
of its activity.
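The thick shell condition can be checked directly (a sketch in cgs
units, with $E$ the isotropic-equivalent kinetic energy; the parameter
values are representative of the solutions discussed below):
\begin{verbatim}
m_p, c = 1.6726e-24, 2.9979e10                # [g], [cm/s]

def is_thick(E, n, Gamma0, t_eng):
    # Thick shell if (E / n m_p c^2)^(1/3) * Gamma0^(-8/3) < c * t_eng
    return (E / (n * m_p * c**2))**(1/3) * Gamma0**(-8/3) < c * t_eng

print(is_thick(3e51, 1.0, 110.0, 1300.0))     # True: t_eng ~ 1300 s gives a thick shell
\end{verbatim}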
In this context, the flare would thus be produced by the external
shock caused by an energy injection lasting until the time of the flare
occurrence, i.e., $t_{eng}\approx 1300$s.
The hypothesis of external shock for the flare offers a
straightforward explanation of the spectral similarity with the
late afterglow data.
We also notice that in this model the early afterglow
emission is mixed with the (internal shock) GRB emission \citep{sari}.
In this context, the emission observed at 1000 s can be attributed
to internal shock while the flare represents the onset of the afterglow emission.
To develop our model, we used the prescriptions of the so-called
standard fireball model \citep{panaitescu00}. They offer analytic
solutions only at distances greater than the deceleration radius
$r_0$. But we are also interested in the so-called coasting and
transition phases \citep{zhang} because we want to study and to
reproduce the shape of the flare, its rise included. As mentioned
above, we have taken into account the thick shell variant by
introducing a time shift $t_0$, i.e., implying that most of the
energy for the external shock is carried at $t_0 \approx t_{eng}$.
We thus numerically solved the basic equations of the fireball
model. The program requires the parameters of the model,
namely the initial value of the Lorentz factor of the relativistic
shell $\Gamma_0$, the energy $E_{53}$ in units of $10^{53}$ erg,
the electron population index $p$, the fraction of
energy going into relativistic electrons $\varepsilon_e$, the
fraction of energy going in magnetic field $\varepsilon_B$, and the
density of the external medium $n$ (cm$^{-3}$) or $A_*$
(cm$^{-1}$) as input. The density profile is described by the law
$n=3.0\cdot10^{35}A_*r^{-s}$, where in the case of an ISM, $s=0$,
while in the case of a wind profile environment, $s=2$.
When not stated otherwise, we have taken $E_{53}$=0.03,
assuming that all the kinetic energy is converted into $\gamma$-rays
and that the redshift is $z=1$. From our spectral analysis, we find
$p=2.1$. To determine the other parameters values,
we performed a study devoted to understanding how they influence the
calculated light curve.
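For illustration, the skeleton of such an integration in the ISM case
could look as follows (a minimal Python sketch, not the code actually
used: it adopts an approximate adiabatic deceleration equation and only
returns the dynamics and observer times; the synchrotron flux would then
follow from the standard spectral characteristics at each step, and the
default parameter values are those of the figures below):
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

m_p, c = 1.6726e-24, 2.9979e10

def fireball_dynamics(E=3e51, n=1.0, Gamma0=85.0, t0=1300.0, z=1.0):
    M0 = E / (Gamma0 * c**2)                       # ejecta rest mass
    def rhs(r, y):
        Gamma, t = y
        m  = (4.0*np.pi/3.0) * n * m_p * r**3      # swept-up ISM mass
        dm =  4.0*np.pi      * n * m_p * r**2
        dGamma = -(Gamma**2 - 1.0) * dm / (M0 + 2.0*Gamma*m)  # adiabatic limit
        beta = np.sqrt(1.0 - 1.0/Gamma**2)
        dt = (1.0 - beta) / (beta * c)             # on-axis photon arrival time
        return [dGamma, dt]
    r = np.geomspace(1e14, 1e18, 400)
    sol = solve_ivp(rhs, (r[0], r[-1]), [Gamma0, 0.0], t_eval=r, rtol=1e-8)
    return r, sol.y[0], (1.0 + z)*sol.y[1] + t0    # time origin shifted to the flare

r, Gamma, t_obs = fireball_dynamics()
i = np.argmin(np.abs(Gamma - 85.0/2.0))            # Gamma0/2 as a rough peak proxy
print(f"Gamma = Gamma0/2 at t_obs ~ {t_obs[i]:.0f} s")
\end{verbatim}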
We investigated the effect of model parameters on the X-ray light
curve produced by a thick shell fireball with spherical symmetry
expanding in an ISM. The origin of the time, $t_0$, is shifted to
the instant of the flare, 1300 s.
We show the X--ray flux between 2-10 keV obtained by numerical
integration of the specific energy flux.
First we find that, since the X-ray emission is typically above
the cooling frequency, particularly at late times,
the density $n$ and $\varepsilon_B$ have only a marginal effect
on the normalization of the X-ray light curve.
Figure \ref{density} shows, for example, the effects of the
density $n$ of the external medium in which the fireball expands.
Differences are appreciable only at early times, when the
observational frequency $\nu_{obs}$ is smaller than the cooling
frequency $\nu_c$.
\begin{figure}[!htb]
\centering
\includegraphics[height=8.75cm,width=6.3cm,angle=-90]{4448fig6.ps}
\caption{Effects of density $n$ of the external medium on the X--ray light
curve with the origin of the time shifted to $t_0$=1300 s.
For the red curve $n$=5 $cm^{-3}$, for the blue curve $n$=1 $cm^{-3}$, and for the green
curve $n$=0.1 $cm^{-3}$. The X--ray light curve peak shows a small increase with $n$.
The other model parameters are $E_{53}$=0.03, $\Gamma_0$=85, $\varepsilon_e$=0.01,
$\varepsilon_B$=0.05, and $p$=2.2.}
\label{density}
\end{figure}
The effects of the parameters $E_{53}$ and $\varepsilon_e$ are
presented in Fig. \ref{energy}. The normalization of the X-ray
light curve depends mostly on the product of $E_{53}$ and
$\varepsilon_e$, and follows a roughly linear dependence (for
$p\approx2.2$).
\begin{figure}[!htb]
\centering
\includegraphics[height=8.75cm,width=6.3cm,angle=-90]{4448fig7.ps}
\caption{ Effects of the energy $E_{53}$ and $\varepsilon_e$ on the
X--ray light curve with the origin of the time shifted to $t_0$=1300
s. For the green curve $E_{53}=0.03$ and $\varepsilon_e=0.1$, for
the blue curve $E_{53}=0.3$ and $\varepsilon_e=0.01$, for the
light blue curve $E_{53}=0.03$ and $\varepsilon_e=0.01$, for the
red curve $E_{53}=0.003$ and $\varepsilon_e=0.01$, and finally for
the orange curve $E_{53}=0.03$ and $\varepsilon_e=0.001$.
The other model parameters are $n$=1, $\Gamma_0$=85, $\varepsilon_B$=0.05,
and $p$=2.2. Note that the emitted flux increases with $E_{53}$ and $\varepsilon_e$,
and that these two parameters have about the same influence in determining
the normalization of the X--ray light curve.} \label{energy}
\end{figure}
Figure \ref{lorentzfactor} shows the effects of the initial Lorentz
factor $\Gamma_0$. Its value influences both the height and the
width of the peak in the X--ray light curve. The greater
$\Gamma_0$ is, the higher and narrower the peak is. For $t \gg t_0$,
i.e., when the deceleration phase has been reached for the
different values of $\Gamma_0$, the light curve is independent of
$\Gamma_0$.
\begin{figure}[!htb]
\centering
\includegraphics[height=8.75cm,width=6.3cm,angle=-90]{4448fig8.ps}
\caption{ Effects of the initial Lorentz factor $\Gamma_0$ on the X--ray light
curve with the origin of the time shifted to $t_0$=1300 s. The blue curve was
obtained for $\Gamma_0=200$, the green curve for $\Gamma_0=85$, and the red curve for
$\Gamma_0=30$. Note that the peak of the curve increases with $\Gamma_0$. The other
model parameters are $E_{53}$=0.03, $n$=1, $\varepsilon_e$=0.01, $\varepsilon_B$=0.05,
and $p$=2.2.}
\label{lorentzfactor}
\end{figure}
\section{External shock from long duration engine activity in XRF 011030 and GRB 011121}
\label{modelapplication}
We applied the model described in the previous section to \object{XRF 011030} and also
to \object{GRB 011121}. In the case of \object{XRF 011030}, we studied the
event both in an ISM and in a wind profile environment, producing the
calculated light curves and finding a family of solutions corresponding to
several choices of the model parameters. In Figs. \ref{ism} and \ref{wind}
we report two possible solutions for a fireball expanding in an ISM and in
a wind profile environment, respectively, with the origin of the time
shifted to the onset of the flare, $t_0=1300$ s. Small changes
($\lesssim 10$\%) of $t_0$ do not appreciably modify the results.
\begin{figure}[!htb]
\centering
\includegraphics[height=8.75cm,width=6.3cm,angle=-90]{4448fig9.ps}
\caption{X--ray light curve of \object{XRF 011030} in an ISM obtained shifting the origin of
the time to the onset of the flare, $t_0=1300$s. The model parameters are
$E_{53}=0.03$, $\Gamma_0=110$, $n=1$, $\varepsilon_e=0.2$, $\varepsilon_B=0.05$,
and $p=2.1$.}
\label{ism}
\end{figure}
\begin{figure}[!htb]
\centering
\includegraphics[height=8.75cm,width=6.3cm,angle=-90]{4448fi10.ps}
\caption{X--ray light curve of \object{XRF 011030} in a wind obtained shifting the
origin of the time to the onset of the flare, $t_0=1300 s$. The model parameters are
$E_{53}=0.03$, $\Gamma_0=50$, $A_*=0.05$, $\varepsilon_e=0.1$, $\varepsilon_B=0.05$,
and $p=2.1$.}
\label{wind}
\end{figure}
We find that the calculated light curves can describe the flare,
both for a fireball expanding in a wind profile environment and
for a fireball interacting with a uniform medium. These two light
curves do not fit the late afterglow data; this will be discussed
in the next section.
In the case of the flare of \object{GRB 011121}, we computed the light
curve only for a fireball interacting with a wind density profile. In fact,
\citet{piro05} established that the \object{GRB 011121} X--ray and optical data
are consistent with a fireball expanding in a wind environment
due to the temporal decay observed in these two bands.
Using their parameters and shifting $t_0$ to the onset of the
flare, we find the light curve of Fig. \ref{011121}. The model
describes the flare and the late afterglow, in agreement with the
analysis made by \citet{piro05}.
\begin{figure}[!htb]
\centering
\includegraphics[width=6.3cm,height=8.76cm,angle=-90]{4448fi11.ps}
\caption{ X--ray light curve of \object{GRB 011121} for a fireball expanding in a wind with
the origin of the time shifted to the instant of the flare, $t_0=250$ s. The
model parameters are $E_{53}=0.28$, $\Gamma_0=130$, $A_*=0.003$, $\varepsilon_e=0.01$,
$\varepsilon_B=0.5$, and $p=2.5$.}
\label{011121}
\end{figure}
\subsection{The interpretation of the break in the light curve of XRF 011030}
\label{break}
In Fig. \ref{afterglow}, we note that the backward extrapolation
of the late afterglow flux detected by Chandra is not compatible
with the upper limits observed by BeppoSAX, suggesting the
presence of a temporal break.
First we considered the possibility that the temporal break is
related to a spectral break, i.e., to the passage of the cooling
frequency $\nu_c$ in the X-ray band. We first studied the case of
a wind density profile, in which $\nu_c$ increases with time as
$t^{1/2}$.
Initially, $\nu_c$ can be smaller than $\nu_{obs}$, but there will be
an instant at which it becomes greater than $\nu_{obs}$. This marks a
break in the light curve, which becomes steeper by $\delta \alpha=0.25$.
The observational data of \object{XRF 011030} suggest that the break occurs
between $10^{4}$ and $10^{6}$ s after the burst. We thus require
that $\nu_c$ passes in the X--ray band during this temporal range.
For a given break time $T_b$, Eq. (\ref{eq:coolwind})
links $A_*$ with $\varepsilon_B$.
We derive $\varepsilon_e$ and $\Gamma_0$ values to
reproduce the light curve of the flare as described in the
previous section. $A_*$ is constrained from late X-ray data and
also optical and radio data (see next section). Finally, the
corresponding value of $\varepsilon_B$ is constrained from
Eq. (\ref{eq:coolwind}).
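For instance, the value of $\varepsilon_B$ implied by a given break
time follows from inverting Eq. (\ref{eq:coolwind}) (a sketch; the
representative X--ray frequency $\nu_X=10^{18}$ Hz and the input values
correspond to the wind solution discussed below):
\begin{verbatim}
def eps_B_from_break(E51, Astar, T_b, nu_X=1e18):
    # eps_B such that nu_c(T_b) = nu_X, from the wind nu_c equation above
    nu_ref = 3.77e16 * E51**0.5 * (Astar/1e-2)**(-2) * (T_b/1e3)**0.5
    return 1e-2 * (nu_ref / nu_X)**(2.0/3.0)

print(eps_B_from_break(30.0, 0.055, 1e5))  # ~1.7e-3, close to eps_B = 0.001 adopted below
\end{verbatim}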
We first attempt to find a broadband solution with the
isotropic energy fixed to $E_{53}=0.03$ (see Sect. \ref{reburst011030}).
In this case, we have problems fitting the radio data. We find a set
of model parameters able to describe the emission observed in the
X--ray and optical bands, but the corresponding radio light curve
is always below the observational data. When the fireball expands
in a stellar wind, the flux in the radio band goes as:
\begin{equation}\label{eq:radiofluxwind}
F_{\nu}\propto E_{53}^{1/3}A_*\varepsilon_{e,-1}^{-2/3}\varepsilon_{B,-3}^{1/3}.
\end{equation}
The parameters $\varepsilon_e$ and $\varepsilon_B$ are determined
by the X-ray and optical data (see the next section for more detail). Then,
if we keep $E_{53}=0.03$, to obtain the right normalization of
the radio light curve we need to increase the wind density.
On the other hand, this will also cause the normalization of the X--ray
and optical light curves to increase and surpass the observational
data. This has motivated us to assume an efficiency $\eta=0.1$ for the
conversion of the kinetic energy released by the central engine into $\gamma$--rays.
This choice is supported by several authors. \citet{guetta01} and \citet{koba01}
have argued that internal shocks convert the energy with an efficiency
$\eta \sim 0.1-0.5$, and this was also recently supported by Swift
observations \citep{zhang05,granot06}. Under this assumption (i.e.,
$E_{53}=0.3$) we find that it is possible to explain the radio data
jointly with the X-ray and optical data, obtaining a broadband solution.
In Fig. \ref{flat}, we show the calculated X--ray light curve.
\begin{figure}[!htb]
\centering
\includegraphics[height=8.75cm,width=6.3cm,angle=-90]{4448fi12.ps}
\caption{ X--ray light curve of \object{XRF 011030} in a wind with the origin of the time
shifted to $t_0=1300 s$. This solution also takes into account the radio and
optical data. The model parameters are $E_{53}=0.3$, $\Gamma_0=60$,
$A_*=0.055$, $\varepsilon_e=0.02$, $\varepsilon_B=0.001$, and $p=2.1$. We have
assumed the efficiency in the conversion of the kinetic energy to be $\eta$=0.1.}
\label{flat}
\end{figure}
It is interesting to note that before the break ($\nu_c<\nu_{obs}$), i.e., during
the flare, the expected photon index is $\Gamma=(p/2+1)=2.05$, consistent with the
value found in our spectral analysis (Sect. \ref{spectra}). After the break
($\nu_c>\nu_{obs}$), the expected photon index is $\Gamma=[(p-1)/2+1]=1.55$,
which agrees with the value $\Gamma=1.72\pm0.20$ found by \citet{vale} in the
analysis of the Chandra data.
\begin{flushleft}
With regard to the fireball temporal evolution after the break,
$\nu_c>\nu_{obs}$ \citep{panaitescu02}:
\begin{equation}\label{eq:aftertimewind}
F\propto t^{-(3p-1)/4},
\end{equation}
\end{flushleft}
and we expect that the temporal decay index is $\alpha_2=1.325$.
The analysis of the Chandra data shows that after the break
$\alpha_2=2.25\pm0.60$ \citep{vale}; these two values
are only marginally consistent.
Similar considerations can be made for a fireball expanding in an
ISM. In this case, the cooling frequency $\nu_c$ decreases with
time as $t^{-1/2}$. Supposing $\nu_c>\nu_{obs}$ before the flare,
there will be an instant when $\nu_c$ becomes smaller than
$\nu_{obs}$ and a break occurs.
\begin{flushleft} Now, after the break, the temporal decay is
slower than the case of a wind density profile \citep{panaitescu02}:
\begin{equation}\label{eq:aftertimeism}
F\propto t^{-(3p-2)/4}
\end{equation}
\end{flushleft}
and the temporal decay index expected after the break is
$\alpha_2=1.075$, not consistent with the Chandra data. Moreover
the spectrum after the break steepens, in disagreement with the
spectral data.
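The index arithmetic of the last two paragraphs can be collected in a
few lines (a simple check for $p=2.1$; the jet case discussed next is
included for completeness):
\begin{verbatim}
p = 2.1
print("photon index, nu_obs > nu_c:", p/2 + 1)         # 2.05 (flare: 2.10 +0.83/-0.64)
print("photon index, nu_obs < nu_c:", (p - 1)/2 + 1)   # 1.55 (Chandra: 1.72 +/- 0.20)
print("decay index, wind, after break:", (3*p - 1)/4)  # 1.325 (Chandra: 2.25 +/- 0.60)
print("decay index, ISM,  after break:", (3*p - 2)/4)  # 1.075
print("decay index after a jet break:", p)             # 2.1
\end{verbatim}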
In the ISM case, it is therefore even more difficult than in the wind
case to explain the late afterglow emission without introducing
a jet structure. The emission coming from a relativistic shell with jet
symmetry is similar to the one of a spherical fireball, as long as the
observer is on the jet axis, and the jet Lorentz factor $\gamma$ is greater
than the inverse of its angular spread $\theta_0$ \citep{rhoads97}. During
its expansion, the fireball collects a growing amount of matter;
thus, the Lorentz factor $\gamma$ decreases and there is an instant
when $\gamma<\theta_0^{-1}$. At this time, the sideways spread of
the jet becomes important and the observed area grows more quickly.
This leads the flux to decrease more rapidly with respect to the
spherical case, and we expect a break in the light curve
\citep{sari99}. \citet{sari99} calculated that at high frequencies
the flux decreases like $t^{-p}$ both when $\nu_{obs}>\nu_c$ and
when $\nu_{obs}<\nu_c$.
Thus, with the electron population index $p=2.1$, the predicted
temporal behaviour agrees with the two Chandra observations.
Once the sideways expansion of the jet becomes important, the
cooling frequency $\nu_c$ is constant with time \citep{panaitescu01}
and the spectrum should not evolve. In our data, the \object{XRF 011030} spectral
evolution is only marginally significant; in fact, the photon index
$\Gamma$ of the power law fitting the flare is consistent within the
errors with the photon index of the power law describing the afterglow
(Table \ref{analysis}).
We therefore carried out a comparison between the model and broadband
data, i.e., taking into account the optical and the radio information
discussed in the next section.
In this case, we find a solution that nicely describes all the
data without requiring an efficiency $\eta$ in the conversion of the
kinetic energy into $\gamma$-rays (see Fig. \ref{ismjet} for the X--ray
light curve).
We notice that, even if the jet model has an additional free parameter
with respect to the spherical fireball model,
the model parameters are still well-constrained.
This is mostly due to the passage of the cooling frequency $\nu_c$ in
the X--ray band.
At the start of the observation $\nu_c>\nu_{obs}$; at about $10^4$
s, the cooling frequency $\nu_c$ becomes smaller than $\nu_{obs}$.
After this instant the X--ray and optical fluxes follow the same law, and this
tightly constrains the model parameters.
The constraints on the model parameters given by the optical and radio
information are discussed in more detail in the next section.
\begin{figure}[!htb]
\centering
\includegraphics[height=8.75cm,width=6.3cm,angle=-90]{4448fi13.ps}
\caption{X--ray light curve for a jetted fireball expanding in an ISM with
the origin of the time shifted to $t_0=1300 s$. The model parameters
are $E_{53}=0.03$, $\Gamma_0=130$, $n=5$, $\varepsilon_e=0.29$,
$\varepsilon_B=8\cdot10^{-5}$, $p=2.1$, and $T_b=8 \cdot 10^5 s$.}
\label{ismjet}
\end{figure}
\section{Broadband analysis of XRF 011030 afterglow data}
\label{optical}
No optical counterpart has been detected for \object{XRF 011030}.
Among all the optical observations, we considered those performed
by \citet{vijay} and \citet{rhoads01} because they are the most
constraining.
The upper limits are R$>$21 \citep{vijay} and R$>$23.6 \citep{rhoads01}
at 0.3 and 2.7 days after the burst, respectively. We corrected the
magnitudes for the reddening due to Galactic absorption,
finding $R>$ 20.4 and $R>$ 23.1 (corresponding to an optical flux
$F_{\nu, opt1}=1.79\times10^{-28}$ erg cm$^{-2}$ s$^{-1}$
Hz$^{-1}$ and $F_{\nu, opt2}=1.69\times10^{-29}$ erg cm$^{-2}$
s$^{-1}$ Hz$^{-1}$) for the two observations. In the radio band,
\citet{taylor} associated a transient source with a flux
$F_{\nu,R}=1.81\times10^{-27}$ erg cm$^{-2}$ s$^{-1}$ Hz$^{-1}$
about 10.5 days after the burst with \object{XRF 011030}.
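The magnitude-to-flux conversion used here is standard and can be
sketched as follows (the R-band zero point $F_0\simeq3\times10^{-20}$
erg cm$^{-2}$ s$^{-1}$ Hz$^{-1}$ is an assumed representative value; the
exact zero point depends on the adopted photometric system, which
explains small differences with respect to the numbers quoted above):
\begin{verbatim}
F0 = 3.0e-20                      # R-band zero point [erg cm^-2 s^-1 Hz^-1] (assumed)
for R in (20.4, 23.1):            # extinction-corrected upper limits
    print(R, F0 * 10**(-R / 2.5)) # ~2.1e-28 and ~1.7e-29 erg cm^-2 s^-1 Hz^-1
print(1.81e-27 / 1e-26)           # the radio transient flux in mJy: ~0.18 mJy
\end{verbatim}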
For a jetted fireball expanding in an ISM, a solution that
accounts for the X--ray (Fig. \ref{ismjet}), optical (Fig. \ref{ismjetottico}),
and radio (Fig. \ref{ismjetradio}) data is given by $E_{53}=0.03$, $\Gamma_0=130$,
$n=5$, $\varepsilon_e=0.29$, $\varepsilon_B=8\cdot10^{-5}$, $p=2.1$, and
$T_b=8 \cdot 10^5$ s. We show the optical and radio light curves
corresponding to this set of model parameters in Figs. \ref{ismjetottico}
and \ref{ismjetradio}, respectively.
We investigated how well the parameters are constrained, with
particular regard to the density.
The density is mostly constrained by the data below the cooling frequency,
in this case the optical and radio data and, at times greater than about
$10^4$ s (i.e., after the spectral break occurs), also the X--ray data.
After $10^4$ s, the emitted flux in the X--ray and optical
band is given by \citep{panaitescu00}:
\begin{equation}\label{eq:opticalflux}
F_{\nu}\propto E_{53}^{(p+3)/4}n^{1/2}\varepsilon_{e,-1}^{p-1}\varepsilon_{B,-4}^{(p+1)/4}.
\end{equation}
The same relation applies to the radio data because, at the
time of the observation, the injection frequency $\nu_i$ is below
or very near the observational frequency $\nu_{obs}$.
Thus, the normalization of the light curve in one of the
three observational bands also determines the normalization of the light
curve in the other two bands. This causes the model parameters
to be well-constrained.
\begin{figure}[!htb]
\centering
\includegraphics[height=8.75cm,width=6.3cm,angle=-90]{4448fi14.ps}
\caption{ Optical light curve of a jetted fireball expanding in an ISM
with the origin of the time shifted to $t_0=1300$ s. The model
parameters are $E_{53}=0.03$, $\Gamma_0=130$, $n=5$, $\varepsilon_e=0.29$,
$\varepsilon_B=8\cdot10^{-5}$, $p=2.1$, and $T_b=8 \cdot 10^5$ s.}
\label{ismjetottico}
\end{figure}
\begin{figure}[!htb]
\centering
\includegraphics[height=8.75cm,width=6.3cm,angle=-90]{4448fi15.ps}
\caption{ Radio light curve of a jetted fireball expanding in an ISM with
the origin of the time shifted to $t_0=1300$ s. The model parameters
are $E_{53}=0.03$, $\Gamma_0=130$, $n=5$, $\varepsilon_e=0.29$,
$\varepsilon_B=8\cdot10^{-5}$, $p=2.1$, and $T_b=8 \cdot 10^5$ s.}
\label{ismjetradio}
\end{figure}
In the case of the spherical fireball expanding in a wind, the
break has to be self-consistently described (i.e., without the
addition of a free parameter).
Consequently, the model parameters are also well-constrained,
$E_{53}=0.3$, $\Gamma_0=60$, $A_*=0.055$, $\varepsilon_e=0.02$,
$\varepsilon_B=0.001$, and $p=2.1$.
The corresponding X--ray, optical, and radio light curves are shown
in Figs. \ref{flat}, \ref{windottico}, and \ref{windradio}.
\begin{figure}[!htb]
\centering
\includegraphics[height=8.75cm,width=6.3cm,angle=-90]{4448fi16.ps}
\caption{ Optical light curve of a spherical fireball expanding in a wind
with the origin of the time shifted to $t_0=1300$s. The model parameters
are $E_{53}=0.3$, $\Gamma_0=60$, $A_*=0.055$, $\varepsilon_e=0.02$,
$\varepsilon_B=0.001$, and $p=2.1$. We have assumed the efficiency in the
conversion of the kinetic energy to be $\eta$=0.1.}
\label{windottico}
\end{figure}
\begin{figure}[!htb]
\centering
\includegraphics[height=8.75cm,width=6.3cm,angle=-90]{4448fi17.ps}
\caption{ Radio light curve of a spherical fireball expanding in a wind
with the origin of the time shifted to $t_0=1300$s. The model parameters
are $E_{53}=0.3$, $\Gamma_0=60$, $A_*=0.055$, $\varepsilon_e=0.02$,
$\varepsilon_B=0.001$, and $p=2.1$. We have assumed the efficiency in the
conversion of the kinetic energy to be $\eta$=0.1.}
\label{windradio}
\end{figure}
\section{Summary and conclusions}
\label{conclusions}
In this paper, we have presented the temporal and spectral analysis
of the X-ray flash \object{XRF 011030} observed by BeppoSAX. This event is
one of the longest in the BeppoSAX sample \citep{zand},
with a duration of $\sim 1500$ s. In particular, along with the main
pulse, we find a precursor event and a late X-ray flare.
While the spectrum of the main burst is not consistent with a
black body, we cannot exclude this model for the precursor. This
result could be due to the lower statistics available, but it is
nonetheless interesting to note that, so far, there has been only one
example of a precursor consistent with such a model \citep{murakami}.
This feature could be associated with the cocoon formed by the jet
emerging at the surface of the collapsing massive star
\citep{ramirez,waxman}.
After the launch of the Swift satellite, X--ray flares appear
to be a common feature of GRB light curves. This has favored the
development of a large number of models to explain the origin of X--ray
flares, both in internal and external shock scenarios \citep{zhang05}.
X--ray flares observed by BeppoSAX in \object{GRB 011121} and in \object{XRR 011211}
have spectra similar to that of the late afterglow.
This similarity can be straightforwardly accounted for in the framework
of the external shock, i.e., the flare represents the onset of
the afterglow and is connected to the late afterglow emission by
a power law. For \object{GRB 011121} and \object{XRR 011211}, a connection with the
late afterglow is acceptable only by shifting the origin of the
time $t_0$ to the instant of the flare \citep{piro05}.
This implies a long duration engine activity (thick shell fireball,
\citet{sari}).
We find that this scenario fits nicely with the observational
data, including the X-ray flare and the late broadband radio,
optical, and X-ray afterglow observations of \object{XRF 011030}. The latter,
performed by Chandra, indicate the presence of a temporal break
occurring between $10^4$ s and $10^6$ s after the burst.
We carried out a detailed modelling of the data, finding good
agreement with observations for a spherical fireball expanding in
a wind medium and for a jetted fireball expanding in an ISM. In the
first case, the temporal break is explained by the passage of the
cooling frequency in the X-ray band.
We cannot exclude that the flare observed in \object{XRF 011030} is
due to internal shocks. However, this would require an engine
activity that is tuned to track the peak energy of the emission
during the prompt and flaring phases.
In the context of the external shock scenario, the time shift
$t_{0}\sim 1300$ s is due to a long lasting central
engine activity that remains active until the time of the flare,
with most of the energy released at the end of the
emission phase \citep{sari,lazzati}.
In this case, the peak of the afterglow emission coincides with
the flare, and the afterglow decay will be described by a self-similar
solution by counting the time from the instant the inner engine turns
off, i.e., when $t_0\simeq t_{eng}$.
We also find that a thin shell fireball cannot describe the flare
in a continuous density profile; in fact, in this case, the flux
rises and decays too slowly to reproduce the shape of the flare. Also,
with a discontinuous density profile, it is very difficult to produce
flares as high and narrow as that observed in \object{XRF 011030},
unless one assumes that the fireball is expanding in a clumpy
medium \citep{dermer99,dermer05}.
Recently, Swift observations have shown the presence of late X-ray
flares in several other events. Some of these flares appear to
have a spectral behaviour consistent with that of late X--ray
flares observed by BeppoSAX, i.e., a soft spectrum clearly
differentiated from the hard prompt emission typically
attributed to internal shocks (\object{GRB 050126}, \object{GRB
050219a} \citep{tagliaferri05}, and \object{GRB 050904}
\citep{burrows_rew}).
\object{GRB 050126} and \object{GRB 050219a} appear to
follow a $(t-t_0)^{-\alpha}$ power law, with $t_0\approx 100$ s,
reasonably well. Instead, in other flares, e.g. \object{XRF 050406}
and \object{GRB 050502B}, the hardness ratio suggests a spectral
evolution resembling the prompt emission \citep{burrows_rew}. This
behaviour has thus been interpreted by \citet{burrows} as being due to
internal shocks produced by long duration central engine activity.
However, we notice that in some cases it could be possible to
explain hard-to-soft spectral evolution also in the context of the
external shock scenario.
When the fireball expands in a wind, the cooling frequency $\nu_c$
increases with time and there will be an instant when it enters
the X--ray band. At this instant the spectral index changes from
$\beta=p/2$ to $\beta=(p-1)/2$, and the spectrum hardens by 0.5
in spectral index.
Swift observations have also shown the presence of multiple flares
in GRB light curves on relatively short time scales, from $\sim$ 100 s
up to $\sim$ 1000 s. For example, \object{GRB 050421} shows two successive
flares within about 150 s \citep{godet}, \object{GRB 050607} has two
flares within 500 s \citep{pagani} and \object{GRB 050730} has three
flares within 800 s \citep{burrows_rew}.
In these cases, a thick shell fireball can successfully explain only
one of the flares appearing in the light curve, i.e., only one flare can
be identified with the beginning of the afterglow emission.
The other flares can be attributed to internal shocks or to the
interaction with a clumpy medium in the framework of the external shock
scenario.
In conclusion, we note that the present data suggest the existence
of two categories of late X--ray flares differentiated by their
spectral behaviour.
It is also interesting to note that both in the framework of the
internal shocks scenario and in that of external shocks, the
explanation of late X--ray flares requires a central engine that
remains active until about the time of the flare.
\begin{acknowledgements}
The authors are grateful to E. Massaro, V. D'Alessio, B. Gendre, and
A. Corsi for useful discussions and comments. This work was
partially supported by the EU FP5 RTN ``Gamma-ray bursts: an enigma
and a tool''. The BeppoSAX satellite was a program of the Italian
space agency (ASI) with participation of the Dutch (NIVR) space
agency.
\end{acknowledgements}
\section*{A report}
The publications resulting from the Nordita Workdays on QPOs
are an interesting and original contribution to research on
accretion flows around compact objects.
They contain four observational papers, one theoretical
paper dealing with numerical simulations of accretion discs and
eleven contributions (some of them analyzing observations) totally
devoted to the epicyclic resonance model (ERM) of high frequency
QPOs (hfQPOs) of Abramowicz \& Klu\'zniak. Probably all that is to
be known about this model is included in these publications. This is
their strength but also their weakness. First, the model is not
complete: it is kinematic rather than dynamic. It describes in great
detail the interactions between two oscillations but as Klu\'zniak
confesses: \textsl{It would be good to identify the two non-linear
oscillators.} Yes indeed. Not only \textsl{good} but crucial.
Second, concentrating on hfQPOs only is most probably not a wise
decision because there exist (admittedly complex) relations between
them and their lower frequency brethren and there is a clear link
between their presence and the state of the system. Although the
authors of the eleven papers sometimes pay lip-service to
observations not directly concerning the frequency values of hfQPOs,
in practice they seem to ignore the very important conclusion of
Remillard: \textsl{... models for explaining hfQPO frequencies
must also explain the geometry, energetics and radiation mechanisms
for the SPL state}. By the way, probably even this will not do: the
model will have to explain all the X-ray states. One can understand
the reluctance to leave the clean world of resonating orbits for the
dirty world of turbulent, magnetized, radiating discs with
unpleasant boundary conditions, but QPOs occur in such a world.
Abramowicz believes that QPOs are the Rosetta stone for
understanding black-hole accretion. Not so. If one had to
(over)use\footnote{The road to the theorist's hell is paved with
Rosetta stones} the Rosetta-stone analogy, QPOs would be just one of
the texts on this stone. Let's hope it is the Greek one. All in all,
these publications are not so bad: imagine a volume devoted to the
beat-frequency model. At least the epicyclic resonance model is
still alive.
The authors of the papers deal only with neutron star and
black-hole QPOs. The abundant QPOs observed in CVs are only
mentioned en passant and no special attention is paid to them.
Probably because, not being (sufficiently) relativistic, they are
considered boring. In view of the recently published article on the
subject \cite{klab} such an attitude is rather surprising.
\subsection*{Observations}
The four contributions in this category have been written by some of
the top observers of X-ray binaries and they form a very good (too
good maybe) background for the theoretical papers. van der Klis, as usual, gives a clear and sober
review of the QPO phenomenon. One wishes theorists paid more
attention to what he has to say about black hole hfQPOs: \textsl{The
phenomenon is weak and transient so that observations are difficult,
and discrepant frequencies occur as well, so it can not be excluded
that these properties of approximately constant frequency and
small-integer ratios would be contradicted by work at better signal
to noise.} Being a loyal participant he adds: \textsl{In the
remainder I will assume these properties are robust.}
As usual in QPO research, it is difficult to get used to the
terminology and classification. It took some time to make sense of
\textsl{atolls}, \textsl{bananas} and \textsl{z}-\textsl{tracks}
(and sources!) and now we encounter the challenge of the X-ray
states of Black Hole Binaries. Not surprisingly Remillard is using
the classification defined in his monumental work with McClintock
\cite{mcrm}. We have therefore the \textsl{thermal}, \textsl{hard}
and \textsl{SPL} states. One might be slightly worried not seeing
the \textsl{thermal dominant (TD) state} \cite{mcrm} but fortunately
we are told that the thermal state is the \textsl{formerly
``high/soft'' state}, so \textsl{TD = thermal}. In any case the real
drama begins when one wishes to see what other specialists have to
say about the subject, e.g. Belloni (2005). There we find a
different classification into: an \textsl{LS} (Low/hard state), an
\textsl{HIMS} (Hard Intermediate State), a \textsl{SIMS} (Soft Intermediate State), and an
\textsl{HS} (High/Soft state). It seems that \textsl{HS=TD} and
\textsl{LS=hard} but in the two other cases relations are not clear.
This is not surprising because Belloni defines his states by the
transition properties and not by the state properties. In addition
Belloni (2005) classifies low frequency QPOs into A, B and C types,
whereas Remillard uses quantities $a$ and $r$, the rms amplitude
and power (note that it was Remillard who introduced type C QPOs).
Both approaches have their merits and one can understand why they
were introduced but they make life really difficult for people
trying to understand the physics of accretion flows. I am surprised
that Abramowicz complains only about the confusion introduced by
numerical simulations and not about the impenetrable jungle of X-ray
states and QPO terminology. I suspect he has given up on reading on
this subject.
However, Remillard convincingly shows that hfQPOs appear in the
SPL state and shows very interesting relations between the presence
of the $2\nu_0$ and $3\nu_0$ frequencies and the state of the system
as described by the disc flux and the power-law flux. As far as I
can tell this is ignored by the epicyclic theorists but this could
be the second text of the Rosetta stone. It is also a major
difficulty for the epicyclic resonance model. Since the SPL state is
characterized by a strong Comptonised component in the X-ray flux,
it is difficult to see how the flux modulation at the vertical
epicyclic frequency by gravitational lensing could survive in such
an environment.
This brings me to the contribution by Barret and collaborators.
Recently Barret with a different (but intersecting) set of
collaborators \cite{barretal} made a fundamental discovery by
showing that the lower frequency kHzQPO in the neutron-star binary
4U 1608-52 is a highly coherent signal that can keep $Q\approx 200$
for $\sim 0.1$ s. They also found that the higher frequency kHzQPO
is fainter than its lower frequency counterpart and has lower $Q$.
Barret et al. (2005) showed very convincingly that no proposed QPO
model can account for such highly coherent oscillations. They can
all be rejected except for the ERM but only because the two resonant
oscillators have not yet been identified. In particular, they
rejected the modified beat-frequency model of Miller et al. (1998).
In Barret et al. another puzzling phenomenon is presented. They
found in three neutron-star binaries (including 4U 1608-52) showing
high-Q lower kHzQPOs that the coherence increases with frequency to
a maximum ($\sim 800$ Hz) after which it rapidly drops and QPOs
disappear. To me it looks like an effect related to the forcing
mechanism. Barret et al. link their observations to the ISCO basing
their claim on the Miller et al. (1998) model. There is half a
paragraph trying to explain (I think) how the model rejected in a
previous paper can be rejuvenated (or rather resuscitated) and used
to interpret the present observations. I read this part of the paper
several times and failed to understand its meaning. I had no problem
understanding the reasoning rejecting Miller et al. (1998).
In any case I also fail to understand why the Barret et al. (2005)
discovery of the high coherence of QPOs was not the central point of
the Nordita workdays. It is easy to miss a \textit{Mane, Mane,
Tekel, Uphar'sin} when looking for a Rosetta stone.
The main result of the excellent article on neutron-star boundary
layers by Gilfanov is that the kHzQPOs appear to have the same origin
as aperiodic and quasiperiodic variability at lower frequency. It
seems to be clear that the msec flux modulations originate on the
surface of the neutron star. Nota bene, I am surprised that the
remarkable and extremely relevant discovery of the universal
rms-flux correlation (Uttley 2004; Uttley et al. 2005) is not
mentioned in this context. Gilfanov
points out that the kHz clock could still be in the disc.
\subsection*{Disc simulations}
It is known that in stars some multimode pulsations may arise from
stochastic excitation by turbulent convection (see e.g. Dziembowski
2005). It is therefore legitimate to expect that in turbulent discs
similar effects could be found. Brandenburg presents very
interesting results obtained in the framework of the shearing-box
approximation of accretion disc structure. He obtains what he calls
stochastic excitation of epicycles. In his model the radial
epicyclic frequency is equal to the Keplerian frequency and the
vertical epicyclic frequency is not equal (or comparable) to the
p-mode frequency, so it is not clear how close his results are to
what is happening in full-scale discs. But they are promising.
Another result concerning dissipation in discs requires more
investigation. According to Brandenburg in MRI discs most of the
dissipation occurs in the corona, whereas in the forced hydrodynamic
case most of the dissipation occurs near the midplane. He claims
that his result, obtained in the isothermal case, has been shown
also for radiating discs. The disc model in question, however, was
radiation-pressure dominated while gas-pressure dominated models
\cite{millstone} do not seem to confirm the claim that MRI discs
release most of the energy in the corona.
\subsection*{The epicyclic resonance model}
The eleven contributions to the epicyclic resonance model contain two
general articles by the founders; the other papers on different
aspects of the model were written (except for the last contribution)
by younger members of the ERM team. All these
contributions are very well written, clear and to the point. I was
really impressed by their quality. They contain all one needs to
know about the ERM. As far as I know they were written by the
authors whose names appear explicitly on the paper, and since they
are very careful in acknowledging other people's contributions, I
recommend removing the ``et al.'s'' which give the impression that
the texts were written by a sect, or that they form a sort of
Norditan Creed. Fortunately this is not the impression one gets
reading the articles. They are professional, open to alternatives,
pointing out difficulties etc.
Of particular quality in this respect is the contribution by Paola
Rebusco. She presents the problem in a very clear way and
carefully chooses the (difficult) questions still to be answered.
Ji\v{r}\'{\i} Hor\'ak contributes two interesting articles. The
first discusses the 3:2 autoparametric resonance in the general
framework of conservative systems and shows that the amplitude and
frequency of the oscillations should be periodically modulated -- a
result that might relate hfQPOs to lower frequency QPOs. The second
paper tries to explain the QPO modulations in neutron-star binaries
by a mechanism proposed by Paczy\'nski. It is not clear if such a
mechanism could achieve the high quality factors observed by Barret
et al. (2005) or how it relates to the oscillation forced by the
spinning neutron-star magnetic field. Three contributions deal with
various aspects of oscillating tori. Eva {\v S}r{\' a}mkov{\' a}
presents the preliminary results of her research on eigenvalues and
eigenfrequencies of slightly non-slender tori. She includes in her
paper a figure showing a transient torus appearing in a 3D
simulation of an accretion flow -- a rather touching testimony to the
ERM team's reliance on these elusive structures. William Lee uses SPH
simulations to study the response of a slender torus to external
periodic forcing. The results are a very interesting illustration of
the essentially nonlinear character of the coupling between the
radial and vertical modes (coupling through the sub-harmonic of the
perturbation: $1/2\nu_0$) and the rather fascinating phenomenon of
mode locking for a drifting torus. This can be relevant to the drift
of QPO frequencies observed in neutron-star binaries. Since his
contribution is devoted to these systems, mentioning ``stellar-mass
black holes'' in the abstract is a bit misleading. Michal Bursa
expertly attacks the problem crucial for the ERM applied to black
holes: how to produce \textsl{two} modulations of the X-ray flux. By
using a toy model consisting of an optically thin, bremsstrahlung
emitting, oscillating slender torus he shows that strong-gravity
relativistic effects may produce the desired result. How would
things look in the case of an optically thick disc surrounded by a
comptonizing cloud is (probably) a different story. The last three
contributions deal with some aspects of hfQPO observations. Tomek
Bulik reanalyses the somewhat controversial issue of the Sco X-1
kHzQPO clustering around the value corresponding to the frequency
ratio of 2/3. His skillful analysis shows that the clustering is
real.
Gabriel T{\"o}r{\"o}k has been entrusted with the somehow irksome
task of linking microquasar QPOs with those observed in Sgr A$^*$
and AGNs. Since the last category forms an empty set he could just
discuss why such observations would be important. Unfortunately the
prospect of detecting QPOs from AGNs is rather remote
\cite{vauguttl}. His valiant attempt to discuss determining
black-hole spin from hfQPOs was hindered by the uncertainties in
both data and models. But his is a very good short review of the
problem.
Because they are a general introduction to an unfinished
construction, the contributions by the founders are less
interesting. Abramowicz gives a general introduction to the
subject of accretion onto compact objects. In his (entirely
justified) efforts to rehabilitate his and collaborators' (to whom I
belong) fundamental contributions to the concept of ADAF, Abramowicz
went too far: he antedated the relevant Abramowicz et al. paper by
ten years and did not insert the Narayan \& Yi article into the
references. I think also that his claim that accretion theory today
experiences a period of confusion caused by supercomputer
simulations is exaggerated. The confusion is caused by (some)
astrophysicists hastily trying to apply to real objects whatever
comes out of the computer and not by the physicists making these
very impressive simulations. People who are confused should read the
excellent article by
Balbus (2005) -- a real guide for the
perplexed. However, Eq.~(2) of Abramowicz can create confusion since
it asserts that the radial epicyclic frequency is \textsl{larger}
than the vertical one. Luckily there is his Fig.~2 to sober us up.
Klu\'zniak with his usual charming intellectual incisiveness
describes his personal road to ERM. He is convinced that after
trying various roads which led nowhere, he finally chose the right
one. He knows it is uphill and very steep. But never send to know
for whom the disc tolls; it tolls for him. I wish him luck.
\acknowledgements I am grateful to Marek Abramowicz for inviting me
to write this report and to G\"unther R\"udiger for accepting this
risky idea.
\section{Introduction}
\label{sec:intro}
The cold dark matter (CDM) paradigm has become the standard framework
for the formation of large-scale structure and galaxies. Small
fluctuations in the initial density field grow by means of
gravitational instability until they collapse to form virialized dark
matter haloes. This growth process is hierarchical in the sense that
small clumps virialize first, and aggregate successively into larger
and larger objects. Galaxies form from the gas that is shock heated
by the gravitational collapse and subsequently cools (White \&
Rees 1978; but see also Birnboim \& Dekel 2003 and Keres {et al.~} 2004).
Therefore, a proper understanding of galaxy formation relies on an
accurate description of the structure and assembly of these dark
matter haloes. This problem is tackled by a combination of both
N-body simulations and analytical models. Although N-body
simulations have the advantage that they follow the formation of dark
matter haloes into the non-linear regime, they are expensive, both in
terms of labor (analyzing the simulations) and CPU time. Therefore,
accurate analytical models are always useful. The most developed of
these is the Press-Schechter (PS) formalism, which allows one to
compute the (unconditional) halo mass function (Press \& Schechter
1974). Bond {et al.~} (1991), Bower (1991) and Lacey \& Cole (1993)
extended the PS formalism, using the excursion set approach, to
compute conditional mass functions. These allow the construction of
merger histories, the computation of halo formation times, and
detailed studies of spatial clustering and large scale bias (e.g.,
Kauffmann \& White 1993; Mo \& White 1996, 2002; Mo, Jing \& White
1996, 1997; Catelan {et al.~} 1998; Sheth 1998; Nusser \& Sheth 1999;
Somerville \& Kolatt 1999; Cohn, Bagla \& White 2001).
Numerous studies in the past have tested the predictions of extended
Press-Schechter (EPS) theory against numerical simulations. Although
the unconditional mass function was found to be in reasonable
agreement, it systematically over (under) predicts the number of low
(high) mass haloes (e.g., Jain \& Bertschinger 1994; Tormen 1998;
Gross {et al.~} 1998; Governato {et al.~} 1999; Jenkins {et al.~} 2001). Similar
discrepancies have been found regarding the conditional mass function
(Sheth \& Lemson 1999; Somerville {et al.~} 2000), which results in
systematic offsets of the halo formation times predicted by EPS (e.g.,
van den Bosch 2002a). Finally, Bond {et al.~} (1991) have shown that the
PS approach achieves very poor agreement on an object-by-object
basis when compared with simulations (for a review, see Monaco 1998).
It is generally understood that these discrepancies stem from the
assumption of spherical collapse. Numerous studies have investigated
schemes to improve the EPS formalism by using ellipsoidal, rather than
spherical collapse conditions, thereby taking proper account of the
aspherical nature of collapse in a CDM density field (e.g., Sheth, Mo
\& Tormen 2001, hereafter SMT01; Sheth \& Tormen 2002; Chiueh \& Lee
2001; Lin, Chuieh \& Lee 2002). Although this results in
unconditional mass functions that are in much better agreement with
numerical simulations (e.g., SMT01; Jenkins {et al.~} 2001), they have
been unable thus far to yield conditional mass functions of sufficient
accuracy that reliable merger trees can be constructed.
Despite its systematic errors and uncertainties, the PS formalism has
remained the standard analytical approach in galaxy formation
modeling. In particular, the extended Press-Schechter theory is used
extensively to compute merger histories and mass assembly histories
(hereafter MAHs) which serve as the back-bone for models of galaxy
formation (e.g., Kauffmann, White \& Guiderdoni 1993; Somerville \&
Primack 1999; Cole {et al.~} 2000; van den Bosch 2001; Firmani \&
Avila-Reese 2000). This may have profound implications for the
accuracy of these models. For instance, the mass assembly histories
of dark matter haloes are expected to impact on the star formation
histories of the galaxies that form inside these haloes. In addition,
the merger and mass assembly history of individual haloes may also be
tightly related to their internal structure. As shown by Wechsler
{et al.~} (2002; hereafter W02) and Zhao {et al.~} (2003a;b), the MAH is
directly related to the concentration of the resulting dark matter
halo (see also Navarro, Frenk \& White 1997; Bullock {et al.~} 2001; Eke,
Navarro \& Steinmetz 2001). Errors in the mass assembly histories of
dark matter haloes may therefore result in erroneous predictions
regarding the star formation history and the rotation curve shapes
and/or the zero-point of the Tully-Fisher relation (e.g., Alam,
Bullock \& Weinberg 2002; Zentner \& Bullock 2002; Mo \& Mao (2000);
van den Bosch, Mo \& Yang 2003). Clearly, a detailed understanding of
galaxy formation requires a description of the growth history of dark
matter haloes that is more accurate than EPS. Although $N$-body
simulations are probably the most reliable means of obtaining accurate
assembly histories of dark matter haloes, they are computationally too
expensive for some purposes.
As an alternative to the EPS formalism and N-body simulations,
perturbative techniques have been developed that describe the growth
of dark matter haloes in a given numerical realization of a linear
density field. These include, amongst others, the truncated Zel'dovich
(1970) approximation (Borgani, Coles \& Moscardini 1994), the
peak-patch algorithm (Bond \& Myers 1996a,b) and the merging cell
model (Rodriguez \& Thomas 1996; Lanzoni, Mamon \& Guiderdoni 2000).
Recently, Monaco, Theuns \& Taffoni (2002b) developed a numerical code
that uses local ellipsoidal collapse approximations (Bond \& Myers
1996a; Monaco 1995) within Lagrangian Perturbation Theory (LPT,
Buchert \& Ehlers 1993; Catelan 1995). This code, called PINOCCHIO
(PINpointing Orbit-Crossing Collapsed HIerarchical Objects), has been
shown to yield accurate mass functions, both conditional and
unconditional (Monaco {et al.~} 2002a,b; Taffoni, Monaco \& Theuns 2002),
and is therefore ideally suited to study halo assembly histories,
without having to rely on computationally expensive N-body
simulations.
This paper is organized as follows. In Section~\ref{sec:theory} we
give a detailed overview of (extended) Press-Schechter theory,
including a discussion of its short-comings and its modifications
under ellipsoidal collapse conditions, and describe the Lagrangian
perturbation code PINOCCHIO. In Section~\ref{sec:sim} we compare the
MAHs obtained from PINOCCHIO, the EPS formalism, and N-body
simulations. We show that PINOCCHIO yields MAHs that are in excellent
agreement with numerical simulations, and do not suffer from the
shortcomings of the EPS formalism. In the second part of this paper we
then analyze a large, statistical sample of MAHs obtained with
PINOCCHIO for haloes spanning a wide range in masses. In
Section~\ref{sec:ftime} we use these MAHs to study, in a statistical
sense, various characteristic epochs and events in the mass assembly
history of a typical CDM halo. We analyze the statistics of major
merger events in Section~\ref{sec:majmerprop}. Finally,
Section~\ref{sec:concl} summarizes our results.
\section{Theoretical background}
\label{sec:theory}
\subsection{Extended Press-Schechter theory}
\label{sec:EPS}
In the standard model for structure formation the initial density
contrast $\delta({\bf x}) = \rho({\bf x})/\bar{\rho} - 1$ is
considered to be a Gaussian random field, which is therefore
completely specified by the power spectrum $P(k)$. As long as $\delta
\ll 1$ the growth of the perturbations is linear and $\delta({\bf
x},t_2) = \delta({\bf x},t_1) D(t_2)/D(t_1)$, where $D(t)$ is the
linear growth factor linearly extrapolated to the present time. Once
$\delta({\bf x})$ exceeds a critical threshold $\delta^{0}_{\rm crit}$
the perturbation starts to collapse to form a virialized object
(halo). In the case of spherical collapse $\delta^{0}_{\rm crit}
\simeq 1.68$. In what follows we define $\delta_0$ as the initial
density contrast field linearly extrapolated to the present time. In
terms of $\delta_0$, regions that have collapsed to form virialized
objects at redshift $z$ are then associated with those regions for
which $\delta_0 > \delta_c(z) \equiv \delta^{0}_{\rm crit}/D(z)$.
In order to assign masses to these collapsed regions, the PS formalism
considers the density contrast $\delta_0$ smoothed with a spatial
window function (filter) $W(r;R_f)$. Here $R_f$ is a characteristic
size of the filter, which is used to compute a halo mass $M = \gamma_f
\bar{\rho} R_f^3/3$, with $\bar{\rho}$ the mean mass density of the
Universe and $\gamma_f$ a geometrical factor that depends on the
particular choice of filter. The {\it ansatz} of the PS formalism is
that the fraction of mass that at redshift $z$ is contained in haloes
with masses greater than $M$ is equal to two times the probability
that the density contrast smoothed with $W(r;R_f)$ exceeds
$\delta_c(z)$. This results in the well known PS mass function for
the comoving number density of haloes:
\begin{eqnarray}
\label{PS}
\lefteqn{{{{\rm d}}n \over {{\rm d}} \, {\rm ln} \, M}(M,z) \, {{\rm d}}M =}
\nonumber \\ & & \sqrt{2 \over \pi} \, \bar{\rho} \, {\delta_c(z)
\over \sigma^2(M)} \, \left| {{{\rm d}} \sigma \over {{\rm d}} M}\right| \,
{\rm exp}\left[-{\delta_c^2(z) \over 2 \sigma^2(M)}\right] \, {{\rm d}}M
\end{eqnarray}
(Press \& Schechter 1974). Here $\sigma2(M)$ is the mass variance of
the smoothed density field given by
\begin{equation}
\label{variance}
\sigma^2(M) = {1 \over 2 \pi^2} \int_{0}^{\infty} P(k) \;
\widehat{W}^2(k;R_f) \; k^2 \; {{\rm d}}k.
\end{equation}
with $\widehat{W}(k;R_f)$ the Fourier transform of $W(r;R_f)$.
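For concreteness, the following minimal Python sketch (purely
illustrative, and no part of the analysis in this paper) evaluates
equations~(\ref{PS}) and~(\ref{variance}) for a top-hat filter; the
power spectrum \texttt{P} and linear growth factor \texttt{D} are
user-supplied assumptions:
\begin{verbatim}
# Illustrative sketch of eqs. (PS) and (variance), top-hat filter.
# P(k) and the growth factor D(z) must be supplied by the user.
import numpy as np
from scipy.integrate import quad

rho_bar = 0.3 * 2.775e11          # mean density [h^2 Msun / Mpc^3]

def sigma2(M, P):
    """Mass variance sigma^2(M); M = (4 pi / 3) rho_bar R_f^3."""
    R = (3.0 * M / (4.0 * np.pi * rho_bar)) ** (1.0 / 3.0)
    def integrand(k):
        x = k * R
        W = 3.0 * (np.sin(x) - x * np.cos(x)) / x**3   # top-hat W(kR)
        return P(k) * W**2 * k**2
    return quad(integrand, 1e-4, 1e3, limit=200)[0] / (2.0 * np.pi**2)

def dn_dlnM(M, z, P, D, delta_c0=1.686):
    """Press-Schechter dn/dln M at redshift z."""
    delta_c = delta_c0 / D(z)
    s = np.sqrt(sigma2(M, P))
    dM = 1e-2 * M                  # numerical derivative d sigma / dM
    dsdM = (np.sqrt(sigma2(M + dM, P)) -
            np.sqrt(sigma2(M - dM, P))) / (2.0 * dM)
    return (np.sqrt(2.0 / np.pi) * rho_bar * delta_c / s**2 *
            abs(dsdM) * np.exp(-delta_c**2 / (2.0 * s**2)))
\end{verbatim}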
The {\it extended} Press-Schechter (EPS) model developed by Bond {et al.~}
(1991), is based on the excursion set formalism. For each point one
constructs `trajectories' $\delta(M)$ of the linear density contrast
at that position as function of the smoothing mass $M$. In what
follows we adopt the notation of Lacey \& Cole (1993) and use the
variables $S = \sigma^2(M)$ and $\omega = \delta_c(z)$ to label mass
and redshift, respectively. In the limit $R_f \rightarrow \infty$ one
has that $S = \delta(S) = 0$, which can be considered the starting
point of the trajectories. Increasing $S$ corresponds to decreasing
the filter mass $M$, and $\delta(S)$ starts to wander away from zero,
executing a random walk (if the filter is a sharp $k$-space filter).
The fraction of matter in collapsed objects in the mass interval $M$,
$M+{\rm d}M$ at redshift $z$ is now associated with the fraction of
trajectories that have their {\it first upcrossing} through the
barrier $\omega = \delta_c(z)$ in the interval $S$, $S+{\rm d}S$,
which is given by
\begin{equation}
\label{probS}
P(S ,\omega) \; {{\rm d}}S = {1 \over \sqrt{2 \pi}} \;
{\omega \over S^{3/2}} \;
{\rm exp}\left[-{\omega^2 \over 2 S}\right] \; {{\rm d}}S
\end{equation}
(Bond {et al.~} 1991; Bower 1991; Lacey \& Cole 1993). After conversion
to number counting, this probability function yields the PS mass
function of equation~(\ref{PS}). Note that this approach does not
suffer from the arbitrary factor two in the original Press \&
Schechter approach.
Since for random walks the upcrossing probabilities are independent of
the path taken (i.e., the upcrossing is a Markov process), the
probability for a change $\Delta S$ in a time step $\Delta \omega$ is simply given by
equation~(\ref{probS}) with $S$ and $\omega$ replaced with $\Delta S$ and
$\Delta \omega$, respectively. This allows one to immediately write down the {\it
conditional} probability that a particle in a halo of mass $M_2$ at
$z_2$ was embedded in a halo of mass $M_1$ at $z_1$ (with $z_1 > z_2$)
as
\begin{eqnarray}
\label{probSS}
\lefteqn{P(S_1,\omega_1 \vert S_2,\omega_2) \; {{\rm d}}S_1 =} \nonumber \\
& & {1 \over \sqrt{2 \pi}} \;
{(\omega_1 - \omega_2) \over (S_1 - S_2)^{3/2}} \; {\rm
exp}\left[-{(\omega_1 - \omega_2)^2 \over 2 (S_1 - S_2)}\right] \;
{{\rm d}}S_1
\end{eqnarray}
Converting from mass weighting to number weighting, one obtains the
average number of progenitors at $z_1$ in the mass interval $M_1$,
$M_1 + {\rm d}M_1$ which by redshift $z_2$ have merged to form a halo
of mass $M_2$:
\begin{eqnarray}
\label{condprobM}
\lefteqn{{{{\rm d}}N \over {{\rm d}}M_1}(M_1,z_1 \vert M_2,z_2) \; {{\rm d}}M_1 =}
\nonumber \\
& & {M_2 \over
M_1} \; P(S_1,\omega_1 \vert S_2,\omega_2) \;
\left\vert {{{\rm d}}S \over {{\rm d} M}} \right\vert \; {{\rm d}}M_1.
\end{eqnarray}
This conditional mass function can be combined with Monte-Carlo
techniques to construct merger histories (also called merger trees) of
dark matter haloes.
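A direct numerical transcription of equations~(\ref{probSS})
and~(\ref{condprobM}) is straightforward. The sketch below is again
purely illustrative; it assumes one-argument callables for
$\sigma^2(M)$ and $\delta_c(z)$:
\begin{verbatim}
import numpy as np

def P_upcross(dS, dw):
    """First-upcrossing probability of eq. (probSS), dS > 0, dw > 0."""
    return (dw / (np.sqrt(2.0 * np.pi) * dS**1.5) *
            np.exp(-dw**2 / (2.0 * dS)))

def dN_dM1(M1, z1, M2, z2, sigma2, delta_c):
    """Eq. (condprobM): mean number of (M1, z1) progenitors per unit
    M1 for a halo of mass M2 at z2 (requires z1 > z2, M1 < M2)."""
    dS = sigma2(M1) - sigma2(M2)
    dw = delta_c(z1) - delta_c(z2)
    dM = 1e-2 * M1
    dSdM = abs(sigma2(M1 + dM) - sigma2(M1 - dM)) / (2.0 * dM)
    return (M2 / M1) * P_upcross(dS, dw) * dSdM
\end{verbatim}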
\subsection{Ellipsoidal collapse}
\label{sec:ellips}
In an attempt to improve the inconsistencies between EPS and numerical
simulations (see Section~\ref{sec:intro}), various authors have
modified the EPS formalism by considering ellipsoidal rather than
spherical collapse. For ellipsoidal density perturbations, the
conditions for collapse not only depend on the self-gravity of the
perturbation, but also on the tidal coupling with the external mass
distribution; external shear can actually rip overdensities apart and
thus prevent them from collapsing. Since smaller mass perturbations
typically experience a stronger shear field, they tend to be more
ellipsoidal. Therefore, it is to be expected that the assumptions of
spherical collapse in the standard EPS formalism are more accurate for
more massive haloes, whereas modifications associated with ellipsoidal
collapse will be more dramatic for smaller mass haloes. The way in
which ellipsoidal collapse modifies the halo formation times with
respect to the EPS predictions depends on the definition of collapse.
Ellipsoidal perturbations collapse independently along the three
different directions defined by the eigen vectors of the deformation
tensor (defined as the second derivative of the linear gravitational
potential). It is customary to associate the first axis collapse with
the formation of a 2-dimensional pancake-like structure, the second
axis collapse with the formation of a 1-dimensional filament, and the
third axis collapse with the formation of a dark matter halo. Most
authors indeed have associated halo formation with the collapse of the
third axis (e.g., Bond \& Myers 1996a; Audit, Teyssier \& Alimi 1997;
Lee \& Shandarin 1998; SMT01), though some have considered the first
axis collapse instead (e.g., Bertschinger \& Jain 1994; Monaco 1995).
For first-axis collapse one predicts that haloes form earlier than in
the spherical case, whereas the opposite applies when considering
third-axis collapse. Clearly, the implications of considering
ellipsoidal rather than spherical collapse depend sensitively on the
collapse definition.
In order to incorporate ellipsoidal collapse in a PS-like formalism,
one needs to obtain an estimate of the critical overdensity for
collapse $\delta_{ec}$. Various studies have attempted such schemes.
For instance, SMT01 used the ellipsoidal collapse model to obtain
\begin{equation}
\label{ellips}
\delta_{ec}(M,z) = \delta_{c}(z) \left( 1 + 0.47 \left[{\sigma^2(M)
\over \delta^2_{c}(z)} \right]^{0.615}\right).
\end{equation}
Here $\delta_c(z)$ is the standard value for the spherical collapse
model. Solving for the upcrossing statistics with this particular
barrier shape results in halo mass functions that are in excellent
agreement with those found in simulations (Sheth \& Tormen 1999;
Jenkins {et al.~} 2001). Unfortunately, no analytical expression for the
conditional mass function is known for a barrier of the form of
equation~(\ref{ellips}), and one has to resort to either approximate
fitting functions (Sheth \& Tormen 2002), or one has to use
time-consuming Monte-Carlo simulations to determine the upcrossing
statistics (Chiueh \& Lee 2001; Lin {et al.~} 2002). Although the
resulting conditional mass functions ${{{\rm d}}N \over {{\rm d}}M_1}(M_1,z_1
\vert M_2,z_2) \; {{\rm d}}M_1$ have been found to be in good agreement
with numerical simulations if a relatively large look-back time is
considered (i.e., if $\Delta z = z_2-z_1 \ga 0.5$), there is still a
large disagreement for small $\Delta z$. This is probably due to the
neglect of correlations between scales in the excursion set approach
(Peacock \& Heavens 1990; Sheth \& Tormen 2002). This is unfortunate
as it does not allow these methods to be used for the construction of
merger histories or MAHs. Lin {et al.~} (2002) tried to circumvent this
problem by introducing a small mass gap between parent halo and
progenitor halo, i.e., each time step they require that $S_1 - S_2
\geq f \, \delta_c^2(z_2)$. Upon testing their conditional mass
function with this mass gap against numerical simulations they find
good agreement for $f = 0.06$, and claim that with this modification
the excursion set approach {\it can} be used to construct merger
histories under ellipsoidal collapse conditions. However, they only
tested their conditional mass functions for $\Delta z \geq 0.2$,
whereas accurate merger histories require significantly smaller time
steps. For instance, van den Bosch (2002a) has argued for timesteps
not larger than $\Delta \omega = \omega_1 - \omega_2 \simeq 0.1$,
which, for an Einstein-de Sitter (EdS) cosmology, corresponds to
$\Delta z \simeq 0.06$ (see also discussion in Somerville \& Kolatt
1999). Furthermore, with the mass gap suggested by Lin {et al.~} (2002),
each time step there is a minimum amount of mass accreted by the halo,
which follows from $S_1 - S_2 = f \, \delta_c^2(z_2)$. This
introduces a distinct maximum to the halo half-mass formation time,
the value of which depends sensitively on the actual time-steps taken.
To test this, we constructed MAHs of CDM haloes using the method of
van den Bosch (2002a) but adopting the conditional probability
function of Lin {et al.~} (2002). This resulted in MAHs that are in very
poor agreement with numerical simulations. In particular, the results
were found to depend strongly on the value of $\Delta \omega$ adopted.
In summary, although introducing ellipsoidal collapse conditions in
the excursion set formalism has allowed the construction of accurate
unconditional mass functions, there still is no reliable method based
on the EPS formalism that allows the construction of accurate merger
histories and/or MAHs.
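For reference, the moving barrier of equation~(\ref{ellips}) itself is
trivial to evaluate numerically; a minimal sketch, with the same
user-supplied $\sigma^2(M)$ and $\delta_c(z)$ as above, reads:
\begin{verbatim}
def delta_ec(M, z, sigma2, delta_c):
    """SMT01 barrier of eq. (ellips); it approaches the spherical
    value delta_c(z) for massive haloes, where sigma^2 is small."""
    return delta_c(z) * (1.0 + 0.47 *
                         (sigma2(M) / delta_c(z)**2) ** 0.615)
\end{verbatim}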
\begin{figure*}
\centerline{\psfig{file=mf.ps,width=1.0\hsize,angle=270}}
\caption{Panels in the upper row show the (unconditional) halo mass
functions at 4 different redshifts, as indicated. Different symbols
(each with Poissonian error bars) correspond to 5 different
PINOCCHIO simulations randomly selected from P0,
each with a different mass resolution. Dashed
and solid lines correspond to the PS and SMT01 mass functions,
respectively, and are shown for comparison. Panels in the lower row
show the percentual difference between the PS and SMT01 mass
functions (dashed lines) and that between the PINOCCHIO and the
SMT01 mass functions (symbols with errorbars). Clearly, the PS mass
function overestimates (underestimates) the number of small (high)
mass haloes, while PINOCCHIO yields mass functions that are in
excellent agreement with SMT01 (and thus with N-body
simulations). Note that the SMT01 halo mass function best fits
the mass function of simulated haloes identified with an
FOF linking length of 0.2 times the mean particle separation.
The mean density of a halo so selected is similar to that
within a virialized halo based on the spherical collapse model.
PINOCCHIO haloes and PS haloes are all defined such that
the mean density within a halo is similar to that based on
the spherical collapse model.}
\label{fig1}
\end{figure*}
\subsection{PINOCCHIO}
\label{sec:pino}
Although the problem of obtaining accurate merging histories under
ellipsoidal collapse conditions can be circumvented by using N-body
simulations, the time-expense of these simulations is a major hurdle.
An attractive alternative is provided by the LPT code PINOCCHIO
developed recently by Monaco {et al.~} (2002b). Below we give a short
overview of PINOCCHIO, and we refer the interested reader to Monaco
{et al.~} (2002a,b) and Taffoni {et al.~} (2002) for a more elaborate
description.
PINOCCHIO uses Lagrangian perturbation theory to describe the dynamics
of gravitational collapse. In LPT the comoving (Eulerian) coordinate
$\textbf{x}$ and the initial Lagrangian coordinate $\textbf{q}$ of each particle
are connected via
\begin{equation}
\label{displace}
\textbf{x}(\textbf{q},t)= \textbf{q}+\textbf{S}(\textbf{q},t),
\end{equation}
with $\textbf{S}$ the displacement field. The first-order term of
$\textbf{S}(\textbf{q},t)$ is the well-known Zel'dovich approximation (Zel'dovich
1970):
\begin{equation}
\label{zeld}
\textbf{S}(\textbf{q},t)= -D(t) {\partial \psi \over \partial \textbf{q}}
\end{equation}
with $\psi(\textbf{q})$ the rescaled linear gravitational potential, which
is related to the density contrast $\delta_0(\textbf{q})$ extrapolated to
the present time by the Poisson equation
\begin{equation}
\label{poisson}
\nabla^2\psi(\textbf{q})= \delta_0(\textbf{q}).
\end{equation}
Since the Lagrangian density field is basically $\rho_{\rm L}(\textbf{q}) =
\bar{\rho}$, the (Eulerian) density contrast is given by
\begin{equation}
\label{euldens}
1 + \delta(\textbf{x},t) = {1 \over {\rm det}(J)}
\end{equation}
with $J = \partial \textbf{x} / \partial \textbf{q}$ the Jacobian of the
transformation given in~(\ref{displace}). Note that the density
formally goes to infinity when the Jacobian determinant vanishes,
which corresponds to the point in time when the mapping $\textbf{q}
\rightarrow \textbf{x}$ becomes multi-valued, i.e. when orbits first cross
leading to the formation of a caustic. Since the (gravitationally
induced) flow is irrotational the matrix $J$ is symmetric and can thus
be diagonalized:
\begin{equation}
\label{euldensdiag}
1 + \delta(\textbf{x},t) = {1 \over \prod_{i=1}^{3}[1 - D(t) \lambda_i(\textbf{q})]}
\end{equation}
with $-\lambda_i$ the eigenvalues of the deformation tensor
${\partial^2 \psi/\partial q_i \partial q_j}$.
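At this lowest (Zel'dovich) order the orbit-crossing condition implied
by equation~(\ref{euldensdiag}) can be written down directly. The
sketch below is illustrative only (PINOCCHIO itself also uses
higher-order LPT and ellipsoidal-collapse corrections); the inverse
growth function \texttt{D\_inv} is a user-supplied assumption:
\begin{verbatim}
import numpy as np

def zeldovich_collapse_z(lam, D_inv):
    """First-axis (orbit-crossing) collapse redshift of a mass
    element with deformation-tensor eigenvalues -lam: eq.
    (euldensdiag) diverges when D(t) = 1/max(lam). Returns None
    if the element never collapses."""
    lam_max = np.max(lam)
    if lam_max <= 0.0:
        return None
    return D_inv(1.0 / lam_max)
\end{verbatim}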
PINOCCHIO starts by constructing a random realization of a Gaussian
density field $\rho({\textbf{q}})$ (linearly extrapolated to $z=0$) and the
corresponding peculiar potential $\phi(\textbf{q})$ on a cubic grid. The
density fluctuation field is specified completely by the power
spectrum $P(k)$, which is normalized by specifying the value of
$\sigma_8$, defined as the rms linear overdensity at $z=0$ in spheres
of radius $8 h^{-1} \>{\rm Mpc}$. The density and peculiar potential fields
are subsequently convolved with a series of Gaussians with different
values for their FWHM $R$. For the $256^3$ simulations used in this
paper, 26 different linearly sampled values of $R$ are used. For a
given value of $R$ the density of a mass element (i.e., `particle')
will become infinite as soon as at least one of the ellipsoid's axes
reaches zero size (i.e., when $D(t) = 1/\lambda_i$). At this point
orbit crossing (OC) occurs and the mass element enters a high-density
multi-stream region. This is the moment of first-axis collapse. Since
the Jacobian determinant becomes multivalued at this stage, one can
not make any further predictions of the mass element's fate beyond
this point in time. Consequently, it is not possible in PINOCCHIO to
associate halo collapse with that of the third axis.
For each Lagrangian point $\textbf{q}$ (hereafter `particle') and for each
smoothing radius $R$ this OC (i.e., collapse) time is computed, and
the highest collapse redshift $z_c$, the corresponding smoothing scale
$R_c$, and the Zel'dovich estimate of the peculiar velocity ${\bf
v}_c$ are recorded. PINOCCHIO differs from the standard PS-like
method when it comes to assigning masses to collapsed objects. Rather
than associating a halo mass with the collapsed mass element based
directly on the smoothing scale $R_c$ at collapse, PINOCCHIO uses a
fragmentation algorithm to link neighboring mass elements into a
common dark matter halo. In fact, the collapsed mass element may be
assigned to a filament or sheet rather than a halo.
After sorting particles according to decreasing collapse redshift
$z_c$ the following rules for accretion and merging are adopted:
Whenever a particle collapses and none of its Lagrangian neighbors (the
six nearest particles) have yet collapsed, the particle is considered
a seed for a new halo. Otherwise, the particle is accreted by the
nearest Lagrangian neighbor that already has collapsed if the Eulerian
distance $d$, computed using the Zel'dovich velocities ${\bf v}$ at
the time of collapse, obeys $d \leq f_a R_M$, where $R_M = M^{1/3}$ is
the radius of a halo of $M$ particles. If more than one Lagrangian
neighbor has already collapsed, it is simultaneously checked whether
these haloes merge. This occurs whenever, again at the time of
collapse, the mutual Eulerian distance between these haloes is $d \leq
f_M R_M$, where $R_M$ refers to the larger halo. Note that with this
description, up to six haloes may merge at a given time. The
collapsing particles that according to these criteria do not accrete
onto a halo at their collapse time are assigned to a filament. In
order to mimic the accretion of filaments onto haloes, filament
particles can be accreted by a dark matter halo at a later stage when
they neighbor (in Lagrangian space) an accreting particle. Finally,
in high density regions it can happen that pairs of haloes that are
able to merge are not touched by newly collapsing particles for a long
time. Therefore, at certain time intervals pairs of touching haloes
are merged if they obey the above merging condition.
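In schematic form, the decision tree applied to each newly collapsing
particle can be summarised as follows. This is a deliberately
simplified sketch, not the actual PINOCCHIO code; the \texttt{haloes}
container and all of its methods are hypothetical names introduced for
illustration:
\begin{verbatim}
def process(p, haloes, f_a, f_M):
    """Schematic accretion/merging step for a collapsing particle p."""
    cand = [haloes.halo_of(q) for q in haloes.collapsed_neighbours(p)]
    if not cand:
        haloes.new_seed(p)            # p seeds a new halo
        return
    # accretion test against the nearest collapsed neighbour's halo,
    # with Eulerian distances from the Zel'dovich displacements
    h = min(cand, key=lambda h: haloes.distance(p, h))
    if haloes.distance(p, h) <= f_a * h.npart ** (1.0 / 3.0):
        h.accrete(p)
    else:
        haloes.assign_to_filament(p)  # p joins a filament for now
    # pairwise merging test among the candidate haloes; R_M is set
    # by the larger halo of each pair
    for h1, h2 in haloes.pairs(cand):
        R_M = max(h1.npart, h2.npart) ** (1.0 / 3.0)
        if haloes.distance_between(h1, h2) <= f_M * R_M:
            haloes.merge(h1, h2)
\end{verbatim}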
The accretion and merging algorithm described above has five free
parameters. In addition to the parameters $f_a$ and $f_M$ three
additional free parameters have been introduced by Monaco {et al.~}
(2002b). We refer the reader to this paper for details. This
relatively large amount of freedom may seem a weakness of PINOCCHIO.
However, it is important to realize that even N-body codes require
some free parameters, such as the linking-length in the
Friends-Of-Friends (FOF) algorithm used to identify dark matter
haloes. Furthermore, we do not consider these parameters as free in
what follows. Rather, we adopt the values advocated by Monaco {et al.~}
(2002a,b),
which they obtained by tuning PINOCCHIO to reproduce the
conditional and unconditional mass function of N-body simulations.
\begin{figure}
\centerline{\psfig{file=mg.ps,width=0.5\textwidth,angle=270}}
\caption{The mass assembly histories of dark matter haloes
with present-day masses in the four mass bins as indicated
in the panels. The upper two panels are based on the
$100 h^{-1}{\rm Mpc}$-box simulations, P1 and S1,
while the lower two panels use data from the
$300 h^{-1}{\rm Mpc}$-box simulations, P2 and S2.
The thin lines are 40 MAHs randomly selected from the PINOCCHIO
simulations. The thick solid line in each panel shows the average
of all the MAHs obtained in the PINOCCHIO simulations in the
corresponding mass bin. The thick dotted line shows the average
MAH extracted from the simulations. The thick dashed line
shows the average MAH obtained from 3000 EPS realizations
(properly sampled from halo mass function).}
\label{MAH}
\end{figure}
\begin{figure}
\centerline{\psfig{file=mg_2.ps,width=0.5\textwidth,angle=270}}
\caption{The dashed curve in each panel shows the difference
between the average MAHs predicted by the EPS model and by
the N-body simulation,
while the solid curve shows the difference between
PINOCCHIO prediction and N-body simulation.
The upper two panels use data from P1 and S1, while the lower two
panels use data from P2 and S2.
Data are not shown for $z \lower.5ex\hbox{\gtsima} 3$ because the MAHs are
not well represented at such high redshifts in the simulations.}
\label{MAH2}
\end{figure}
{\bf
\begin{figure}
\centerline{\psfig{file=scatter.ps,width=0.5\textwidth,angle=270}}
\caption{The {\it standard deviation} of the MAHs, $S_{\rm M}(z)$,
normalized by the average MAH, $M(z)$, in four mass bins.
Solid lines are results from PINOCCHIO, while dotted lines
are results from N-body simulations.
As in Fig.~\ref{MAH} and Fig.~\ref{MAH2}, the upper two panels
use data from P1 and S1, while the lower two panels use data
from P2 and S2.}
\label{scatter}
\end{figure}
}
\begin{table}
\begin{center}
\caption{Ensemble of PINOCCHIO simulations (P0)}
\begin{tabular}{cccc}
\hline
Box size ($h^{-1}$ Mpc)& $N_{\rm run}$ & $M_{p}$ ($h^{-1}\>{\rm M_{\odot}}$) & $N_{\rm MAH}$ \\
\hline
20 & 12 & $4.0 \times 107$ & 2,690 \\
40 & 8 & $3.2 \times 108$ & 1,863 \\
60 & 8 & $1.1 \times 109$ & 796 \\
80 & 6 & $2.5 \times 109$ & 1,438 \\
100 & 6 & $5.0 \times 109$ & 2,799 \\
140 & 4 & $1.4 \times 10^{10}$ & 410 \\
160 & 2 & $2.0 \times 10^{10}$ & 299\\
200 & 9 & $4.0 \times 10^{10}$ & 2,629 \\
\hline
\end{tabular}
\end{center}
\medskip
A listing of the PINOCCHIO simulations used in this paper. All
simulations use $256^3$ particles and adopt the standard $\Lambda$CDM
concordance cosmology. In order to get good statistics, we choose a
combination of box sizes so that we can select thousands of
well-resolved (with more than 2000 particles) haloes
in each mass bin we adopt in the paper. This
ensemble of PINOCCHIO simulations is referred to as `P0' in the text.
The first column of Table 1 lists the box size of the
simulation in $h^{-1} \>{\rm Mpc}$. The second column lists the number of
independent realizations run. The particle mass $M_p$ (in $h^{-1}
\>{\rm M_{\odot}}$) is listed in the third column, while the fourth column lists
the total number of haloes (summed over all $N_{\rm run}$
realizations) with more than 2000 particles and for which a MAH has
been obtained.
\end{table}
\section{Simulations}
\label{sec:sim}
In this paper we use PINOCCHIO simulations to study the mass assembly
histories (MAHs) of dark matter haloes. We follow previous studies
(Lacey \& Cole 1993; Eisenstein \& Loeb 1996; Nusser \& Sheth 1999;
van den Bosch 2002a) and define the MAH, $M(z)$, of a halo as the main
trunk of its merger tree: at each redshift, the mass $M(z)$ is
associated with the mass of the most massive progenitor at this
redshift, and we follow this progenitor, and this progenitor only,
further back in time. In this way, this `main progenitor halo' never
accretes other haloes that are more massive than itself. Note that
although at each branching point we follow the most massive branch,
this does not necessarily imply that the main progenitor is also the
most massive of {\it all} progenitors at any given redshift.
Below we describe the PINOCCHIO simulations, the N-body simulations,
and the EPS method used to construct MAHs.
\subsection{PINOCCHIO simulations}
\label{sec:pinsim}
Because the progenitors of a present-day halo become smaller at higher
redshift, we can only follow the MAHs to a sufficiently high redshift
if the halo at $z=0$ contains a large enough number of particles.
When constructing MAHs with PINOCCHIO, we only use haloes that contain
more than 2000 particles at the present time, and we trace each MAH to
the redshift at which its main progenitor contains less than 10
particles. In order to cover a large range of halo masses, we have
carried out 55 PINOCCHIO simulations with $256^3$ particles each,
spanning a wide range of box sizes and particle masses (see Table~1; we
call this suite of PINOCCHIO simulations P0 hereafter).
The choice of box sizes ensures that there are several thousand
well-resolved haloes in each of the mass bins considered.
Each of these simulations takes only about 6 hours of CPU time on a
common PC (including the actual analysis), clearly demonstrating its
advantage over regular N-body simulations. This suite of PINOCCHIO
simulations has adopted
the $\Lambda$CDM concordance cosmology with $\Omega_m=0.3$,
$\Omega_\Lambda=0.7$, $h=0.7$ and $\sigma_8=0.9$.
With simulation box sizes ranging from $20 \>h^{-1}{\rm {Mpc}}$ to $200\>h^{-1}{\rm {Mpc}}$, and
particle masses ranging from $4 \times 10^{7} h^{-1} \>{\rm M_{\odot}}$ to $4
\times 10^{10} h^{-1} \>{\rm M_{\odot}}$, we are able to study the MAHs of
present-day haloes with masses $> 8 \times 10^{10} h^{-1} \>{\rm M_{\odot}}$. The
construction of the MAHs is straightforward: PINOCCHIO outputs a halo
mass every time a merger occurs, i.e., when a halo with more than 10
particles merges into the main branch. If we require an estimate of
the halo mass at any intermediate redshift, $z$, we use linear
interpolation in $\log(1+z)$ between the two adjacent output
redshifts.
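The interpolation step is elementary; for example (an illustrative
snippet, with \texttt{z\_ev} the increasing merger-event redshifts of
one halo and \texttt{M\_ev} the corresponding main-progenitor masses):
\begin{verbatim}
import numpy as np

def M_of_z(z, z_ev, M_ev):
    """Main-progenitor mass at z, linear in log(1+z) between the
    recorded merger events (z_ev must be increasing)."""
    return np.interp(np.log(1.0 + z), np.log(1.0 + z_ev), M_ev)
\end{verbatim}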
\subsection{N-body simulations}
\label{sec:nbody}
For comparison we also used MAHs extracted from two sets of N-body
simulations (referred to as S1 and S2). These N-body simulations
follow the evolution of $5123$ particles in a periodic box of
$100 \>h^{-1}{\rm {Mpc}}$ (S1) and $300 \>h^{-1}{\rm {Mpc}}$ (S2) on a side, assuming slightly
different cosmologies (see Table 2 for details).
The snapshot outputs of each simulation are evenly placed at 60
redshifts between $z=0$ and $z=15$ in $\ln(1+z)$ space.
In each simulation and at each output, haloes are identified using the
standard FOF algorithm with a linking length of $b=0.2$. Haloes
obtained with this linking length have a mean overdensity of $\sim
180$. A halo at redshift $z_1$ is identified as a progenitor of a
halo at $z_2 < z_1$ if more than half of its mass is included in
the halo at $z_2$.
The resulting lists of progenitor haloes are used to construct the
MAHs. In our analysis, we only use haloes more massive than
$10^{11}h^{-1}\>{\rm M_{\odot}}$ at the present time in S1 and
halos more massive than $10^{13}h^{-1}\>{\rm M_{\odot}}$ in S2. Thus,
in each simulation only halos with more than $\sim 600$ particles
at $z=0$ are used, which allows us to trace the MAHs to sufficiently high
redshift with sufficiently high resolution. For comparison, we also
generate two sets of PINOCCHIO simulations, P1 and P2,
using exactly the same numbers of particles and cosmologies as
in S1 and S2, respectively (see Table 2).
\subsection{Monte-Carlo simulations}
\label{sec:moncar}
We also generate MAHs using Monte-Carlo simulations based on the
standard EPS formalism. We adopt the N-branch tree method with
accretion suggested by Somerville \& Kolatt (1999, hereafter SK99).
This method yields more reliable MAHs than for example the binary-tree
method of Lacey \& Cole (1993). In particular, it ensures exact mass
conservation, and yields conditional mass functions that are in good
agreement with direct predictions from EPS theory (i.e., the method is
self-consistent).
To construct a merger tree for a parent halo of mass $M$ the SK99
method works as follows. First a value for $\Delta S$ is drawn from
the mass-weighted probability function
\begin{equation}
\label{probdS}
P(\Delta S ,\Delta \omega) \; {{\rm d}}\Delta S = {1 \over \sqrt{2 \pi}} \; {\Delta \omega \over
\Delta S^{3/2}} \; {\rm exp}\left[-{(\Delta \omega)^2 \over 2 \Delta S}\right] \; {{\rm d}}\Delta S
\end{equation}
(cf. equation~[\ref{probSS}]). Here $\Delta \omega$ is a measure for the time
step used in the merger tree, and is a free parameter (see below).
The progenitor mass, $M_p$, corresponding to $\Delta S$ follows from
$\sigma^2(M_p) = \sigma^2(M) + \Delta S$. With each new progenitor it is
checked whether the sum of the progenitor masses drawn thus far
exceeds the mass of the parent, $M$. If this is the case the
progenitor is rejected and a new progenitor mass is drawn. Any
progenitor with $M_p < M_{\rm min}$ is added to the mass component
$M_{\rm acc}$ that is considered to be accreted onto the parent in a
smooth fashion (i.e., the formation history of these small mass
progenitors is not followed further back in time). Here $M_{\rm min}$
is a free parameter that has to be chosen sufficiently small. This
procedure is repeated until the total mass left, $M_{\rm left} = M -
M_{\rm acc} - \sum M_p$, is less than $M_{\rm min}$. This remaining
mass is assigned to $M_{\rm acc}$ and one moves on to the next time
step. For the construction of MAHs, however, it is not necessary to
construct an entire set of progenitors. Rather, at each time step,
one can stop once the most massive progenitor drawn thus far is more
massive than $M_{\rm left}$. This has the additional advantage that
one does not have to define a minimum progenitor mass $M_{\rm min}$
(see van den Bosch 2002a for details).
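In code, a single time step of this main-branch shortcut might look as
follows. This is an illustrative sketch under the stated assumptions;
\texttt{S\_of\_M} and its inverse \texttt{M\_of\_S} are user-supplied,
and \texttt{rng} is a \texttt{numpy} random generator:
\begin{verbatim}
import numpy as np

def draw_dS(dw, rng):
    """Draw dS from eq. (probdS): if x ~ N(0,1), then
    dS = dw^2 / x^2 has exactly the required distribution."""
    return dw**2 / rng.standard_normal()**2

def step_main_branch(M, dw, S_of_M, M_of_S, rng):
    """One SK99 time step; returns the most massive progenitor."""
    S, M_left, M_max = S_of_M(M), M, 0.0
    while M_max < M_left:
        Mp = M_of_S(S + draw_dS(dw, rng))   # candidate progenitor
        if Mp > M_left:                     # would overflow budget:
            continue                        # reject and redraw
        M_left -= Mp
        M_max = max(M_max, Mp)
    return M_max
\end{verbatim}
Iterating this step, each time replacing $M$ by the returned
main-progenitor mass, traces out one Monte-Carlo MAH.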
In principle, since the upcrossing of trajectories through a boundary
is a Markov process, the statistics of progenitor masses should be
independent of the time steps taken. However, the SK99 algorithm is
based on the {\it single} halo probability (equation~[\ref{probdS}]),
which does not contain any information about the {\it set} of
progenitors that make up the mass of $M$. In fact, mass conservation
is enforced `by hand', by rejecting progenitor masses that overflow
the mass budget. As shown in van den Bosch (2002a), this results in a
time step dependency, but only for relatively large time steps. For
sufficiently small values of $\Delta \omega$ the algorithm outlined above yields
accurate and robust results (see also SK99). Throughout this paper we
adopt a timestep of $\Delta z=0.05$. Our tests with different
values of $\Delta z$ from $0.01$ to $0.05$ have shown that this
time step is small enough to achieve stable results, that is, when we
decrease the time step to $\Delta z=0.01$, the change in the
average MAH is less than 1\%.
\subsection{Comparison}
\label{sec:comp}
\begin{table*}
\begin{center}
\caption{Reference PINOCCHIO and N-body simulations}
\begin{tabular}{lccccccc}
\hline\hline
Simulation Name & $N_{\rm p}$ &Box size ($h^{-1}$ Mpc) & $M_{p} (h^{-1}\>{\rm M_{\odot}})$ &
$\Omega_{\rm m}$ & $\Omega_\Lambda$ & $h$ & $\sigma_8$ \\
\hline\hline
S1 (N-body) & $5123$ &100 & $5.5 \times 10^{8}$ & 0.268 & 0.732 & 0.71 & 0.85\\
P1 (PINOCCHIO)& $5123$ &100 & $5.5 \times 10^{8}$ & 0.268 & 0.732 & 0.71 & 0.85\\
\hline
S2 (N-body)& $5123$ &300 & $1.3 \times 10^{11}$ & 0.236 & 0.764 & 0.73 & 0.74 \\
P2 (PINOCCHIO)& $5123$ &300 & $1.3 \times 10^{11}$ & 0.236 & 0.764 & 0.73 & 0.74\\
\hline\hline
\end{tabular}
\end{center}
\medskip
\end{table*}
We now compare the MAHs obtained with all three methods discussed
above. The upper panels of Fig.~\ref{fig1} plot the (unconditional)
halo mass functions at four different redshifts, as indicated,
obtained from 5 arbitrary PINOCCHIO runs with different box sizes in P0.
Dashed lines correspond to the analytical halo mass functions obtained
using the standard PS formalism (equation~[\ref{PS}]), while the solid
lines indicate the mass functions of SMT01 based on ellipsoidal
collapse. The latter have been shown to accurately match the mass
functions obtained from N-body simulations (e.g., Sheth \& Tormen, 1999;
SMT01). The symbols in the
lower panels of Fig.~\ref{fig1} plot the differences between the
PINOCCHIO and the SMT01 mass functions, while the dashed lines
indicate the differences between the PS and the SMT01 mass functions.
Clearly, the PINOCCHIO mass functions are in excellent agreement with
those of SMT01, and thus also with those obtained from N-body
simulations. In addition, Taffoni {et al.~} (2002) have shown that
PINOCCHIO also accurately matches the {\it conditional} mass functions
obtained from numerical simulations. We now investigate whether the
actual MAHs obtained from PINOCCHIO are also in good agreement with
the numerical simulations.
Fig.~\ref{MAH} plots the average MAHs obtained from the PINOCCHIO,
N-body and EPS simulations, for haloes with present-day masses
in the following four mass ranges:
$\log(M_0/h^{-1}\>{\rm M_{\odot}}) = $ 11--12, 12--13, 13--14 and 14--15.
For comparison, in each panel we also show 40 randomly selected
MAHs from the PINOCCHIO simulations (P1 and P2).
To ensure mass resolution, results for the low-mass bins
(the two upper panels) are based on simulations with the small
box size, i.e. S1 and P1. Results for the high-mass bins
(the two lower panels) are based only on simulations with the
large-box size (S2 and P2) in order to obtain a large number of
massive halos. The thick solid curve in each panel corresponds
to the average MAH obtained by averaging over all the halos,
in the mass range indicated, found in one of the PINOCCHIO simulations
(P1 and P2). The thick dashed lines correspond to the average MAHs
obtained from 3000 EPS Monte-Carlo simulations (properly weighted by
the halo mass function). The thick dotted lines show the average MAHs
obtained from the two N-body simulations (S1 and S2).
In Fig.~\ref{MAH2}, a detailed comparison between these
results is presented. As can be seen in
Fig.~\ref{MAH2}, the average MAHs obtained with PINOCCHIO are in good
agreement with those obtained from the N-body simulations (with
differences smaller than 10\%). Note that there are uncertainties
in the identification of dark haloes in N-body simulations
using the FOF algorithm. Sometimes two physically separated haloes
can be linked together and identified as one halo if they
are bridged by dark matter particles, which can change
the halo mass by 5\% on average. The agreement between
PINOCCHIO and simulation shown in Fig.~\ref{MAH2} is
probably as good as one can hope for.
The EPS model, however, yields MAHs that are systematically
offset with respect to those obtained from the N-body simulations:
the EPS formalism predicts that haloes assemble too late
(see also van den Bosch 2002a; Lin, Jing \& Lin 2003; W02).
Fig.~\ref{scatter} shows the ratio between the standard deviation
of the MAHs, $S_{\rm M}(z)$, and the average MAH $M(z)$, as a function
of redshift $z$. As one can see, the agreement between the PINOCCHIO
and N-body simulations is also reasonably good.
In summary, the Lagrangian Perturbation code PINOCCHIO yields halo
mass functions (both conditional and unconditional), and
mass assembly histories that are all in good
agreement with N-body simulations. In particular, it works much
better than the standard PS formalism, and yet is much faster to run
than numerical simulations. PINOCCHIO therefore provides a unique and
fast platform for accurate investigations of the assembly histories
of a large, statistical sample of CDM haloes.
\section{Halo formation times}
\label{sec:ftime}
Having demonstrated that the PINOCCHIO MAHs are in good agreement with
those obtained from N-body simulations, we now use the suite of 55 PINOCCHIO
simulations, P0, listed in Table~1 to investigate the assembly histories of
a large sample of haloes spanning a wide range in halo masses.
The assembly history of a halo can be parameterized by a formation
time (or equivalently formation redshift), which characterizes when
the halo assembles. However, since the assembly of a halo is a
continuous process, different `formation times' can be defined, each
focusing on a different aspect of the MAH. Here we define and compare
the following four formation redshifts:
\begin{enumerate}
\item $z_{\rm half}$: This is the redshift at which the halo has
assembled half of its final mass. This formation time has been
widely used in the literature.
\item $z_{\rm lmm}$: This is the redshift at which the halo experiences its last
major merger. Unless stated otherwise we define a major merger as
one in which the mass ratio between the two progenitors is larger
than $1/3$. This definition is similar to $z_{\rm jump}$ defined in
Cohn \& White (2005). Major mergers may have played an important
role in transforming galaxies and in regulating star formation in
galaxies. Their frequency is therefore important to quantify.
\item $z_{\rm vvir}$: This is the redshift at which the virial velocity of
a halo, $V_{\rm vir}$, defined as the circular velocity at the virial
radius, reaches its current value, $V_0$, for the first time. Since
$V_{\rm vir}$ is a measure for the depth of the potential well, $z_{\rm vvir}$
characterizes the formation time of the halo's gravitational
potential.
\item $z_{\rm vmax}$: This is the redshift at which the halo's virial
velocity reaches its maximum value over the entire MAH. As we show
below, the value of $V_{\rm vir}$ is expected to increase (decrease) with
time, if the time scale for mass accretion is shorter (longer) than
the time scale of the Hubble expansion. Therefore, $z_{\rm vmax}$
indicates the time when the MAH transits from a fast accretion phase
to a slow accretion phase.
\end{enumerate}
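Given a discretely sampled MAH, the first two of these redshifts are
easily extracted. The following illustrative sketch assumes an
increasing redshift grid \texttt{z} with main-progenitor masses
\texttt{M}, plus a list of merger records \texttt{(z, M1, M2)}:
\begin{verbatim}
import numpy as np

def z_half(z, M):
    """First look-back redshift at which M(z) falls below half of
    the final mass M[0] (the mass at z[0] = 0)."""
    idx = np.where(M <= 0.5 * M[0])[0]
    return z[idx[0]] if idx.size else None

def z_lmm(mergers, ratio=1.0 / 3.0):
    """Redshift of the last merger with mass ratio >= ratio."""
    zmaj = [zm for zm, M1, M2 in mergers
            if min(M1, M2) / max(M1, M2) >= ratio]
    return min(zmaj) if zmaj else None
\end{verbatim}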
In an N-body simulation one can infer the virial velocity of a halo
from its internal structure. In the case of PINOCCHIO simulations,
however, no information regarding the density distribution of haloes
is available. However, we may use the fact that CDM haloes always have
a particular (redshift and cosmology dependent) overdensity. This
allows us to define the virial velocity at redshift $z$ as
\begin{equation}
\label{eq:vcz}
V_{\rm vir}(z) = \sqrt{G M_{\rm vir} \over R_{\rm vir}} =
\left[ \frac{\Delta_{\rm vir}(z)}{2}\right]^{1/6} \left[M_{\rm vir}(z) \, H(z)\right]^{1/3}
\end{equation}
Here $M_{\rm vir}$ and $R_{\rm vir}$ are the virial mass and virial radius of the
halo, respectively, and $H(z)$ is the Hubble parameter. The quantity
$\Delta_{\rm vir}(z)$ is the density contrast between the mean density of the
halo and the critical density for closure, for which we use the
fitting formula of Bryan \& Norman (1998),
\begin{equation}
\label{delc}
\Delta_{\rm vir}(z) = 18 \pi^2 + 82 [\Omega_{\rm m}(z)-1] - 39 [\Omega_{\rm m}(z)-1]^2
\end{equation}
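A direct transcription of equations~(\ref{eq:vcz}) and~(\ref{delc}) is
given below as an illustrative sketch for flat $\Lambda$CDM, with
Newton's constant written out explicitly so that $V_{\rm vir}$ comes
out in ${\rm km}\,{\rm s}^{-1}$; the cosmological parameters are
placeholders:
\begin{verbatim}
import numpy as np

G = 4.301e-9                      # Mpc Msun^-1 (km/s)^2

def Hz(z, H0=70.0, Om=0.3):
    """Flat LCDM Hubble parameter [km/s/Mpc]."""
    return H0 * np.sqrt(Om * (1.0 + z)**3 + 1.0 - Om)

def Delta_vir(z, Om=0.3):
    """Bryan & Norman (1998) fit of eq. (delc)."""
    x = Om * (1.0 + z)**3 / (Om * (1.0 + z)**3 + 1.0 - Om) - 1.0
    return 18.0 * np.pi**2 + 82.0 * x - 39.0 * x**2

def V_vir(M, z):
    """Eq. (eq:vcz) with G explicit; M in Msun, result in km/s."""
    return ((Delta_vir(z) / 2.0) ** (1.0 / 6.0) *
            (G * M * Hz(z)) ** (1.0 / 3.0))
\end{verbatim}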
\begin{figure}
\vbox{
\psfig{file=typMAH.eps,angle=270,width=1.0\hsize}
\psfig{file=typvc.eps,angle=270,width=1.0\hsize}
\caption{{\it Upper panel:} the MAH of a randomly chosen halo with a
mass of $1.02 \times 10^{13} h^{-1}\>{\rm M_{\odot}}$. Various characteristic
events during the assembly of this halo are indicated: $z_{\rm vmax}$
(open triangle), $z_{\rm half}$ (open circle), and $z_{\rm vvir}$ (cross). The
solid dots with an arrow indicate major mergers (those with a mass
ratio larger than $1/3$). {\it Lower panel:} same as in upper panel,
except that here the evolution of the halo virial velocity is
shown.}
\label{fig:zformex}
}
\end{figure}
As an illustration, Fig.~\ref{fig:zformex} plots the MAH, $M(z)/M_0$
(upper panel), and the history of the virial velocity, $V_{\rm vir}(z)/V_0$
(lower panel) for a randomly selected halo (with $M_0 = 1.02 \times
10^{13} h^{-1} \>{\rm M_{\odot}}$). All major merger events are marked by a solid
dot plus arrow. The last major merger occurs at $z_{\rm lmm}= 1.60$. The
other formation redshifts, $z_{\rm half}=1.59$, $z_{\rm vvir}=3.77$, and
$z_{\rm vmax}=1.23$ are marked by an open circle, a cross, and an open
triangle, respectively.
\begin{figure}
\centerline{\psfig{file=zformcorr.ps,angle=270,width=1.0\hsize}}
\caption{The correlations between various halo formation redshifts for
haloes with present day masses in the range $10^{11} h^{-1} \>{\rm M_{\odot}}
\leq M \leq 10^{12} h^{-1} \>{\rm M_{\odot}}$. The value of $r_s$ in each panel
shows the corresponding Spearman rank-order correlation coefficient.
Due to the finite time resolution in the PINOCCHIO simulations, in
some cases the values of two formation times can be the same.}
\label{fig:zformcorr}
\end{figure}
Fig.~\ref{fig:zformcorr} plots the correlations between the various
formation redshifts, for haloes with masses in the range $10^{11} -
10^{12} h^{-1}\>{\rm M_{\odot}}$. The value of $r_s$ in each panel shows the
corresponding Spearman rank-order correlation coefficients. Clearly,
there is significant correlation among all the formation redshifts,
but the scatter is quite large. This demonstrates that these different
formation times characterize different aspects of a given MAH.
Unlike N-body simulations, which output snapshots at pre-set times,
PINOCCHIO produces output only when a merger occurs, and mergers are
treated as instantaneous. Consequently, some formation times can have
exactly the same value in PINOCCHIO simulations. Note
that the correlation shown in the lower left panel is quite similar to
that obtained by Cohn \& White (2005) for simulated clusters of
galaxies. Note also that typically, $z_{\rm vvir} > z_{\rm half}$ and $z_{\rm vvir}
> z_{\rm lmm}$. This shows that haloes {\it in this mass range} established
their potential wells before they accreted a major fraction of their
mass. The last major merger typically occurred well before $z_{\rm half}$,
which indicates that most of that mass has been accreted in a fairly
smooth fashion (see also W02 and Zhao {et al.~} 2003a).
\begin{figure}
\centerline{\psfig{file=zform.ps,angle=270,width=1.0\hsize}}
\caption{The probability distributions of $z_{\rm half}$ (dotted lines),
$z_{\rm vvir}$ (dashed lines), $z_{\rm vmax}$ (dot-dashed lines) and $z_{\rm lmm}$
(thick solid lines). Results are shown for four different mass bins,
as indicated in each panel. Note that the scale of the four panels
is different! See text for a detailed discussion.}
\label{fig:zform1}
\end{figure}
\begin{figure}
\centerline{\psfig{file=fracm.ps,angle=270,width=1.0\hsize}}
\caption{The distributions of the halo mass fraction at various formation
times. Different line-styles correspond to different definitions of
the formation time, as indicated in the upper left-hand panel. As in
Fig.~\ref{fig:zform1}, different panels correspond to different halo
mass bins, as indicated.}
\label{fig:zform2}
\end{figure}
Fig.~\ref{fig:zform1} shows the distributions of the four formation
redshifts defined above. Results are shown for four different mass
bins, as indicated. For all four formation redshifts, the median is
higher for haloes of lower masses. This reflects the hierarchical
nature of the assembly of dark matter haloes: less massive systems
assemble (`form') earlier. Note that the distribution of formation
times is also broader for lower mass haloes. For haloes with $M_0
\ga M^{*} \simeq 10^{13} h^{-1} \>{\rm M_{\odot}}$\footnote{Here $M^{*}$ is the
characteristic non-linear mass defined by $\sigma(M^{*}) =
\delta_{\rm crit}0$}, all the distribution functions except that of
$z_{\rm half}$ are peaked at, or very near to, $z = 0$. This shows
that the majority of these haloes are still in their fast accretion
phase, so that their potential wells are still deepening with time.
On the other hand, haloes with $M_0 \ll M^{*}$ typically have
$z_{\rm vvir} > z_{\rm half}$ and $z_{\rm vvir} >z_{\rm lmm}$ (cf.
Fig.~\ref{fig:zformcorr}), indicating that their potential wells have
already been established, despite the fact that they continue to
accrete appreciable amounts of mass.
Fig.~\ref{fig:zform2} shows the distributions of the ratio $M(z_{\rm
form}) / M_0$, with $z_{\rm form}$ one of our four formation
redshifts. By definition, the distribution of $M(z_{\rm half}) / M_0$ is a
$\delta$-function at $M(z_{\rm form})/M_0 = 0.5$, and is therefore not
shown. For haloes with $M_0 < 10^{13} h^{-1} \>{\rm M_{\odot}}$, the virial
velocity has already reached the present day value when the halo has
only assembled 10\%-20\% of its final mass. Thus, these systems
assemble most of their mass without significant changes to the depth
of their potential well. Only for massive haloes with $M_0 \ga
10^{14} h^{-1} \>{\rm M_{\odot}}$ is the median of $M(z_{\rm vvir}) / M_0$ larger than
0.5, implying that they have assembled the majority of their present
day mass through major (violent) mergers.
If we define major mergers as those with a progenitor mass ratio that
is at least $1/3$, the distribution of $M(z_{\rm lmm})/M_0$ is remarkably
flat. This implies that some haloes accrete a large amount of mass
after their last major merger, while for others the last major merger
signals the last significant mass accretion event. Remarkably, the
distribution of $M(z_{\rm lmm})/M_0$ is virtually independent of $M_0$. For
low mass haloes, the flatness of the distribution of $M(z_{\rm lmm})/M_0$
simply reflects the broad distribution of $z_{\rm lmm}$. However, for
massive haloes with $M \ga M^{*}$, the distribution of $z_{\rm lmm}$ is
fairly narrow. Therefore, for these haloes the flatness of the
$M(z_{\rm lmm})/M_0$ distribution implies that, since their last major
merger, they have accreted a significant amount of mass due to minor
mergers. Since the last major merger occurred fairly recently, this is
another indication that massive haloes are still in their fast
accretion phase.
\section{The properties of major mergers}
\label{sec:majmerprop}
During the assembly of dark matter haloes, major mergers play an
important role. Not only does a major merger add a significant amount
of mass, it also deepens the halo's potential well. Furthermore, in
current models of galaxy formation, a major merger of two galaxy-sized
haloes is also expected to result in a merger of their central
galaxies, probably triggering a starburst and leading to the formation
of an elliptical galaxy. Therefore, it is important to quantify the
frequency of major mergers during the formation of CDM haloes.
\begin{figure}
\centerline{\psfig{file=checkNjump.ps,angle=270,width=1.0\hsize}}
\caption{The median, $\langle N_{\rm jump}\rangle$, and
dispersion, $\sigma_{N_{\rm jump}}$, of the distribution
of the number of mass jumps, $N_{\rm jump}$, in the MAHs,
versus $n$ (see text for definitions). Left panels show
comparison between P1 and S1, while right panels
show comparison between P2 and S2. Note that the agreement
between the PINOCCHIO simulations and $N$-body simulations
is remarkable and the mass dependence is rather weak.}
\label{njump}
\end{figure}
As mentioned above, in a PINOCCHIO simulation mergers of dark matter
haloes are treated as instantaneous events, and the masses of the
merger progenitors are recorded whenever a merger happens.
This makes it very convenient to identify mergers in PINOCCHIO.
On the other hand, in an $N$-body simulation halos are identified only
in a number of snapshots, and so the accuracy of identifying mergers is
limited by the time intervals of the snapshots. For example,
if we define major mergers by looking for haloes for which
the mass ratio between the second largest and largest progenitors
exceeds 1/3 in the last snapshot, we may miss major mergers in which
the two progenitors were themselves assembled between the two snapshots.
On the other hand, if we identify major mergers in a simulation
by looking for halos whose masses increase by a factor between
1/4 and 1 in the next snapshot, we will overestimate
the number of major merger events, because some of the halos
may have increased their masses by accretion of small halos
rather than through major mergers. In the simulations
used here (S1 and S2), the time intervals between successive
snapshots are about 0.3-0.6 Gyr, comparable to the time scales of
major mergers, and the two definitions of major mergers described
above lead to a factor of 2 difference in the number of
major mergers. Because of this, it is difficult to make a direct
comparison between PINOCCHIO and N-body simulations in their
predictions for the number of major mergers. In order to check
the reliability of PINOCCHIO in predicting the number of
major mergers, we use quantities that are related to the
number of major mergers but yet can be obtained from both
our N-body and PINOCCHIO simulations. We first construct PINOCCHIO
haloes at each of the snapshots of our N-body simulations.
We then follow the MAH of each present-day halo using
the snapshots and identify the number of events in which
the mass of a halo increases by a factor exceeding $1/n$
between two successive snapshots, where $n$ is an integer
used to specify the heights of the jumps. In practice,
we trace the MAH backward in time until the mass of the halo
is 1\% of the final halo mass. Since exactly the same analysis
can also be carried out for the N-body simulations, we can
compare, for a given $n$ and for halos of given mass at the present
time, the statistics of the number of jumps, $N_{\rm jump}$,
predicted by PINOCCHIO simulations with that given by
the N-body simulations. We found that the distribution
of $N_{\rm jump}$ for a given $n$ can be well fit by a Gaussian
distribution, and in Fig.~\ref{njump} we plot the median
$\langle N_{\rm jump}\rangle $ and standard deviation
$\sigma_{N_{\rm jump}}$ versus $n$, in several mass bins.
The agreement between PINOCCHIO and N-body simulations is
remarkably good. Although $N_{\rm jump}$ is not exactly
the number of major mergers, the good agreement between
PINOCCHIO and N-body simulations makes us believe that
it is reliable to use PINOCCHIO to make predictions for the
statistics of major mergers.
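For reproducibility, the jump-counting statistic can be written in a
few lines. The sketch below is illustrative; \texttt{M} holds the
main-progenitor mass at the snapshot redshifts, ordered from $z=0$
backwards in time, and a `jump' is read here as a fractional mass
increase exceeding $1/n$ between successive snapshots:
\begin{verbatim}
import numpy as np

def N_jump(M, n):
    """Count snapshot intervals with fractional mass growth > 1/n,
    tracing the MAH back until M < 0.01 M0 (M is assumed to
    decrease monotonically backwards in time)."""
    M = np.asarray(M, dtype=float)
    M = M[M >= 0.01 * M[0]]
    growth = M[:-1] / M[1:] - 1.0   # forward-in-time growth
    return int(np.sum(growth > 1.0 / n))
\end{verbatim}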
\begin{figure}
\centerline{\psfig{file=Nmm.Dist.ps,angle=270,width=1.0\hsize}}
\caption{The distribution of the number of major mergers (those with a
mass ratio larger than $1/3$) in our PINOCCHIO simulations.
Lines in different styles represent different mass bins.
Note that the distributions are virtually independent of
halo mass.}
\label{mm}
\end{figure}
\begin{figure}
\centerline{\psfig{file=MMstat.ps,angle=270,width=1.0\hsize}}
\caption{Distribution of the number of mergers (in PINOCCHIO simulations)
with a mass ratio larger than $1/3$ (upper left-hand panel), $1/4$
(upper right-hand panel), and $1/6$ (lower left-hand panel). In all
three cases all haloes with masses in the range from $10^{11} h^{-1}
\>{\rm M_{\odot}}$ to $10^{15} h^{-1}\>{\rm M_{\odot}}$ are used. The dotted curves show the
best-fit Gaussians, the median and standard deviation of which are
indicated in the lower right-hand panel.}
\label{fig:mmstat}
\end{figure}
In order to investigate the statistics of major mergers in detail,
we count the number of major mergers for each of the halos
in the ensemble of simulations P0. Here again we only trace
a halo back to a time when the mass of its main progenitor
is 1\% of the halo's final mass. This choice of lower mass limit
is quite arbitrary. However, some limit is necessary, because otherwise
there will be a large number of major mergers involving progenitors
with excessively small masses at very early times.
Furthermore this mass limit is also the one we use in defining
$N_{\rm jump}$. The large number of halos in the ensemble ensures
that each mass bin contains about 2000 haloes.
Fig.~\ref{mm} plots the distributions of the number
of major mergers (with a progenitor mass ratio $\ge 1/3$) for haloes
of different masses at the present time. A halo experiences about 1
to 5 major mergers during its mass assembly history, with an average
of about 3. Note that the $N_{\rm mm}$-distributions are virtually
independent of halo mass. As we have shown in
Section~\ref{sec:ftime}, however, the redshifts at which these mergers
occur do depend strongly on halo mass: while most major mergers occur
before $z \simeq 2$ for galaxy-sized haloes, they occur much more
recently in the more massive, cluster-sized haloes.
\begin{figure}
\centerline{\psfig{file=mmfit1.ps,angle=270,width=1.0\hsize}}
\caption{The median (upper panel) and dispersion (lower panel) of the
number distributions of mergers with a mass ratio $M_1/M_2 \geq
1/n$, as a function of $n$. Steeper lines in each panel are the data
from all progenitors (summing over all branches of the merger trees)
while flatter lines are the results from the main branch.
In both cases, we have divided haloes into two mass bins as indicated
in each panel. Open triangles connected with dashed lines show the
results for haloes with masses $<10^{13}h^{-1}{\rm M}_\odot$,
while open circles connected with dotted lines show the results for
haloes with masses $\ge 10^{13}h^{-1}{\rm M}_\odot$.
The solid lines are the linear regressions of the data
drawn from the whole halo catalogue, with the slopes and zero points
indicated.}
\label{fig:mmfit}
\end{figure}
As pointed out above, the progenitor mass ratio used to define a major
merger is quite arbitrary. We therefore also investigate the
frequency of mergers with a mass ratio larger than $1/n$ with $n=2,
4,5,6,7,8$ (in addition to the $n=3$ discussed thus far). We find
that even with these values of $n$ the distributions of $N_{\rm mm}$
are still virtually independent of halo mass.
This allows us to
consider a single $N_{\rm mm}$-distribution for haloes of all masses.
Fig.~\ref{fig:mmstat} plots these distributions for three different values
of $n$ as indicated. Each of these distributions is reasonably well
described by a Gaussian function (dotted curves). Strictly speaking, a
Gaussian function is not entirely appropriate, because $N_{\rm mm}$
cannot be negative. However, since the median value of $N_{\rm mm}$ is,
in all cases, significantly larger than the width of the distribution,
a Gaussian provides an adequate fit. To show how the
$N_{\rm mm}$-distribution depends on $n$, we plot, as in
Fig.~\ref{fig:mmfit}, the median and the dispersion of this
distribution as functions of $n$. As one can see, both the median and
the dispersion increase roughly linearly with $n$, but the slope for
the median ($\sim 1$) is much larger than that for the dispersion
($\sim 0.1$). Note that the results for haloes with masses
$<10^{13}h^{-1}\>{\rm M_{\odot}}$ and $>10^{13}h^{-1}\>{\rm M_{\odot}}$ are similar,
suggesting the distribution of the number of major mergers
is quite independent of halo mass.
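For reference, the linear trends in Fig.~\ref{fig:mmfit} can be
summarised schematically (using only the approximate slopes quoted
above; the zero points are indicated in the figure) as
\[
\langle N_{\rm mm}\rangle(n) \approx 1.0\,n + {\rm const}\,, \qquad
\sigma_{N_{\rm mm}}(n) \approx 0.1\,n + {\rm const}\,,
\]
consistent with the median of about 3 major mergers found above for the
standard choice $n=3$.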
Thus far we have only focused on the (major) merger events that merge
into the main branch of the merger tree. For comparison, we also
consider the merger rates of {\it all} progenitors, independent of
whether they are part of the main branch or not. As before we only
consider progenitors with masses in excess of one percent of the final
halo mass. The steeper lines in Fig.~\ref{fig:mmfit} show the median and
dispersion of the number of such mergers as functions of $n$. Here
again, both the median and dispersion have roughly linear relations
with $n$. The median number of such major mergers is roughly three
times as high as that of major mergers associated with the main
branch, and the dispersion increases with $n$ much faster.
\begin{figure}
\centerline{\psfig{file=pkNmmBA.ps,angle=270,width=1.0\hsize}}
\caption{The probability distributions of the number of major mergers
(those with a mass ratio larger than $1/3$) before (solid lines) and
after (dashed lines) $z_{\rm vmax}$. Note that the vast majority of
major mergers occur at $z > z_{\rm vmax}$, demonstrating that the growth
of the halo's virial velocity is mainly driven by major mergers.}
\label{VpkNmmBA}
\end{figure}
As mentioned above, major mergers are expected to be accompanied by
rapid changes of the halo's potential well, due to a resulting phase
of violent relaxation. To show this relation in more detail,
Fig.~\ref{VpkNmmBA} shows the distributions of the number of major
mergers (defined with $n=3$) before and after the formation redshift
$z_{\rm vmax}$. For haloes in all mass ranges, only a very small fraction
(less than 5\%) experiences a major merger at $z<z_{\rm vmax}$. This
demonstrates once again that the growth of the virial velocity is
mainly caused by major mergers. This result may have important
implications for understanding the structure of dark matter halos.
As shown in Lu et al. (2006), if the buildup of the potential well
associated with a dark matter halo is through major mergers, then
the velocities of dark matter particles may be effectively
randomized, a condition that may lead to a density profile close
to the universal density profile observed in $N$-body simulations.
Also, if galaxy disks are formed during a period when
no major mergers occur, our result suggests that the potential
wells of the halos of spiral galaxies should change little during
disk formation.
\section{Conclusions}
\label{sec:concl}
In the current paradigm, galaxies are thought to form in extended cold
dark matter haloes. A detailed understanding of galaxy formation,
therefore, requires a detailed understanding of how these dark matter
haloes assemble. Halo formation histories are typically studied using
either numerical simulations, which are time consuming, or using the
extended Press-Schechter formalism, which has been shown to be of
insufficient accuracy. In this paper, we have investigated the growth
history of dark matter haloes using the Lagrangian perturbation code
PINOCCHIO, developed by Monaco {et al.~} (2002a). We have demonstrated
that the mass assembly histories (MAHs) obtained by PINOCCHIO are in
good agreement with those obtained using N-body simulations. Since
PINOCCHIO is very fast to run, does not require any special hardware
such as supercomputers or Beowulf clusters, and does not require any
labor intensive analysis, it provides a unique and powerful tool to
study the statistics and assembly histories of large samples of dark
matter haloes for different cosmologies.
Confirming earlier results based on N-body simulations (e.g. W02;
Zhao {et al.~} 2003a,b), we find that typical MAHs can be separated into
two phases: an early, fast accretion phase dominated by major mergers,
and a late, slow accretion phase during which the mass is mainly
accreted from minor mergers. However, the MAHs of individual haloes
are complicated, and therefore difficult to parameterize uniquely by a
single parameter. We therefore defined four different formation times:
the time when a halo acquires half of its final mass, the time when
the halo's potential well is established, the time when a halo
transits from the fast accretion phase to the slow accretion phase,
and the time when a halo experiences its last major merger. Using a
large number of MAHs of haloes spanning a wide range in masses, we
studied the correlations between these four formation redshifts, as
well as their halo mass dependence. Although all four formation times
are correlated, each correlation exhibits a large amount of scatter.
For all four formation redshifts, it is found that more massive haloes
assemble later, expressing the hierarchical nature of structure
formation. Haloes with masses below the characteristic non-linear
mass scale, $M^{*}$, establish their potential wells well before they
have acquired half of their present day mass. The potential wells
associated with more massive haloes, however, continue to deepen even
at the present time. The time when a halo reaches its maximum virial
velocity roughly coincides with the time where the MAH transits from
the fast to the slow accretion phase.
If we define major mergers as those with a progenitor mass ratio
larger than $1/3$, then on average each halo experiences about 3 major
mergers after its main progenitor has acquired one percent of its
present day mass. In addition, we found that the median number of
mergers with a progenitor mass ratio larger than $1/n$ experienced by
the main branch of the merger tree increases roughly linearly with $n$.
For the whole merging tree, the number of major mergers is about 3
times that of the major mergers in the main branch. The distribution of
the number of major mergers a halo has experienced is virtually
independent of its mass, and the ratio between the halo mass
immediately after the last major merger and the final halo mass has a
very broad distribution, implying that the role played by major mergers
in building up the final halo can differ significantly from system to
system.
\section*{Acknowledgments}
We are grateful to Pierluigi Monaco, Tom Theuns and Giuliano Taffoni
for making their wonderful code PINOCCHIO publicly available with an
easy to understand manual, and to Xi Kang for letting us share his
EPS merging tree code. We also acknowledge the Shanghai Supercomputer
Center and the support of NSFC grant No. 10533030 and Shanghai Key
Projects in Basic Research grant No. 05XD14019 for the N-body simulations
used in this paper. HJM would like to acknowledge the support of
NSF AST-0607535, NASA AISR-126270 and NSF IIS-0611948.
FvdB acknowledges useful and lively discussions with Risa Wechsler
during an early phase of this project.
\bigskip
X-ray radiation coming from accreting black hole binary sources can show quasi-periodic modulations at two distinct high frequencies \mbox{($>30\,\text{Hz}$)}, which appear in the \mbox{$3:2$} ratio \citep{McClintockRemillard05}. Observations show that the presence of a thin accretion disk alone is not sufficient to produce these HFQPO modulations, because they are exclusively connected to the spectral state, where the energy spectrum is dominated by a steep power law with some weak thermal disk component. We have shown recently \citep{Bursa04} that significant temporal variations in the observed flux can be produced by oscillations in geometrically thick flows, fluid tori, even if they are axially symmetric. Here we propose that the QPO variations in the energetic part of the spectrum may come from such a very hot and optically thin torus terminating the accretion flow, which exhibits two basic oscillating modes.
Relativistic tori will generally oscillate in a mixture of internal and global modes. Internal modes cause oscillations of the pressure and density profiles within the torus. The outgoing flux is therefore directly modulated by changes in the thermodynamical properties of the gas, while the shape of the torus is nearly unchanged; these modes are not of interest here. Global modes, on the other hand, alter mainly the spatial distribution of the material. Because light rays do not follow straight lines in a curved spacetime, these changes are revealed through the effects of gravitational lensing and light bending.
In this paper we summarize extended results of numerical calculations and show how simple global oscillation modes of a gaseous torus affect the outgoing flux received by a static distant observer in the asymptotically flat spacetime, and how the flux modulation depends on the geometry and various parameters of the torus. In Section~2 we briefly summarise the idea of the slender torus model and the equations which are used to construct the torus and to set its radiative properties. In Section~3 we let the torus execute global oscillations and, using numerical ray-tracing, we inspect how these oscillations modulate the observed flux. If not stated otherwise, we use geometrical units \mbox{$c=G=1$} throughout this paper.
\section{Slender torus model}
The idea of a slender torus was originally introduced by \citet{MadejPaczynski77} in their model of the accretion disk of U~Geminorum. They noticed that in the slender limit ({\it i.e.\ } when the torus is small as compared with its distance) and in the Newtonian potential, the equipotential surfaces are concentric circles. This additional symmetry induced by a Newtonian potential allowed \citet{Blaes85} to find a complete set of normal mode solutions for the linear perturbations of polytropic tori with constant specific angular momentum. He extended calculations done for a `thin isothermal ring' by \citet{PapaloizouPringle84} and showed how to find eigenfunctions and eigenfrequencies of all internal modes.
\citet{ABHKR05} have recently considered global modes of a slender torus and showed that among the possible solutions of the relativistic Papaloizou-Pringle equation there also exist rigid and axisymmetric ($m\=0$) modes. These modes represent the simplest global and always-present oscillations in an accretion flow, axisymmetric up-down and in-out motion at the meridional and radial epicyclic frequencies.
\subsubsection*{Metric}
Most, if not all, stellar and super-massive black holes have a considerable amount of angular momentum, so that the Kerr metric has to be used to accurately describe their exterior spacetime. However, here we intend to study the basic effects of general relativity on the appearance of a moving axisymmetric body. We are mainly interested in how the light bending and gravitational lensing can modulate the observed flux from sources. For this purpose we press for maximum simplicity, to be able to isolate and recognise the essential effects of strong gravity on light.
Therefore, instead of the appropriate Kerr metric, we make use of the static Schwarzschild metric for the calculations and, where we compare with the non-relativistic case, the flat Minkowski spacetime metric is also used.
\subsubsection*{Equipotential structure}
The equipotential structure of a real torus is given by the Euler equation,
\begin{equation}
\label{eq:euler}
a_\mu = - \frac{\D{\mu} p}{p+\epsilon} \;,
\end{equation}
where \mbox{$a_\mu \!\equiv\! u^\nu\D{\nu} u_\mu$} is the 4-acceleration of the fluid and $\epsilon$, $p$ are respectively the proper energy density and the isotropic pressure. The fluid rotates in the azimuthal direction with the angular velocity $\Omega$ and has the 4-velocity of the form
\begin{equation}
\label{eq:4-velocity}
u^\mu = \big(u^t,\,0,\,0,\,u^\phi\big) = u^t\,\big(1,\,0,\,0,\,\Omega\big)
\end{equation}
After the substitution of \citeq{eq:4-velocity}, the Euler equation reads
\begin{equation}
\label{eq:euler-2}
- \frac{\D{\mu} p}{p+\epsilon} = \D{\mu}\,\mathcal{U} - \frac{\Omega\D{\mu}\ell}{1-\Omega\,\ell} \;,
\end{equation}
where \mbox{$\mathcal{U}\=-\frac12\ln\left(g^{tt} + \ell^2 g^{\phi\phi}\right)$} is the effective potential and $\ell$ is the specific angular momentum.
For a barotropic fluid, {\it i.e.\ } the fluid described by a one-parametric equation of state $p\=p(\epsilon)$, the surfaces of constant pressure and constant total energy density coincide and it is possible to find a potential $W$ such that $W\=-\int_0^p \nabla{p}/(p+\epsilon)$, which simplifies the problem enormously \citep{AJS78}.
The shape of the `equipotential' surfaces \mbox{$W(r,\,z)\=\text{const}$} is then given by specification of the rotation law \mbox{$\ell=\ell(\Omega)$} and of the gravitational field.
We assume the fluid to have uniform specific angular momentum,
\begin{equation}
\label{eq:ell}
\ell(r) = \ell_\text{K}(r_0) = \frac{\sqrt{M\,r_0^3}}{r_0 - 2M} \;,
\end{equation}
where $r_0$ represents the centre of the torus. At this point, gravitational and centrifugal forces are just balanced and the fluid moves freely with the rotational velocity and the specific angular momentum having their Keplerian values $\Omega_\text{K}(r_0)$ and $\ell_\text{K}(r_0)$.
The shape of the torus is given by the solution of equation~\citeq{eq:euler-2}, which in the case of constant $\ell$ has a simple form,
\begin{equation}
W = \mathcal{U} + \text{const} \;.
\end{equation}
In the slender approximation, the solution can be expressed in terms of second derivatives of the effective potential and it turns out that the torus has an elliptical cross-section with semi-axes in the ratio of epicyclic frequencies (\citealt{ABHKR05}; see also [\v{S}r\'{a}mkov\'{a}] in these proceedings).
In the model used here, we make an even greater simplification. Introducing the cylindrical coordinates \mbox{$(t,\,r,\,z,\,\phi)$}, we use only the expansion at \mbox{$r\!=\!r_0$} in the \mbox{$z$-direction} to obtain a slender torus with a circular cross-section of equipotential surfaces,
\begin{equation}
W(r,\,z) = \frac12 \ln\!\left[ \frac{(r_0-2M)^2}{r_0\,(r_0-3M)} \right] + \frac{M\,[(r\!-\!r_0)^2\!+\!z^2]}{2\,r_0^2\,(r_0-3M)} \,.\;\;
\end{equation}
The profiles of the equipotential structure of a relativistic torus and of our model are illustrated in Fig.~\ref{fig:torus-equipotentials}.
\begin{figure}[t]
\resizebox{\hsize}{!}
{\includegraphics[height=5cm]{img-torus-equipotentials.eps}}
\caption{
An illustration of the equipotential structure of a real relativistic torus ({\em lower part}) and of our circular slender torus model ({\em upper part}) surrounding a black hole. The equipotential contours are separated by equal steps in the potential $W$.}
\label{fig:torus-equipotentials}
\end{figure}
\subsubsection*{Thermodynamics}
An equation of state of polytropic type,
\begin{equation}
p=K\,\rho^\gamma \;,
\end{equation}
is assumed to complete the thermodynamical description of the fluid. Here, $\gamma$ is the adiabatic index, which has a value of $\ratio53$ for an adiabatic mono-atomic gas, and $K$ is the polytropic constant determining the specific adiabatic process.
Now, we can directly integrate the right-hand side of the Euler equation \citeq{eq:euler} and obtain an expression for the potential $W$ in terms of the fluid density,
\begin{equation}
W = \ln \rho - \ln\left(K\,\gamma\,\rho^\gamma + \rho\,\gamma - \rho \right) + \ln\left(\gamma-1\right) \;,
\end{equation}
where we have fixed the integration constant by the requirement \mbox{$W(\rho\=0)=0$}. The density and temperature profiles are therefore
\begin{align}
\rho &= \left[ \frac{\gamma-1}{K\,\gamma} \left(e^W-1\right) \right]^\frac{1}{\gamma-1} \;,\\[0.5em]
T &= \frac{m_\text{u}\,\mu_{\text w}}{k_\text{B}}\,\frac{p}{\rho} = \frac{m_\text{u}\,\mu_{\text w}}{k_\text{B}} \frac{\gamma-1}{\gamma} \left(e^W-1\right) \;,
\end{align}
where $\mu_{\text w}$, $k_\text{B}$ and $m_\text{u}$ are the molecular weight, the Boltzmann constant and the atomic mass unit, respectively (Fig.~\ref{fig:torus-rho-T}).
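As a purely illustrative numerical sketch (not from the original text;
the polytropic constant $K$ below is a placeholder, and the potential is
measured from the torus surface so that the requirement
\mbox{$W(\rho\=0)=0$} holds on the boundary of the circular
cross-section), the profiles can be evaluated as follows:
\begin{verbatim}
import numpy as np

M, r0, R0 = 1.0, 10.8, 2.0   # G = c = 1
K, gamma = 1.0, 5.0/3.0      # placeholder polytrope

def W(r, z):
    # potential depth below the surface; W = 0 at |x| = R0
    q = M / (2.0 * r0**2 * (r0 - 3.0*M))
    return q * (R0**2 - ((r - r0)**2 + z**2))

def rho(w):
    # density profile, rho = 0 on the torus surface
    return ((gamma-1.0)/(K*gamma)*np.expm1(w))**(1.0/(gamma-1.0))

def temperature(w):
    # in units of m_u * mu_w * c^2 / k_B
    return (gamma-1.0)/gamma * np.expm1(w)

# central values of the R0 = 2 M torus:
print(rho(W(r0, 0.0)), temperature(W(r0, 0.0)))
\end{verbatim}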
\begin{figure}[b]
\resizebox{\hsize}{!}
{\includegraphics[height=5cm]{img-torus-rho-T.eps}}
\caption{
The density ({\it left}) and temperature ({\it right}) profiles of a polytropic gas forming an accretion torus with the centre at \mbox{$r_0\!=\!10.8\,M$}. Solid lines represent the slender model with radius \mbox{$R_0\!=\!2\,M$} and dashed lines represent the real torus filling the potential well of the same depth.}
\label{fig:torus-rho-T}
\end{figure}
\subsubsection*{Bremsstrahlung cooling \footnote{CGS units are used in this paragraph}}
We assume the torus to be filled with an optically thin gas radiating by bremsstrahlung cooling. The emission includes radiation from both electron-ion and electron-electron collisions \citep{StepneyGuilbert83, NarayanYi95}:
\begin{equation}
f = f_{ei} + f_{ee} \;.
\end{equation}
The contributions of either types are given by
\begin{align}
f_{ei}& = n_e\,\bar{n}\,\sigma_{\scriptscriptstyle T}\,c\,\alpha_{\scriptscriptstyle f}\,m_e\,c^2\,F_{ei}(\theta_{e}) \quad \text{and} \\
f_{ee}& = n_e^2 c\,r_e^2 \alpha_{\scriptscriptstyle f}\,m_e\,c^2 F_{ee}(\theta_{e}) \;,
\end{align}
where $n_e$ and $\bar{n}$ are number densities of electrons and ions, $\sigma_{\scriptscriptstyle T}$ is the Thomson cross-section, $m_e$ and \mbox{$r_e\!=\!e^2/m_e c^2$} denote the mass of the electron and its classical radius, $\alpha_{\scriptscriptstyle f}$ is the fine structure constant, $F_{ee}(\theta_{e})$ and $F_{ei}(\theta_{e})$ are radiation rate functions and \mbox{$\theta_e\!=\!k\,T_e/m_e\,c^2$} is the dimensionless electron temperature. $F_{ee}(\theta_{e})$ and $F_{ei}(\theta_{e})$ are of about the same order, so that the ratio of electron-ion and electron-electron bremsstrahlung is
\begin{align}
\frac{f_{ei}}{f_{ee}} \approx \frac{\sigma_{\scriptscriptstyle T}}{r_e^2} \approx 8.4
\end{align}
and we can neglect the contribution from electron-electron collisions. For the function $F_{ei}(\theta_{e})$ \citet{NarayanYi95} give the following expression:
\begin{align}
F_{ei}(\theta_{e}) &= 4\left(\frac{2\theta_e}{\pi^3}\right)^{1/2} \left[1+1.781\,\theta_e^{1.34}\right] \;,
\quad &\theta_e<1 \;, \\
&= \frac{9\theta_e}{2\pi} \left[\ln(1.123\,\theta_e + 0.48) + 1.5\right] \;,
\quad &\theta_e>1 \;.
\end{align}
In the case of a multi-component plasma, the density $\bar{n}$ is calculated as a sum over individual ion species, \mbox{$\bar{n}\!=\!\sum Z_j^2\,n_j$}, where $Z_j$ is the charge of the $j$-th species and $n_j$ is its number density. For a hydrogen-helium composition with abundances $X\!:\!Y$ the following holds:
\begin{alignat}{3}
n_e & \equiv \sum Z_j\,n_j & &=&
{\textstyle \frac{X+2\,Y}{X+Y}\,\sum n_j} \;, \\
%
\bar{n} & \equiv \sum Z_j^2\,n_j & &=&
{\textstyle \frac{X+4\,Y}{X+Y}\,\sum n_j} \;, \\
%
\rho & \equiv \sum {A_\text{r}}_j\,m_\text{u}\,n_j & &=&\;
{\textstyle m_\text{u}\,\frac{X+4\,Y}{X+Y}\,\sum n_j} \;,
\end{alignat}
where ${A_\text{r}}_j$ is the relative atomic weight of the \mbox{$j$-th} species, $m_\text{u}$ denotes the atomic mass unit and we define \mbox{$\mu \equiv (X+4Y)/(X+Y)$}. The emissivity is then
\begin{equation}
f_{ei} = 4.30 \times 10^{25}\,\tfrac{\mu+2}{3\,\mu}\,\rho^2\,F_{ei}(\theta_{e})\ \,\text{erg}\,\,\text{cm}^{-3}\,\,\text{s}^{-1}\;,
\end{equation}
which for the non-relativistic limit ($\theta_e\!\ll\!1$) and Population~I abundances ($X\!=\!0.7$ and $Y=0.28$) gives
\begin{equation}
\label{eq:emissivity}
f_{ei} = 3.93 \times 10^{20}\,\rho^2\,T^\ratio12\ \,\text{erg}\,\,\text{cm}^{-3}\,\,\text{s}^{-1}\;.
\end{equation}
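For convenience, the cooling function can be transcribed directly into
code. The following Python sketch (illustrative only; CGS units, with
the non-relativistic/relativistic branches switching at $\theta_e=1$ as
above) evaluates the electron-ion emissivity; in the limit
$\theta_e\!\ll\!1$ it reproduces the coefficient $3.93\times10^{20}$ of
equation~\citeq{eq:emissivity}:
\begin{verbatim}
import numpy as np

K_B    = 1.381e-16   # Boltzmann constant [erg/K]
M_E_C2 = 8.187e-7    # electron rest energy [erg]

def F_ei(theta):
    # radiation rate function (Narayan & Yi 1995)
    if theta < 1.0:
        return 4.0*np.sqrt(2.0*theta/np.pi**3) \
               * (1.0 + 1.781*theta**1.34)
    return 9.0*theta/(2.0*np.pi) \
           * (np.log(1.123*theta + 0.48) + 1.5)

def f_ei(rho, T, mu=1.857):
    # emissivity in erg cm^-3 s^-1; rho in g cm^-3, T in K;
    # mu = (X+4Y)/(X+Y), here for X = 0.7, Y = 0.28
    theta = K_B * T / M_E_C2
    return 4.30e25 * (mu+2.0)/(3.0*mu) * rho**2 * F_ei(theta)
\end{verbatim}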
\section{Torus oscillations}
\begin{figure*}[t!]
\resizebox{\hsize}{!}{
\includegraphics{torus-tn-demo-nw-psd.eps}
\includegraphics{torus-tn-demo-mk-psd.eps}
\includegraphics{torus-tn-demo-sw-psd.eps}}
\caption{Power spectra of an oscillating torus calculated in the Newtonian limit ({\it left}), Minkowski spacetime ({\it middle}) and the Schwarzschild spacetime ({\it right}). Viewing angle is $70^\circ$.}
\label{fig:effect-geometry}
\end{figure*}
\begin{figure}[t]
\resizebox{\hsize}{!}
{\includegraphics{img-torus-schema-displacement.eps}}
\caption{A schematic illustration of the displacement. The centre \textsf{T} of the torus is displaced radially by $\delta r$ and vertically by $\delta z$ from its equilibrium position \textsf{E}, which is at the distance $r_0$ from the centre of gravity \textsf{G}.}
\label{fig:torus-configuration}
\end{figure}
In the following, we excite in the torus rigid and axisymmetric \mbox{($m\!=\!0$)} sinusoidal oscillations in the vertical direction, {\it i.e.\ } parallel to its axis, as well as in the perpendicular radial direction. Such an assumption serves to model the possible basic global modes found by \citet{ABHKR05}. In our model, the torus is rigidly displaced from its equilibrium (Fig.~\ref{fig:torus-configuration}), so that the position of the central circle varies as
\begin{equation}
r(t) = r_0 + \delta{r}\,\sin(\omega_r t) \;, \quad
z(t) = \delta{z}\,\sin(\omega_z t) \;.
\end{equation}
Here, \mbox{$\omega_z = \Omega_\text{K}=(M/r_0^3)^\frac12$} is the vertical epicyclic frequency, in Schwarzschild geometry equal to the Keplerian orbital frequency, and \mbox{$\omega_r = \Omega_\text{K}(1-6M/r_0)^\frac12$} is the radial epicyclic frequency. The torus is placed at the distance \mbox{$r_0\=10.8\,M$} so that the oscillation frequency ratio \mbox{$\omega_z:\omega_r$} is \mbox{$3:2$}, but the choice is arbitrary. If not stated otherwise, the cross-section radius is \mbox{$R_0\=2.0\,M$} and the amplitudes of both the vertical and radial motion are set to \mbox{$\delta{z}=\delta{r}=0.1\,R_0$}.
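The choice \mbox{$r_0\=10.8\,M$} makes the ratio exact, since \mbox{$1-6M/r_0=4/9$} for \mbox{$r_0=54M/5$}. A short numerical check (illustrative Python, geometrical units):
\begin{verbatim}
import numpy as np

M = 1.0

def omega_z(r):   # vertical epicyclic = Keplerian
    return np.sqrt(M / r**3)

def omega_r(r):   # radial epicyclic (Schwarzschild)
    return omega_z(r) * np.sqrt(1.0 - 6.0*M/r)

r0 = 10.8 * M
print(omega_z(r0) / omega_r(r0))   # -> 1.5, i.e. 3:2
\end{verbatim}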
We initially assume the `incompressible' mode, where the equipotential structure and the thermodynamical quantities describing the torus are fixed and do not vary in time as the torus moves. Later in this Section we also describe the `compressible' mode and discuss how changes in the torus properties affect the powers in the different oscillations.
The radial motion results in a periodic change of volume of the torus. Because the optically thin torus is assumed to be filled with a polytropic gas radiating by bremsstrahlung cooling and we fix the density and temperature profiles, there is a corresponding change of luminosity \mbox{$L\!\propto\!\int\!f\,\text{d}{V}$}, with a clear periodicity at $2\pi/\omega_r$. On the contrary, the vertical motion does not change the properties of the torus or its overall luminosity. We find that in spite of this, and although the torus is perfectly axisymmetric, the flux observed at infinity clearly varies at the oscillation frequency $\omega_z$. This is caused by relativistic effects at the source (lensing, beaming and time delay), and no other cause needs to be invoked to explain in principle the highest-frequency modulation of X-rays in luminous black-hole binary sources.
\subsubsection*{Effect of spacetime geometry}
In the Newtonian limit and when the speed of light \mbox{$c\!\rightarrow\!\infty$}, the only observable periodicity is the radial oscillation. There is no sign of the $\omega_z$ frequency in the power spectrum, although the torus is moving vertically. This is easy to understand, because the \mbox{$c\!\rightarrow\!\infty$} limit suppresses the time delay effects and causes photons from all parts of the torus to reach an observer at the same instant of time, so the torus really is seen as rigidly moving up and down, giving no reason for modulation at the vertical frequency.
When the condition of infinite light speed is relaxed, the torus is no longer seen as a rigid body. The delay between photons, which originate at the opposite sides of the torus at the same coordinate time, is \mbox{$\Delta{t} \simeq 2\,r_0/c\, \sin{i}$}, where $i$ is the viewing angle ({\it i.e.\ } inclination of the observer). It is maximal for an edge-on view (\mbox{$i\=\ratio{\pi}{2}$}) and compared to the Keplerian orbital period it is \mbox{$\Delta{t}/T_\text{K} \simeq (2\pi^2\,r_0/r_g)^{-1/2}$}. It amounts to about 10\% at \mbox{$r_0\=10.8M$}. The torus is seen from a distance as an elastic ring, which modulates its brightness also at the vertical oscillation frequency $\omega_z$ due to the time delay effect and the seeming volume change.
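For completeness (this step is not spelled out in the text), restoring $c$ and writing \mbox{$r_g\=2GM/c^2$}, the delay estimate follows as
\[
\frac{\Delta{t}}{T_\text{K}} \simeq \frac{2\,r_0/c}{2\pi\,(r_0^3/GM)^{1/2}}
= \frac{1}{\pi}\left(\frac{GM}{c^2 r_0}\right)^{\!1/2}
= \left(\frac{2\pi^2\,r_0}{r_g}\right)^{\!-1/2} \approx 0.097
\]
at \mbox{$r_0\=10.8\,M$}.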
Curved spacetime adds the effect of light bending. Photons are focused by the central mass's gravity, which leads to a magnification of any vertical movement. A black hole is not a perfect lens, so that the parallel rays do not cross in a single point, but rather form a narrow focal furrow behind it. When the torus crosses the furrow (at high viewing angles), its oscillations are greatly amplified by the lensing effect. This is especially significant in the case of the vertical oscillation, as the bright centre of the torus periodically passes through the focal line.
Figure~\ref{fig:effect-geometry} illustrates the geometry effect on three Fourier power density spectra of an oscillating torus. The spectra are calculated for the same parameters and only the metric is changed. The appearance of the vertical oscillation peak in the `finite light speed' case and its power amplification in the relativistic case are clearly visible.
\subsubsection*{Effect of inclination}
\begin{figure}[t]
\resizebox{\hsize}{!}{
\includegraphics{img-osc-inc-km0-power-rz.eps}}
\caption{The inclination dependence of powers in the radial ({\it red}) and the vertical ({\it blue}) oscillations. Top panel shows calculations in the flat spacetime, bottom panel shows powers as computed in the curved Schwarzschild spacetime. Dashed lines represent the same calculations done with switched-off \mbox{$g$-factor} \mbox{($g \equiv 1$)}.}
\label{fig:effect-inclination-km0}
\end{figure}
\begin{figure}[t]
\resizebox{\hsize}{!}{
\includegraphics{img-osc-csr-power-rz.eps}}
\caption{Powers in the radial ({\it top}) and vertical ({\it middle}) oscillations and their ratio ({\it bottom}) as a function of the torus size. Different viewing angles are plotted.}
\label{fig:effect-size-km0}
\end{figure}
\begin{figure}[t]
\resizebox{\hsize}{!}{
\includegraphics{img-osc-crd-power-rz.eps}}
\caption{Powers in the radial ({\it top}) and vertical ({\it middle}) oscillations and their ratio ({\it bottom}) as a function of the torus distance from the gravity centre. Different viewing angles are plotted.}
\label{fig:effect-distance-km0}
\end{figure}
In the previous paragraphs we have found that both the time delay and the lensing effects are most pronounced when the viewing angle is rather high. Now we will show how much the observed flux is modulated when the torus is seen from different directions.
The effect of inclination is probably the most prominent, even though it is difficult to observe directly. Changing the line of sight redistributes the powers between the modes, because different effects are important at different angles. When the torus is viewed \mbox{face-on} ({\it i.e.\ } from the top), we expect the amplitude of $\omega_r$ to be dominant, as the radial pulsations of the torus can be nicely seen and light rays passing through the gas are not yet strongly bent. When viewed almost edge-on, the Doppler effect damps the power of $\omega_r$ and gravitational lensing amplifies the power in $\omega_z$. Thus we expect the vertical oscillation to overpower the radial one.
Figure~\ref{fig:effect-inclination-km0} shows the inclination dependence of oscillation powers in the flat Minkowski spacetime ({\it top}) and in the curved Schwarzschild spacetime ({\it bottom}). We see that in the flat spacetime the power of the radial oscillation gradually decreases, which is caused by the Doppler effect ({\it c.f.\ } the red dotted line in the graph). The vertical oscillation decreases as well, but it is independent of \mbox{the $g$-factor}. At inclinations $i>75^\circ$ it has a significant excess caused by the obscuration of part of the torus behind an opaque sphere of radius $2M$ representing the central black hole.
When gravity is added, the situation at low inclinations (up to \mbox{$i\!\simeq\!25^\circ$}) is very similar to the Minkowski case. The power of gravitational lensing is clearly visible from the progression of the blue line, {\it i.e.\ } the vertical oscillation. It rises slowly for inclinations \mbox{$i\!>\!45^\circ$}, then shows a steeper increase for \mbox{$i\!>\!75^\circ$}, reaches its maximum at \mbox{$i\!=\!85^\circ$} and finally drops down to zero. At the maximum it overpowers the radial oscillation by a factor of 40, while it is $20\times$ weaker if the torus is viewed \mbox{face-on}. The rapid decrease at the end is caused by the equatorial plane symmetry. If the line of sight is in the \mbox{$\theta\!=\!\ratio{\pi}{2}$} plane, the situation is the same above and below the plane, thus the periodicity is $2\,\omega_z$. The power in the base frequency drops abruptly and moves to overtones.
\subsubsection*{Effect of the torus size}
The effect of the size of the torus is very important to study, because it can be directly tested against observational data. Other free model parameters tend to be fixed for a given source ({\it e.g.\ } inclination), but the torus size may well vary for a single source as a response to temporal changes in the accretion rate.
The power in the radial oscillation is correlated with its amplitude, which is set to \mbox{$\delta{r}\!=\!0.1\,R_0$} and grows with the torus size. It is therefore evident that the radial power will be proportional to $R_0^2$. If the amplitude were constant, or at least independent of $R_0$, the $\omega_r$ power would be independent of $R_0$ too. Thus the non-trivial part of the torus size dependence will come from the vertical motion of the torus.
Figure~\ref{fig:effect-size-km0} shows the PSD power profiles of both the radial and vertical oscillations for several different inclinations. Indeed, the radial power has a quadratic profile and is more dominant for lower viewing angles, which follows from the previous paragraph. The power in the vertical oscillation is also quadratic at low inclinations and similar to the radial one, but the reason is different. The time delay effect causes apparent deformations from the circular cross-section as the torus moves up and down, {\it i.e.\ } to and from the observer in the case of a face-on view. The torus is squeezed along the line of sight at the turning points and stretched when passing the equatorial plane. The deformations are proportional to the torus size, which is the reason for the observed profile. At high inclinations the appearance of strong relativistic images boosts the vertical oscillation power even more. But, as can be clearly seen from the $85^\circ$ line and partially also from the $80^\circ$ line, there is a size threshold beyond which the oscillation power decreases though the torus still grows. This corresponds to the state where the torus is so big that the relativistic images are saturated. Further increase of the torus size only entails an increase of the total luminosity, while the variability amplitude remains about the same, hence leading to the fractional rms amplitude downturn.
\subsubsection*{Effect of the torus distance}
The distance of the torus also affects the intensity of modulations in the observed lightcurves (Fig.~\ref{fig:effect-distance-km0}). The power in the radial oscillation is either increasing or decreasing, depending on the inclination. Looking face-on, the $g$-factor is dominated by the redshift component and the power in $\omega_r$ increases with the torus distance as it is less damped. When the view is more inclined, the Doppler component starts to be important and the oscillation loses power with the torus distance. The critical inclination is about $70^\circ$.
The power of the vertical oscillation generally decreases with the torus distance. It is made visible mainly by the time delay effect and, because the oscillation period increases with the distance of the torus, the effect loses importance. An exception is when the inclination is very high. The large portion of visible relativistic images causes the vertical power first to increase up to some radius, beyond which it then decays. Neither small nor large tori show many visible secondary images, because they are either too compact or too far away. The ideal distance is about $11\,M$ -- this is the radius where the torus has the largest portion of higher-order images, corresponding to the maximum of the vertical power in Fig.~\ref{fig:effect-distance-km0}.
Generally, the relative power of the vertical oscillation gets weaker as the torus moves farther away from the gravitating centre. This is most significant for higher viewing angles, where the drop between $8M$ and $16M$ can be more than one order of magnitude. On the other hand, for low inclinations the effect is less dramatic, and if viewed face-on the power ratio is nearly independent of the distance of the fluid ring.
\subsubsection*{Effect of radial luminosity variations}
\begin{figure}
\resizebox{\hsize}{!}{
\includegraphics{img-osc-inc-km1-power-rz.eps}}
\caption{The inclination dependence of powers in the radial ({\it red}) and the vertical ({\it blue}) oscillations in the compressible mode. This is the same as Fig.~\ref{fig:effect-inclination-km0}, except that it is computed with the inclusion of density scaling. Top panel shows calculations in the flat spacetime, bottom panel shows powers as computed in the curved Schwarzschild spacetime. Dashed lines represent the same calculations done with switched-off \mbox{$g$-factor}.}
\label{fig:effect-inclination-km1}
\end{figure}
As already mentioned above, the volume of the torus changes periodically as the torus moves in and out. In the incompressible torus, which we have considered so far, this results in a corresponding variation of the luminosity, linearly proportional to the actual distance of the torus $r(t)$ from the centre,
\begin{equation}
L(t) \sim {\textstyle \int f\,\text{d}{V}} \sim r(t) \sim \delta{r}\,\sin(\omega_r t) \;.
\end{equation}
Because we do not change the thermodynamical properties, it also means that the total mass \mbox{$M\!=\!\int\!\rho\,\text{d}{V}$} contained within the torus is not conserved during its radial movements, which is a major disadvantage. In this paragraph we relax this constraint and explore the compressible, mass-conserving mode.
A compressed torus heats up, which results in an increase of its luminosity and size. These two effects go hand in hand; however, to keep things simple, we isolate them and show only how the powers are affected if we scale the density and temperature without changing the torus cross-section.
We allow the torus to change its pressure and density profiles in such a way that its total mass is kept constant. The volume element $\text{d}{V}$ is proportional to $r$, so that in order to satisfy this condition, the density must be scaled as
\begin{equation}
\rho(r,\,z,\,t) = \rho^\circ(r,\,z) \, \frac{r_0}{r(t)} \;,
\end{equation}
where $\rho^\circ$ refers to the density profile of a steady non-oscillating torus with its central ring at radius $r_0$. If we substitute for the emissivity from \citeq{eq:emissivity}, we find that the luminosity now goes with $r$ as
\begin{equation}
L(t) \sim {\textstyle \int f(\rho)\,\text{d}{V}} \sim {\textstyle \int \rho^{7/3}\,\text{d}{V}} \sim r(t)^{-1.33} \;.
\end{equation}
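The exponent can be traced explicitly (a short step not written out above): since $f\propto\rho^2\,T^{1/2}$ and $T\propto\rho^{\gamma-1}$ for the polytrope, $f\propto\rho^{(\gamma+3)/2}=\rho^{7/3}$ for $\gamma=\ratio53$; with $\rho\propto r(t)^{-1}$ and $\text{d}{V}\propto r$ this gives $L\propto r^{-7/3}\,r = r^{-4/3}\approx r^{-1.33}$.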
The negative sign of the exponent causes the luminosity to increase when the torus moves in and compresses. Moreover, the luminosity variation is stronger than in the incompressible case, because of the greater absolute value of the exponent.
Figure~\ref{fig:effect-inclination-km1} shows the inclination dependence of oscillation powers in the compressible case. Compared to Fig.~\ref{fig:effect-inclination-km0} we see that the signal modulation at the vertical frequency is not affected, but the slope of the radial oscillation power is reversed. The \mbox{$g$-factor}, which combines the effects of Doppler boosting and gravitational redshift, plays a key role in this reversal.
The Doppler effect brightens up the part of the torus where the gas moves towards the observer, and darkens the receding part. This effect is maximal for inclinations approaching $\ratio{\pi}{2}$, {\it i.e.\ } for an \mbox{edge-on} view. On average, {\it i.e.\ } integrated over the torus volume, the brightened part wins and the torus appears more luminous when viewed edge-on (see Fig.~\ref{fig:lx-total-inclination}).
The redshift effect adds the dependence on the radial distance from the centre of gravity, which is an important fact to explain the qualitative difference between Figs.~\ref{fig:effect-inclination-km0} and \ref{fig:effect-inclination-km1}. In the incompressible mode, the luminosity has a minimum when the torus moves in and a maximum when it moves out of its equilibrium position. The \mbox{$g$-factor} varies the same way and consequently amplifies the amplitude of the luminosity variability. The situation is exactly opposite in the compressible mode: the luminosity has a maximum when the torus moves in and a minimum when it moves out. The \mbox{$g$-factor} varies with the opposite phase and damps the luminosity amplitude. Because the difference in the \mbox{$g$-factor} value is more pronounced with inclination, this results in an increasing or decreasing dependence of the radial power on inclination in the compressible or incompressible case, respectively.
\begin{figure}
\resizebox{\hsize}{!}{
\includegraphics{img-steady-inc-lx.eps}}
\caption{The total observed bolometric luminosity of a steady (non-oscillating) torus as a function of inclination. In a flat spacetime ({\it orange}) with only special relativistic effects, the total luminosity is increased by a factor of two if the view is changed from face-on to edge-on. It is even more in a curved spacetime ({\it blue}), where the relativistic images make a significant contribution. For comparison also calculations with switched-off \mbox{$g$-factor} (with $g$ being set to unity) are shown ({\it dashed} lines).}
\label{fig:lx-total-inclination}
\end{figure}
\section{Discussion and Conclusions}
We have found that intrinsic variations of the radiation emitted from the inner parts of an accretion flow may be significantly modified by the effects of a strong gravitational field. Above all we have shown that the orientation of the system with respect to the observer is an important factor, which may alter the distribution of powers in different modes. However this effect, although strong, cannot be directly observed, because the inclination of a given source is fixed and mostly uncertain.
Within the model there are other parameters, which may be used for predictions of powers in different frequencies. We have shown that the size of the torus affects the power of the vertical oscillation. In this model this corresponds to the emission of harder photons from a hotter torus and provides a link between the model and observations. From those we know \citep{Remillard02} that the higher HFQPO peak is usually more powerful than the lower one in harder spectral states, which is consistent with the model, but the exact correlation depends on the amplitudes of both oscillations.
The power in the radial oscillation depends very much on the thermodynamical properties of the torus and on its behaviour under the influence of radial movements. We have shown that different parametrizations of the intrinsic luminosity in the \mbox{in-and-out} motion ({\it i.e.\ } the compressible and incompressible modes) change the power of the radial oscillation. On the other hand, the power of the vertical oscillation remains unaffected. This is an important fact and it means that the flux modulation at the vertical frequency is independent of the torus properties, driven by relativistic effects only.
Another model parameter is the distance of the thin accretion disk. The Shakura-Sunyaev disk is optically thick and blocks the propagation of photons which cross the equatorial plane at radii beyond its moving inner edge. Most of the stopped photons are strongly lensed and carry information predominantly about the vertical mode; thus the presence or absence of an opaque disk may be important for the power distribution in QPO modes. However, this effect is beyond the scope of this article and will be described in a separate paper.
\acknowledgements
I am thankful to all my collaborators and especially to M.~Abramowicz, V.~Karas and W.~Klu{\' z}niak for stimulating comments. This work was supported by the Czech GAAV grant IAA~300030510. The Astronomical Institute is operated under the project AV0Z10030501.
\label{intro}
Relatively little is known about the submillimetre properties of `normal' galaxies in the local Universe. The advent of \textit{IRAS}\/ in the 1980s brought the first investigations of dust in relatively large samples of galaxies (e.g. Devereux \& Young 1990), yet the limitations of investigating dust at far-IR wavelengths are marked; the strong temperature dependence of thermal emission means that even a small amount of warm dust can dominate the emission from a substantially larger proportion of cold dust, and \textit{IRAS} is only sensitive to dust with \mbox{$T>30$\,K}. \textit{IRAS}\/ studies of `normal' galaxies (e.g. Devereux \& Young 1990) found a high value of the gas-to-dust ratio ($\sim$1000), an order of magnitude higher than found for the Milky Way ($\sim$160; Dunne et al. 2000), indicating that \textit{IRAS} may have `missed' $\sim$90\% of the dust in late-type galaxies. \textit{IRAS}\/ also revealed relatively little about the dust in early-type galaxies, since only $\sim$15\% of ellipticals were detected by \textit{IRAS}\/ (Bregman et al. 1998).
The next major step in the study of dust in galaxies is to make observations in the submillimetre waveband \mbox{($100\,\micron\le \lambda \le 1$\,mm)} since the 90\% of dust that is too cold to radiate in the far-IR will be producing most of its emission in this waveband. The advent of the SCUBA camera on the James Clerk Maxwell Telescope (JCMT)\footnote{The JCMT is operated by the Joint Astronomy Center on behalf of the UK Particle Physics and Astronomy Research Council, the Netherlands Organisation for Scientific Research and the Canadian National Research Council.} (Holland et al. 1999) opened up the submillimetre waveband for astronomy and made it possible, for the first time, to investigate the submillimetre emission of a large sample of galaxies; prior to SCUBA only a handful of submillimetre measurements had been made of nearby galaxies, using single-element bolometers. In particular, in contrast to the extensive survey work going on at other wavelengths, prior to SCUBA it was not possible to carry out a large survey in the submillimetre waveband. SCUBA has two bolometer arrays (850\hbox{\,$\umu$m } and 450\,\micron) which operate simultaneously with a field of view of $\sim$2 arcminutes. At 850\hbox{\,$\umu$m } SCUBA is sensitive to thermal emission from dust with fairly cool temperatures \mbox{($T\geq10$\,K)} so crucially, whereas \textit{IRAS} was only sensitive to warmer dust \mbox{($T>30$\,K)}, SCUBA should trace most of the dust mass.
\subsection{A local submillimetre galaxy survey}
\label{local-survey}
A survey of the dust in nearby galaxies is also important because of the need to interpret the results from surveys of the distant Universe. Many deep SCUBA surveys have been carried out (Smail, Ivison \& Blain 1997; Hughes et al. 1998; Barger et al. 1998, 1999; Blain et al. 1999a; Eales et al. 1999; Lilly et al. 1999; Mortier et al. 2005), but studies of the high redshift Universe, and in particular studies of cosmological evolution (Eales et al. 1999; Blain et al. 1999b), have until now depended critically on assumptions about, rather than measurements of, the submillimetre properties of the \textit{local} Universe. Prior to the existence of a direct local measurement of the submillimetre luminosity function (LF) most deep submillimetre investigations have started from a local \textit{IRAS} 60\hbox{\,$\umu$m } LF, extrapolating out to submillimetre wavelengths by making assumptions about the average FIR-submm SED. However, as shown by Dunne et al. (2000), this underestimates the local submillimetre LF, and thus a direct measurement of the local submillimetre LF is vital for overcoming this significant limitation in the interpretation of the results of high-redshift surveys.
The ideal method of carrying out a submillimetre survey of the local Universe would be to survey a large area of the sky and then measure the redshifts of all the submillimetre sources found by the survey. However, with current submillimetre instruments such a survey is effectively impossible since, for example, the field of view of SCUBA is only $\sim$2 arcminutes. The alternative method, and the only one that is currently practical, is to carry out targeted submillimetre observations of galaxies selected from statistically complete samples selected in other wavebands. With an important proviso, explained below, it is then possible to produce an unbiased estimate of the submillimetre LF using `accessible volume' techniques (Avni \& Bahcall 1980) (see Section~\ref{lumfun}).
To this end several years ago we began the SCUBA Local Universe Galaxy Survey (SLUGS). In Papers I and II Dunne et al. (2000, hereafter D00) and Dunne \& Eales (2001, hereafter DE01) presented the results of SCUBA observations of a sample selected at 60\hbox{\,$\umu$m } (the \textit{IRAS}-Selected sample, hereafter the IRS sample). This paper presents the results of SCUBA observations of an optically-selected sample (hereafter the OS sample).
The accessible volume method will produce unbiased estimates of the LF provided that no class of galaxy is unrepresented in the samples used to construct the LF. D00 produced the first direct measurements of the submillimetre LF and dust mass function (the space-density of galaxies as a function of dust mass) using the IRS sample, but this LF would be biased if there exists a `missed' population of submillimetre-emitting galaxies, i.e. a population \emph{that is not represented at all in the IRS sample}. In this earlier work we found that the slope of the submillimetre LF at lower luminosities was steeper than $-$2 (a submillimetre `Olbers' Paradox'), which indicated that the IRS sample may not be fully representative of all submillimetre-emitting sources in the local Universe. This `missed' population could consist of cold-dust-dominated galaxies, i.e. galaxies containing large amounts of `cold' dust (at \mbox{$T<25$\,K}), which would be strong emitters at 850\hbox{\,$\umu$m } but weak 60\,\micron-emitters. The OS sample is selected on the basis of the optical emission from the galaxies and, unlike the IRS sample which was biased towards warmer dust, the OS sample should be free from dust temperature selection effects. The results from the OS sample will therefore test the idea that our earlier IRS sample LF was an underestimate.
\subsection{Previous investigations of cold dust in galaxies}
\label{cold-dust}
The paradigm for dust in galaxies is that there are two main components: (i) a warm component \mbox{(T\,$>$\,30\,K)} arising from dust grains near to star-forming regions and heated by young (OB) stars, and (ii) a cool `cirrus' component \mbox{(T\,=\,15--25\,K)} arising from diffuse dust associated with the HI and heated by the general interstellar radiation field (ISRF) (Cox, Kr\"ugel \& Mezger 1986; Lonsdale Persson \& Helou 1987; Rowan-Robinson \& Crawford 1989). \textit{IRAS} would only have detected the warm component, hence using \textit{IRAS} fluxes alone to estimate dust temperature would result in an overestimate of the dust temperature and an underestimate of the dust mass. Conversely, using the submillimetre to estimate dust masses has clear advantages. The flux is more sensitive to the mass of the emitting material and less sensitive to temperature in the Rayleigh-Jeans part of the Planck function, which is sampled when looking at longer submillimetre wavelengths.
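Quantitatively, for optically thin emission the dust mass follows from a submillimetre flux via the standard relation
\[
M_{d} = \frac{S_{850}\,D^{2}}{\kappa_{d}\,B(\nu,T_{d})} \,,
\]
where $D$ is the distance, $\kappa_{d}$ the dust mass opacity coefficient and $B(\nu,T_{d})$ the Planck function (we quote the relation only schematically here; the adopted value of $\kappa_{d}$ is not specified in this section).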
Studies at the longer wavelengths (170--850\,\micron; e.g. ISO, SCUBA) have confirmed the existence of cold dust components \mbox{($15<T_{d}<25$\,K)}, in line with the theoretical prediction of grain heating by the general ISRF (Cox et al. 1986), both in nearby spiral galaxies and in more IR-luminous/interacting systems (Gu\'elin et al. 1993, 1995; Sievers et al. 1994; Sodroski et al. 1994; Neininger et al. 1996; Braine et al. 1997; Dumke et al. 1997; Alton et al. 1998a,b, 2001; Haas et al. 1998; Davies et al. 1999; Frayer et al. 1999; Papadopoulos \& Seaquist 1999; Xilouris et al. 1999; Haas et al. 2000; DE01; Popescu et al. 2002; Spinoglio et al. 2002; Hippelein et al. 2003; Stevens, Amure \& Gear 2005). Many of these authors find an order of magnitude more dust than \textit{IRAS}\/ observations alone would indicate. Alton et al. (1998a), for example, find, by comparing their 200\hbox{\,$\umu$m } images of nearby galaxies to B-band images, that the cold dust has a greater radial extent than the stars, and conclude that \textit{IRAS} `missed' the majority of dust grains lying in the spiral disks. Other studies find evidence of cold dust components in a large proportion of galaxies. Contursi et al. (2001) find evidence of a cold dust component \mbox{($T\sim22$\,K)} for most of their sample of late-type galaxies; Stickel et al. (2000) find a large fraction of sources in their 170\hbox{\,$\umu$m } survey have high $S_{170}/S_{100}$ flux ratios and suggest this indicates a cold dust component \mbox{($T\leq20$\,K)} exists in many galaxies; Popescu et al. (2002) find, for their sample of late-type (later than S0) galaxies in the Virgo Cluster, that 30 out of 38 galaxies detected in all three observed wavebands (60, 100 and 170\,\micron) exhibit a cold dust component.
An additional property of dust that can be investigated with submillimetre measurements is the dust emissivity index $\beta$. Dust radiates as a modified Planck function (a `grey-body'), modified by the emissivity term such that $Q_{em}\propto v^{\beta}$. Until recently the value of $\beta$ was quite uncertain, with suggested values lying between 1 and 2 (Hildebrand 1983). Recent multi-wavelength studies of galaxies including submillimetre observations, however, have consistently found $1.5\le\beta\le2$ with $\beta$=2 tending to be favoured (Chini et al. 1989; Chini \& Kr\"ugel 1993; Braine et al. 1997; Alton et al. 1998b; Bianchi et al. 1998; Frayer et al. 1999; DE01). This agrees with the values found in \textit{COBE}/FIRAS studies of the diffuse ISM in the Galaxy (Masi et al. 1995; Reach et al. 1995; Sodroski et al. 1997).
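Explicitly, the isothermal (single-temperature) grey-body fits used to derive the values of $T_{dust}$ and $\beta$ listed in Table~\ref{fluxtab} are presumed to take the standard form
\[
S_{\nu} \propto \nu^{\beta}\,B(\nu,T_{dust}) \propto \frac{\nu^{3+\beta}}{\exp(h\nu/kT_{dust})-1} \,.
\]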
\subsection{The scope of this paper}
This paper presents the results from the SCUBA Local Universe Galaxy Survey (SLUGS) optically-selected sample. This OS sample is taken from the Center for Astrophysics (CfA) optical redshift survey (Huchra et al. 1983), and includes galaxies drawn from right along the Hubble sequence.
In Section~\ref{data-red} we discuss our observation and data reduction techniques. Section~\ref{results} presents the sample and the results. Section~\ref{properties} presents an analysis of the submillimetre properties of the sample. In Section~\ref{lumfun} we present the local submillimetre luminosity and dust mass functions. We assume a Hubble constant \mbox{$H_{0}$=75 km\,s$^{-1}$ Mpc$^{-1}$} throughout.
\begin{table*}
\centering
\begin{minipage}{14.5cm}
\caption{\label{fluxtab}\small{850\hbox{\,$\umu$m } flux densities and isothermal SED parameters. (Notes on individual objects are listed in Section~\ref{maps}).}}
\begin{tabular}{lllrrrrrllr}
\hline
(1) & (2) & (3) & (4) & (5) & (6) & (7) & (8) & (9) & (10) & (11) \\
Name & R.A. & Decl. & cz & $S_{60}$ & $S_{100}$ & $S_{850}$ & $\sigma_{850}$ & $T_{dust}$ & $\beta$ & Type\\
& (J2000) & (J2000) & (km\,s$^{-1}$) & (Jy) & (Jy) & (Jy) & (Jy) & (K) & & \\
\hline
UGC 148 & 00 15 51.2 & +16 05 23 & 4213 & 2.21 & 5.04 & 0.055 & 0.012 & 31.6 & 1.4 & 4\\
NGC 99 & 00 23 59.4 & +15 46 14 & 5322 & 0.81 & 1.49 & 0.063 & 0.015 & 41.8 & 0.4 & 6\\
PGC 3563 & 00 59 40.1 & +15 19 51 & 5517 & 0.35 & $^{\displaystyle s}$1.05 & 0.027 & 0.008 & 31.0 & 1.0 & 2\\
NGC 786 & 02 01 24.7 & +15 38 48 & 4520 & 1.09 & 2.46 & 0.066 & 0.019 & 35.2 & 0.8 & 4M\\
NGC 803 & 02 03 44.7 & +16 01 52 & 2101 & 0.69 & 2.84 & 0.093 & 0.019 & 27.4 & 1.1 & 5\\
UGC 5129 & 09 37 57.9 & +25 29 41 & 4059 & 0.27 & 0.92 & $<$0.034 & ... & ... & ... & 1\\
NGC 2954 & 09 40 24.0 & +14 55 22 & 3821 & $<$0.18 & $<$0.59 & $<$0.027 & ... & ... & ... &-5\\
UGC 5342 & 09 56 42.6 & +15 38 15 & 4560 & 0.85 & 1.66 & 0.032 & 0.008 & 36.4 & 0.9 & 4\\
PGC 29536 & 10 09 12.4 & +15 00 19 & 9226 & $<$0.18 & $<$0.52 & $<$0.041 & ... & ... & ... & -5\\
NGC 3209 & 10 20 38.4 & +25 30 18 & 6161 & $<$0.16 & $<$0.65 & $<$0.022 & ... & ... & ... & -5\\
NGC 3270 & 10 31 29.9 & +24 52 10 & 6264 & 0.59 & 2.39 & 0.059 & 0.014 & 26.8 & 1.3 & 3\\
NGC 3323 & 10 39 39.0 & +25 19 22 & 5164 & 1.48 & 3.30 & 0.070 & 0.014 & 34.0 & 1.0 & 5\\
NGC 3689 & 11 28 11.0 & +25 39 40 & 2739 & $^{\displaystyle s}$2.86 & $^{\displaystyle s}$9.70 & 0.101 & 0.017 & 26.8 & 1.7 & 5\\
UGC 6496 & 11 29 51.4 & +24 56 16 & 6277 & ... & ... & $<$0.018 & ... & ... & ... & -2\\
PGC 35952 & 11 37 01.8 & +15 34 14 & 3963 & 0.47 & 1.32 & 0.051 & 0.013 & 32.2 & 0.8 & 4\\
NGC 3799$^{\scriptstyle p}$ & 11 40 09.4 & +15 19 38 & 3312 & U & U & $<$0.268 & ... & ... & ... & 3\\
NGC 3800$^{\scriptstyle p}$ & 11 40 13.5 & +15 20 33 & 3312 & U & U & 0.117 & 0.025 & ... & ... & 3\\
NGC 3812 & 11 41 07.7 & +24 49 18 & 3632 & $<$0.23 & $<$0.56 & $<$0.038 & ... & ... & ... & -5\\
NGC 3815 & 11 41 39.3 & +24 48 02 & 3711 & 0.70 & 1.88 & 0.041 & 0.011 & 31.0 & 1.1 & 2\\
NGC 3920 & 11 50 05.9 & +24 55 12 & 3635 & 0.75 & 1.68 & 0.034 & 0.009 & 34.0 & 1.0 & -2\\
NGC 3987 & 11 57 20.9 & +25 11 43 & 4502 & 4.78 & 15.06 & 0.186 & 0.030 & 27.4 & 1.6 & 3\\
NGC 3997 & 11 57 48.2 & +25 16 14 & 4771 & 1.16 & $^{\displaystyle s}$1.95 & $<$0.023 & ... & ... & ... & 3M\\
NGC 4005 & 11 58 10.1 & +25 07 20 & 4469 & U & U & $<$0.015 & ... & ... & ... & 3\\
NGC 4015 & 11 58 42.9 & +25 02 25 & 4341 & 0.25 & $^{\displaystyle s}$0.80 & $<$0.050 & ... & ... & ...& 10M\\
UGC 7115 & 12 08 05.5 & +25 14 14 & 6789 & $<$0.20 & $<$0.68 & 0.051 & 0.011 & ... & ... & -5\\
UGC 7157 & 12 10 14.6 & +25 18 32 & 6019 & $<$0.24 & $<$0.63 & $<$0.032 & ... & ... & ... & -2\\
IC 797 & 12 31 54.7 & +15 07 26 & 2097 & 0.74 & 2.18 & 0.085 & 0.021 & 31.6 & 0.8 & 6\\
IC 800 & 12 33 56.7 & +15 21 16 & 2330 & 0.38 & 1.10 & 0.076 & 0.019 & 34.6 & 0.4 & 5\\
NGC 4712 & 12 49 34.2 & +25 28 12 & 4384 & 0.48 & 2.02 & 0.102 & 0.023 & 28.0 & 0.9 & 4\\
PGC 47122 & 13 27 09.9 & +15 05 42 & 7060 & $<$0.11 & 0.55 & $<$0.035 & ... & ... & ... &-2\\
MRK 1365 & 13 54 31.1 & +15 02 39 & 5534 & 4.20 & 6.11 & 0.032 & 0.009 & 35.2 & 1.6 & -2\\
UGC 8872 & 13 57 18.9 & +15 27 30 & 5529 & $<$0.22 & $<$0.45 & $<$0.021 & ... & ... & ... & -2\\
UGC 8883 & 13 58 04.6 & +15 18 53 & 5587 & 0.45 & 1.19 & $<$0.040 & ... & ... & ... & 4\\
UGC 8902 & 13 59 02.7 & +15 33 56 & 7667 & 1.23 & 3.32 & 0.067 & 0.018 & 30.4 & 1.2 & 3\\
IC 979 & 14 09 32.3 & +14 49 54 & 7719 & $^{\displaystyle s}$$^{\ast}$0.19 & $^{\displaystyle s}$$^{\ast}$0.60 & 0.057 & 0.017 & 34.0$^{\ast}$ & 0.3$^{\ast}$ & 2\\
UGC 9110 & 14 14 13.4 & +15 37 21 & 4644 & U & U & $<$0.046 & ... & ... & ... & 3\\
NGC 5522 & 14 14 50.3 & +15 08 48 & 4573 & 2.06 & 4.05 & 0.072 & 0.014 & 35.8 & 1.0 & 3\\
NGC 5953$\dag$$^{\scriptstyle p}$ & 15 34 32.4 & +15 11 38 & 1965 & U & U & 0.184 & 0.024 & ... & ... & 1 \\
NGC 5954$\dag$$^{\scriptstyle p}$ & 15 34 35.2 & +15 11 54 & 1959 & U & U & 0.112 & 0.019 & ... & ... & 6\\
NGC 5980 & 15 41 30.4 & +15 47 16 & 4092 & 3.45 & 8.37 & 0.253 & 0.043 & 34.0 & 0.8 & 5\\
IC 1174 & 16 05 26.8 & +15 01 31 & 4706 & $<$0.18 & $<$0.32 & 0.025 & 0.009 & ... & ... & 0\\
UGC 10200 & 16 05 45.8 & +41 20 41 & 1972 & 1.41 & 1.67 & $<$0.020 & ... & ... & ...& 2M\\
UGC 10205 & 16 06 40.2 & +30 05 55 & 6556 & 0.39 & 1.54 & 0.058 & 0.015 & 28.0 & 1.0 & 1\\
NGC 6090 & 16 11 40.7 & +52 27 24 & 8785 & 6.66 & 8.94 & 0.091 & 0.015 & 40.6 & 1.1 & 10M\\
NGC 6103 & 16 15 44.6 & +31 57 51 & 9420 & 0.64 & 1.67 & 0.052 & 0.012 & 33.4 & 0.8 & 5\\
NGC 6104 & 16 16 30.6 & +35 42 29 & 8428 & 0.50 & 1.76 & $<$0.033 & ... & ... & ... & 1\\
IC 1211 & 16 16 51.9 & +53 00 22 & 5618 & $<$0.12 & $<$0.53 & 0.028 & 0.009 & ... & ... & -5\\
UGC 10325$\S$ & 16 17 30.6 & +46 05 30 & 5691 & 1.57 & 3.72 & 0.041 & 0.009 & 31.0 & 1.4 & 10M\\
NGC 6127 & 16 19 11.5 & +57 59 03 & 4831 & $<$0.10 & $<$0.30 & 0.086 & 0.020 & ... & ... & -5\\
NGC 6120 & 16 19 48.1 & +37 46 28 & 9170 & 3.99 & 8.03 & 0.065 & 0.011 & 32.2 & 1.5 & 8\\
NGC 6126 & 16 21 27.9 & +36 22 36 & 9759 & $<$0.15 & $<$0.43 & 0.023 & 0.008 & ... & ... & -2\\
NGC 6131 & 16 21 52.2 & +38 55 57 & 5117 & 0.72 & 2.42 & 0.054 & 0.013 & 28.6 & 1.2 & 6\\
NGC 6137 & 16 23 03.1 & +37 55 21 & 9303 & $<$0.18 & $<$0.53 & 0.029 & 0.010 & ... & ... & -5\\
NGC 6146 & 16 25 10.3 & +40 53 34 & 8820 & $<$0.12 & $<$0.48 & 0.028 & 0.007 & ... & ... & -5\\
NGC 6154 & 16 25 30.4 & +49 50 25 & 6015 & $<$0.15 & $<$0.36 & $<$0.040 & ... & ... & ... & 1\\
NGC 6155 & 16 26 08.3 & +48 22 01 & 2418 & 1.90 & 5.45 & 0.116 & 0.022 & 29.8 & 1.2 & 6\\
UGC 10407 & 16 28 28.1 & +41 13 05 & 8446 & 1.62 & 3.12 & 0.026 & 0.009 & 32.8 & 1.5 & 10M\\
NGC 6166 & 16 28 38.4 & +39 33 06 & 9100 & $^{\displaystyle s}$0.10 & $^{\displaystyle s}$0.63 & 0.073 & 0.017 & 26.2 & 0.6 & -5\\
NGC 6173 & 16 29 44.8 & +40 48 42 & 8784 & $<$0.17 & $<$0.23 & $<$0.024 & ... & ... & ... & -5\\
NGC 6189 & 16 31 40.9 & +59 37 34 & 5638 & 0.75 & 2.57 & 0.072 & 0.019 & 28.6 & 1.1 & 6\\
NGC 6190 & 16 32 06.7 & +58 26 20 & 3351 & 0.58 & 2.37 & 0.099 & 0.024 & 28.0 & 1.0 & 6\\
\end{tabular}
\end{minipage}
\end{table*}
\begin{table*}
\centering
\begin{minipage}{14.5cm}
\contcaption{}
\begin{tabular}{lllrrrrrllr}
\hline
(1) & (2) & (3) & (4) & (5) & (6) & (7) & (8) & (9) & (10) & (11) \\
Name & R.A. & Decl. & cz & $S_{60}$ & $S_{100}$ & $S_{850}$ & $\sigma_{850}$ & $T_{dust}$ & $\beta$ & Type\\
& (J2000) & (J2000) & (km\,s$^{-1}$) & (Jy) & (Jy) & (Jy) & (Jy) & (K) & &\\
\hline
NGC 6185 & 16 33 17.8 & +35 20 32 & 10301 & 0.17 & 0.56 & $<$0.030 & ... & ... & ... & 1\\
UGC 10486 & 16 37 34.3 & +50 20 44 & 6085 & $<$0.19 & $<$0.60 & $<$0.029 & ... & ... & ... & -3\\
NGC 6196 & 16 37 53.9 & +36 04 23 & 9424 & $<$0.12 & $<$0.44 & $<$0.023 & ... & ... & ... & -3\\
UGC 10500 & 16 38 59.3 & +57 43 27 & 5218 & $^{\displaystyle s}$$^{\ast}$0.16 & $^{\displaystyle s}$$^{\ast}$0.71 & $<$0.028 & ... & ... & ... & 0\\
IC 5090 & 21 11 30.4 & $-$02 01 57 & 9340 & 3.04 & 7.39 & 0.118 & 0.017 & 31.6 & 1.2 & 1\\
IC 1368 & 21 14 12.5 & +02 10 41 & 3912 & 4.03 & 5.80 & 0.047 & 0.011 & 37.6 & 1.3 & 1\\
NGC 7047 & 21 16 27.6 & $-$00 49 35 & 5626 & 0.43 & 1.65 & 0.055 & 0.013 & 28.0 & 1.1 & 3\\
NGC 7081 & 21 31 24.1 & +02 29 29 & 3273 & 1.79 & 3.87 & 0.044 & 0.010 & 32.8 & 1.3 & 3\\
NGC 7280 & 22 26 27.5 & $+$16 08 54 & 1844 & $<$0.12 & $<$0.48 & $<$0.040 & ... & ... & ... & -1\\
NGC 7442 & 22 59 26.5 & $+$15 32 54 & 7268 & 0.78 & 2.22 & 0.046 & 0.009 & 31.0 & 1.1 & 5\\
NGC 7448$\dag$ & 23 00 03.6 & $+$15 58 49 & 2194 & 7.23 & 17.43 & 0.193 & 0.032 & 31.0 & 1.4 & 5\\
NGC 7461 & 23 01 48.3 & $+$15 34 57 & 4272 & $<$0.176 & $<$0.64 & $<$0.022 & ... & ... & ... & -2\\
NGC 7463 & 23 01 51.9 & +15 58 55 & 2341 & U & U & 0.045 & 0.010 & ... & ... & 3M \\
III ZW 093 & 23 07 21.0 & +15 51 11 & 14962 & 0.48 & $<$3.16 & $<$0.026 & ... & ... & ... & 10Z\\
III ZW 095 & 23 12 43.3 & +15 54 12 & 7506 & $<$0.09 & $<$0.80 & $<$0.019 & ... & ... & ... & 10Z\\
UGC 12519 & 23 20 02.7 & +15 57 10 & 4378 & 0.76 & 2.59 & 0.074 & 0.016 & 29.2 & 1.1 & 5 \\
NGC 7653 & 23 24 49.3 & +15 16 32 & 4265 & 1.31 & 4.46 & 0.112 & 0.020 & 28.6 & 1.2 & 3\\
NGC 7691 & 23 32 24.4 & +15 50 52 & 4041 & 0.53 & 1.67 & $<$0.025 & ... & ... & ... & 4\\
NGC 7711 & 23 35 39.3 & +15 18 07 & 4057 & $<$0.15 & $<$0.50 & $<$0.027 & ... & ... & ... & -2\\
NGC 7722 & 23 38 41.2 & +15 57 17 & 4026 & 0.78 & 3.03 & 0.061 & 0.015 & 26.8 & 1.4 & 0\\
\hline
\end{tabular}\\
(1) Most commonly used name. \\
(2) Right ascension, J2000 epoch. \\
(3) Declination, J2000 epoch. \\
(4) Recessional velocity taken from NED. [The NASA/IPAC Extragalactic Database (NED) is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration.] \\
(5) 60\hbox{\,$\umu$m } flux from the \textit{IRAS}\/ Faint Source Catalogue (Moshir et al. 1990); upper limits listed are measured using SCANPI as described in Section~\ref{iras-fluxes}. \\
(6) 100\hbox{\,$\umu$m } flux from the \textit{IRAS}\/ Faint Source Catalogue (Moshir et al. 1990); upper limits listed are measured using SCANPI as described in Section~\ref{iras-fluxes}.\\
(7) 850\hbox{\,$\umu$m } flux (this work). \\
(8) Error on 850\hbox{\,$\umu$m } flux, calculated as described in Section~\ref{errors}. \\
(9) Dust temperature derived from a single-component fit to the 60, 100 and 850\hbox{\,$\umu$m } data points, as described in Section~\ref{sed-fits}. \\
(10) Emissivity index derived from the single-component fit, as described in Section~\ref{sed-fits}. \\
(11) Hubble type (t-type) taken from the LEDA database; we have assigned t=10 to any multiple systems unresolved by \textit{IRAS} or SCUBA (indicated by `10M') and any systems with no type listed in LEDA (indicated by `10Z'; these 2 objects are listed as `compact' sources in NED); all other types marked `M' are listed as multiple systems in LEDA.\\
\smallskip\\
$^{\scriptstyle p}$ Part of a close or interacting pair which was resolved by SCUBA. Fluxes here are the individual galaxy fluxes; fluxes measured for the combined pair are given in Table~\ref{pairstab}.\\
U Unresolved by \textit{IRAS}.\\
$^{\displaystyle s}$ The $\textit{IRAS}$ flux is our own SCANPI measurement (see Section~\ref{iras-fluxes}); any individual comments are listed in Section~\ref{maps}.\\
$^{\ast}$ SCANPI measurements and fitted values should be used with caution (see Section~\ref{iras-fluxes}).\\
$\S$ The coordinates of this object refer to one galaxy (NED01) of the \textit{pair} UGC 10325.\\
$\dag$ Objects are also in the Paper I \textit{IRAS}-selected sample (DE00).
\end{minipage}
\end{table*}
\section{Observations and Data Reduction}
\label{data-red}
\subsection{The sample}
\label{sample}
This OS sample is taken from the Center for Astrophysics (CfA) optical redshift survey (Huchra et al. 1983), which is a magnitude-limited sample of optically-selected galaxies, complete to \mbox{$m_{B} \leq 14.5$ mag}. It has complete information on magnitude, redshift and morphological type, and also avoids the Galactic plane. The OS sample consists of all galaxies in the CfA sample lying within three arbitrary strips of sky: (i) all declinations with (B1950.0) \mbox{$16.1<\textrm{RA}<21.5$}, (ii) all right ascensions with \mbox{$15<\textrm{Dec}<16$}, and (iii) \mbox{$9.6<\textrm{RA}<12.8$} with \mbox{$25<\textrm{Dec}<26$}. We also imposed a lower velocity limit of \mbox{1900\,km\,s$^{-1}$} to try to ensure that the galaxies did not have an angular diameter larger than the field of view of SCUBA. There are 97 galaxies in the CfA survey meeting these selection criteria, and of these we observed 81 (those at convenient positions given our observing schedule). The OS sample covers an area of \mbox{$\sim$\,570} square degrees and is listed in Table~\ref{fluxtab}. Unlike the IRS sample, which contained many interacting pairs (most of which were resolved by SCUBA but not by \textit{IRAS}), the OS sample contains just 2 such pairs.
\subsection{Observations}
\label{obs}
We observed the OS sample galaxies using the SCUBA bolometer array at the 15-m James Clerk Maxwell Telescope (JCMT) on Mauna Kea, Hawaii, between December 1997 and January 2001, with a handful of additional observations in February 2003 (to replace bad data from the original observations; see Section~\ref{red}). Observational methods and techniques were similar to those for the IRS sample described in D00. We give a brief description of these below.
The SCUBA camera has 2 bolometer arrays (850\hbox{\,$\umu$m } and 450\,\micron, with 37 and 91 bolometers respectively) which operate simultaneously with a field of view of \mbox{$\sim$\,2.3} arcminutes at 850\hbox{\,$\umu$m } (slightly smaller at 450\,\micron). Beamsizes are measured to be $\sim$15 arcsec at 850\hbox{\,$\umu$m } and $\sim$8 arcsec at 450\,\micron. Our observations were made in `jiggle-map' mode which, for sources smaller than the field of view, is the most efficient mapping mode. Since the arrangement of the bolometers is such that the sky is instantaneously undersampled, and since we observed using both arrays, the secondary mirror was stepped in a 64-point jiggle pattern in order to fully sample the sky. The cancellation of rapid sky variations is provided by the telescope's chopping secondary mirror, operating at 7.8\,Hz. Linear sky gradients and the gradual increase or decrease in sky brightness are compensated for by nodding the telescope to the `off' position every 16 seconds. We used a chop throw of 120 arcsec in azimuth, except where the galaxy had a nearby companion, in which case we used a chop direction which avoided the companion.
The zenith opacity $\tau$ was measured by performing regular skydips. The observations were carried out under a wide range of weather conditions, with opacities at 850\hbox{\,$\umu$m } $\tau_{850}$ ranging from 0.12 to 0.52. This means that some galaxies were observed in excellent conditions ($\tau_{850}<0.2$) while others were observed in far less than ideal conditions. As a result we obtained useful 450\hbox{\,$\umu$m } data for only a fraction of our galaxies. This is discussed in more detail in Section~\ref{450data}. Our observations were centred on the coordinates taken from the NASA/IPAC Extragalactic Database (NED). We made regular checks on the pointing and found it to be generally good to $\sim$2 arcsec. The integration times depended on source strength and weather conditions. Since most of the OS sample are relatively faint submillimetre sources we typically used $\sim$12 integrations ($\sim$30 mins), although many sources were observed in poorer weather and so required longer integration times.
We calibrated our data by making jiggle maps of Uranus and Mars, or, when these planets were unavailable, of the secondary calibrators CRL 618 and HL Tau. We took the planet fluxes from the JCMT {\small {FLUXES}} program, and CRL 618 and HL Tau were assumed to have fluxes of 4.56 and 2.32 \mbox{Jy beam$^{-1}$} respectively at 850\,\micron.
\begin{table*}
\centering
\begin{minipage}{14.5cm}
\caption{\label{pairstab}\small{Combined SCUBA fluxes for pairs unresolved by \textit{IRAS}. (Notes on individual objects are listed in Section~\ref{maps}).}}
\begin{tabular}{lllrrrrrllr}
\hline
(1) & (2) & (3) & (4) & (5) & (6) & (7) & (8) & (9) & (10) & (11) \\
Name & R.A. & Decl. & cz & $S_{60}$ & $S_{100}$ & $S_{850}$ & $\sigma_{850}$ & $T_{dust}$ & $\beta$ & Type\\
& (J2000) & (J2000) & (km\,s$^{-1}$) & (Jy) & (Jy) & (Jy) & (Jy) & (K) & &\\
\hline
NGC 3799/3800 & 11 40 11.4 & +15 20 05 & 3312 & 4.81 & 11.85 & 0.135 & 0.035 & 29.8 & 1.5 & 10M\\
NGC 5953/4 & 15 34 33.7 & +15 11 49 & 1966 & 10.04 & 18.97 & 0.273 & 0.034 & 35.2 & 1.1 & 10M\\
\hline
\end{tabular}\\
Note. Columns have the same meanings as in Table~\ref{fluxtab}. \\
\end{minipage}
\end{table*}
\subsection{Data reduction}
\label{red}
The 850\hbox{\,$\umu$m } and 450\hbox{\,$\umu$m } data was reduced using the standard SCUBA-specific tasks in the {\small {SURF}} package (Jenness \& Lightfoot 1998, 2000; Jenness et al. 2002), where possible via the {\small {XORACDR}} automated data reduction pipeline (Economou et al. 2004). The off-nod position was subtracted from the on-nod in the raw beam-switched data and the data was then flat-fielded and corrected for atmospheric extinction.
In order to correct SCUBA data for atmospheric extinction we must accurately know the value of the zenith sky opacity, $\tau$. An error in $\tau$ is less crucial at 850\hbox{\,$\umu$m } if the observation is made in good weather ($\tau_{850}<$0.3) and at low airmass, but in worse weather or at 450\hbox{\,$\umu$m } it can severely affect the measured source flux. $\tau$ is most commonly estimated either by performing a skydip or by extrapolating to the required wavelength (using relations given in the JCMT literature and in Archibald et al. (2002)) from polynomial fits to the continuous measurements of $\tau$ at 225\,GHz made at the nearby Caltech Submillimetre Observatory (CSO). Since skydips are measured relatively infrequently, the JCMT literature recommends the polynomial fits to the CSO $\tau_{225}$ data as the more reliable way of estimating $\tau$ for both SCUBA arrays. As such, for both 850\hbox{\,$\umu$m } and 450\hbox{\,$\umu$m } data we have wherever possible (the large majority of observations) used the derived CSO opacity at 225\,GHz ($\tau_{cso}$). Where $\tau_{cso}$ values were not available the opacities were derived from 850\hbox{\,$\umu$m } skydip measurements (at 450\hbox{\,$\umu$m } using the $\tau_{850}$-to-$\tau_{450}$ relation described in the JCMT literature and Archibald et al. (2002)).
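As an illustration of this extrapolation only, the sketch below implements linear scalings of the form given by Archibald et al. (2002); the coefficients shown are quoted from memory and should be checked against that paper and the JCMT literature before use.
\begin{verbatim}
def tau_from_cso(tau_225):
    # Linear relations tau = a * (tau_225 - b), after Archibald et al.
    # (2002); coefficients are illustrative, not authoritative.
    tau_850 = 4.02 * (tau_225 - 0.001)
    tau_450 = 26.2 * (tau_225 - 0.014)
    return tau_850, tau_450

# e.g. tau_225 = 0.08 gives tau_850 ~ 0.32 (usable at 850um) but
# tau_450 ~ 1.7 (too opaque for useful 450um work).
tau_850, tau_450 = tau_from_cso(0.08)
\end{verbatim}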
Noisy bolometers were noted but not removed at this stage (it was frequently found to be the case that flagging a noisy bolometer as `bad' creates even worse noise spikes in the final map around the position of the removed bolometer data). Large spikes were removed from the data using standard {\small {SURF}} programs.
The nodding and chopping should remove any noise which is correlated between the different bolometers. In reality, since the data was not observed in the driest and most stable conditions the signal on different bolometers was often highly correlated due to incomplete sky subtraction. In the majority of cases we used the {\small SURF} task {\small REMSKY}, which takes a set of user-specified bolometers to estimate the sky variation as a function of time. More explicitly, in each time step {\small REMSKY} takes the median signal from the specified sky bolometers and subtracts it from the whole array. To ensure that the sky bolometers specified were looking at sky alone and did not contain any source emission we used a rough SCUBA map together with optical (\textit{Digitised Sky Survey}\footnote{The Digitised Sky Surveys were produced at the Space Telescope Science Institute under U.S. Government grant NAGW-2166. The images of these surveys are based on photographic data obtained using the Oschin Schmidt Telescope on Palomar Mountain and the UK Schmidt Telescope. The plates were processed into the present digital form with the permission of these institutions.} (DSS)) images as a guide when choosing the bolometers, though in this sample there are so few bright sources that in the majority of cases all bolometers could be safely used.
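Schematically, the median sky subtraction performed by {\small REMSKY} amounts to the following (a minimal sketch, not the {\small SURF} implementation; \texttt{data} is a time~$\times$~bolometer array and \texttt{sky\_bols} indexes the user-specified sky bolometers):
\begin{verbatim}
import numpy as np

def remove_sky(data, sky_bols):
    # data: (n_time, n_bol) array of bolometer signals
    # sky_bols: indices of bolometers assumed to see blank sky
    sky = np.median(data[:, sky_bols], axis=1)  # sky level per time step
    return data - sky[:, None]                  # subtract from whole array
\end{verbatim}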
Even after this step, however, due to the relatively poor conditions in which much of the data was observed the residual sky level was sometimes found to vary linearly across the array, giving a `tilted plane' on the array. Moreover, in a number of cases a noisy `striped' sky (due possibly to some short-term instrumentation problem) was found. Though the {\small SURF} task {\small {REMSKY}} was designed to remove the sky noise, it is relatively simplistic and cannot remove such spatially varying `tilted' or `striped' sky backgrounds. In these cases, as for the IRS sample (D00), we used one of our own programs in place of {\small REMSKY}. In a handful of cases the `striped' sky was so severe that it could not be removed, so these objects were re-observed in February 2003.
\begin{table*}
\centering
\begin{minipage}{11cm}
\caption{\label{450tab}\small{450\hbox{\,$\umu$m } flux densities and two-component SED parameters. (Notes on individual objects are listed in Section~\ref{maps}). }}
\begin{tabular}{lrrrllrcc}
\hline
(1) & (2) & (3) & (4) & (5) & (6) & (7) & (8) & (9)\\
Name & $S_{450}$ & $\sigma_{450}$ & $\frac{S_{450}}{S_{850}}$ & $T_{w}$ & $T_{c}$ & $\frac{N_{c}}{N_{w}}$ & $M_{d2}$ & $L_{fir}$\\
& (Jy) & (Jy) & & (K) & (K) & & (log $M_{\odot}$) & (log $L_{\odot}$)\\
\hline
UGC 148$\dag$ & 0.944 & 0.236 & 17.18 & 34 & 18 & 37 & ... & 10.33\\
NGC 99 & 0.490 & 0.182 & 7.73 & 47 & 17 & 542 & 7.72 & 10.08\\
NGC 803$\ddag$ & 0.631 & 0.196 & 6.79 & 33 & 18 & 92 & 7.02 & 9.46\\
NGC 3689$^{\ast}$ & 1.045 & 0.357 & 10.30 & 59 & 23 & 910 & 7.13 & 10.16\\
PGC 35952 & 0.421 & 0.116 & 8.26 & 58 & 18 & 1859 & 7.31 & 9.77\\
NGC 3987 & 1.110 & 0.319 & 5.98 & 44 & 22 & 279 & 7.85 & 10.78\\
IC 979$\S$ & 0.874 & 0.341 & 15.39 & ... & ... & ... & ... & ...\\
NGC 5953/4 & 2.879 & 0.683 & 10.54 & 54 & 21 & 277 & 7.33 & 10.28\\
NGC 5980 & 1.398 & 0.495 & 5.53 & 43 & 18 & 321 & 8.06 & 10.53\\
NGC 6090 & 0.803 & 0.180 & 8.82 & 55 & 22 & 122 & 8.09 & 11.29\\
NGC 6120 & 0.528 & 0.127 & 8.08 & 45 & 24 & 76 & 7.96 & 11.17\\
NGC 6155$^{\ast}$ & 0.381 & 0.135 & 3.30 & 30 & 20 & 7 & 6.92 & 9.80\\
NGC 6190$^{\ast}$ & 0.880 & 0.308 & 8.89 & 56 & 18 & 2684 & 7.16 & 9.85\\
IC 5090 & 1.018 & 0.240 & 8.66 & 52 & 21 & 346 & 8.28 & 11.19\\
IC 1368 & 0.425 & 0.137 & 9.10 & 55 & 23 & 110 & 7.10 & 10.37\\
NGC 7081 & 0.241 & 0.067 & 5.43 & 32 & 20 & 6 & 6.98 & 9.93\\
NGC 7442 & 0.410 & 0.099 & 9.02 & 54 & 20 & 665 & 7.70 & 10.45\\
UGC 12519 & 0.408 & 0.108 & 5.54 & 28 & 17 & 12 & 7.57 & 10.02\\
NGC 7722 & 0.595 & 0.148 & 9.78 & 54 & 20 & 1224 & 7.33 & 10.04\\
\hline
\end{tabular}\\
(1) Most commonly used name. \\
(2) 450\hbox{\,$\umu$m } flux (this work). \\
(3) Error on 450\hbox{\,$\umu$m } flux, calculated as described in Section~\ref{errors}. \\
(4) Ratio of 450- to 850-$\micron$ fluxes. \\
(5) Warm temperature using $\beta=2$. \\
(6) Cold temperature using $\beta=2$. \\
(7) Ratio of cold-to-warm dust. \\
(8) Dust mass calculated using parameters in columns (5)--(7). \\
(9) FIR luminosity (40--1000\micron) integrated under the two-component SED. \\
$\ast$ Some caution is advised (see Section~\ref{maps}).\\
$\S$ The data could not be fitted with a 2-component model (see Section~\ref{maps}).\\
$\dag$ Not well-fitted by two-component model using 850\hbox{\,$\umu$m } data point; fitted parameters here are from 2-component fit to the 60, 100, 450 and 170\hbox{\,$\umu$m } (ISO) data points; (see Section~\ref{maps}).\\
$\ddag$The data for NGC 803 are also well-fitted by the parameters: \mbox{$T_{w}$=60\,K}, \mbox{$T_{c}$=19\,K}, \mbox{$\frac{N_{c}}{N_{w}}$=2597}, \mbox{log $M_{d2}$=6.99} and \mbox{log $L_{fir}$=9.48}, (see Section~\ref{maps}).\\
\end{minipage}
\end{table*}
Once the effects of the sky were removed the data was despiked again and the final map produced by re-gridding the data onto a grid of 1-arcsec pixels. Where there were multiple data-sets for a given source they were binned together into a co-added final map. In these cases each data set was weighted prior to co-adding using the {\small SURF} task {\small SETBOLWT}, which calculates the standard deviation for each bolometer and then calculates weights relative to the reference bolometer (the central bolometer in the first input map). This method is therefore only suitable if there are no very bright sources present in the central bolometer; if a bright source was present we instead weighted each dataset using the inverse square of its measured average noise. This step also ensures that noisy bolometers contribute to the final map with their correct statistical weight.
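In outline, the alternative inverse-variance weighting used when a bright central source was present amounts to the following sketch (variable names are illustrative):
\begin{verbatim}
import numpy as np

def coadd(maps, avg_noises):
    # maps: list of registered 2-D maps; avg_noises: the measured
    # average noise of each map
    w = 1.0 / np.asarray(avg_noises) ** 2  # inverse-square noise weights
    stack = np.array(maps)
    return np.tensordot(w, stack, axes=1) / w.sum()
\end{verbatim}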
\subsection{850\hbox{\,\boldmath{$\umu$}m} flux measurement}
\label{data-red:flux}
The fluxes were measured from the SCUBA maps by choosing a source aperture over which to integrate the flux, such that the signal-to-noise was maximised. The extent of the galaxy in the optical (DSS) images and the extent of the submillimetre source on the S/N map (see Section~\ref{snmaps}) were used to select an aperture that included as much of the submillimetre flux of the galaxy as possible while minimising the amount of sky included. Note, the optical images in Figure~\ref{egmaps} are shown stretched for optimum contrast -- however, apertures for flux measurement were drawn for a more modest optical extent, as seen at a standard level of contrast.
Conversion of the measured aperture flux in volts to janskys was carried out by measuring the calibrator flux for that night using the same aperture as for the object. The orientation of the aperture (relative to the chop throw) was also kept the same as for the object, since, particularly for more elliptical apertures, this has a significant effect.
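In other words, the flux conversion factor is simply the ratio of the known calibrator flux to its measured signal in the matched aperture, so that
\[
S_{obj}\,(\textrm{Jy}) = V_{obj} \times \frac{S_{cal}\,(\textrm{Jy})}{V_{cal}},
\]
where $V_{obj}$ and $V_{cal}$ are the aperture-integrated signals (in volts) of the object and the calibrator respectively.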
Objects are said to be detected at $>3\sigma$ if either: (a) the peak S/N in the S/N map was $>3\sigma$ or (b) the flux in the aperture was greater than 3 times the noise in that aperture (where the noise is defined as described in Section~\ref{errors}).
\subsection{450\hbox{\,\boldmath{$\umu$}m} data}
\label{450data}
Due to the increased sensitivity to weather conditions at 450\,\micron, sources emitting at 450\hbox{\,$\umu$m } will only be detected if they are relatively bright at 850\,\micron. This, together with the wide range of observing conditions for this sample, meant that we found useful 450\hbox{\,$\umu$m } data for only 19 objects.
Where possible the 450\hbox{\,$\umu$m } emission was measured in an aperture the same size as used for the 850\hbox{\,$\umu$m } data. In some cases a smaller aperture had to be used for the 450\hbox{\,$\umu$m } data, and these individual cases are discussed in Section~\ref{maps}.
\subsection{Error analysis}
\label{errors}
The error on the flux measurement is made up of three components:
\begin{itemize}
\item{A background sky subtraction error $\sigma_{sky}$ due to the uncertainty in the sky level.}
\item{A shot (Poisson) noise term $\sigma_{shot}$ due to pixel-to-pixel variations within the sky aperture. Unlike CCD images, in SCUBA maps the signal in adjoining pixels is correlated; this correlated noise depends on a number of factors, including the method by which the data is binned at the data reduction stage. This has been discussed in some detail by D00, who find that a correction factor is required for each array to account for the fact that pixels are correlated; they find the factor to be 8 at 850\hbox{\,$\umu$m } and 4.4 at 450\,\micron.}
\item{A calibration error term $\sigma_{cal}$ which for SCUBA observations at 850\hbox{\,$\umu$m } is typically less than 10\%. We have therefore assumed a conservative calibration error of 10\% at 850\,\micron. The calibration error at 450\hbox{\,$\umu$m } was taken to be 15\%, following DE01.}
\end{itemize}
The relationships used to calculate the noise terms are as follows:
\[
\sigma_{sky}=\sigma_{ms}N_{ap}
\]
and
\[
\sigma_{shot}=8\sigma_{pix}\sqrt{N_{ap}} \qquad \textrm{or} \qquad \sigma_{shot}=4.4\sigma_{pix}\sqrt{N_{ap}}
\]
for 850\hbox{\,$\umu$m } and 450\hbox{\,$\umu$m } flux measurements respectively.
The error in the mean sky $\sigma_{ms}=S.D./\sqrt{n}$, where S.D. is the standard deviation of the mean sky values in \textit{n} apertures placed on off-source regions of the map. ${N_{ap}}$ is the number of pixels in the object aperture; $\sigma_{pix}$ is the mean standard deviation of the pixels within the sky apertures. The total error for each flux measurement is then given by
\begin{equation}
\sigma_{tot}=(\sigma_{sky}^{2}+\sigma_{shot}^{2}+\sigma_{cal}^{2})^{1/2} \label{eq1}
\end{equation}
as for the IRS sample. This error analysis is discussed in detail in D00 and DE01.
850\hbox{\,$\umu$m } fluxes were found to have total errors $\sigma_{tot}$ typically in the range \mbox{15--30\,\%}. 450\hbox{\,$\umu$m } fluxes were found to have total errors $\sigma_{tot}$ typically in the range \mbox{25--35\,\%}. Note, the $\sigma_{tot}$ used to determine whether a source was detected at the 3$\sigma$ level is defined as in Equation~\ref{eq1} but without the calibration error term.
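As a numerical sketch of this error budget (names are illustrative; the calibration term is taken as the quoted fraction of the measured flux, and can be dropped when applying the detection test described above):
\begin{verbatim}
import numpy as np

def flux_error(sky_means, sigma_pix, n_ap, flux, wave=850):
    # sky_means: mean sky values in n off-source apertures
    sigma_ms = np.std(sky_means) / np.sqrt(len(sky_means))
    sigma_sky = sigma_ms * n_ap
    corr = 8.0 if wave == 850 else 4.4      # correlated-pixel factor (D00)
    sigma_shot = corr * sigma_pix * np.sqrt(n_ap)
    sigma_cal = (0.10 if wave == 850 else 0.15) * flux
    return np.sqrt(sigma_sky**2 + sigma_shot**2 + sigma_cal**2)
\end{verbatim}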
\subsection{S/N maps}
\label{snmaps}
Unlike the IRS sources, the OS sources were not selected on the basis of their dust content. Many of the OS sources, especially the early types, are close to the limit of detection. Also, it is often hard to assess whether a source is detected, or whether some feature of the source is real, due to the variability of the noise across the array. This is due both to an increase in the noise towards the edge of each map, caused by a decrease in the number of bolometers sampling each sky point, and to individual noisy bolometers. For this reason we used the method described in D00 to generate artificial noisemaps, which we used with our real maps to produce signal-to-noise maps. The real maps and the artificial maps were first smoothed (using a 12-pixel FWHM) before creating the S/N map.
We used these S/N maps to aid in choosing the aperture for measuring the 850\hbox{\,$\umu$m } flux (Section~\ref{data-red:flux}). We have also presented S/N maps of each source (see Section~\ref{results}), as this makes it easier to assess the reality of any features in the maps.
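In outline (a minimal sketch assuming a Gaussian smoothing kernel; the 12-pixel FWHM is as quoted above):
\begin{verbatim}
from scipy.ndimage import gaussian_filter

def sn_map(signal, noise, fwhm_pix=12.0):
    sigma = fwhm_pix / 2.355             # convert FWHM to Gaussian sigma
    s = gaussian_filter(signal, sigma)   # smooth the real map
    n = gaussian_filter(noise, sigma)    # smooth the artificial noise map
    return s / n
\end{verbatim}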
\subsection{\textit{IRAS} fluxes}
\label{iras-fluxes}
\textit{IRAS} 100\hbox{\,$\umu$m } and 60\hbox{\,$\umu$m } fluxes, where available, were taken from the \textit{IRAS Faint Source Catalogue} (Moshir et al. 1990; hereafter FSC) via the NED database. Where literature fluxes were unavailable the NASA/IPAC Infrared Science Archive (IRSA) SCANPI (previously ADDSCAN) scan coadd tool was used to measure a flux from the \textit{IRAS} survey data.
The small number of SCANPI fluxes are indicated by `s' in Table~\ref{fluxtab}, and any special cases are discussed individually in Section~\ref{maps}. We take SCANPI fluxes to be detections if the measurements are formal detections at $>4.5\sigma$ at 100\hbox{\,$\umu$m } or $>4\sigma$ at 60\,\micron, which Cox et al. (1995) conclude are actually detections at the 98\% confidence level. Otherwise we give a 98\% confidence upper limit (4.5$\sigma$ at 100\hbox{\,$\umu$m } or 4$\sigma$ at 60\,\micron) using the 1$\sigma$ error found from SCANPI (again following Cox et al. (1995)). If both fluxes are SCANPI measurements we mark the subsequent fitted values by `$\ast$' if there is any doubt as to their viability (for example possible source confusion, confusion with galactic cirrus, or no literature \textit{IRAS}\/ fluxes in NED for either band).
\section{Results}
\label{results}
We detected 52 of the 81 galaxies in the OS sample. Table~\ref{fluxtab} lists the 850\hbox{\,$\umu$m } fluxes and other parameters. For interacting systems resolved by SCUBA but not resolved by \textit{IRAS} the 850\hbox{\,$\umu$m } fluxes given are for the individual galaxies; the 850\hbox{\,$\umu$m } fluxes measured for the combined system are listed in Table~\ref{pairstab} along with the \textit{IRAS} fluxes. Table~\ref{450tab} lists the 450\hbox{\,$\umu$m } fluxes for the 19 galaxies which are also detected at the shorter wavelength. The galaxies detected in the OS sample are shown in Figure~\ref{egmaps}, with our 850\hbox{\,$\umu$m } SCUBA S/N maps overlaid onto optical (DSS) images. Comments on the individual maps are given in Section~\ref{maps}.
The 850\hbox{\,$\umu$m } images have several common features. Firstly, we find that many spiral galaxies exhibit two peaks of 850\hbox{\,$\umu$m } emission, seemingly coincident with the spiral arms. This is most obvious for the more face-on galaxies (for example NGC 99 and NGC 7442), but it is also seen for more edge-on spirals (e.g. NGC 7047 and UGC 12519). This `two-peak' morphology is not seen for all the spirals, however. Some, for example NGC 3689, are core-dominated and exhibit a single central peak of submillimetre emission, while others (NGC 6131 and NGC 6189 are clear examples) exhibit a combination of these features, with both a bright nucleus and peaks coincident with the spiral arms. In a number of cases the 850\hbox{\,$\umu$m } peaks clearly follow a prominent dust lane (e.g. NGC 3987 and NGC 7722). These results are consistent with the results of numerous mm/submm studies. For example, Sievers et al. (1994) observe 3 distinct peaks in NGC 3627 and note that the two outer peaks are coincident with the transition region between the central bulge and the spiral arms -- they also observe dust emission tracing the dust lanes of the spiral arms; Gu\'elin et al. (1995), Bianchi et al. (2000), Hippelein et al. (2003) and Meijerink et al. (2005) observe a bright nucleus together with extended dust emission tracing the spiral arms. Many of the features seen in our OS sample 850\hbox{\,$\umu$m } maps are also found by Stevens et al. (2005) in their SCUBA observations of nearby spirals.
Secondly, we find that a number of galaxies appear to be extended at 850\hbox{\,$\umu$m } compared to the optical emission seen in the DSS images. In many cases this extended 850\hbox{\,$\umu$m } emission appears to correspond to very faint optical features, as can be seen for NGC 7081 and NGC 7442 in Figure~\ref{egmaps}. In order to investigate this further we have already carried out follow-up optical imaging for $\sim$\,half the sample detected at 850\,\micron, to obtain deeper images than those available from the DSS. The results and discussion of this deeper optical data will be the subject of a separate paper (Vlahakis et al., in preparation).
\begin{figure*}
\begin{center}
\includegraphics[angle=0, width=8cm]{fig1_ugc148.ps}
\hfill
\includegraphics[angle=0, width=8cm]{fig1_ngc99.ps}\\[-1ex]
\hfill
\vfill
\includegraphics[angle=0, width=8cm]{fig1_pgc3563.ps}
\hfill
\includegraphics[angle=0, width=8cm]{fig1_ngc786.ps}\\[-1ex]
\hfill
\vfill
\includegraphics[angle=0, width=8cm]{fig1_ngc803.ps}
\hfill
\includegraphics[angle=0, width=8cm]{fig1_ugc5342.ps}
\hfill
\caption{\label{egmaps}{The optically-selected SLUGS: 850\hbox{\,$\umu$m } SCUBA S/N maps (produced as described in Section~\ref{snmaps}; 1$\sigma$ contours) overlaid onto DSS optical images ($2\arcmin\!\times\!2\arcmin$, except for NGC 803 and NGC 6155 which are $3\arcmin\!\times\!3\arcmin$). (Optical images are shown here with a contrast that optimises the optical features, however when used as a guide for drawing SCUBA flux measurement apertures a more conservative stretch was applied).}}
\end{center}
\end{figure*}
\begin{figure*}
\begin{center}
\includegraphics[angle=0, width=8cm]{fig1_ngc3270.ps}
\hfill
\includegraphics[angle=0, width=8cm]{fig1_ngc3323.ps}
\hfill
\includegraphics[angle=0, width=8cm]{fig1_ngc3689.ps}
\hfill
\includegraphics[angle=0, width=8cm]{fig1_pgc35952.ps}
\hfill
\includegraphics[angle=0, width=8cm]{fig1_ngc3800.ps}
\hfill
\includegraphics[angle=0, width=8cm]{fig1_ngc3815.ps}
\hfill
\contcaption{}
\end{center}
\end{figure*}
\begin{figure*}
\begin{center}
\includegraphics[angle=0, width=8cm]{fig1_ngc3920.ps}
\hfill
\includegraphics[angle=0, width=8cm]{fig1_ngc3987.ps}
\hfill
\includegraphics[angle=0, width=8cm]{fig1_ugc7115.ps}
\hfill
\includegraphics[angle=0, width=8cm]{fig1_ic797.ps}
\hfill
\includegraphics[angle=0, width=8cm]{fig1_ic800.ps}
\hfill
\includegraphics[angle=0, width=8cm]{fig1_ngc4712.ps}
\hfill
\contcaption{}
\end{center}
\end{figure*}
\begin{figure*}
\begin{center}
\includegraphics[angle=0, width=8cm]{fig1_mrk1365.ps}
\hfill
\includegraphics[angle=0, width=8cm]{fig1_ugc8902.ps}
\hfill
\includegraphics[angle=0, width=8cm]{fig1_ic979.ps}
\hfill
\includegraphics[angle=0, width=8cm]{fig1_ngc5522.ps}
\hfill
\includegraphics[angle=0, width=8cm]{fig1_ngc5953-4.ps}
\hfill
\includegraphics[angle=0, width=8cm]{fig1_ngc5980.ps}
\hfill
\contcaption{}
\end{center}
\end{figure*}
\begin{figure*}
\begin{center}
\includegraphics[angle=0, width=8cm]{fig1_ic1174.ps}
\hfill
\includegraphics[angle=0, width=8cm]{fig1_ugc10205.ps}
\hfill
\includegraphics[angle=0, width=8cm]{fig1_ngc6090.ps}
\hfill
\includegraphics[angle=0, width=8cm]{fig1_ngc6103.ps}
\hfill
\includegraphics[angle=0, width=8cm]{fig1_ic1211.ps}
\hfill
\includegraphics[angle=0, width=8cm]{fig1_ugc10325.ps}
\hfill
\contcaption{}
\end{center}
\end{figure*}
\begin{figure*}
\begin{center}
\includegraphics[angle=0, width=8cm]{fig1_ngc6127.ps}
\hfill
\includegraphics[angle=0, width=8cm]{fig1_ngc6120.ps}
\hfill
\includegraphics[angle=0, width=8cm]{fig1_ngc6126.ps}
\hfill
\includegraphics[angle=0, width=8cm]{fig1_ngc6131.ps}
\hfill
\includegraphics[angle=0, width=8cm]{fig1_ngc6137.ps}
\hfill
\includegraphics[angle=0, width=8cm]{fig1_ngc6146.ps}
\hfill
\contcaption{}
\end{center}
\end{figure*}
\begin{figure*}
\begin{center}
\includegraphics[angle=0, width=8cm]{fig1_ngc6155.ps}
\hfill
\includegraphics[angle=0, width=8cm]{fig1_ugc10407.ps}
\hfill
\includegraphics[angle=0, width=8cm]{fig1_ngc6166.ps}
\hfill
\includegraphics[angle=0, width=8cm]{fig1_ngc6189.ps}
\hfill
\includegraphics[angle=0, width=8cm]{fig1_ngc6190.ps}
\hfill
\includegraphics[angle=0, width=8cm]{fig1_ic5090.ps}
\hfill
\contcaption{}
\end{center}
\end{figure*}
\begin{figure*}
\begin{center}
\includegraphics[angle=0, width=8cm]{fig1_ic1368.ps}
\hfill
\includegraphics[angle=0, width=8cm]{fig1_ngc7047.ps}
\hfill
\includegraphics[angle=0, width=8cm]{fig1_ngc7081.ps}
\hfill
\includegraphics[angle=0, width=8cm]{fig1_ngc7442.ps}
\hfill
\includegraphics[angle=0, width=8cm]{fig1_ngc7463.ps}
\hfill
\includegraphics[angle=0, width=8cm]{fig1_ugc12519.ps}
\vfill
\contcaption{}
\end{center}
\end{figure*}
\begin{figure*}
\begin{center}
\includegraphics[angle=0, width=8cm]{fig1_ngc7653.ps}
\hfill
\includegraphics[angle=0, width=8cm]{fig1_ngc7722.ps}
\hfill
\contcaption{}
\end{center}
\end{figure*}
\subsection{Notes on individual objects}
\label{maps}
In the following discussion of individual objects we note that, since the number of bolometers sampling each sky point decreases towards the edges of the submillimetre maps, the noise increases towards the map edges. Although the S/N maps in Figure~\ref{egmaps} were produced using artificial noisemaps (Section~\ref{snmaps}), which should normally account for this effect, there are certain circumstances, such as a `tilted sky' (see Section~\ref{red}) or the very noisiest bolometers, where residual `noisy' features may remain in the S/N maps. This means that any submillimetre emission in Figure~\ref{egmaps} seen beyond the main optical extent (and away from the centre of the map) should be regarded with some caution. However, in order to aid distinction between probable residual noisy features in the S/N map and potential extended submillimetre emission we have made a thorough investigation of each individual map. In the following discussion, unless otherwise stated we have found all \mbox{$\ge$\,2$\sigma$} submillimetre peaks away from the main optical galaxy to be associated with noisy bolometers or a tilted sky.
\textbf{UGC 148}. Data points for this object are not well-fitted by the two-component dust model (Section~\ref{sed-fits}), probably due to the 850\hbox{\,$\umu$m } flux having been underestimated -- the 850\hbox{\,$\umu$m } S/N contours shown in Figure~\ref{egmaps} for this object show evidence of a residual tilted sky plane (the sky is more positive on one side of the map than the other), suggesting that sky removal techniques may have been inadequate in this case and that therefore the source flux may have been under- (or over-) estimated. Also, since the E-NE part of the galaxy is coincident with noisy bolometers in the 850\hbox{\,$\umu$m } map the flux-measurement aperture was drawn to avoid this region, so the 850\hbox{\,$\umu$m } flux may be underestimated. However, an additional data point at 170\hbox{\,$\umu$m } (ISO) is available from the literature (Stickel et al. 2000, 2004). Using the 60, 100, 170 and 450\hbox{\,$\umu$m } fluxes we find that the data points are well-fitted by the two-component dust model (we take an average of all 170\hbox{\,$\umu$m } fluxes available, see Section~\ref{sed-fits}), and these results are listed in Table~\ref{450tab} and the SED is shown in Figure~\ref{2compSEDfig}.
\textbf{NGC 99}. The submillimetre emission follows the spiral arm structure. The 2$\sigma$ peak to the NE of the galaxy is not associated with any noisy bolometers.
\textbf{NGC 786}. The submillimetre emission to the SE of the galaxy is not associated with any noisy bolometers but we note that this object was observed in \textit{very} poor weather.
\textbf{NGC 803}. None of the submillimetre peaks are associated with noisy bolometers. Due to the high ratio of $S_{25}/S_{60}$ good two-component SED fits (Section~\ref{sed-fits}) to the 4 data points (60, 100, 450 and 850\,\micron) can be achieved for two quite different values of the warm component temperature. In addition to the parameters listed in Table~\ref{450tab} a good fit is also found with the following parameters: $T_{w}$=60\,K, $T_{c}$=19\,K, \mbox{$\frac{N_{c}}{N_{w}}$=2597}, \mbox{log $M_{d2}$=6.99} and \mbox{log $L_{fir}$=9.48}.
\\[-2.2ex]
\textbf{UGC 5342}. This observation had a tilted sky.
\textbf{NGC 3270}. All the submillimetre emission away from the main optical extent, and the emission to the N of the galaxy, is associated with noisy bolometers. This observation also suffered from a tilted sky, which may explain much of the submillimetre emission in the N part of the map. However, the emission towards the centre of the map, coinciding with the main optical galaxy, is not associated with any noisy bolometers. This emission seems to occur where the galaxy bulge ends and the inter-arm region begins, as was found for NGC 3627 by Sievers et al. (1994).
\textbf{NGC 3689}. Very few scans were available via SCANPI, so $\textit{IRAS}$ fluxes (and corresponding fitted parameters) for this object should be used with caution.
\textbf{PGC 35952}. The submillimetre emission to the S and SW is not associated with any noisy bolometers. The submillimetre peaks coincident with the main optical extent of the galaxy appear to follow the spiral arm structure, as for NGC 6131, and it is possible the extended submillimetre emission to the S relates to the very extended faint spiral arms seen in the optical.
\textbf{NGC 3799/3800}. NGC 3799 and NGC 3800 were observed in separate maps. NGC 3799 individually is not detected at the 3$\sigma$ level (although we measure flux at the 2$\sigma$ level). For NGC 3800 (shown in Figure~\ref{egmaps}) most of the integrations for the bolometers to the S of the map were unusable, and consequently this part of the map is very much noisier. Generally this observation is very noisy, and especially bad at 450\,\micron; thus no 450\hbox{\,$\umu$m } flux is available. In fact only the main region of submillimetre emission at the centre of the map is in an area free from noisy bolometers, and it is this region over which we have measured the submillimetre flux of the galaxy.
The S$_{850}$ listed for NGC 3799/3800 in Table~\ref{pairstab} is a conservative measurement of the 850\hbox{\,$\umu$m } emission from the system, the sum of the separately measured fluxes from each of the two component galaxies. In coadding the two maps there appears to be a `bridge' of 850\hbox{\,$\umu$m } emission between the two galaxies, consistent with emission seen in the optical (NGC 3799 is to the SW of NGC 3800 in Figure~\ref{egmaps}). However, since this region of the map has several noisy bolometers we only measure fluxes for the main optical extent of the galaxies.
\textbf{NGC 3815}. The submillimetre emission to the NE and W of the galaxy is associated with noisy bolometers. However, the arm-like submillimetre structures seen extending from the galaxy to the N and S are \textit{not}. Both of these `arms' extend in the direction of faint optical features seen in the DSS and 2MASS images. The optical images also show evidence of extended optical emission around the main optical extent (just visible to the NE in Figure~\ref{egmaps}), but 2MASS (JHK) images show a band of emission stretching E-W between two nearby galaxies on either side of NGC 3815. It seems clear that some kind of interaction is taking place in this system, and therefore it is perhaps not unlikely that there might also be significantly extended submillimetre emission.
\textbf{NGC 3920}. The submillimetre emission to the E and S is not associated with any noisy bolometers.
\textbf{NGC 3987}. This edge-on galaxy has a prominent dust lane in the optical. Though the submillimetre emission follows the dust lane it is seen slightly offset. A similar result was found for another edge-on spiral by Stevens et al. (2005), who conclude this effect is simply an effect of the inclination of the galaxy on the sky.
\textbf{NGC 3997}. The FSC gives an upper limit \mbox{$<$3.101\,Jy} for S$_{100}$, likely due to possible source confusion with NGC 3993. The S$_{100}$ we measure with SCANPI should therefore be used with some caution.
\textbf{NGC 4005}. This object is detected at 850\hbox{\,$\umu$m } at only the 2.5$\sigma$ level. It is unresolved by $\textit{IRAS}$ and may be confused with $\textit{IRAS}$ source IRASF11554+2524 (NGC 4000).
\textbf{UGC 7115}. With the exception of the submillimetre emission to the SE, none of the submillimetre emission in this map is associated with noisy bolometers. However, we found this SCUBA observation to have a tilted sky. We estimate that as much as 80\% of the 850\hbox{\,$\umu$m } flux from this elliptical may be due to synchrotron contamination from a radio source associated with the galaxy (Vlahakis et al., in prep.).
\textbf{IC 800}. The submillimetre emission to the E of this galaxy corresponds to a region of the map which is only slightly noisy, making it unlikely that any residual features of this noise remain in the S/N map (Section~\ref{snmaps}). However, we also note that this object was observed in poor weather.
\textbf{NGC 4712}. None of the submillimetre emission is associated with noisy bolometers, with the exception of the 2$\sigma$ peak closest to the galaxy in the arm-like structure extending to the E. However, we note that this arm-like feature, though faint, is also seen in the optical (though not reproduced in the optical image in Figure~\ref{egmaps}); it appears to originate from the main galaxy extent, where there appears to be a significant amount of dust obscuration.
\textbf{UGC 8902}. The regions of submillimetre emission to the E, far SW and far S are all associated with noisy bolometers. However, the 4$\sigma$ submillimetre peak lying to the S/SE beyond the main optical extent, at a similar declination to the small galaxy to the SE, is not associated with any noisy bolometers. This submillimetre emission is consistent with the fact that the overall emission associated with the galaxy is offset to the S/SE with respect to the optical. We also note that this region in the optical contains a number of faint condensations in the direction of the small galaxy to the SE.
\textbf{IC 979}. Although this galaxy is detected with relatively low S/N at 850\hbox{\,$\umu$m } it is also detected at 450\,\micron. We allocate a higher 450\hbox{\,$\umu$m } calibration error (25\%) for this source, since there were no good 450\hbox{\,$\umu$m } calibrator observations that night (calibration was achieved by taking the mean results from a number of calibrators observed that and the previous night). Note also that no two-component fit could be made to the data since the 450\hbox{\,$\umu$m } data point is higher than the 100\hbox{\,$\umu$m } value, possibly due to the problems with calibrating the 450\hbox{\,$\umu$m } data but more likely due to an underestimate of the 100\hbox{\,$\umu$m } flux. We note this object was observed in poor weather.
\textbf{UGC 9110}. There appears to be flux present at both 850\hbox{\,$\umu$m } and 450\hbox{\,$\umu$m } at the 2$\sigma$ level, but the maps are very noisy and several integrations were unusable, most likely due to unstable and deteriorating weather conditions during the observation. This object is unresolved by \textit{IRAS}: \textit{IRAS}\/ source IRASF14119+1551 (FSC fluxes \mbox{S$_{100}$=2.341\,Jy} and \mbox{S$_{60}$=0.802\,Jy}) is likely a blend of UGC 9110 and its companion CGCG103-124 (Condon, Cotton \& Broderick 2002).
\textbf{NGC 5522}. The 850\hbox{\,$\umu$m } emission to the SE of NGC 5522 is associated with a region of the map which is slightly noisy and where there are a number of spikes in the data. We note that this observation was carried out in poor weather.
\begin{figure*}
\begin{center}
\subfigure[UGC 148: T=(34,18), n=37]{
\includegraphics[angle=270, width=5.5cm]{fig2_ugc148.ps}}
\subfigure[NGC 99: T=(47,17), n=542]{
\includegraphics[angle=270, width=5.5cm]{fig2_ngc99.ps}}
\subfigure[NGC 803: T=(33,18), n=92 (shown) \textit{or} T=(60,19), n=2597]{
\includegraphics[angle=270, width=5.5cm]{fig2_ngc803.ps}}
\subfigure[NGC 3689: T=(59,23), n=910]{
\includegraphics[angle=270, width=5.5cm]{fig2_ngc3689.ps}}
\subfigure[PGC 35952: T=(58,18), n=1859]{
\includegraphics[angle=270, width=5.5cm]{fig2_pgc35952.ps}}
\subfigure[NGC 3987: T=(44,22), n=279]{
\includegraphics[angle=270, width=5.5cm]{fig2_ngc3987.ps}}
\subfigure[NGC 5953/4: T=(54,21), n=277]{
\includegraphics[angle=270, width=5.5cm]{fig2_ngc5953-4.ps}}
\subfigure[NGC 5980: T=(43,18), n=321]{
\includegraphics[angle=270, width=5.5cm]{fig2_ngc5980.ps}}
\subfigure[NGC 6090: T=(55,22), n=122]{
\includegraphics[angle=270, width=5.5cm]{fig2_ngc6090.ps}}
\subfigure[NGC 6120: T=(45,24), n=76]{
\includegraphics[angle=270, width=5.5cm]{fig2_ngc6120.ps}}
\subfigure[NGC 6155: T=(30,20), n=7]{
\includegraphics[angle=270, width=5.5cm]{fig2_ngc6155.ps}}
\subfigure[NGC 6190: T=(56,18), n=2684]{
\includegraphics[angle=270, width=5.5cm]{fig2_ngc6190.ps}}
\caption{\label{2compSEDfig}{Best-fitting two-component SEDs assuming $\beta=2$, fitted to the 60, 100, 450 and 850\hbox{\,$\umu$m } fluxes (with the exception of (a), see Section~\ref{maps}). Solid lines represent the composite two-component SED and dot-dash lines indicate the warm and cold components. Any additional 170\hbox{\,$\umu$m } (ISO) fluxes from the literature (Stickel et al. 2000, 2004) are also plotted, though (with the exception of (a)) not fitted. Note, captions show the fitted parameters as listed in Table~\ref{450tab}, so may be the averaged values (see Section~\ref{sed-fits}).}}
\end{center}
\end{figure*}
\begin{figure*}
\begin{center}
\setcounter{subfigure}{12}
\subfigure[IC 5090: T=(52,21), n=346]{
\includegraphics[angle=270, width=5.5cm]{fig2_ic5090.ps}}
\subfigure[IC 1368: T=(55,23), n=110]{
\includegraphics[angle=270, width=5.5cm]{fig2_ic1368.ps}}
\subfigure[NGC 7081: T=(32,20), n=6]{
\includegraphics[angle=270, width=5.5cm]{fig2_ngc7081.ps}}
\subfigure[NGC 7442: T=(54,20), n=665]{
\includegraphics[angle=270, width=5.5cm]{fig2_ngc7442.ps}}\hfill
\subfigure[UGC 12519: T=(28,17), n=12]{
\includegraphics[angle=270, width=5.5cm]{fig2_ugc12519.ps}}\hfill
\subfigure[NGC 7722: T=(54,20), n=1224]{
\includegraphics[angle=270, width=5.5cm]{fig2_ngc7722.ps}}
\hfill
\contcaption{}
\end{center}
\end{figure*}
\textbf{NGC 5953/4} is also in the IRS SLUGS sample. While D00 used colour-corrected \textit{IRAS} fluxes as listed in the \textit{IRAS} BGS (from Soifer et al. (1989)) we present here, as for all the OS sample, fluxes from the \textit{IRAS} FSC.
\textbf{NGC 5980}. This observation suffered from a tilted sky, which potentially explains the extended 850\hbox{\,$\umu$m } emission to the W of the galaxy. The aperture used to measure the 450\hbox{\,$\umu$m } flux was smaller than at 850\hbox{\,$\umu$m } in order to avoid a noisy bolometer, thus source flux at 450\hbox{\,$\umu$m } may be underestimated.
\textbf{IC 1174}. This source is \textit{just} detected at 850\hbox{\,$\umu$m } at the 3$\sigma$ level.
\textbf{UGC 10205}. The 850\hbox{\,$\umu$m } map in Figure~\ref{egmaps} is a coadd of two observations. Since emission at the optical galaxy position is very clear in one observation (both at 850\hbox{\,$\umu$m } \textit{and} 450\,\micron) and not in the other observation, and since we find no explanation for this, we simply coadd the two observations (Section~\ref{obs}). The submillimetre emission coincident with the main optical galaxy extent, and also the peak to the S, are not associated with any noisy bolometers. Peaks to the N and W of the galaxy lie in a region of the map which is slightly noisy. We note that this observation was carried out in less than ideal weather.
\textbf{NGC 6090} is a closely interacting/merging pair, and is also in the IRS SLUGS sample.
\textbf{NGC 6103}. The submillimetre emission to the S of the galaxy is not associated with any noisy bolometers. This region contains a number of faint features seen in optical (DSS) and 2MASS images. We note that this object was observed in less than ideal weather.
\textbf{IC 1211}. We find in the literature no known radio sources associated with this elliptical galaxy (NVSS 1.4GHz 3$\sigma$ upper limit is \mbox{$<$1.2\,mJy}), and therefore cannot attribute the 850\hbox{\,$\umu$m } flux detected here to contamination from synchrotron radiation.
\textbf{UGC 10325 (NED01)} is one galaxy of the pair \mbox{UGC 10325}. The SCUBA map is centred on this galaxy (NED01), but NED02 can be seen at the SE edge of the DSS image in Figure~\ref{egmaps}. Thus all fluxes given are for the individual galaxy \mbox{UGC 10325 NED01}.
\textbf{NGC 6127}. We find in the literature no known radio sources associated with this elliptical galaxy (NVSS 1.4GHz 3$\sigma$ upper limit is $<$1.2\,mJy), and therefore cannot attribute the 850\hbox{\,$\umu$m } flux detected here to contamination from synchrotron radiation. The 4$\sigma$ submillimetre peak to the W of the galaxy, coincident with a knot in the optical, is not associated with any noisy bolometers.
\textbf{NGC 6120}. The submillimetre emission to the W of the galaxy, and at the S of the map, is associated with noisy bolometers.
\textbf{NGC 6126}. The submillimetre source (which we measured as a point source) is offset to the S of the optical extent of the galaxy. We note that, at minimum contrast, a small satellite/companion object can be seen in this region in the DSS and 2MASS images. The 3$\sigma$ peak to the S of the map is not associated with any noisy bolometers and is coincident with a small object visible in the optical. This observation, however, was carried out in poor weather.
\textbf{NGC 6131}. The submillimetre emission to the very NW of the galaxy (beyond the main optical extent) may be associated with a noisy bolometer.
\textbf{NGC 6137}. We estimate that $\sim$\,20\% of the 850\hbox{\,$\umu$m } flux from this elliptical galaxy could be due to synchrotron contamination from a radio source associated with the galaxy (Vlahakis et al., in prep.). Although only the submillimetre emission to the W of the galaxy coincides with a noisy bolometer we note that this observation had a tilted sky.
\textbf{NGC 6146}. We estimate that as much as 80\% of the 850\hbox{\,$\umu$m } flux from this elliptical may be due to synchrotron contamination from a radio source associated with the galaxy (Vlahakis et al., in prep.).
\textbf{NGC 6155}. The submillimetre map shows extended emission to the S and SE of the galaxy at 850\,\micron, coincident with a number of small galaxies/condensations in the optical. M\'arquez et al. (1999) find one of the spiral arms in this galaxy is directed towards the SE. None of the submillimetre peaks in this map are associated with noisy bolometers.
A large aperture was used to measure all the flux associated with this object, and these results are listed in Table~\ref{fluxtab}. However, at 450\hbox{\,$\umu$m } any flux appears confined to the main optical extent (though the map at 450\hbox{\,$\umu$m } is very noisy), and thus the flux measurement at 450\hbox{\,$\umu$m } was made using a smaller aperture. Using these values of the 850\hbox{\,$\umu$m } and 450\hbox{\,$\umu$m } flux we found that a two-component SED could not be fitted (Section~\ref{sed-fits}); the $S_{450}/S_{850}$ ratio is simply too low, most likely because we have measured extended emission at 850\,\micron. Thus we also measured S$_{850}$ using a smaller aperture, the same size as that used at 450\,\micron, and find \mbox{S$_{850}=0.069\pm0.013$\,Jy}. For this smaller aperture we find that a two-component model can just be fitted to the data, and we list those parameters in Table~\ref{450tab}.
\textbf{NGC 6166} is an elliptical located in a very busy field -- it is the dominant galaxy in the cluster Abell 2199. The presence of dust lanes is well documented in the literature. We note that our SCANPI measurements are in good agreement with those of Knapp et al. (1989). Using all available radio fluxes from the literature we estimate that as little as 4\% or as much as 100\% of the 850\hbox{\,$\umu$m } flux from this elliptical may be due to synchrotron contamination from a radio source associated with the galaxy (depending on whether the spectral index is assumed constant over the whole galaxy or is assumed to flatten in the core). This is a preliminary analysis; a discussion of this and the other five ellipticals detected in the OS sample will be the subject of a separate paper (Vlahakis et al., in prep.).
\textbf{NGC 6173}. We measure \mbox{S$_{100}$=0.20\,Jy} with SCANPI but the detection is unconvincing since the coadds do not agree. Therefore we give an upper limit at 100\hbox{\,$\umu$m } in Table~\ref{fluxtab}.
\textbf{NGC 6190}. Some of the data for this object was very noisy and unusable. Consequently the remaining data may not be reliable. The submillimetre emission to the W of the galaxy lies in a region where there is a noisy bolometer in the 850\hbox{\,$\umu$m } flux map. Thus the apertures used also unavoidably encompass some noisy bolometers, particularly at 450\,\micron, and results for this object should be used with caution. However, the rest of the 850\hbox{\,$\umu$m } emission in the map is not associated with any noisy bolometers, so while the flux measurements may be unreliable this does not apply to the emission extent, which appears to follow the outer spiral arm structure.
\textbf{NGC 7081}. The submillimetre emission to the E and W of this galaxy is not associated with any noisy bolometers. The emission to the N and SE is coincident with regions of the map which are only slightly noisy, and since this observation was carried out in very good weather it is unlikely that any residual features of this noise remain in the S/N map (Section~\ref{snmaps}). Though from the optical DSS image only the central region of the galaxy (coincident with the main submillimetre peak) is clearly visible, there is evidence that this spiral has a very faint extended spiral arm structure. This is confirmed by optical images from SuperCOSMOS, which clearly show very knotty and irregular faint spiral arms coincident with the peaks of submillimetre emission to the N and W of the galaxy.
\textbf{NGC 7442}. The 3$\sigma$ submillimetre peak to the NW of the main optical extent is coincident with faint optical knots and (unlike the 2$\sigma$ peaks elsewhere in the submillimetre map) is not associated with any noisy bolometers.
\textbf{NGC 7463}. This galaxy is part of a triple system with NGC 7464 (to the S of NGC 7463) and NGC 7465 (not shown in Figure~\ref{egmaps}). At 850\hbox{\,$\umu$m } we clearly detect emission from both NGC 7463 and NGC 7464, which seem to be joined by a bridge of submillimetre emission. The flux listed in Table~\ref{fluxtab} is for NGC 7463 alone, measured in an aperture corresponding to its main optical extent. Unfortunately a very noisy bolometer to the SE prevents us measuring the flux from the eastern half of NGC 7464, but excluding this region we measure a flux for the pair of \mbox{0.051$\pm$0.012\,Jy}, though this is obviously a lower limit.
An \textit{IRAS} source is associated with NGC 7465, which is resolved from the other members of the system at 60\hbox{\,$\umu$m } (HIRES; Aumann, Fowler \& Melnyk 1990). Dust properties of this system (using SCUBA data observed as part of SLUGS) are studied in detail by Thomas et al. (2002).
\textbf{UGC 12519}. The 850\hbox{\,$\umu$m } emission to the NE of this galaxy is coincident with a number of small objects seen in the optical and is not associated with any noisy bolometers. Although UGC 12519 is also detected at 450\hbox{\,$\umu$m } the slightly smaller field of view of the short array means these NE objects lie just outside the 450\hbox{\,$\umu$m } map.
\textbf{NGC 7722}. Along with a very high ratio of cold-to-warm dust this object also has a very prominent dust lane, extending over most of the NE `half' of the galaxy. The 850\hbox{\,$\umu$m } emission clearly follows the dust lane evident in the optical. The 2$\sigma$ submillimetre peak to the NW of the galaxy is not associated with any noisy bolometers.
\subsection{Spectral fits}
\label{sed-fits}
In this section we describe the dust models we fit to the spectral energy distributions (SEDs) of the OS sample galaxies and present the results of these fits. Comparison of the results of the OS and IRS samples is discussed in Section~\ref{properties}.
D00 found that for the IRS sample the 60\,\micron, 100\hbox{\,$\umu$m } and 850\hbox{\,$\umu$m } fluxes could be fitted by a single-temperature dust model. However, with the addition of the 450\hbox{\,$\umu$m } data (DE01) they found that a single dust emission temperature no longer gives an adequate fit to the data, and that in fact two temperature components are needed, in line with the paradigm that there are two main dust components in galaxies (Section~\ref{cold-dust}). For the OS galaxies we have fitted a two-component dust model where there is a 450\hbox{\,$\umu$m } flux available. Since only 19 of the galaxies have 450\hbox{\,$\umu$m } flux densities we have also fitted an isothermal model to the data for all the galaxies in the OS sample which were detected at 850\,\micron.
We fitted two-component dust SEDs to the 60\,\micron, 100\hbox{\,$\umu$m } (\textit{IRAS}), 450\hbox{\,$\umu$m } \& 850\hbox{\,$\umu$m } (SCUBA) fluxes, by minimising the sum of the $\chi^2$ residuals. This two-component model expresses the emission at a particular frequency as the sum of two modified Planck functions (`grey-bodies'), each having a different characteristic temperature, such that
\begin{equation} \label{eq:2comp}
S_{\nu}=N_{w} \times \nu^{\beta}B(\nu,T_{w})+ N_{c} \times \nu^{\beta}B(\nu,T_{c})
\end{equation}
for the optically thin regime. Here $N_{w}$ and $N_{c}$ represent the relative masses in the warm and cold components, $T_{w}$ and $T_{c}$ the temperatures, $B(\nu,T)$ the Planck function, and $\beta$ the dust emissivity index. DE01 used the high value for the ratio of $S_{450}/S_{850}$ and the tight correlation between $S_{60}/S_{450}$ and $S_{60}/S_{850}$ for the IRS galaxies to argue that $\beta\approx2$. The OS galaxies follow a similar tight correlation (Section~\ref{prop:ir-opt}). For the OS sample we find the mean $S_{450}/S_{850}$=8.6 with $\sigma$=3.3, which is slightly higher than found for the IRS sample (where $S_{450}/S_{850}$=7.9 with $\sigma$=1.6) and with a slightly less tight distribution (though still consistent with being produced by the uncertainties in the fluxes). Both the OS and IRS values are somewhat higher than that found for the Stevens et al. (2005) sample of 14 local spiral galaxies, where the mean $S_{450}/S_{850}$=5.9 and $\sigma$=1.0. Since the OS galaxies have a similar high value for this ratio to the IRS galaxies we follow DE01 in assuming $\beta$=2.
We constrained $T_{w}$ by the \textit{IRAS} 25\hbox{\,$\umu$m } flux (the fit was not allowed to exceed this value), though we did not actually fit this data point, and we allowed $T_{c}$ to take any value lower than $T_{w}$. This method is the same as that used in DE01 for the IRS sample, but while many of the IRS galaxies with 450\hbox{\,$\umu$m } data also had fluxes at several other wavelengths in the literature, we note that for the OS sample galaxies we have only four data points to fit. Since four data points are not enough to provide a well-constrained fit, the values of $\chi^{2}_{min}$ may be unrealistically low. In Table~\ref{450tab} we list the parameters producing the best fits or, where more than one set of parameters produces an acceptable fit, we list an average of all those parameters. In practice we find that it is only $T_{w}$ (and hence $N_{c}$/$N_{w}$) for which there is sometimes a fairly large range of acceptable values, and this is likely due to our not fitting any data points below 60\,\micron. We show all our fitted two-component SEDs in Figure~\ref{2compSEDfig} (for the best-fitting, not averaged, parameters). Any additional 170\hbox{\,$\umu$m } (ISO) fluxes available from the literature (Stickel et al. 2000, 2004) are also plotted in Figure~\ref{2compSEDfig}, though (with the exception of UGC 148 (see Section~\ref{maps})) \textit{not} fitted; where there are several 170\hbox{\,$\umu$m } measurements available we plot the mean value.
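As a purely illustrative sketch of this procedure (not the fitting code used for this paper), the two-component fit can be cast as a $\chi^{2}$ minimisation over a grid of ($T_{w}$, $T_{c}$) pairs, with the normalisations $N_{w}$ and $N_{c}$ obtained by weighted linear least squares at each grid point; the fluxes and errors below are arbitrary placeholders, and the additional 25\hbox{\,$\umu$m } constraint on $T_{w}$ is omitted for brevity.
\begin{verbatim}
# Illustrative two-component grey-body fit (beta = 2, optically thin).
# The fluxes and errors are placeholders, not SLUGS measurements.
import numpy as np

h, k, c = 6.626e-34, 1.381e-23, 2.998e8         # SI units

def greybody(nu, T, beta=2.0):
    # nu^beta * B(nu, T), up to a constant absorbed into N
    return nu**(3.0 + beta) / np.expm1(h * nu / (k * T))

lam = np.array([60., 100., 450., 850.]) * 1e-6  # wavelengths (m)
nu = c / lam
S = np.array([1.20, 2.50, 0.45, 0.060])         # fluxes (Jy)
dS = 0.15 * S                                   # assumed 15% errors

best = (np.inf, None)
for Tw in np.arange(25.0, 60.0, 0.5):           # warm component grid
    for Tc in np.arange(10.0, Tw, 0.5):         # cold grid, Tc < Tw
        # the model is linear in Nw, Nc: solve for them analytically
        A = np.column_stack([greybody(nu, Tw),
                             greybody(nu, Tc)]) / dS[:, None]
        norm, *_ = np.linalg.lstsq(A, S / dS, rcond=None)
        if np.any(norm <= 0):
            continue                            # unphysical normalisation
        chi2 = np.sum((A @ norm - S / dS)**2)
        if chi2 < best[0]:
            best = (chi2, (Tw, Tc, norm[1] / norm[0]))
chi2, (Tw, Tc, ratio) = best
print(f"Tw={Tw:.1f} K  Tc={Tc:.1f} K  Nc/Nw={ratio:.0f}  chi2={chi2:.2f}")
\end{verbatim}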
\begin{figure}
\begin{center}
\includegraphics[angle=270, width=7.5cm]{fig3_ngc99.ps}\\[4ex]
\vfill
\includegraphics[angle=270, width=7.5cm]{fig3_ngc3987.ps}
\vfill
\caption{\label{SEDfig}{Two representative isothermal SEDs.}}
\end{center}
\end{figure}
We find a mean warm component temperature \mbox{$T_{w}=47.4\pm2.4$\,K} and a mean cold component temperature \mbox{$T_{c}=20.2\pm0.5$\,K}. The fitted warm component temperatures are in the range \mbox{$28<T_{w}<59$\,K} and cold component temperatures are in the range \mbox{$17<T_{c}<24$\,K}. Thus, the cold component temperature is close to that expected for dust heated by the general ISRF (Cox et al. 1986), one of the two components in the current paradigm (Section~\ref{cold-dust}).
We find a mean \mbox{$N_{c}/N_{w}=532\pm172$} (or higher if we include the higher value found for NGC 803 (see notes to Table~\ref{450tab} and Section~\ref{maps})). For the IRS sample DE01 found a large variation in the relative contribution of the cold component to the SEDs (described by the parameter $N_{c}$/$N_{w}$); for the OS sample we find an even larger variation. Objects NGC 6190 and PGC 35952 in Figure~\ref{2compSEDfig}, for example, clearly exhibit very `cold' SEDs with a strikingly prominent cold component (with \mbox{$\approx$\,2000} times as much cold dust as warm dust). Comparison of the two samples is discussed in detail in Section~\ref{properties}.
\begin{table*}
\centering
\begin{minipage}{12cm}
\caption{\label{lumtab}\small{Luminosities and masses.}}
\begin{tabular}{lrrrrrr}
\hline
(1) & (2) & (3) & (4) & (5) & (6) & (7)\\
Name & log $L_{60}$ & log $L_{850}$ & log $L_{fir}$ & log $M_{d}$ & log $M_{HI}$ & log $L_{B}$ \\
& (W\,Hz$^{-1}$sr$^{-1}$) & (W\,Hz$^{-1}$sr$^{-1}$) & ($L_{\odot}$) & ($M_{\odot}$) & ($M_{\odot}$) & ($L_{\odot}$)\\
\hline
UGC 148 & 22.80 & 21.20 & 10.22 & 7.05 & 9.82 & 10.39 \\
NGC 99 & 22.57 & 21.46 & 9.94 & 7.17 & 10.29 & 10.37 \\
PGC 3563 & 22.24 & 21.13 & 9.76 & 6.99 & ... & 10.03 \\
NGC 786 & 22.56 & 21.34 & 9.99 & 7.14 & ... & 9.94 \\
NGC 803 & 21.69 & 20.82 & 9.37 & 6.76 & 9.78 & 10.13 \\
UGC 5129 & 21.86 & $<$20.96 & $<$9.46 & $<$7.09 & 9.34 & 10.01 \\
NGC 2954 & $<$21.60 & $<$20.81 & $<$9.23 & ... & 8.09 & 10.25 \\
UGC 5342 & 22.46 & 21.03 & 9.84 & 6.81 & ... & 10.40 \\
PGC 29536 & $<$22.40 & $<$21.76 & $<$9.94 & ... & ... & 10.74 \\
NGC 3209 & $<$21.99 & $<$21.13 & $<$9.66 & ... & ... & 10.45 \\
NGC 3270 & 22.58 & 21.58 & 10.23 & 7.53 & 10.49 & 10.78 \\
NGC 3323 & 22.81 & 21.48 & 10.23 & 7.30 & 9.68 & 10.12 \\
NGC 3689 & $^{\displaystyle s}$22.54 & 21.09 & 10.09 & 7.04 & 9.14 & 10.29 \\
UGC 6496 & ... & ... & ... & ... & ... & 10.10 \\
PGC 35952 & 22.08 & 21.11 & 9.61 & 6.96 & 9.73 & 10.07 \\
NGC 3799/3800$^{\scriptstyle p}$ & 22.93 & 21.38 & 10.39 & 7.27 & 9.34 & ... \\
NGC 3812 & $<$21.68 & $<$20.91 & $<$9.17 & ... & ... & 9.95 \\
NGC 3815 & 22.19 & 20.96 & 9.70 & 6.83 & 9.62 & 10.15 \\
NGC 3920 & 22.21 & 20.86 & 9.63 & 6.68 & ... & 9.87 \\
NGC 3987 & 23.20 & 21.79 & 10.74 & 7.72 & 9.75 & 10.63 \\
NGC 3997 & 22.63 & $<$20.93 & $<$9.96 & $<$7.06 & 9.83 & 10.30 \\
NGC 4005 & ... & $<$20.69 & ... & $<$6.82 & 9.22 & 10.35 \\
NGC 4015 & 21.88 & $<$21.18 & $<$9.49 & $<$7.31 & ... & ... \\
UGC 7115 & $<$22.17 & 21.59 & $<$9.80 & $\dag$7.71 & ... & 10.45 \\
UGC 7157 & $<$22.15 & $<$21.27 & $<$9.65 & ... & ... & 10.32 \\
IC 797 & 21.72 & 20.78 & 9.27 & 6.63 & 8.50 & 9.77 \\
IC 800 & 21.52 & 20.82 & 9.07 & 6.63 & 8.51 & 9.67 \\
NGC 4712 & 22.18 & 21.50 & 9.87 & 7.43 & 10.18 & 10.50 \\
PGC 47122 & $<$21.95 & $<$21.46 & $<$9.71 & $<$7.58 & ... & 10.32 \\
MRK 1365 & 23.32 & 21.20 & 10.61 & 7.00 & 9.23 & 10.00 \\
UGC 8872 & $<$22.04 & $<$21.02 & $<$9.44 & ... & ... & 10.29 \\
UGC 8883 & 22.36 & $<$21.31 & $<$9.86 & $<$7.44 & ... & 10.00 \\
UGC 8902 & 23.07 & 21.81 & 10.57 & 7.69 & ... & 10.80 \\
IC 979 & $^{\displaystyle s \ast}$22.27 & 21.75 & $^{\ast}$9.85 & $^{\ast}$7.56 & ... & 10.64 \\
UGC 9110 & U & $<$21.21 & ... & ... & 9.72 & 10.27 \\
NGC 5522 & 22.84 & 21.39 & 10.22 & 7.17 & 9.77 & 10.51 \\
NGC 5953/4$^{\scriptstyle p}$ & 22.80 & 21.22 & 10.17 & 7.03 & 9.32 & ... \\
NGC 5980 & 22.97 & 21.84 & 10.44 & 7.65 & ... & 10.53 \\
IC 1174 & $<$21.80 & 20.95 & $<$9.19 & $\dag$7.08 & ... & 10.18 \\
UGC 10200 & 21.95 & $<$20.10 & $<$9.21 & $<$6.22 & 9.54 & 9.05 \\
UGC 10205 & 22.44 & 21.61 & 10.10 & 7.53 & 9.57 & 10.55 \\
NGC 6090 & 23.93 & 22.06 & 11.20 & 7.78 & 8.82 & 10.73 \\
NGC 6103 & 22.97 & 21.88 & 10.45 & 7.71 & ... & 10.83 \\
NGC 6104 & 22.77 & $<$21.59 & $<$10.35 & $<$7.71 & ... & 10.62 \\
IC 1211 & $<$21.78 & 21.16 & $<$9.52 & $\dag$7.29 & ... & 10.37 \\
UGC 10325 & 22.92 & 21.34 & 10.35 & 7.20 & 9.92 & ... \\
NGC 6127 & $<$21.57 & 21.51 & $<$9.17 & $\dag$7.64 & ... & 10.66 \\
NGC 6120 & 23.74 & 21.95 & 11.11 & 7.80 & ... & 10.73 \\
NGC 6126 & $<$22.37 & 21.56 & $<$9.89 & $\dag$7.69 & ... & 10.61 \\
NGC 6131 & 22.49 & 21.36 & 10.07 & 7.27 & 9.83 & 10.37 \\
NGC 6137 & $<$22.41 & 21.62 & $<$9.94 & $\dag$7.75 & ... & 11.04 \\
NGC 6146 & $<$22.18 & 21.55 & $<$9.83 & $\dag$7.68 & ... & 11.01 \\
NGC 6154 & $<$21.95 & $<$21.37 & $<$9.44 & ... & 9.86 & 10.38 \\
NGC 6155 & 22.25 & 21.04 & 9.78 & 6.93 & 8.95 & 9.82 \\
UGC 10407 & 23.28 & 21.48 & 10.63 & 7.32 & ... & 10.62 \\
NGC 6166 & $^{\displaystyle s}$22.14 & 22.00 & 10.03 & 7.96 & ... & 11.30 \\
NGC 6173 & $<$22.33 & $<$21.48 & $<$9.63 & ... & ... & 11.14 \\
NGC 6189 & 22.59 & 21.57 & 10.19 & 7.48 & 10.07 & 10.68 \\
NGC 6190 & 22.02 & 21.24 & 9.69 & 7.18 & 9.48 & 9.97 \\
\end{tabular}
\end{minipage}
\end{table*}
\begin{table*}
\begin{minipage}{12cm}
\contcaption{}
\begin{tabular}{lrrrrrr}
\hline
(1) & (2) & (3) & (4) & (5) & (6) & (7)\\
Name & log $L_{60}$ & log $L_{850}$ & log $L_{fir}$ & log $M_{d}$ & log $M_{HI}$ & log $L_{B}$ \\
& (W\,Hz$^{-1}$sr$^{-1}$) & (W\,Hz$^{-1}$sr$^{-1}$) & ($L_{\odot}$) & ($M_{\odot}$) & ($M_{\odot}$) & ($L_{\odot}$)\\
\hline
NGC 6185 & 22.47 & $<$21.72 & $<$10.06 & $<$7.85 & ... & 10.96 \\
UGC 10486 & $<$22.06 & $<$21.24 & $<$9.63 & ... & ... & 10.31 \\
NGC 6196 & $<$22.24 & $<$21.53 & $<$9.86 & ... & ... & 10.85 \\
UGC 10500 & $^{\displaystyle s \ast}$21.85 & $<$21.09 & $<$9.56 & $<$7.22 & ... & 10.20 \\
IC 5090 & 23.64 & 22.23 & 11.09 & 8.08 & ... & 10.46 \\
IC 1368 & 23.00 & 21.07 & 10.30 & 6.83 & ... & 10.14 \\
NGC 7047 & 22.35 & 21.45 & 9.98 & 7.38 & 9.08 & 10.50 \\
NGC 7081 & 22.49 & 20.88 & 9.89 & 6.72 & 9.51 & 9.75 \\
NGC 7280 & $<$20.80 & $<$20.34 & $<$8.51 & ... & 8.16 & 9.85 \\
NGC 7442 & 22.83 & 21.60 & 10.32 & 7.47 & 9.75 & 10.48 \\
NGC 7448 & 22.84 & 21.19 & 10.19 & 7.04 & 9.75 & 10.39\\
NGC 7461 & $<$21.70 & $<$20.81 & $<$9.33 & ... & ... & 9.90 \\
NGC 7463 & U & 20.60 & ... & $^{\scriptscriptstyle T}$6.73 & 9.33 & 10.12 \\
III ZW 093 & 23.26 & $<$21.99 & $<$11.06 & $<$8.12 & 9.95 & 11.60 \\
III ZW 095 & $<$21.92 & $<$21.24 & $<$9.93 & ... & ... & 9.90 \\
UGC 12519 & 22.37 & 21.36 & 9.95 & 7.26 & 9.53 & 10.27 \\
NGC 7653 & 22.59 & 21.52 & 10.17 & 7.43 & ... & 10.49 \\
NGC 7691 & 22.15 & $<$20.82 & $<$9.68 & $<$6.95 & 9.59 & 10.24 \\
NGC 7711 & $<$21.60 & $<$20.86 & $<$9.20 & ... & ... & 10.57 \\
NGC 7722 & 22.31 & 21.20 & 9.94 & 7.15 & 9.51 & 10.48 \\
\hline
\end{tabular} \\
(1) Most commonly used name. \\
(2) 60\hbox{\,$\umu$m } luminosity. \\
(3) 850\hbox{\,$\umu$m } luminosity. \\
(4) FIR luminosity, calculated by integrating measured SED from 40--1000\,\micron. \\
(5) Dust mass, calculated using a single temperature, $T_{d}$, as listed in Table~\ref{fluxtab} ($T_{d}$ derived from fitted SED to the 60, 100 and 850\hbox{\,$\umu$m } data points). Upper limits are calculated using $T_{d}$=20\,K.\\
(6) HI mass; refs.: Chamaraux, Balkowski \& Fontanelli (1987), Haynes $\&$ Giovanelli (1988, 1991), Huchtmeier $\&$ Richter (1989), Giovanelli $\&$ Haynes (1993), Lu et al. (1993), Freudling (1995), DuPrie $\&$ Schneider (1996), Huchtmeier (1997), Theureau et al. (1998), Haynes et al. (1999). \\
(7) Blue luminosity, calculated from corrected blue magnitudes taken from the LEDA database. \\
\vspace{0.5pt}\\
$^{\scriptstyle p}$ A close or interacting pair which was resolved by SCUBA; parameters given refer to the combined system, as in Table~\ref{pairstab}.\\
$^{\ast}$ Values should be used with caution (see $^{\ast}$ notes to Table~\ref{fluxtab}).\\
$\dag$ Object was only detected at 850\hbox{\,$\umu$m } (and not in either $\textit{IRAS}$ band), so these dust masses should be used with caution; these are all early types; dust masses calculated using $T_{d}$=20\,K. \\
$^{\scriptscriptstyle T}$ Dust mass calculated using $T_{d}$=20\,K, since no fitted value of $T_{d}$.\\
U Unresolved by \textit{IRAS}.\\
\vspace{0pt}\\
Notes on HI fluxes:-\\
NGC 7463 Giovanelli $\&$ Haynes (1993) note confused HI profile, many neighbours.\\
UGC 12519 HI flux from Giovanelli $\&$ Haynes (1993) gives HI mass of \mbox{log $M_{\odot}$=9.65}.\\
NGC 7691 Haynes et al. (1999) gives \mbox{log $M_{\odot}$=9.88}.\\
NGC 5953/4 HI flux from Freudling (1995); Huchtmeier $\&$ Richter (1989) give \mbox{log $M_{\odot}$} in the range \mbox{8.82 -- 9.20}.\\
NGC 3799/3800 flux for NGC\,3800 but sources may be confused (Lu et al. 1993).\\
NGC 803 Giovanelli $\&$ Haynes (1993) give \mbox{log $M_{\odot}$=9.68}, and note optical disk larger than beam.\\
NGC 6090 Huchtmeier $\&$ Richter (1989) give values up to \mbox{log $M_{\odot}$=10.24}.\\
NGC 6131 confused with neighbour.\\
NGC 6189 Haynes et al. (1999) gives \mbox{log $M_{\odot}$=10.26}.\\
NGC 7081 Huchtmeier $\&$ Richter (1989) give values up to \mbox{log $M_{\odot}$=9.69}.\\
NGC 4712 Huchtmeier $\&$ Richter (1989) give \mbox{log $M_{\odot}$=9.91}.\\
\end{minipage}
\end{table*}
\begin{figure*}
\begin{center}
\includegraphics[angle=0, width=14cm]{fig4.ps}
\caption{\label{colplot}{Colour-colour plot: $S_{60}/S_{100}$ versus $S_{60}/S_{850}$ colours for the optically-selected (this work) and \textit{IRAS}-selected (D00) SLUGS (filled and open points respectively).}}
\end{center}
\end{figure*}
Since 450\hbox{\,$\umu$m } fluxes are available for only $\sim$ one third of the sample we have in addition fitted single-component SEDs for all sources in the OS sample. In these fits we have allowed $\beta$ to vary as well as $T_{d}$, as it is rarely possible to get an acceptable fit with $\beta$=2. The best-fitting $T_{d}$ and $\beta$ are listed in Table~\ref{fluxtab}. We include fitted parameters only for those objects with detections in all 3 wavebands (60\,\micron, 100\hbox{\,$\umu$m } and 850\,\micron). The sample mean and error in the mean for the best-fitting temperature is \mbox{$\bar{T}_{d}=31.6\pm0.6$\,K} and for the dust emissivity index $\bar{\beta}=1.11\pm0.05$. Figure~\ref{SEDfig} shows two representative isothermal SEDs. As an example of the potential dangers of fitting single-temperature SEDs, we note that one of these objects (NGC 99) is also fitted with the two-component model (shown in Figure~\ref{2compSEDfig}). NGC 99 can clearly be well-fitted by both an isothermal dust model with very flat $\beta$ ($\beta$=0.4 in this case) \textit{and} a two-component model with much steeper $\beta$ ($\beta$=2); this is also the case for NGC 6190 and PGC 35952 described above. We note that the low values of $\beta$ found from the isothermal fits are not the \textit{true} values of $\beta$ but rather are evidence that galaxies across all Hubble types contain a significant proportion of dust that is colder than these fitted temperatures, and it is likely that these objects (as for NGC 99) require a two-component model to adequately describe their SED.
\subsection{Dust masses}
\label{sec:dmass}
Dust masses for the OS galaxies are calculated using the measured 850\hbox{\,$\umu$m } fluxes and dust temperatures ($T_{d}$) from the isothermal fits (listed in Table~\ref{fluxtab}) using
\begin{equation} \label{eq:dmass}
M_{d}=\frac{S_{850}D^{2}}{\kappa_{d}(\nu)B(\nu,T_{d})}
\end{equation}
where $\kappa_{d}$ is the dust mass opacity coefficient at 850\,\micron, $B(\nu,T_{d})$ is the Planck function at 850\hbox{\,$\umu$m } for the temperature $T_{d}$ and D is the distance.
As discussed in D00 we assume a value for $\kappa_{d}(\nu)$ of 0.077m$^{2}$kg$^{-1}$, which is consistent with the value derived by James et al. (2002) from the global properties of galaxies. Though the true value of $\kappa_{d}(\nu)$ is uncertain, as long as dust has similar properties in all galaxies then our relative dust masses will be correct. The uncertainties in the relative dust masses then depend only on errors in $S_{850}$ and $T_{d}$.
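For concreteness, Equation~\ref{eq:dmass} can be evaluated as in the short Python sketch below; the flux, distance and temperature are hypothetical example values rather than entries from our tables.
\begin{verbatim}
# Illustrative evaluation of the dust mass formula, with
# kappa_d = 0.077 m^2 kg^-1 at 850 um.
import numpy as np

h, k, c = 6.626e-34, 1.381e-23, 2.998e8
MPC, MSUN, JY = 3.086e22, 1.989e30, 1e-26   # m, kg, W m^-2 Hz^-1

def planck(nu, T):                          # B(nu, T) in SI units
    return 2.0 * h * nu**3 / c**2 / np.expm1(h * nu / (k * T))

def dust_mass(S850, D, Td, kappa=0.077):
    """S850 in Jy, D in Mpc, Td in K; returns M_d in solar masses."""
    nu = c / 850e-6
    return (S850 * JY) * (D * MPC)**2 / (kappa * planck(nu, Td)) / MSUN

# Hypothetical source: S850 = 60 mJy at D = 80 Mpc with Td = 31.6 K
print(f"log Md = {np.log10(dust_mass(0.060, 80.0, 31.6)):.2f}")
\end{verbatim}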
Values for dust masses (calculated using $T_{d}$ from our isothermal fits) are given in Table~\ref{lumtab}. We find a mean dust mass \mbox{$\bar{M_{d}}=(2.34\pm0.36)\times{10^{7}}$ M$_{\odot}$} (where the $\pm$ error is the error on the mean), which is comparable to that found for the IRS sample (D00). This, together with the fact that for the OS sample we find significantly lower values of $\beta$, poses a number of issues. As shown by DE01, if more than one temperature component is present our use of a single-temperature model will have given us values of $\beta$ which are lower than the true values, biased our $T_{d}$ estimates towards higher temperatures, and led to underestimates of the dust masses.
For those galaxies for which we have made two-component fits (Table~\ref{450tab}) we also calculate the two-component dust mass ($M_{d2}$), using
\begin{equation} \label{eq:dmass2}
M_{d2}=\frac{S_{850}D^2}{\kappa_{d}}\times\left[\frac{N_{c}}{B(\nu,T_{c})}+\frac{N_{w}}{B(\nu,T_{w})}\right]
\end{equation}
where parameters are the same as in Equation~\ref{eq:dmass} and $T_{c}$, $T_{w}$, $N_{c}$ and $N_{w}$ are the fitted two-component parameters as in Equation~\ref{eq:2comp} (and listed in Table~\ref{450tab}). The mean two-component dust mass is found to be \mbox{${\bar M_{d2}}=(4.89\pm1.20)\times{10^{7}}$ M$_{\odot}$}, and the two-component dust masses are typically a factor of 2 higher than found from fitting single-temperature SEDs, though in some cases (such as NGC 99) as much as a factor of 4 higher.
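A corresponding sketch for Equation~\ref{eq:dmass2} is given below. It assumes that $N_{c}$ and $N_{w}$ enter as the fractional contributions of the cold and warm components to $S_{850}$ (computed here from the fitted normalisations of Equation~\ref{eq:2comp}), so that the expression reduces to Equation~\ref{eq:dmass} when one component vanishes; all input values are again placeholders.
\begin{verbatim}
# Illustrative two-component dust mass. Nc and Nw are the fitted
# normalisations from the two-component SED fit (placeholder values).
import numpy as np

h, k, c = 6.626e-34, 1.381e-23, 2.998e8
MPC, MSUN, JY = 3.086e22, 1.989e30, 1e-26

def planck(nu, T):
    return 2.0 * h * nu**3 / c**2 / np.expm1(h * nu / (k * T))

def dust_mass_2comp(S850, D, Tc, Tw, Nc, Nw, kappa=0.077):
    nu = c / 850e-6
    # fraction of the 850 um flux contributed by the cold component
    fc = Nc * planck(nu, Tc) / (Nc * planck(nu, Tc) + Nw * planck(nu, Tw))
    S, D2 = S850 * JY, (D * MPC)**2
    Mc = fc * S * D2 / (kappa * planck(nu, Tc))
    Mw = (1.0 - fc) * S * D2 / (kappa * planck(nu, Tw))
    return (Mc + Mw) / MSUN

# Hypothetical inputs: 60 mJy at 80 Mpc, Tc=20 K, Tw=47 K, Nc/Nw=500
print(f"log Md2 = {np.log10(dust_mass_2comp(0.060, 80.0,
                                            20.0, 47.0,
                                            500.0, 1.0)):.2f}")
\end{verbatim}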
Given the lack of CO measurements for the OS sample galaxies, one potential problem with the above estimates of dust mass is any contribution to the SCUBA 850\hbox{\,$\umu$m } measurements by CO(3-2) line emission. Seaquist et al. (2004) find, for a representative subsample of the IRS SLUGS galaxies from D00, that contamination of 850\hbox{\,$\umu$m } SCUBA fluxes by CO(3-2) reduces the average dust mass by \mbox{25--38\%}, though this does not affect the shape of the dust mass function derived using the IRS SLUGS sample in D00. However, the OS galaxies are relatively faint submillimetre sources compared with the IRS sample. From the fractional contribution of CO(3-2) line emission derived by Seaquist et al. (a linear fit to the plot of SCUBA-equivalent flux produced by the CO line versus SCUBA flux) we estimate that for the OS sample the CO line contribution to the 850\hbox{\,$\umu$m } flux is small and is well within the uncertainties on the 850\hbox{\,$\umu$m } fluxes we give in Table~\ref{fluxtab}.
\begin{figure*}
\begin{center}
\includegraphics[angle=0, width=14cm]{fig5.ps}
\caption{\label{colplot-450}{Colour-colour plot: $S_{60}/S_{450}$ versus $S_{60}/S_{850}$ colours for the optically-selected (this work) and \textit{IRAS}-selected (D00) SLUGS (filled and open points respectively).}}
\end{center}
\end{figure*}
\subsection{Gas masses}
\label{sec:gasmass}
The neutral hydrogen masses listed in Table~\ref{lumtab} were calculated from HI fluxes taken from the literature\footnote{See notes to Table~\ref{lumtab}.} using
\begin{equation}
M_{HI}=2.356\times10^5D^2S_{HI}
\end{equation}
where $D$ is in Mpc and $S_{HI}$ is in Jy km s$^{-1}$.
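As a purely illustrative example (not a galaxy from our sample), a source at \mbox{$D$=100\,Mpc} with \mbox{$S_{HI}$=2\,Jy\,km\,s$^{-1}$} would have \mbox{$M_{HI}=2.356\times10^{5}\times100^{2}\times2\simeq4.7\times10^{9}$\,$M_{\odot}$}, i.e. \mbox{log\,$M_{HI}\simeq9.7$}.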
Only a small handful of objects in the OS sample had CO fluxes in the literature, and so in this work we will not present any molecular gas masses.
\subsection{Far-infrared luminosities}
\label{sec:fir}
The FIR luminosity ($L_{fir}$) is usually calculated using
\[FIR=1.26\times10^{-14}(2.58 S_{60}+S_{100})
\]
and
\begin{equation} \label{eq:fir}
L_{fir}=4\pi D^2\times FIR\times C
\end{equation}
as described in the Appendix of \textit{Catalogued Galaxies and Quasars Observed in the IRAS Survey} (Version 2, 1989), where $S_{60}$ and $S_{100}$ are the 60\hbox{\,$\umu$m } and 100\hbox{\,$\umu$m } \textit{IRAS} fluxes, $D$ is the distance, and $C$ is a colour-correction factor dependent on the ratio $S_{60}/S_{100}$ and the assumed emissivity index. The purpose of this correction factor is to account for emission outside the \textit{IRAS} bands; it is explained by Helou et al. (1988).
However, since we have submillimetre fluxes we can use our derived $T_{d}$ and $\beta$ to integrate the total flux under the SED out to 1000\,\micron. This method gives more accurate values of $L_{fir}$ since it uses the fitted SED of each galaxy rather than a general colour correction. We list in Table~\ref{lumtab} $L_{fir}$ calculated using this method and our fitted isothermal SEDs; $L_{fir}$ values calculated using our two-component SEDs are listed in Table~\ref{450tab}.
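Schematically (and with placeholder parameter values rather than fits from our tables), the integration normalises the fitted grey-body to the measured 850\hbox{\,$\umu$m } flux and integrates it from 40 to 1000\,\micron, as in the following sketch:
\begin{verbatim}
# Illustrative integration of an isothermal SED from 40-1000 um.
import numpy as np

h, k, c = 6.626e-34, 1.381e-23, 2.998e8
MPC, LSUN, JY = 3.086e22, 3.846e26, 1e-26

def sed(nu, Td, beta, S850):
    """Grey-body in W m^-2 Hz^-1, normalised to S850 (Jy) at 850 um."""
    shape = lambda x: x**(3.0 + beta) / np.expm1(h * x / (k * Td))
    return S850 * JY * shape(nu) / shape(c / 850e-6)

nu = np.logspace(np.log10(c / 1000e-6), np.log10(c / 40e-6), 2000)
f = sed(nu, Td=31.6, beta=1.1, S850=0.060)          # placeholder values
FIR = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(nu))  # trapezium rule
D = 80.0 * MPC                                      # hypothetical distance
print(f"log Lfir/Lsun = {np.log10(4 * np.pi * D**2 * FIR / LSUN):.2f}")
\end{verbatim}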
\subsection{Optical luminosities}
\label{sec:optlum}
The blue luminosities given in Table~\ref{lumtab} are converted (using M$_{B\odot}$=5.48) from blue apparent magnitudes taken from the Lyon-Meudon Extragalactic Database (LEDA; Paturel et al. 1989, 2003) which have already been corrected for galactic extinction, internal extinction and k-correction.
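Explicitly, the conversion is the standard one: for a corrected apparent magnitude $m_{B}$ and distance $D$,
\[
\log(L_{B}/L_{\odot})=0.4\left[5.48-m_{B}+5\log_{10}(D/10\,\mathrm{pc})\right].
\]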
\section{The Submillimetre Properties of Galaxies}
\label{properties}
\begin{figure*}
\begin{center}
\subfigure[\label{beta:opt-irs}]{
\includegraphics[angle=0, width=8.7cm]{fig6_a.ps}}
\hfill
\subfigure[\label{temp:opt-irs}]{
\includegraphics[angle=0, width=8.7cm]{fig6_b.ps}}
\hfill
\caption{\label{opt-iras-hist}{Distributions of (a) $\beta$ values and (b) $T_{d}$ values for the optically- and \textit{IRAS}-selected SLUGS (line-filled and shaded histograms respectively).}}
\end{center}
\end{figure*}
\subsection{Optical selection versus IR selection}
\label{prop:ir-opt}
Figures~\ref{colplot} and~\ref{colplot-450} show the OS and IRS galaxies plotted on two-colour diagrams (filled and open symbols respectively). The IRS and OS galaxies clearly have different distributions, and in particular there are OS galaxies in parts of the diagram where there are no IRS galaxies. In Figure~\ref{colplot} $\sim$\,50\% of the OS galaxies are in a region of the colour-colour diagram completely unoccupied by IRS galaxies. This shows there are galaxies `missing' from IR samples, with important implications for the submillimetre LF (Section~\ref{lumfun}).
Figure~\ref{colplot-450} shows the $S_{60}/S_{450}$ versus $S_{60}/S_{850}$ colour-colour plot for the OS sample objects and IRS sample objects which have 450\hbox{\,$\umu$m } fluxes. We confirm the very tight correlation found by DE01 (here the correlation coefficient \mbox{$r_{s}$\,=\,0.96}, \mbox{significance\,=\,9.20e-21}), and the scatter for the OS sample may be completely explained by the uncertainties on the fluxes. Importantly, this relationship holds for all the objects in the OS sample for which we have 450\hbox{\,$\umu$m } fluxes, which include a wide range of galaxy types \mbox{(t-type=0 to 10)} and with $L_{fir}$ ranging over 2 orders of magnitude. The (least-squares) best-fitting line to the \textit{combined} \mbox{OS + IRS} samples shown in Figure~\ref{colplot-450} is given by
\[
\mathrm{log(S_{60}/S_{450})=(1.03\pm0.05)\,log(S_{60}/S_{850})-(0.955\pm0.070)}
\]
(or re-written $S_{60}/S_{450}=0.119(S_{60}/S_{850})^{1.03}$) and is very similar to that found by DE01, confirming the finding for the IRS sample that, within the uncertainties, the ratio $S_{450}/S_{850}$ is constant. DE01 conclude, from the results of simulations of the 450/850\hbox{\,$\umu$m } flux ratio and from the fitted $\beta$ values for those galaxies whose SEDs require a cold component, that $\beta\sim2$ for all galaxies, and that therefore the cold dust component in all galaxies has a similar temperature \mbox{($T_{c}\sim$\,20--21\,K)}. The fact that we also find the $S_{450}/S_{850}$ ratio constant for the OS sample suggests that these conclusions are true for all Hubble types (only \mbox{t-types$<$0} are unrepresented in the OS sub-sample with 450\hbox{\,$\umu$m } data).
The positions of the OS galaxies in the colour diagrams suggest there is more cold dust in the OS galaxies than in the IRS galaxies. We can investigate this further with the results of our spectral fits. Figure~\ref{beta:opt-irs} shows the comparison between the distribution of $\beta$ values (found from the isothermal fits) for the OS and IRS samples. We find OS sample galaxies with $\beta$ values lower than any found in the IRS sample. The two-sided Kolmogorov--Smirnov (K-S) test shows that the distributions of the two samples are significantly different (the probability that the two samples come from the same distribution function is only 1.8e-5). Though this clearly demonstrates that the properties of the dust in the OS and IRS samples are different, we do not interpret this as a physical difference in the emissivity behaviour of the grains ($\beta$); rather, we believe it reflects a difference in the two samples' ratios of cold to warm dust.
Figure~\ref{temp:opt-irs} shows the comparison between the distribution of dust temperatures (from isothermal fits) for the OS and IRS samples. We note that the OS sample has consistently colder $T_{d}$ compared to the IRS sample. Once again using a K-S test we find that the OS and IRS sample dust temperatures do not have the same distribution, with the probability of the two samples coming from the same distribution function being only 1.41e-4.
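The K-S probabilities quoted here are straightforward to compute with standard tools; as a schematic example, the sketch below applies the two-sided test to two placeholder arrays standing in for the fitted $\beta$ (or $T_{d}$) values of the two samples.
\begin{verbatim}
# Two-sided K-S comparison of two fitted-parameter distributions.
# The arrays are random stand-ins, not our actual OS/IRS fit results.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(1)
beta_os = rng.normal(1.1, 0.4, 52)      # stand-in for the OS sample
beta_irs = rng.normal(1.6, 0.3, 104)    # stand-in for the IRS sample
stat, p = ks_2samp(beta_os, beta_irs)
print(f"D = {stat:.2f}, p = {p:.1e}")   # small p: different parents
\end{verbatim}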
For those objects in the OS and IRS samples for which two-component fits were possible, the distributions of the warm and cold component temperatures ($T_{w}$ and $T_{c}$) for the two samples are shown in Figures~\ref{temp-warm} and~\ref{temp-cold} respectively. The distributions of $T_{c}$ for the OS and IRS samples are statistically indistinguishable, whereas the distributions of $T_{w}$ differ (the probability that they are drawn from the same distribution is 0.03). While the mean cold component temperature for the OS sample (\mbox{$\bar T_{c}=20.2\pm0.5$\,K}) is very similar to the value found for the IRS sample (mean \mbox{$\bar T_{c}=20.1\pm0.4$\,K}), the mean warm component temperature is rather higher (\mbox{$\bar T_{w}=47.4\pm2.4$\,K} for the OS sample as opposed to \mbox{$\bar T_{w}=39.3\pm1.4$\,K} for the IRS sample).
Figure~\ref{norm-ratio} shows the distribution of $N_{c}/N_{w}$ for the OS and IRS samples. The OS and IRS samples clearly have different distributions -- the (K-S test) probability of the two samples having the same distribution is 8.4e-4. For the OS sample the mean \mbox{$N_{c}/N_{w}=532\pm172$} (or higher, see Section~\ref{sed-fits}), for the IRS sample the mean \mbox{$N_{c}/N_{w}=38\pm11$}. For the OS sample there is a much larger range of $N_{c}/N_{w}$ than for the IRS sample. Interestingly, few of the OS objects even have a $N_{c}$/$N_{w}$ low enough to fall within the range found for the IRS sample, strongly suggesting a prevalence of cold dust in the OS sample compared to the IRS sample.
\begin{figure}
\begin{center}
\subfigure[\label{temp-warm}]{
\includegraphics[angle=0, width=8.35cm]{fig7_a.ps}}\\
\vfill
\subfigure[\label{temp-cold}]{
\includegraphics[angle=0, width=8.35cm]{fig7_b.ps}}
\vfill
\caption{\label{temp-warm-cold}{Distributions of warm component (a) and cold component (b) temperatures for the OS and IRS samples (line-filled and shaded histograms respectively).}}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[angle=0, width=8.45cm]{fig8.ps}
\caption{\label{norm-ratio}{Distribution of log($N_{c}/N_{w}$) for the OS and IRS samples (line-filled and shaded histograms respectively).}}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[angle=0, width=8.45cm]{fig9.ps}
\caption{\label{norm-lum60}{$N_{c}/N_{w}$ versus 60\hbox{\,$\umu$m } luminosity for the OS and IRS samples (filled and open points respectively).}}
\end{center}
\end{figure}
The large difference between the distributions of $N_{c}/N_{w}$ for the OS and IRS galaxies implies that most OS sample galaxies contain much larger proportions of cold dust relative to warm dust than found for the IRS galaxies, additional evidence that \textit{IRAS} missed a population of cold-dust-dominated objects. The similarity of the temperature of the cold component for the OS and IRS sample and the difference in the distribution of $N_{c}/N_{w}$ supports the current paradigm for dust in galaxies. An alternative model for dust in galaxies would be one in which \textit{IRAS}\/ galaxies are ones in which the general ISRF is more intense, and therefore the majority of dust is hotter. The similarity of $T_{c}$ for the different samples argues against this and suggests that most dust in all galaxies is relatively cold and has a similar temperature. The temperature differences between galaxies arise from a second dust component, presumably the dust in regions of intense star formation. Our results for the OS sample indicate that the ratio of the mass of dust in this second component to the mass of dust in the first component can vary by roughly a factor of 1000. There are two other pieces of evidence in favour of the two-component model. First, the ISO 170\hbox{\,$\umu$m } flux densities that exist for 3 of our two-component-fitted galaxies (Stickel et al. 2004; Section~\ref{sed-fits}) agree very well with our model SEDs (we did not use these data in making our fits, with one exception; see Section~\ref{sed-fits}). Second, the ratio of the mass of cold dust to the mass of warm dust correlates inversely with 60\hbox{\,$\umu$m } luminosity (Figure~\ref{norm-lum60}; \mbox{$r_{s}$\,=\,$-$0.41}, \mbox{significance\,=\,1.24e-2}); in the two-component model one might expect the most luminous \textit{IRAS}\/ sources to be dominated by the warm component.
The difference in the distributions of $T_{w}$ does not, however, fit in with this general picture. In the two-component model one would expect $T_{w}$ and $T_{c}$ to be constants, with the only thing changing between galaxies being the proportion of cold and warm dust. The difference in the distributions of $T_{w}$ may indicate that this model is too simplistic. Two things may be relevant here. First, as can be seen in Figure~\ref{2compSEDfig} it is those OS galaxies with very prominent cold components which typically account for the highest warm component temperatures (for example PGC 35952 or NGC 6090). Second, the model SEDs with high values of $T_{w}$ also generally provide a good fit to the 25\hbox{\,$\umu$m } flux density, whereas the model values with low values of $T_{w}$ tend to underestimate the 25\hbox{\,$\umu$m } flux density. This last point suggests that to fully understand dust in galaxies one cannot ignore the measurements at wavelengths $<$\,60\,\micron; however, if we did include these measurements we would then definitely need more than two dust components. This is clearly demonstrated by Sievers et al. (1994) who, for NGC 3627, fit a three-component model. A two-component model is nonetheless adequate for our purposes, since we are interested in the cold component rather than a third hot component.
\begin{figure}
\begin{center}
\includegraphics[angle=0, width=8.45cm]{fig10.ps}
\caption{\label{type-hist}{Distribution of Hubble types for the OS sample 850\hbox{\,$\umu$m } detections (upper panel) and non-detections (lower panel).}}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[angle=0, width=8cm]{fig11.ps}
\caption{\label{steve-lum-plot}{Cumulative luminosity distributions for early-type galaxies
(solid line) and late-type galaxies (dot-dashed). The
maximum values for both samples are less than one
because of the upper limits that fall below the lowest
actual measurement.}}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[angle=0, width=8.45cm]{fig12.ps}
\caption{\label{850-opt}{850\hbox{\,$\umu$m } luminosity versus optical luminosity $L_{B}$ for the OS sample, with different Hubble types indicated by different symbols: E-S0(t=-5 to 0), Early-type spirals(t=1 to 4), S?(t=5), Late-type spirals(t=6 to 10): circles, triangles, stars, and squares respectively. The 6 detected ellipticals are highlighted as open circles.}}
\end{center}
\end{figure}
\subsection{Submillimetre properties along the Hubble sequence}
\label{prop:hubble}
In this section we investigate the submillimetre properties of galaxies as a function of Hubble type (\textit{t}). We first compare the distributions of Hubble type for the detections (D) and non-detections (ND) in our OS sample (Figure~\ref{type-hist}). We use the K-S test to find that the probability of their having the same distribution is $\simeq2\%$. Thus Figure~\ref{type-hist} suggests that early-type galaxies are less likely to be submillimetre sources than later types.
\begin{figure*}
\begin{center}
\subfigure[\label{bhist-split}]{
\includegraphics[angle=0, width=8.7cm]{fig13_a.ps}}
\subfigure[\label{temp-type-m}]{
\includegraphics[angle=0, width=8.7cm]{fig13_b.ps}}
\caption{\label{type-beta-temp}{Distribution of (a) $\beta$ values and (b) $T_{d}$ values for the OS SLUGS, with different Hubble types (as given in LEDA according to the RC2 code, and listed in Table~\ref{fluxtab}) indicated by different shaded regions: E-S0 (t=-5 to 0), Early-type spirals (t=1 to 4), S? (t=5), Late-type spirals (t=6 to 10). ((b): Inserted panel: mean $T_{d}$ for the OS SLUGS for different Hubble types, with error bars of error on the mean (bins with only 1 source are not plotted)).}}
\end{center}
\end{figure*}
To investigate this apparent morphological
difference further,
we estimated the submillimetre
luminosity distributions of early-
and late-type galaxies. A major complication is the
large number of upper limits. We used the
Kaplan-Meier estimator (Wall \& Jenkins 2003) to incorporate
information from both
the upper limits and the measurements. We defined early-type
galaxies as all those with $t\,\leq\,1$ and late-type
galaxies as those with $t\,>\,1$. We used this division, because
the greatest difference between the cumulative distributions
of Hubble type for detected and non-detected galaxies (Figure~\ref{type-hist}) was found at
t=1. Figure~\ref{steve-lum-plot} shows the cumulative luminosity distributions
estimated in this way for the early-type and late-type galaxies.
There appears to be a tendency for the late-type galaxies
to be more luminous submillimetre sources. However, the tendency
is not very strong. We also used the ASURV
statistical package for censored data (Feigelson \& Nelson 1985) to compare the results for the two samples,
using the Gehan test and
the log-rank test (see Wall \& Jenkins 2003). We found
a marginally significant (10\%) difference using the log-rank
test but no significant difference using the Gehan test.
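For illustration, the Kaplan-Meier estimate for data with upper limits (left-censoring) can be obtained by the usual sign flip to right-censoring; the sketch below (placeholder values, ties ignored) returns the estimated cumulative luminosity distribution.
\begin{verbatim}
# Product-limit (Kaplan-Meier) sketch for luminosities with upper
# limits; the values are illustrative placeholders in log L850.
import numpy as np

det = np.array([21.2, 21.5, 20.9, 21.8, 21.0, 21.4])  # detections
lim = np.array([21.1, 20.8, 21.3])                    # upper limits

# Flip sign: upper limits in L become right-censored points in -L.
x = np.concatenate([-det, -lim])
obs = np.concatenate([np.ones(det.size),
                      np.zeros(lim.size)]).astype(bool)
order = np.argsort(x)
x, obs = x[order], obs[order]

surv, n = 1.0, x.size
for i in range(n):
    if obs[i]:
        surv *= 1.0 - 1.0 / (n - i)   # at-risk set shrinks with i
    # surv now estimates P(logL < -x[i])
    print(f"logL = {-x[i]:.1f}   F = {surv:.2f}")
\end{verbatim}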
Figure~\ref{850-opt} shows a plot of 850\hbox{\,$\umu$m } luminosity versus optical
luminosity. For clarity we simply divide our sample into 4 broad groups based on the galaxies' t-type parameter given in LEDA (which uses the standard numerical codes for the de Vaucouleurs morphological type, as defined in RC2): E-S0 \mbox{(t=-5 to 0)}, Early-type spirals \mbox{(t=1 to 4)}, S? (t=5) and Late-type spirals \mbox{(t=6 to 10)}. The different Hubble types show similar relationships.
On further inspection of the data, the more marked dependence on
Hubble type visible in Figure~\ref{type-hist} appears to be at least partly
caused by the early-type galaxies being observed in worse
conditions. In summary, there appears to be some difference
in submillimetre properties as one moves along the Hubble sequence,
but it is not very strong.
We can also use the results of our spectral fits to investigate whether there are any trends with Hubble type. As above, we simply divide our sample into 4 broad groups based on the galaxies' t-type. Figures~\ref{bhist-split} and \ref{temp-type-m} show the distributions of $\beta$ and $T_{d}$ (derived from our single-component fits) for the OS sample. We note that the objects of each type appear fairly evenly distributed across the bins from \mbox{$\beta$\,=\,0 to 2} (Figure~\ref{bhist-split}), and in order to test this statistically we divide the sample into two broad groups: early types ($-5\leq \textrm{t-type} \leq4$) and late types ($5\leq \textrm{t-type} \leq10$), and perform a K-S test on the two groups. We find that the distributions of the early and late type groups are not significantly different. The distribution of isothermal dust temperatures appears similar for all Hubble types (Figure~\ref{temp-type-m}); we find no significant differences between the early and late types. We also investigated the distributions of the warm and cold component temperatures found from our two-component fits to look for any differences between early and late types; for example Popescu et al. (2002) find a tendency for the temperatures of the cold dust component to become colder for later types. We divided our 18 two-component fitted temperatures into Hubble types as in Popescu et al. (2002) and, due to our smaller number of sources, also into two broad groups of early ($0\leq \textrm{t-type} \leq4$) and late ($6\leq \textrm{t-type} \leq10$) types, and compared the overall distributions and the median $T_{c}$ for each type grouping. We found no differences between either the overall distributions or the median values of $T_{c}$ or $T_{w}$ for the early and late types, though we note the limitations of such a small sample.
\begin{figure}
\begin{center}
\includegraphics[angle=0, width=8.45cm]{fig14.ps}
\caption{\label{d-HI-mass}{Dust mass versus HI mass for the OS and IRS samples (filled and open circles respectively).}}
\end{center}
\end{figure}
\subsection{Ellipticals}
\label{ellipticals}
It was once thought that ellipticals were entirely devoid of dust and gas, but optical absorption studies now show that dust is usually present (Goudfrooij et al. 1994; van Dokkum \& Franx 1995). Furthermore, dust masses for the \mbox{$\sim$\,15\%} of ellipticals detected by \textit{IRAS} (Bregman et al. 1998) have been found to be as much as a factor of 10--100 higher when estimated from their FIR emission compared to estimates from optical absorption (Goudfrooij \& de Jong 1995).
At 850\hbox{\,$\umu$m } we detect 6 ellipticals, from a total of 11 ellipticals in the OS sample, and find them to have dust masses in excess of \mbox{$10^{7}$ $M_{\odot}$}. However, a literature search revealed that for 4 of the 6 detections there are radio sources. We have used the radio data
to estimate the contribution of synchrotron emission at 850\,\micron. These estimates are often very uncertain because of the limited number of flux measurements available between 1.4GHz and 850\hbox{\,$\umu$m } (353GHz). However, in some cases (Section~\ref{maps}) it is clear that some or all of the 850\hbox{\,$\umu$m } emission may be synchrotron radiation. We are currently investigating ellipticals further with SCUBA observations of a larger sample. This will be the subject of a separate paper (Vlahakis et al., in prep.).
\subsection{The relationship between gas and dust}
\label{prop:gas-dust}
In D00 we found that both the mass of atomic gas and the mass of molecular gas are correlated with dust mass, but the correlation is tighter for the molecular gas. There are virtually no CO measurements for the OS sample, so here we have only estimated the mass of atomic gas. We compared the dust mass ($M_{d}$, calculated using dust temperatures from the isothermal fits) to the HI mass for the OS sample (Figure~\ref{d-HI-mass}) and find a very weak correlation. Though the correlation for the OS sample alone is very weak, it is nonetheless consistent with the correlation found by D00 for the IRS sample: most of the OS points lie within the region covered by the IRS points and span the same range of HI masses, though we note that we have no HI masses for the OS sample objects with the highest dust masses. The weakness of the correlation for the OS sample is therefore likely due simply to the small number of OS sample 850\hbox{\,$\umu$m } detections for which we have HI data (28 objects).
The mean neutral gas-to-dust ratio for the OS sample is $M_{HI}/M_{d}$=395$\pm71$, where the error given is the error on the mean. The (neutral $+$ molecular) gas-to-dust ratios for the IRS SLUGS sample and the Devereux \& Young (1990; herein DY90) sample of spiral galaxies are respectively $M_{{H_{2}}+HI}/M_{d}$=581$\pm43$ and $M_{{H_{2}}+HI}/M_{d}$=1080$\pm70$, but since for the OS sample we have no CO measurements and therefore no measure of the mass of molecular hydrogen we can at this stage only compare the neutral gas-to-dust ratio for the OS sample. We therefore compare our OS value to mean neutral gas-to-dust ratios which we calculate, for the IRS sample and the DY90 sample respectively, to be $M_{HI}/M_{d}$=305$\pm24$ and $M_{HI}/M_{d}$=2089$\pm341$. There is a large difference between the values for both SLUGS samples and the value determined by DY90. This is almost certainly due to the fact that the DY90 dust masses were estimated from \textit{IRAS}\/ fluxes and therefore, for the reasons described in Section~\ref{intro}, will have `missed' the cold dust.
There is also a difference between the SLUGS values and the Galactic value of 160 for the (neutral $+$ molecular) gas-to-dust ratio (the value derived from Sodroski et al. (2004) by D00). The neutral gas-to-dust ratios for both the SLUGS samples are at least a factor of 2 larger than this Galactic value, and as shown by D00 when the molecular gas is included the value of the gas-to-dust ratio for the IRS sample is more than 3 times larger than the Galactic value. D00 attribute this discrepancy to a missed `cold dust' component \mbox{($T_{d}\le 20$\,K)} in the IRS sample. We have already noted in this paper that the single-temperature fits lead to dust masses approximately a factor of 2 lower than the more realistic two-component fits (Section~\ref{sec:dmass}). Using the dust masses calculated using our two-component fits ($M_{d2}$; Table~\ref{450tab}), for the 13 galaxies for which there are HI masses we find the mean neutral gas-to-dust ratio for the OS sample is then $M_{HI}/M_{d2}$=192$\pm44$. This is in good agreement with the Galactic value, although if there is a significant amount of molecular gas this value would obviously be higher.
\section{Luminosity and Dust Mass Functions}
\label{lumfun}
The `accessible volume' method (Avni \& Bahcall 1980) will, in principle, produce unbiased estimates of the submillimetre luminosity function (LF) and dust mass function (DMF) provided that no population of galaxies is unrepresented by the sample used to derive the LF and DMF. In Paper I (D00) we produced a first estimate of the LF and DMF from the IRS sample. However, since our new observations of the OS sample have shown the existence of a population of galaxies with low values of the $S_{60}/S_{100}$ and $S_{60}/S_{850}$ flux ratios (Figure~\ref{colplot} and discussion in Section~\ref{sed-fits}) \textit{of which there is not a single representative in the IRS sample}, our earlier estimates of the LF and DMF are likely to be biased.
In this section we use our new (OS sample) results to produce new estimates of the submillimetre LF and DMF.
\subsection{Method}
\label{lumfun:method}
We derive the local submillimetre LF and DMF by two different methods: 1) directly from the OS SLUGS sample, and 2) by extrapolating the spectral energy distributions of the galaxies in the \textit{IRAS} PSCz catalogue out to 850\,\micron. The PSCz catalogue (Saunders et al. 2000) is a complete redshift survey of $\sim$15000 \textit{IRAS}\/ galaxies in the \textit{IRAS}\/ Point Source Catalogue. Serjeant \& Harrison (2005; herein SH05) used the PSCz galaxies and the IRS SLUGS submm:far-IR two-colour relation to extrapolate the SEDs of the PSCz galaxies out to 850\hbox{\,$\umu$m } and produce an 850\hbox{\,$\umu$m } LF. Importantly, this method allows us to probe a wider range of luminosities than probed directly by the SLUGS samples.
We estimate the LF for both methods using
\begin{equation} \label{accvol}
\Phi(L)\Delta L=\sum_{i}\frac{1}{V_{i}}
\end{equation}
(Avni $\&$ Bahcall 1980). Here $\Phi(L)\Delta L$ is the number density of objects (Mpc$^{-3}$) in the luminosity range $L$ to $L+\Delta L$, the summation is over all the objects in the sample lying within this luminosity range, and $V_{i}$ is the accessible volume of the $i$th object in the sample. Throughout we use an $H_{0}$ of \mbox{75 km\,s$^{-1}$Mpc$^{-1}$} and a `concordance' universe with $\Omega_{M}$=0.3 and $\Omega_{\Lambda}$=0.7. We estimate the dust mass function (the space density of galaxies as a function of dust mass) in the same way as the LF, substituting dust mass for luminosity in Equation~\ref{accvol}. The details of these two methods, hereafter referred to as `directly measured' and `PSCz-extrapolated', are discussed in Sections~\ref{method:850LF} and~\ref{method:pscz} respectively.
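In code form, Equation~\ref{accvol} amounts to summing $1/V_{i}$ in luminosity bins; the sketch below uses randomly generated stand-ins for the measured luminosities and accessible volumes, purely to illustrate the bookkeeping.
\begin{verbatim}
# Schematic `accessible volume' (1/Vmax) estimator.
# logL and Vmax are random stand-ins for the measured quantities.
import numpy as np

rng = np.random.default_rng(0)
logL = rng.normal(21.3, 0.45, 52)        # stand-in luminosities
Vmax = 10**rng.uniform(4.5, 6.5, 52)     # stand-in volumes (Mpc^3)

edges = np.linspace(20.6, 22.2, 7)       # luminosity bins (dex)
width = edges[1] - edges[0]
for j in range(edges.size - 1):
    sel = (logL >= edges[j]) & (logL < edges[j + 1])
    w = 1.0 / Vmax[sel]
    phi = w.sum() / width                # Mpc^-3 dex^-1
    err = np.sqrt((w**2).sum()) / width  # Poisson-style error
    mid = 0.5 * (edges[j] + edges[j + 1])
    print(f"{mid:5.2f}  {phi:.2e}  {err:.2e}")
\end{verbatim}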
\begin{table}
\caption{\label{optlf}\small{Directly measured OS SLUGS luminosity and dust mass functions}}
\begin{tabular}{cccc}
\hline
\multicolumn{4}{c}{850\hbox{\,$\umu$m } luminosity function} \\
\smallskip \\
log $L_{850}$ & $\phi$(L) & $\sigma_{\phi}$ & \\
(W\,Hz$^{-1}$sr$^{-1}$) & (Mpc$^{-3}$dex$^{-1}$) & (Mpc$^{-3}$dex$^{-1}$) & \\
\smallskip \\
20.75 & 9.17e-3 & 3.47e-3 & \\
21.01 & 3.83e-3 & 1.15e-3 & \\
21.27 & 2.10e-3 & 6.32e-4 & \\
21.52 & 1.20e-3 & 3.10e-4 & \\
21.78 & 6.03e-4 & 2.46e-4 & \\
22.04 & 9.14e-5 & 5.28e-5 & \\
\medskip \\
$\alpha$ & $L_{\ast}$ & $\phi_{\ast}$ & $\chi^2_{\nu}$ \\
& (W\,Hz$^{-1}$sr$^{-1}$) & (Mpc$^{-3}$dex$^{-1}$) \\
\smallskip \\
$-$1.71$^{+0.60}_{-0.57}$ & $4.96^{+6.1}_{-2.5}\times10^{21}$ & 1.67$^{+5.21}_{-1.18}\times10^{-3}$ & 0.31 \\
\medskip\\
\multicolumn{4}{c}{850\hbox{\,$\umu$m } dust mass function} \\
\smallskip \\
log $M_{d}$ & $\phi$(M) & $\sigma_{\phi}$ & \\
($M_{\odot}$) & (Mpc$^{-3}$dex$^{-1}$) & (Mpc$^{-3}$dex$^{-1}$) & \\
\smallskip \\
6.75 & 9.08e-3 & 3.03e-3 & \\
6.99 & 3.99e-3 & 1.33e-3 &\\
7.23 & 3.09e-3 & 8.57e-4 & \\
7.48 & 9.25e-4 & 3.08e-4 &\\
7.72 & 8.14e-4 & 2.45e-4 &\\
7.96 & 5.69e-5 & 4.02e-5 &\\
\medskip \\
$\alpha$ & $M_{\ast}$ & $\phi_{\ast}$ & $\chi^2_{\nu}$ \\
& ($M_{\odot}$) & (Mpc$^{-3}$dex$^{-1}$) \\
\smallskip \\
$-$1.67$^{+0.24}_{-0.25}$ & 3.09$^{+1.09}_{-0.64}\times10^{7}$ & 3.01$^{+1.62}_{-1.38}\times10^{-3}$ & 1.17 \\
\medskip \\
\hline
\medskip
\end{tabular}
\end{table}
\begin{table}
\caption{\label{PSCZlf}\small{PSCz-extrapolated luminosity function}}
\begin{tabular}{cccc}
\hline
\smallskip \\
log $L_{850}$ & $\phi$(L) & $\sigma_{\phi}^{down}$ & $\sigma_{\phi}^{up}$ \\
(W\,Hz$^{-1}$sr$^{-1}$) & (Mpc$^{-3}$dex$^{-1}$) & \multicolumn{2}{c}{(Mpc$^{-3}$dex$^{-1}$)} \\
\smallskip \\
18.52 & 3.42e-02 & 2.42e-02 & 2.74e-02 \\
18.75 & 6.30e-02 & 2.38e-02 & 2.83e-02 \\
18.99 & 3.90e-02 & 1.62e-02 & 9.45e-03 \\
19.23 & 3.20e-02 & 1.06e-02 & 6.17e-03 \\
19.47 & 2.35e-02 & 3.50e-03 & 7.86e-03 \\
19.70 & 3.08e-02 & 8.14e-03 & 3.42e-03 \\
19.94 & 1.85e-02 & 2.81e-03 & 5.72e-03 \\
20.18 & 1.26e-02 & 1.98e-03 & 1.34e-03 \\
20.42 & 1.16e-02 & 1.14e-03 & 6.74e-04 \\
20.65 & 1.02e-02 & 1.70e-03 & 4.41e-04 \\
20.89 & 6.67e-03 & 1.12e-03 & 8.25e-04 \\
21.13 & 4.30e-03 & 6.77e-04 & 3.01e-04 \\
21.36 & 2.73e-03 & 7.59e-04 & 1.63e-04 \\
21.60 & 1.34e-03 & 4.61e-04 & 1.00e-04 \\
21.84 & 4.43e-04 & 1.75e-04 & 1.36e-04 \\
22.08 & 1.17e-04 & 6.67e-05 & 2.36e-05 \\
22.31 & 1.85e-05 & 7.86e-06 & 1.45e-05 \\
22.55 & 4.27e-06 & 2.64e-06 & 3.81e-07 \\
22.79 & 2.91e-07 & 1.28e-07 & 9.39e-07 \\
23.03 & 9.86e-08 & 5.62e-08 & 3.49e-08 \\
\medskip \\
$\alpha$ & $L_{\ast}$ & $\phi_{\ast}$ & $\chi^2_{\nu}$ \\
& (W\,Hz$^{-1}$sr$^{-1}$) & (Mpc$^{-3}$dex$^{-1}$) \\
\smallskip \\
$-$1.38$^{+0.02}_{-0.03}$ & 3.73$^{+0.29}_{-0.32}\times10^{21}$ & 4.17$^{+0.41}_{-0.45}\times10^{-3}$ & 1.0 \\
\smallskip \\
\hline
\medskip
\end{tabular}
\end{table}
\subsubsection{Directly measured 850\,$\mu$m luminosity function and dust mass function}
\label{method:850LF}
We calculated the directly measured LF and DMF from the 52 objects in the OS sample which were detected at 850\,\micron. For the DMF we use the dust masses listed in Table~\ref{lumtab}, which were calculated using the isothermal SED-fitted temperatures or, where no fit was made (11 objects), using a dust temperature of 20\,K. For the OS sample the accessible volume is the maximum volume in which the object would still be detected at 850\hbox{\,$\umu$m } and still be included in the CfA sample. Since objects with \mbox{$cz<1900$\,km\,s$^{-1}$} were excluded from our sample, this volume is not included in our calculation of $V_{i}$. When calculating the maximum redshift at which an object would still be detected at 850\hbox{\,$\umu$m } we used the noise appropriate for the observation of that object. We corrected the LF by the factor 97/81 to account for the CfA galaxies we did not observe at all at 850\hbox{\,$\umu$m } (Section~\ref{sample}).
The corrected directly measured 850\hbox{\,$\umu$m } LF and DMF are shown as star symbols in Figures~\ref{lumfun-plot} and~\ref{dmfun} respectively, and are given in tabular form in Table~\ref{optlf}. The errors on the directly measured LF and DMF are standard Poisson errors. Our estimates of the LF and DMF may be slight underestimates, since we noticed that the OS galaxies not detected at 850\hbox{\,$\umu$m } were generally observed in worse weather conditions than the sources that were detected.
\subsubsection{\textit{IRAS} PSCz-extrapolated 850\,$\mu$m luminosity function and dust mass function}
\label{method:pscz}
\begin{figure*}
\begin{center}
\includegraphics[angle=270, width=13cm]{fig15.ps}
\caption{\label{lumfun-plot}{PSCz-extrapolated 850\hbox{\,$\umu$m } luminosity function (filled circles) with best-fitting Schechter function (solid line). The parameters for the Schechter function are $\alpha=-1.38$, $L_{\ast}=3.7\times10^{21}$ W\,Hz$^{-1}$sr$^{-1}$. Also shown are the directly measured 850\hbox{\,$\umu$m } luminosity function for the OS SLUGS sample (filled stars) with best-fitting Schechter function (dashed line) and the results for the IRS SLUGS sample from Dunne et al. (2000) (open triangles and dotted line).}}
\end{center}
\end{figure*}
\begin{figure*}
\begin{center}
\subfigure[]{\label{dmfun-a}
\includegraphics[angle=270, width=13cm]{fig16_a.ps}}\\
\subfigure[]{
\includegraphics[angle=270, width=13cm]{fig16_b.ps}}
\caption{\label{dmfun}{(a) PSCz-extrapolated dust mass function (filled circles) with best-fitting Schechter function (solid line). The dust masses were calculated using $T_{d}$ derived from the \textit{IRAS} 100/60 colour and $\beta$=2. The parameters for the Schechter function are $\alpha=-1.34$, $M_{\ast}=2.7\times10^7 M_{\odot}$. The dashed line and open circles are for a `cold dust mass function' in which dust masses are calculated using $T_{d}$=20\,K and $\beta$=2. The best-fitting Schechter parameters are $\alpha=-1.39, M_{\ast}=5.3\times10^7 M_{\odot}$.
(b) Directly measured dust mass function for the OS SLUGS sample (filled stars) with best-fitting Schechter function (dashed line). The dust masses were calculated using $T_{d}$ from isothermal SED fitting. Also shown are the results for the IRS SLUGS sample from Dunne et al. (2000) (open triangles and dotted line). The filled circles and solid line show the PSCz-extrapolated dust mass function as in (a).}}
\end{center}
\end{figure*}
In order to better constrain the LF at the lower luminosity end, more data points are needed, probing a wider range of luminosities than probed directly by the SLUGS samples. We achieve this using a method described by SH05, whereby the 850\hbox{\,$\umu$m } LF is determined by extrapolating the spectral energy distributions of the $\sim$15000 \textit{IRAS} PSCz survey galaxies (Saunders et al. 2000) out to 850\,\micron. Since for the two SLUGS samples we find a strong correlation between the $S_{60}/S_{100}$ and $S_{60}/S_{850}$ colours (Figure~\ref{colplot}), we can use a linear fit to this colour-colour relation to make the extrapolation from 60\hbox{\,$\umu$m } to 850\hbox{\,$\umu$m } flux density. SH05 derived the submm:far-IR two-colour relationship from the IRS SLUGS sample. However, we have shown in this paper that the OS and IRS samples have quite different properties. In order to determine the sensitivity of the LF/DMF to the colour relationship we have derived colour relationships for the combined \mbox{OS + IRS} sample, the OS sample alone, and the IRS sample alone (Table~\ref{colplot-params}).
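Schematically, the extrapolation inverts the fitted relation \mbox{log($S_{60}/S_{100}$)\,=\,$m$\,log($S_{60}/S_{850}$)\,+\,$c$} to predict $S_{850}$ from the \textit{IRAS} fluxes, as in the sketch below (which uses the combined \mbox{OS + IRS} parameters from Table~\ref{colplot-params}; the input fluxes are placeholders).
\begin{verbatim}
# Predict S850 from the IRAS 60 and 100 um fluxes via the fitted
# colour relation log(S60/S100) = m*log(S60/S850) + c.
import numpy as np

m, c0 = 0.365, -0.881                 # combined OS+IRS fit

def predict_s850(s60, s100):
    log_60_850 = (np.log10(s60 / s100) - c0) / m
    return s60 / 10**log_60_850

print(f"S850 = {predict_s850(1.2, 2.5):.3f} Jy")  # placeholder fluxes
\end{verbatim}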
\begin{table}
\caption{\label{colplot-params}\small{Linear fit parameters for the SLUGS colour-colour plot (log($S_{60}/S_{100}$) vs log($S_{60}/S_{850}$)) shown in Figure~\ref{colplot}.}}
\begin{tabular}{ccc}
\hline
SLUGS data fitted & \multicolumn{2}{c}{linear fit (y=mx+c)} \\
& m & c \\
\hline
OPT+\textit{IRAS} & $0.365\pm0.014$ & $-0.881\pm0.024$\\
OPT & $0.296\pm0.031$ & $-0.797\pm0.039$\\
\textit{IRAS} & $0.421\pm0.023$ & $-0.981\pm0.042$\\
\hline
\end{tabular}
\end{table}
In order to produce unbiased estimates of the LF and DMF we have excluded some PSCz galaxies. Firstly, we exclude all those objects that do not have redshifts, those that have velocities \mbox{$<300$\,km\,s$^{-1}$} (to ensure that peculiar velocities are unimportant), and those that have redshifts $>0.2$ (thus excluding any ultra-luminous, high-redshift objects). We then exclude all objects with upper limits at 100\,\micron, since for these objects we cannot apply the SH05 method. Finally, we use the \textit{IRAS} Point Source Catalogue flags, as listed in the PSCz catalogue, to exclude sources which are likely to be either solely or strongly contaminated by Galactic cirrus. It is important to exclude these sources because they are very cold sources and so potentially can have a large effect on the 850\hbox{\,$\umu$m } LF. If two or more of the flags indicate Galactic cirrus (using flag value limits indicated in the \textit{IRAS} Explanatory Supplement) we exclude that object. As a check on the validity of this method we inspected by eye (using the IRSA ISSA Image Server) a sample of $\sim$40 objects randomly chosen from those excluded as Galactic cirrus, and a further sample of $\sim$40 objects randomly chosen from those that made it into our final sample. We found that 98\% of the sources with cirrus flags and 7\% of the sources without cirrus flags showed signs of significant cirrus, although for two thirds of the sources with cirrus flags there still appeared to be a genuine source present. In total, from the $\sim$14500 galaxies with redshifts in the \textit{IRAS} PSCz catalogue we exclude $\sim$4300 objects because of either 100\hbox{\,$\umu$m } upper limits or Galactic cirrus. This leaves 10252 galaxies in our PSCz-selected sample.
\begin{table}
\caption{\label{PSCZdmf}\small{PSCz-extrapolated dustmass functions}}
\begin{tabular}{cccc}
\hline
\multicolumn{4}{c}{PSCz-extrapolated single temperature dustmass function}
\smallskip \\
log $M_{d}$ & $\phi$(M) & $\sigma_{\phi}^{down}$ & $\sigma_{\phi}^{up}$ \\
($M_{\odot}$) & (Mpc$^{-3}$dex$^{-1}$) & \multicolumn{2}{c}{(Mpc$^{-3}$dex$^{-1}$)} \\
\smallskip \\
4.30 & 3.26e-02 & 2.30e-02 & 2.30e-02 \\
4.55 & 1.62e-02 & 7.26e-03 & 4.37e-02 \\
4.80 & 6.78e-02 & 4.22e-02 & 1.70e-02 \\
5.05 & 3.11e-02 & 8.74e-03 & 9.10e-03 \\
5.30 & 2.58e-02 & 6.36e-03 & 3.77e-03 \\
5.54 & 2.36e-02 & 5.72e-03 & 8.87e-03 \\
5.79 & 2.68e-02 & 8.69e-03 & 2.48e-03 \\
6.04 & 1.46e-02 & 1.62e-03 & 3.60e-03 \\
6.29 & 1.19e-02 & 2.41e-03 & 6.62e-04 \\
6.54 & 9.29e-03 & 3.93e-04 & 8.97e-04 \\
6.79 & 7.68e-03 & 1.68e-03 & 2.53e-04 \\
7.04 & 4.67e-03 & 8.33e-04 & 3.80e-04 \\
7.29 & 2.80e-03 & 7.87e-04 & 1.03e-04 \\
7.54 & 1.38e-03 & 4.83e-04 & 1.94e-04 \\
7.79 & 4.51e-04 & 2.12e-04 & 1.87e-04 \\
8.04 & 1.06e-04 & 6.10e-05 & 3.12e-05 \\
8.29 & 1.44e-05 & 7.21e-06 & 1.48e-05 \\
8.54 & 3.10e-06 & 2.22e-06 & 1.18e-06 \\
8.79 & 4.82e-07 & 4.21e-07 & 5.94e-07 \\
9.04 & 5.24e-08 & 5.64e-08 & 5.24e-08 \\
\medskip \\
$\alpha$ & $M_{\ast}$ & $\phi_{\ast}$ & $\chi^2_{\nu}$ \\
& ($M_{\odot}$) & (Mpc$^{-3}$dex$^{-1}$) \\
\smallskip \\
$-$1.34$^{+0.13}_{-0.08}$ & 2.74$^{+1.23}_{-1.13}\times10^{7}$ & 5.16$^{+3.90}_{-1.74}\times10^{-3}$ & 0.65 \\
\medskip \\
\multicolumn{4}{c}{PSCz-extrapolated 20K `cold' dustmass function}
\smallskip \\
log $M_{d}$ & $\phi$(M) & $\sigma_{\phi}^{down}$ & $\sigma_{\phi}^{up}$ \\
\smallskip \\
4.65 & 3.41e-02 & 2.41e-02 & 2.76e-02 \\
4.88 & 6.28e-02 & 2.37e-02 & 2.82e-02 \\
5.12 & 3.88e-02 & 1.61e-02 & 9.42e-03 \\
5.36 & 3.20e-02 & 1.06e-02 & 6.16e-03 \\
5.60 & 2.59e-02 & 4.59e-03 & 6.08e-03 \\
5.84 & 2.86e-02 & 4.67e-03 & 3.24e-03 \\
6.07 & 1.85e-02 & 2.60e-03 & 4.39e-03 \\
6.31 & 1.29e-02 & 1.82e-03 & 1.13e-03 \\
6.55 & 1.17e-02 & 1.11e-03 & 6.77e-04 \\
6.79 & 1.02e-02 & 1.58e-03 & 4.37e-04 \\
7.03 & 6.75e-03 & 1.21e-03 & 5.96e-04 \\
7.26 & 4.29e-03 & 6.15e-04 & 4.09e-04 \\
7.50 & 2.79e-03 & 7.83e-04 & 1.07e-04 \\
7.74 & 1.35e-03 & 4.65e-04 & 1.07e-04 \\
7.98 & 4.54e-04 & 1.81e-04 & 1.37e-04 \\
8.22 & 1.19e-04 & 6.64e-05 & 2.08e-05 \\
8.45 & 1.91e-05 & 7.57e-06 & 1.47e-05 \\
8.69 & 4.56e-06 & 3.07e-06 & 3.27e-07 \\
8.93 & 2.87e-07 & 9.82e-08 & 9.94e-07 \\
9.17 & 1.03e-07 & 5.48e-08 & 3.26e-08 \\
\medskip \\
$\alpha$ & $M_{\ast}$ & $\phi_{\ast}$ & $\chi^2_{\nu}$ \\
& ($M_{\odot}$) & (Mpc$^{-3}$dex$^{-1}$) \\
\smallskip \\
$-$1.39$^{+0.03}_{-0.02}$ & 5.28$^{+0.45}_{-0.55}\times10^{7}$ & 4.04$^{+0.74}_{-0.50}\times10^{-3}$ & 1.28 \\
\medskip \\
\hline
\end{tabular}
\end{table}
For the PSCz-extrapolated sample the accessible volume is the maximum volume in which the object could still be seen and still be included in the \textit{IRAS} PSCz catalogue. Since objects with \mbox{$cz<300$\,km\,s$^{-1}$} were excluded from our sample this volume is not included in our calculation of $V_{i}$. For the PSCz-extrapolated DMF the dust masses were calculated using $T_{d}$ derived from the \textit{IRAS} 100/60 colour and $\beta$=2.
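As an illustration of this step, the sketch below inverts the 100/60 colour for $T_{d}$ assuming a single modified blackbody with $\beta$=2, and then evaluates \mbox{$M_{d}=S_{850}D^{2}/[\kappa_{850}B(\nu_{850},T_{d})]$}; the dust opacity and the example flux and distance are assumptions chosen only for illustration.
\begin{verbatim}
import math

H, K, C = 6.626e-34, 1.381e-23, 2.998e8       # SI constants
NU60, NU100, NU850 = C/60e-6, C/100e-6, C/850e-6
BETA = 2.0

def planck(nu, T):
    return 2*H*nu**3/C**2 / (math.exp(H*nu/(K*T)) - 1.0)

def colour(T):
    """Model S60/S100 for a beta=2 modified blackbody at temperature T."""
    return (NU60/NU100)**BETA * planck(NU60, T) / planck(NU100, T)

def t_dust(s60_over_s100, lo=10.0, hi=100.0):
    """Invert the 60/100 colour for T_d by bisection (colour rises with T)."""
    for _ in range(60):
        mid = 0.5*(lo + hi)
        lo, hi = (mid, hi) if colour(mid) < s60_over_s100 else (lo, mid)
    return 0.5*(lo + hi)

def dust_mass(s850_jy, d_mpc, T, kappa850=0.077):
    """M_d = S850 D^2 / (kappa * B(nu850, T)) in solar masses."""
    S, D = s850_jy*1e-26, d_mpc*3.086e22      # Jy -> SI, Mpc -> m
    return S*D**2 / (kappa850*planck(NU850, T)) / 1.989e30

T = t_dust(0.5)                               # e.g. S60/S100 = 0.5 -> ~30 K
print(round(T, 1), "%.2e" % dust_mass(0.13, 50.0, T))
\end{verbatim}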
For completeness, the effect of excluding real 60\hbox{\,$\umu$m } sources must be taken into account by applying a correction factor to the LF and DMF. This correction factor will be uncertain, since some excluded sources will be real and some not. Therefore we correct using our best estimate of real sources as follows. We corrected for two thirds of the sources we excluded as being contaminated by cirrus. The appropriate correction factor for the sources that were excluded because they have 100\hbox{\,$\umu$m } upper limits is even more uncertain. These are probably all genuine sources, but they will generally have warmer colours than the sources that were not excluded. We arbitrarily corrected for 50\% of these. Including the correction for $\sim$100 sources without redshifts, the final correction factor for excluded sources is 1.27. This is obviously very uncertain; however, at most it could be 1.43 and at least it could be 1.00. This produces maximum errors of $+$13\% and $-$21\% on the LF and DMF in addition to the errors described below. We made a correction for evolution out to z=0.2 using a density evolution $\propto (1+z)^{7}$ (Saunders et al. 1990). We confirmed that the strength assumed for the evolution made virtually no difference to our results.
The PSCz-extrapolated 850\hbox{\,$\umu$m } LF and DMF are shown as filled circles in Figures~\ref{lumfun-plot} and~\ref{dmfun} respectively, and are given in tabular form in Tables~\ref{PSCZlf} and~\ref{PSCZdmf}. For comparison we also produce a `cold' PSCz-extrapolated DMF, produced as above but with dust masses calculated using $T_{d}$=20K and $\beta$=2; this is shown as open circles in Figure~\ref{dmfun-a} and listed in Table~\ref{PSCZdmf}.
While the errors on the directly measured LF and DMF are standard Poisson errors, the errors on the PSCz-extrapolated LF and DMF are derived from a combination of Poisson errors and the errors resulting from the fact that the 850\hbox{\,$\umu$m } luminosities have been derived using the best-fitting linear relation to our SLUGS colour-colour plot (Figure~\ref{colplot}). In order to take into account how our `choice' of linear fit affects the LF we produce, we additionally generate two `extremes' of the PSCz-extrapolated LF and DMF using two alternative fits to the SLUGS colour-colour plot: 1) a fit to the OS data only, and 2) a fit to the IRS data only (linear fit parameters listed in Table~\ref{colplot-params}). We then use the maximum difference between these `extreme' LF values and our actual PSCz-extrapolated LF data points as the errors on our LF due to our `choice' of colour-colour linear relation. We then also take into account the number statistics, and thus add in quadrature the standard Poisson errors and the `choice of colour-colour fit' errors to obtain our total errors listed in Tables~\ref{PSCZlf} and~\ref{PSCZdmf}. In addition to these errors there are, at most, upper and lower errors of +13\% and $-$21\% from our choice of correction factors.
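In code, the error combination described above amounts to nothing more than the following (a trivial sketch; the variable names are ours):
\begin{verbatim}
import math

def total_error(poisson, phi_extreme_os, phi_extreme_irs, phi_actual):
    """Add in quadrature the Poisson error and the maximum deviation of
    the two 'extreme' (OS-only and IRS-only colour fit) LF values."""
    fit_err = max(abs(phi_extreme_os - phi_actual),
                  abs(phi_extreme_irs - phi_actual))
    return math.sqrt(poisson**2 + fit_err**2)
\end{verbatim}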
\subsection{Results and discussion}
\label{lumfun:results}
The directly measured OS LF and PSCz-extrapolated LF agree remarkably well over the range of luminosities covered by the SLUGS samples, yet we find that in comparison the IRS sample of D00 (plotted as triangles in Figures~\ref{lumfun-plot} and~\ref{dmfun}) consistently underestimates the submillimetre LF by a factor of 2 and the DMF by a factor of 4. The fact that we see this underestimate compared to our OS sample, which by definition should be free from any dust temperature selection effects, is strong evidence that a population of `cold' dusty galaxies was indeed `missed' by \textit{IRAS} and that therefore the IRS sample was missing $\sim$half the galaxies. The bigger difference between the DMFs is probably due to the fact that, unlike the IRS sample, for the OS sample we do not have fitted isothermal SEDs for all galaxies and therefore have calculated dust masses using an assumed $T_{d}$=20K for $\sim$20\% of the sample (Section~\ref{method:850LF}).
We fit both the directly measured and PSCz-extrapolated 850\hbox{\,$\umu$m } LFs and DMFs with Schechter functions of the form
\[
\Phi(L)dL=\phi_{\ast}\left(\frac{L}{L_{\ast}}\right)^\alpha e^{-(L/L_{\ast})} dL/L_{\ast}
\]
(Press \& Schechter 1974; Schechter 1976).
The best-fitting parameters for the PSCz-extrapolated 850\hbox{\,$\umu$m } LF and DMF are listed in Tables~\ref{PSCZlf} and~\ref{PSCZdmf} respectively, along with the reduced chi-squared values ($\chi^{2}_{\nu}$) for the fits; likewise best-fitting parameters for the directly measured 850\hbox{\,$\umu$m } LF and DMF are shown in Table~\ref{optlf}.
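As an illustration of such a fit, the minimal sketch below uses a few representative bins from Table~\ref{PSCZdmf} with symmetrised errors; whether the $\ln 10$ factor is absorbed into $\phi_{\ast}$ is a convention choice, so the recovered normalisation should be compared with the tabulated value with care.
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

# Representative bins of the PSCz-extrapolated single-temperature DMF
logM = np.array([5.30, 6.04, 6.79, 7.29, 7.79, 8.29, 8.79])
phi  = np.array([2.58e-2, 1.46e-2, 7.68e-3, 2.80e-3,
                 4.51e-4, 1.44e-5, 4.82e-7])
err  = np.array([5.1e-3, 2.6e-3, 9.7e-4, 4.5e-4, 2.0e-4, 1.1e-5, 5.1e-7])

def schechter_dex(logM, alpha, logMstar, phistar):
    # Phi(M)dM = phistar (M/M*)^alpha e^(-M/M*) dM/M*, recast per dex
    x = 10.0**(logM - logMstar)
    return np.log(10.0) * phistar * x**(alpha + 1.0) * np.exp(-x)

p, _ = curve_fit(schechter_dex, logM, phi, sigma=err,
                 p0=(-1.3, np.log10(2.7e7), 5e-3), absolute_sigma=True)
print(p)    # alpha, log10(M*), phi*
\end{verbatim}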
We find that both the directly measured and PSCz-extrapolated LFs and DMFs are well-fitted by Schechter functions. For the PSCz-extrapolated LF and DMF the best-fitting Schechter function \mbox{($\alpha$\,=\,$-$1.38)} fits the data points extremely well across most of the luminosity range -- however, we note that the PSCz-extrapolated functions are much less well fitted at the high luminosity end. Investigation of the 3 or 4 high end luminosity bins has found several anomalies for the objects in these bins, the most striking of which is the fact that in each bin there is typically a small number of objects with accessible volumes 2 or 3 orders of magnitude lower than the rest of the objects in that bin, and thus it is these few objects in each of these bins which are the main contributors to the high space density.
There are many possible explanations for the excess at the high luminosity end. One possible explanation could be that the objects in these bins are multiple systems. At larger distances \textit{IRAS}\/ galaxies are mostly very luminous starbursts and are frequently in interacting pairs. The density of galaxy pairs at these distances might be substantially higher than the local galaxy density, which may produce an excess in the LF at high luminosities. Several authors find this excess at high luminosities or high masses. For example Lawrence et al. (1999) find a similar excess in their 60\hbox{\,$\umu$m } LF, as do Garcia-Appadoo, Disney \& West (in preparation) for their HI Mass Function, who find that the higher HI masses are typically multiple systems. One can also think of ways our use of a global colour-colour relation might have produced a spurious excess if, for example, the galaxies at the highest luminosities have systematically different colours. This would not, however, explain the excess seen in the 60\hbox{\,$\umu$m } LF.
In our earlier work the 850\hbox{\,$\umu$m } LF derived from the IRS sample (D00) was found to have a slope steeper than $-$2 at the low luminosity end, suggesting that the submillimetre sky should be infinitely bright (a submillimetre `Olbers' Paradox'). Using the OS sample we find the slope of the PSCz-extrapolated 850\hbox{\,$\umu$m } LF is $-$1.38, showing that the LF does flatten out at luminosities lower than those probed by the IRS sample, thus solving the submillimetre `Olbers' Paradox'.
\section{Conclusions}
\label{conc}
Following our previous SCUBA survey of an \textit{IRAS}-selected sample of galaxies we have carried out the first systematic survey of the local submillimetre Universe free from dust temperature selection effects -- a submillimetre survey of a sample of 81 galaxies selected from the CfA optical redshift survey. We obtained the following results:
(i) We detected 52 out of 81 galaxies at 850\hbox{\,$\umu$m } and 19 galaxies at 450\,\micron. Many of these galaxies have 850\hbox{\,$\umu$m } emission which appears extended with respect to the DSS optical emission, and which appears to correspond to very faint optical features.
(ii) We fitted two-component dust spectral energy distributions to the 60, 100, 450 and 850\hbox{\,$\umu$m } flux densities for 18 of the galaxies which were detected at 850\hbox{\,$\umu$m } \textit{and} at 450\,\micron. We find that the \textit{IRAS}\/ and submillimetre fluxes are well-fitted by a two-component dust model with dust emissivity index $\beta$=2. The tight and fairly constant ratio of $S_{450}/S_{850}$ for both the OS galaxies and the IRS galaxies is evidence that $\beta\approx 2$. The temperatures of the warm component range from 28 to 59\,K; the cold component temperatures range from 17 to 24\,K.
(iii) We find the ratio of the mass of cold dust to the mass of warm dust is much higher for our optically-selected galaxies than for our previous work on \textit{IRAS}-selected galaxies (DE01), and can reach values of $\sim$1000. By comparing the results for the \textit{IRAS}- and optically-selected samples we show that there is a population of galaxies containing a large proportion of cold dust that is unrepresented in the \textit{IRAS}\/ sample.
(iv) We also fitted single-temperature dust spectral energy distributions (to the 60, 100 and 850\hbox{\,$\umu$m } flux densities) for the 41 galaxies in the OS sample with detections in all 3 wavebands. The mean best-fitting temperature for the sample is $\bar{T}_{d}=31.6\pm0.6$K and the mean dust emissivity index is $\bar{\beta}=1.12\pm0.05$. These values are significantly lower than for the IRS sample. The very low value of $\beta$ is additional evidence that galaxies, across all Hubble types, contain a significant amount of cold dust.
(v) Using our isothermal fits we find a mean dust mass \mbox{$\bar{M_{d}}=(2.34\pm0.36)\times{10^{7}}$ M$_{\odot}$}, which is comparable to that found for the IRS sample. However, using our two-component fits we find a mean dust mass a factor of two higher.
(vi) We find little change in the properties of dust in galaxies along the Hubble sequence, except a marginally significant trend for early-type galaxies to be less luminous submillimetre sources than late-types.
(vii) We detect 6 out of 11 ellipticals in the sample and find them to have dust masses in excess of \mbox{$10^{7}$ $M_{\odot}$}. It is possible, however, that for some of these galaxies the submillimetre emission may be synchrotron emission rather than dust emission.
(viii) We have derived local submillimetre luminosity and dust mass functions, both directly from the optically-selected SLUGS sample and by extrapolation from the \textit{IRAS} PSCz survey, and find excellent agreement between the two. By extrapolating the spectral energy distributions of the \textit{IRAS} PSCz survey galaxies out to 850\hbox{\,$\umu$m } we have probed a wider range of luminosities than probed directly by the SLUGS samples. We find the LFs to be well-fitted by Schechter functions except at the highest luminosities. We have shown that, whereas the slope of the \textit{IRAS}-selected LF at low luminosities was steeper than $-$2 (a submillimetre `Olbers' Paradox'), the PSCz-extrapolated LF, as expected, flattens out at the low luminosity end and has a slope of \mbox{$-$1.38}.
(ix) We find that as a consequence of the omission of a population of `cold' dusty galaxies from the \textit{IRAS}\/ sample the LF presented in our earlier work (D00) is too low by a factor of 2, and the DMF by a factor of 4.
In order to further investigate the properties of dust in galaxies follow-up optical imaging (to obtain deeper images than available from the DSS) for the whole OS sample detected at 850\hbox{\,$\umu$m } is needed, in order to make a full comparison of the optical versus submillimetre emission. This is important since for many of the OS sample galaxies the 850\hbox{\,$\umu$m } emission appears extended with respect to the DSS optical emission. Work on obtaining this data is in progress.
\section*{Acknowledgements}
We thank Diego Garcia-Appadoo for providing information about his HI Mass Function, and Jonathan Davies for his useful comments. We also thank Steve Serjeant for useful discussions. Many of the observations for this survey were carried out as part of the JCMT service programme, so we are grateful to Dave Clements, Rob Ivison and the many other observers and members of the JCMT staff who have contributed to this project in this way. This research has made use of the NASA/IPAC Extragalactic Database (NED) and the NASA/IPAC Infrared Science Archive which are operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration. We have also made use of the LEDA and DSS databases. Research by LD and SE is supported by the Particle Physics and Astronomy Research Council.
\section{Introduction}
Accreting neutron stars (NSs) often show nearly coherent
modulations during type I X-ray bursts \citep[see reviews of][]{bil98,sb03}
called burst oscillations \citep[][and references therein]{mun01}.
The high temporal stability of each NS's characteristic frequency
\citep[within 1 part in $10^3$ over years,][]{mun02}, along with burst
oscillations seen from two accreting millisecond pulsars at their spin
frequencies \citep{cha03,str03} have led many to conclude that burst
oscillations are a modulation at the NS spin frequency. Nevertheless,
it has been a long standing mystery as to what creates the surface
asymmetry in the burst tail, long after any hot spots from the burst
ignition should have spread over the entire star \citep{bil95,slu02}.
A recently developed and promising hypothesis is that burst
oscillations are surface {\it r}-modes \citep{hey04}. In this picture
the oscillations are created by a retrograde mode with an observed
frequency just below the NS spin. As the star cools in the burst
tail the mode replicates the observed rising frequency. Current
theoretical work has focused on calculating the frequencies
\citep{lee04,pb05b} and flux perturbations \citep{hey05,ls05} expected
for such modes. Unfortunately, besides the highly sinusoidal nature
of burst oscillations \citep{moc02}, which is expected for modes, there
is little direct evidence that modes are the correct explanation.
\citet{pb05b} addressed this issue by calculating how the mode's
properties would manifest themselves in the burst oscillations. First, they
showed that higher persistent luminosity NSs should exhibit smaller
frequency drifts, consistent with current observations. Second, they
hypothesized that additional modes might be present
with such large frequency drifts that they are difficult to detect.
Though these are promising steps, both predictions are directly tied
to Piro \& Bildsten's (2005b) model that the burst oscillations are a
surface wave changing into a crustal interface wave \citep{pb05a}.
What is needed is a complementary, and more general, argument
of how a surface mode should exhibit itself, independent of the specific
model invoked. That is the goal of this present study.
A key characteristic of burst oscillations is a larger pulsed fraction
at increasing energies in the range of $2-23\ {\rm keV}$
\citep[][hereafter MOC03]{moc03}. This is distinct from other pulsing
NSs. For example, accretion-powered millisecond X-ray pulsars
decrease (marginally) in amplitude between $2-10\ {\rm keV}$
\citep{cmt98,gal02,pg03}. MOC03 modeled the burst oscillations
as a hot spot on the NS surface and nicely reproduced the
burst oscillation energy dependence. Given the promising possibility
that the oscillation may be a mode, we perform a calculation of the
pulsed amplitude as a function of energy for a nonradial oscillation.
Similar studies have recently been completed by \citet{hey05} and
\citet{ls05}. Our work differs from these in that we are primarily
interested in comparisons with observations, and thus do not perform
an exhaustive parameter survey. We also include analytic estimates
to explain our numerical calculations.
A primary difficulty with comparisons between theory and observations
is that the mode amplitudes are not predicted from linear mode calculations.
To overcome this complication we follow the example of pulsating white
dwarf (WD) studies \citep{kep00} and only fit the {\it shape} of the energy
dependence, leaving the overall amplitude as a free parameter. Unfortunately,
bursting NSs have a drawback with respect to pulsating WDs in that
the limb darkening of their bursting envelope is largely independent of
photon energy above $\approx1\ {\rm keV}$ \citep{mad91}.
This prevents constraining
NS properties with burst oscillation data because the inclination,
NS mass, $M$, and radius, $R$, and even the mode's angular eigenfunction only
alter the pulsed amplitude normalization as we show in
\S \ref{sec:comparison}. For this reason, full integrations of the pulsed
NS emission are well replicated by an analytic result
that only depends on the NS surface color
temperature, $T_c$ (eq. [\ref{eq:result}]).
We compare our result with the observed energy dependence of burst
oscillations, finding agreement for $k_{\rm B}T_c\approx2-3\ {\rm keV}$, as
expected for NSs during X-ray bursts. This suggests that
the burst oscillations are due to a nonradial mode, independent of the mode's
identification. The excitation and nonlinear evolution
of the mode is of upmost importance if we are to infer NS attributes from
burst oscillations.
In \S \ref{sec:theory} we calculate the energy dependence
of a mode's amplitude and compare it with X-ray
burst oscillations in \S \ref{sec:observations}. In \S \ref{sec:sn} we investigate
the optimal photon energy ranges for detection of burst oscillations.
We conclude in \S \ref{sec:conclusion}
with a summary of our results and note the importance of
measuring the energy dependence of burst oscillations from an
accreting millisecond pulsar.
\section{The Energy Dependence of a Mode's Amplitude}
\label{sec:theory}
We first describe our procedure for calculating the pulsed amplitude
of a surface mode as a function of energy. This
follows previous studies of pulsed NS emission \citep[for example,][]{pg03},
but is included to provide context for our analytic results in
\S \ref{sec:analytic}. In \S \ref{sec:comparison}
we compare the analytics with numerical integrations.
\subsection{Equations for Calculating the Pulsed Amplitude}
\label{sec:full}
We use a spherical coordinate system given by $(r,\theta,\phi)$ for
the inertial reference frame of the observer, with its origin at the center
of the star. The observer sits at an angle $\theta=0$. Nonradial
oscillations are set in a spherical coordinate system that shares
its origin with the observer's coordinates, but is rotated by an inclination
angle, $i$. We denote this by $(r,\theta',\phi')$, with the pulsation axis,
which is coincident with the spin axis, at $\theta'=0$. The cartesian
coordinates of the two frames are related by
\newcounter{subequation}[equation]
\renewcommand{\theequation}{\arabic{equation}\alph{subequation}}
\begin{eqnarray}
\addtocounter{subequation}{+1}
x' &=& x\cos i - z\sin i
\\
\addtocounter{equation}{-1}
\addtocounter{subequation}{+2}
y' &=& y
\\
\addtocounter{equation}{-1}
\addtocounter{subequation}{+3}
z' &=& x\sin i + z\cos i.
\end{eqnarray}
Gravitational light bending causes photons that reach the observer
to be emitted at an angle $\alpha\geq\theta$, which is given
to high accuracy for a Schwarzschild metric by
\begin{eqnarray}
1-\cos\alpha = (1-\cos\theta)\left(1-\frac{r_g}{R}\right),
\end{eqnarray}
where $r_g=2GM/c^2$ is the Schwarzschild radius \citep{bel02}.
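For orientation, the sketch below evaluates this relation for an assumed $M=1.4\,M_\odot$, $R=10\ {\rm km}$ star ($r_g/R\approx0.41$); a surface element a quarter of the way around the star is seen via a ray emitted at only $\approx65\degr$ from the local radial direction.
\begin{verbatim}
import math

RG_OVER_R = 0.414    # 2GM/(Rc^2) for M = 1.4 Msun, R = 10 km (assumed)

def emission_angle(theta):
    """Angle alpha (from the local radial direction) of the ray that
    reaches a distant observer from colatitude theta; alpha <= theta."""
    cos_alpha = 1.0 - (1.0 - math.cos(theta)) * (1.0 - RG_OVER_R)
    return math.acos(cos_alpha)

print(math.degrees(emission_angle(math.pi / 2)))   # ~65.5 deg
\end{verbatim}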
The equations that we derive adopt a number of simplifying
assumptions.
These are justified because they either are negligible corrections in the context
of pulsed emission from a NS, or because they do not affect the energy
dependence of the pulsed amplitude. We ignore
the effects of frame dragging, which would modify our results by an amount
less than the current observational errors (Cadeau et al. 2005 show that a
Schwarzschild plus Doppler treatment as presented here provides sufficiently
accurate results in comparison to a full relativistic calculation). Lorentz
boosting can be ignored since it is a $\lesssim0.1\%$ correction for
a NS spin of
$\nu=600\ {\rm Hz}$, and as well we omit relativistic aberration because
it only marginally alters our results.
Finally, we also ignore Doppler shifting from the wave motions because the
transverse velocity of {\it r}-modes \citep[$\sim10^7\ {\rm cm\ s^{-1}}$ for an
order unity perturbation, approximated from the results of][]{pb05b}
is much less than $c$.
Given these simplifications the observed flux at photon energy $E$ is
related to the intensity from the surface, $I(E',\theta',\phi')$, by \citep{pg03}
\begin{eqnarray}
F(E) \propto
\int\int
\delta^3
I(E',\theta',\phi')h(E',\cos\alpha)
\cos\alpha d\Omega,
\label{eq:integral}
\end{eqnarray}
where $h(E',\cos\alpha)$ is the limb darkening function,
$d\Omega=d\cos\alpha d\phi$ is the angular element,
and $E'=E/\left(\delta\sqrt{1-r_g/R}\right)$ is the photon energy
in a frame co-rotating with the NS surface.
The Doppler factor is given by
\begin{eqnarray}
\delta = 1/(1-\beta\sin\alpha\sin\phi\sin i),
\label{eq:dopplerfactor}
\end{eqnarray}
where $\beta=2\pi R\nu/\left(c\sqrt{1-r_g/R}\right)$ is the equatorial velocity (with $\nu$ the spin
frequency). In addition, equation (\ref{eq:integral}) should have
factors due to gravitational redshift, the NS radius, and the NS
distance, but we omit these since they cancel when we take the
pulsed fraction. The integration limits are $0\leq\alpha\leq\pi/2$ and
$0\leq\phi\leq2\pi$.
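A quick numerical sketch of the Doppler factor follows; all parameter values are assumptions chosen for illustration.
\begin{verbatim}
import math

def doppler_factor(alpha, phi, incl, beta):
    """delta = 1 / (1 - beta sin(alpha) sin(phi) sin(i))."""
    return 1.0/(1.0 - beta*math.sin(alpha)*math.sin(phi)*math.sin(incl))

# beta for nu = 600 Hz, R = 10 km, r_g/R = 0.414 (assumed):
R, NU, RG_R, C = 1.0e4, 600.0, 0.414, 2.998e8
beta = 2.0*math.pi*R*NU / (C*math.sqrt(1.0 - RG_R))
print(beta, doppler_factor(math.pi/2, math.pi/2, math.pi/2, beta))
\end{verbatim}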
In general, the limb darkening function can depend on photon energy.
This is crucial for studies of ZZ Ceti stars, which have opacities strongly
affected by lines, so that the latitudinal quantum numbers can be identified
by studying the energy dependence of the pulsed emission
\citep{rkn82,kep00}. In contrast, bursting NSs have a surface
opacity dominated by electron
scattering. For photons with energy $\gtrsim1\ {\rm keV}$ the limb darkening is
largely energy independent and well-approximated by $h=0.5+0.5\cos\alpha$
\citep{mad91}, which is the functional form we assume for our
calculations.
Finding the perturbed flux requires perturbing each term in
the integrand of equation (\ref{eq:integral}) and keeping terms
of linear order. This results in three integrals to evaluate, which
correspond to changes in intensity, surface area, and
surface normal. Since for nonradial incompressible modes the transverse
velocity dominates over the radial velocity ($V_\perp/V_r\sim R/H\gg1$, where
$H$ is the scale height in the bursting layer), the latter two changes are
negligible \citep{bs79,rkn82}. Using just the integral which
contains the intensity perturbation, $\Delta I$,
the fractional amplitude of the mode is then
\begin{eqnarray}
A(E)&\equiv&\frac{\Delta F(E)}{F(E)}
\nonumber
\\
&=&\frac{\displaystyle \int\int\delta^3
\Delta I(E',\theta',\phi') h(\cos\alpha)\cos\alpha d\Omega}{\displaystyle
\int\int\delta^3
I(E',\theta',\phi') h(\cos\alpha)\cos\alpha d\Omega}.
\label{eq:ampl1}
\end{eqnarray}
We next relate $\Delta I$ to the mode eigenfunction, which
is just the temperature perturbation. This relation depends
on the bursting NS spectrum, which is well-characterized as a dilute blackbody with a
temperature given by a color temperature $T_c\approx(1.4-1.6)T_{\rm eff}$
\citep{mad91,psz91,mad97}. The change in overall normalization does not
affect the energy dependence, hence
we use $I(E')=B_{E'}(T_c)$ and perturb
this by setting $T_c\rightarrow T_c+\Delta T$, keeping terms of
linear order in $\Delta T$,
\begin{eqnarray}
\frac{\Delta I}{I} = \frac{\partial\log I}{\partial\log T_c}
\frac{\Delta T}{T_c}
=
\frac{x' e^{x'}}{e^{x'}-1}
\frac{\Delta T}{T_c},
\label{eq:intensity}
\end{eqnarray}
where $x'\equiv E'/k_{\rm B}T_c$.
Substituting this result into equation
(\ref{eq:ampl1}), the fractional amplitude becomes
\begin{eqnarray}
A(E) =
\frac{\displaystyle \int\int \frac{x' e^{x'}}{e^{x'}-1}
\frac{\Delta T}{T_c}(\theta',\phi')I(E')
h(\cos\alpha)\cos\alpha d\Omega}{\displaystyle
\int\int I(E')h(\cos\alpha)
\cos\alpha d\Omega},
\label{eq:ampl2}
\end{eqnarray}
which can be integrated for any angular eigenfunction
$\Delta T(\theta',\phi')/T_c$.
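The logarithmic derivative in equation (\ref{eq:intensity}) is easy to verify numerically; at fixed photon energy only the $1/(e^{x'}-1)$ factor of the Planck function depends on $T_c$, as the sketch below confirms.
\begin{verbatim}
import math

def intensity_T_part(x):
    """T-dependent part of B_E'(Tc) at fixed E': 1/(e^x - 1)."""
    return 1.0 / (math.exp(x) - 1.0)

def dlogI_dlogT(x, eps=1e-6):
    """Numerical d ln I / d ln Tc; Tc -> Tc(1+eps) shrinks x."""
    xp = x / (1.0 + eps)
    return (math.log(intensity_T_part(xp)) -
            math.log(intensity_T_part(x))) / math.log(1.0 + eps)

x = 3.0
print(dlogI_dlogT(x), x*math.exp(x)/(math.exp(x) - 1.0))   # agree
\end{verbatim}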
\subsection{Analytic Estimates}
\label{sec:analytic}
Before we calculate equation (\ref{eq:ampl2}) numerically we simplify
the integrals so that their energy dependence can be studied analytically.
There only exists an energy dependence in two terms: $I(E')$ and the
logarithmic derivative found in equation (\ref{eq:intensity}). If these terms
contained no angular dependence, they could be taken outside of the integrals
so that the integrals become irrelevant for determining the energy
dependence of $A(E)$. In principle
this cannot be done because $E'$ contains an angular dependence through
the Doppler factor, equation (\ref{eq:dopplerfactor}), so that
the integrals must be performed numerically.
On the other hand, if the Doppler shifts are negligible,
then the energy dependence of $A(E)$ is simply
\begin{eqnarray}
A(E) \propto \frac{x'e^{x'}}{e^{x'}-1}.
\label{eq:result}
\end{eqnarray}
This has high and low energy limits that are useful
for gaining intuition about the expected dependence on energy,
\begin{eqnarray}
A(E) \propto \left\{
\begin{array}{cc}
E/k_{\rm B}T_c,
&\hspace{0.3cm}E>k_{\rm B}T_c\sqrt{1-r_g/R} \\
{\rm constant},
&\hspace{0.3cm}E<k_{\rm B}T_c\sqrt{1-r_g/R}
\end{array}
\right.
\label{eq:limits}
\end{eqnarray}
At high energies the amplitude is linear with
energy, while at low energies the amplitude is approximately
constant. This argument shows why a mode will naturally show a larger
amplitude at larger energy. {\it It is simply a result of perturbing a blackbody
spectrum, and the fractional change of intensity is much stronger in the
Wien tail.} Similar results are found by MOC03 for their hot spot model
when the temperature difference between hot and cold regions is small.
When the temperature difference they use becomes large, this can lead to
deviations away from a linear amplitude-energy relation at high energies
($x'\gtrsim10$), perhaps
providing an important discriminant between the mode and hot spot
models. Unfortunately, this is outside the currently observed energy range.
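The transition between the two regimes of equation (\ref{eq:limits}) can be seen from a few sample values; the sketch below assumes $\delta\approx1$ and an illustrative $r_g/R$.
\begin{verbatim}
import math

def amplitude_shape(E_keV, kTc_keV=3.0, rg_over_R=0.414):
    """x' e^x'/(e^x' - 1) with x' = E/(k_B Tc sqrt(1 - r_g/R))."""
    x = E_keV / (kTc_keV * math.sqrt(1.0 - rg_over_R))
    return x * math.exp(x) / (math.exp(x) - 1.0)

for E in (0.3, 1.0, 3.0, 10.0, 20.0):     # flat below kTc, linear above
    print(E, round(amplitude_shape(E), 2))
\end{verbatim}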
To find the correction to equation (\ref{eq:result}) introduced by Doppler
effects we expand $\Delta I$ to first
order in $\beta'\equiv\beta\sin\alpha\sin\phi\sin i$, giving
\begin{eqnarray}
\Delta I \approx (\Delta I)_{\beta=0}
\left[ 1+\beta'\left(\frac{2x'e^{x'}}{e^{x'}-1}-x'-1\right)\right].
\label{eq:doppler}
\end{eqnarray}
When $x'\ll1$, this changes the amplitude by a factor $1+\beta'$, which
contains no energy dependence. When $x'\gg1$ the
amplitude changes by a factor $1+\beta'x'$, so that Doppler shifting
increases the amplitude at larger energy.
\subsection{Comparisons to the Numerical Integrations}
\label{sec:comparison}
We now compare our analytic results to numerical
integrations of equation (\ref{eq:ampl2}). This shows that the analytics
reproduce the amplitude versus energy relation. For the angular pattern of the mode,
$\Delta T(\theta',\phi')/T_c$, we use a buoyant
{\it r}-mode with angular quantum numbers $l=2$ and $m=1$ \citep[as
identified in the slowly rotating limit,][]{pb04}, a favored
mode for reproducing the frequency evolution of the burst oscillations
\citep{hey04, hey05, pb05b}. This mode's latitudinal eigenfunction is
parametrized by the spin parameter $q=2\nu/f$, where $f$ is the mode
frequency in a frame co-rotating with the star in units of ${\rm Hz}$. A faster spinning
NS (higher $q$) results in an eigenfunction that is more concentrated
near the NS equator due to Coriolis effects. We fix
$M=1.4\ M_\odot$ and $k_{\rm B}T_c=3\ {\rm keV}$ so that
we can concentrate on whether other attributes of a NS can affect the energy
dependence. The amplitude at a given energy is assessed by calculating
the time-dependent amplitude and then fitting this with a sinusoidal function.
In Figure \ref{fig:spin} we compare integrations of different spins and inclinations,
keeping the mode pattern fixed at $q=200$ as well as fixing $M$ and $R$.
All amplitudes are normalized to $A(E)=1$ at $E/k_{\rm B}T_c=0.1$.
At low spin, $\nu=10\ {\rm Hz}$ ({\it solid line}), the analytic
result of equation (\ref{eq:result}) ({\it thick dashed line}) and the numerical calculation
are practically identical. As the spin is increased to $\nu=600\ {\rm Hz}$
({\it long dashed line}) the amplitude increases at high energies, as predicted by equation (\ref{eq:doppler}). We also consider a more
face-on orientation for the NS ($i=25.8\degr$, {\it dotted line}), which has the effect of
looking like a faster spin NS. This somewhat counterintuitive result has been
seen in previous studies \citep{hey05} and is due the
mode pattern, which has a maximum amplitude at latitudes
above and below the equator. These comparisons
show that neither the spin nor inclination change the energy dependence
dramatically from our analytic result.
\begin{figure}
\epsscale{0.95}
\plotone{f1.eps}
\caption{The energy dependence of the pulsed amplitude, $A(E)$, for both the full
numerical integration and the analytic result given by equation (\ref{eq:result})
({\it thick dashed line}).
The numerical integrations all use a $q=200$, $l=2$, $m=1$ buoyant
{\it r}-mode on a $M=1.4\ M_\odot$ and $R=10\ {\rm km}$ NS.
The parameters we explore are $\nu=10\ {\rm Hz}$, $i=90\degr$ ({\it solid line}),
$\nu=600\ {\rm Hz}$, $i=90\degr$ ({\it long dashed line}), and $\nu=600\ {\rm Hz}$,
$i=25.8\degr$ ({\it dotted line}).
Though the normalization can change drastically for different inclinations
\citep[for an example, see Fig. 4 of][]{hey05} we
renormalize all the results to $A(E)=1$ at $E/k_{\rm B}T_c=0.1$ to
focus on the shape of the energy dependence. At high energies, $A(E)\propto E$
as we show in \S \ref{sec:analytic}.}
\label{fig:spin}
\epsscale{1.0}
\end{figure}
In Figure \ref{fig:nsproperties} we keep the spin and inclination fixed at
$\nu=600\ {\rm Hz}$ and $i=90\degr$, and investigate the effect of changing $q$ and
$R$. When we set $R=20\ {\rm km}$ ({\it long dashed line}) the
amplitude of the pulsed fraction decreases at high
energies. This is because changing $R$ decreases gravitational redshifting
so that the break between a constant and linearly increasing amplitude comes
at a higher energy (see eq. [\ref{eq:limits}]). We also decrease
$q$ dramatically ({\it dotted line}) but find very little change in the energy dependence.
This shows that it is difficult to identify the mode pattern on the NS surface via
the amplitude energy dependence.
\begin{figure}
\epsscale{0.95}
\plotone{f2.eps}
\caption{The same as Figure \ref{fig:spin}, but with $\nu=600\ {\rm Hz}$ and $i=90\degr$
and varying $q$ and $R$ for the numerical calculations. Setting $R=20\ {\rm km}$
({\it long dashed line}) has the effect of decreasing the amplitude
at high energies (compare this to the solid line).
Decreasing $q$ from 200 to 10 ({\it dotted line}) affects the energy dependence very little, showing
that it is difficult to constrain the surface pattern created by the mode.}
\label{fig:nsproperties}
\epsscale{1.0}
\end{figure}
\section{Comparisons with Observations}
\label{sec:observations}
MOC03 studied the energy dependence of burst oscillation amplitudes
from 6 different bursting NSs. A total of 51 burst oscillation trains were measured,
and multiple trains were averaged to obtain amplitude
versus energy relations. Some objects were divided into
multiple epochs to assure that the gain of the Proportional
Counter Array (PCA) on the {\it Rossi X-Ray Timing Experiment} ({\it RXTE}) was
relatively constant. To correctly compare our calculations to their results
we must weight our pulsed amplitudes by the PCA's effective area,
$A_{\rm eff}(E)$, as well as bin the amplitudes over appropriate energy ranges.
The PCA is composed of five Proportional Counter Units
(PCUs). Since each of these have approximately the same $A_{\rm eff}(E)$,
we use that of PCU3 for our integrations.
The pulsed amplitude is then given by equation (\ref{eq:ampl2}) with
$A_{\rm eff}(E)$ placed within each integrand. Qualitatively, $A_{\rm eff}(E)$
has a large wide maximum spread from $\approx4-15\ {\rm keV}$ with a smaller,
secondary peak at $\approx34\ {\rm keV}$. The binning of the amplitude
depends on the epoch of the observation and is outlined in Table 2 of MOC03.
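Schematically, the binning and effective-area weighting amount to the following sketch (with a flat placeholder response rather than the actual PCU3 curve, and the Wien-tail amplitude as the model):
\begin{verbatim}
import numpy as np

def binned_amplitude(E, A, aeff, rate, edges):
    """Count-weighted model amplitude in each energy bin."""
    out = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        m = (E >= lo) & (E < hi)
        w = aeff[m] * rate[m]
        out.append(np.sum(w * A[m]) / np.sum(w))
    return np.array(out)

E = np.linspace(2.0, 25.0, 500)
A = E / 3.0                              # Wien-tail limit for kTc = 3 keV
aeff = np.ones_like(E)                   # placeholder for PCU3 A_eff(E)
rate = E**2 / (np.exp(E / 3.0) - 1.0)    # blackbody photon spectrum
print(binned_amplitude(E, A, aeff, rate, edges=[2, 6, 10, 15, 25]))
\end{verbatim}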
In Figure \ref{fig} we compare the calculated amplitudes with the
measurements of MOC03 ({\it triangles with error bars}). For the
calculated amplitudes we fix $M=1.4\ M_\odot$, $R=10\ {\rm km}$, $q=200$, and
$i=90\degr$. The spin is set to the burst oscillation frequency for that
object. This is reasonable since in all current mode explanations of burst
oscillations the mode moves retrograde with respect to the spin with
$\nu\gg f$ \citep[see discussion in][]{ls05}.
The normalization of the numerical calculations
are set to maximize the fit for each comparison. We consider
$k_{\rm B}T_c=3\ {\rm keV}$ ({\it solid lines}) as a fiducial temperature
exhibited near burst peak. Ideally, we
should be able to constrain $k_{\rm B}T_c\sqrt{1-r_g/R}$ by fitting for
the break in the
amplitude (providing $M/R$ if $T_c$ is known). This is difficult because
when the photon energy is
$\gtrsim k_{\rm B}T_c$, as is the case for these observations, the
amplitude is always linear with energy. Nevertheless, the theoretical calculations show reasonable agreement with the observations.
\begin{figure*}
\epsscale{1.2}
\plotone{f3.eps}
\caption{The observed energy dependence of the burst oscillation amplitudes
(MOC03, {\it triangles with error bars}),
in comparison with our full numerical calculations using
$k_{\rm B}T_c=3\ {\rm keV}$ ({\it solid lines}).
We label each panel with the NSs name
and the epoch of the measurements if that NS was observed over multiple epochs.
For each calculation the spin is set to that
object's burst oscillation frequency and the normalization is set to maximize
the fit.}
\label{fig}
\end{figure*}
Comparisons to the observations are complicated by the fact that the observations
are averaged over a range of temperatures throughout the cooling of the burst, so
that we should consider temperatures in the range of $k_{\rm B}T_c\approx2-3\ {\rm keV}$.
If the mode amplitude remains relatively constant, then the
pulsed amplitude in the Wien tail should increase as the star cools (see eq.[\ref{eq:limits}]).
We test this in Figure \ref{fig:2} for two of the observed amplitudes from Figure \ref{fig},
but in this case calculating the amplitudes for temperatures of
$k_{\rm B}T_c=2\ {\rm keV}$ ({\it dashed lines})
and $3\ {\rm keV}$ ({\it solid lines}). The overall normalization is chosen to maximize
the fit, but the relative amplitude of the two curves in each panel
is set by the temperature ratio. The two temperatures envelop the data, showing that
the spread in the data may be due to cooling in the burst tail. An interesting future
test of our work would be to divide the burst data into early and late stages,
and see whether the amplitude evolves. This expected temperature
dependence can be divided out of the data to investigate how much the amplitude of the
mode is changing due to other effects (e.g., changes in the intrinsic amplitude,
or changes in $q$).
\begin{figure}
\epsscale{0.95}
\plotone{f4.eps}
\caption{The observed energy dependence of the burst oscillations amplitudes
(MOC03, {\it triangles with error bars}), for 2 of the 9 panels
from Fig. \ref{fig}. These represent a relatively ``good'' fit
({\it left panel}) and a ``poor'' fit ({\it right panel}).
We compare these with our numerical calculations using
$k_{\rm B}T_c=2\ {\rm keV}$ ({\it dashed lines}) and $3\ {\rm keV}$ ({\it solid lines}).
The overall normalization is set to maximize the fit with the data,
but the relative amplitude of the two curves is set by the two temperatures. This
shows that the observed spread in that data could arise
from the cooling of the NS in the
burst tail.}
\label{fig:2}
\epsscale{1.0}
\end{figure}
\section{Optimal Energies for Detection of Neutron Star Modes}
\label{sec:sn}
Since we understand the spectrum of the burst
oscillations, this can be used to find the optimal photon
energy range for burst oscillations searches. The pulsed signal is given by
the total number of pulsed photons integrated over some energy range,
\begin{eqnarray}
S &=&\sqrt{1-\frac{r_g}{R}}\left(\frac{R}{D}\right)^2t_{\rm obs}\int A_{\rm eff}(E)\frac{dE}{E}
\nonumber
\\
&&\times\int\int\frac{x' e^{x'}}{e^{x'}-1}
\frac{\Delta T}{T_c}I(E')
h(\cos\alpha)\cos\alpha d\Omega,
\label{eq:signal}
\end{eqnarray}
where $D$ is the source distance, $t_{\rm obs}$ is the observing time,
and we have assumed $\delta\approx1$.
The background noise within this energy range is estimated from
photon counting statistics
\begin{eqnarray}
N & = & \left[\sqrt{1-\frac{r_g}{R}}\left(\frac{R}{D}\right)^2t_{\rm obs}
\int A_{\rm eff}(E)\frac{dE}{E}\right.
\nonumber
\\
&&\left.\times\int\int I(E')
h(\cos\alpha)\cos\alpha d\Omega\right]^{1/2},
\label{eq:noise}
\end{eqnarray}
which is the square-root of the total number
of photons detected.
To evaluate equations (\ref{eq:signal}) and (\ref{eq:noise}) we use
a blackbody spectrum, $I(E') = B_{E'}(T_c)$, with $k_{\rm B}T_c=3\ {\rm keV}$,
and the analytic pulsed fraction, ignoring
Doppler corrections, since the agreement is so close between the numerical and analytic
results. We assume $D=4.3\ {\rm kpc}$
(using 4U $1728-30$ for a fiducial distance), $t_{\rm obs}=10\ {\rm s}$
(one X-ray burst) and $\Delta T/T_c=0.025$ (which replicates the observed
pulsed fractions).
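The relative S/N per energy bin can then be estimated as in the sketch below (arbitrary overall units and a flat response are assumed; the distance and normalisation constants cancel when comparing bins):
\begin{verbatim}
import numpy as np

kT, dT = 3.0, 0.025          # keV; fractional temperature perturbation

def snr_per_bin(edges):
    E = np.linspace(edges[0], edges[-1], 4000)
    dE = E[1] - E[0]
    nE = E**2 / (np.exp(E/kT) - 1.0)               # photon spectrum
    amp = dT * (E/kT) * np.exp(E/kT) / (np.exp(E/kT) - 1.0)
    out = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        m = (E >= lo) & (E < hi)
        out.append(np.sum(amp[m]*nE[m])*dE / np.sqrt(np.sum(nE[m])*dE))
    return out

print(snr_per_bin(np.arange(1.0, 31.0, 3.0)))      # 3 keV wide bins
\end{verbatim}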
In Figure \ref{fig:sn} we plot these S/N calculations using three different forms for $A_{\rm eff}$:
a flat energy response with $A_{\rm eff}=1000\ {\rm cm^{2}}$
({\it solid line}), the $A_{\rm eff}$ of one PCU from {\it RXTE}'s PCA ({\it dotted line}),
and the proposed $A_{\rm eff}$ for the
{\it Nuclear Spectroscopic Telescope Array} \citep[{\it NuSTAR};][{\it dashed line}]{har04},
a future X-ray mission.
For each we show a series of $3\ {\rm keV}$ integrations, which are then connected
with lines to guide the eye.
This comparison demonstrates that the
energy range of $\approx2-25\ {\rm keV}$ contributes most to $S/N$.
Other than integrating over this range,
there is little more an observer can do to maximize the
opportunity of finding burst oscillations. At high energies the pulsed
fraction is considerably higher, but this range is not necessarily
better since there are so few photons in the
Wien tail and because $A_{\rm eff}$ drops at higher energies.
Future missions
interested in burst oscillations searches could mitigate this
by having large $A_{\rm eff}$ at higher energies.
\begin{figure}
\epsscale{1.0}
\plotone{f5.eps}
\caption{The ratio of signal to noise (using eqs. [\ref{eq:signal}] and [\ref{eq:noise}])
expected for a NS surface mode during a $10\ {\rm s}$ X-ray burst. We set
$k_{\rm B}T_c=3\ {\rm keV}$, $D=4.3\ {\rm kpc}$, and $\Delta T/T_c=0.025$,
and compare a flat energy response
({\it solid line}), the $A_{\rm eff}(E)$ of one PCU on {\it RXTE}'s PCA
({\it dotted line}), and the proposed $A_{\rm eff}$ for the {\it NuSTAR} mission ({\it dashed line}).
Each curve connects a series of points, with each point representing
an integration over a $3\ {\rm keV}$ width energy bin.
}
\label{fig:sn}
\epsscale{1.0}
\end{figure}
It is especially exciting that {\it NuSTAR} may be able to observe burst
oscillations. Though {\it NuSTAR}'s specifications of a large $A_{\rm eff}$,
high energy range ($\approx5-80\ {\rm keV}$), and good spectral
resolution (900 eV at 68 keV) are tuned for
observations of black holes, active galactic nuclei, and supernova
remnants, it also has fast timing ($\sim1\ {\rm ms}$) which makes
it ideal for studying burst oscillations. In addition, its high angular resolution
($\approx40\ {\rm arcsec}$) coupled with its timing abilities may make it
useful for identifying accreting millisecond pulsars in crowded fields such as at
the Galactic center (something beyond {\it RXTE}'s capabilities).
A typical accreting millisecond pulsar at the Galactic center has a peak flux in
outburst 100 times less than a type I burst, so the persistent
pulse could easily be found in a $\approx1\ {\rm day}$ long observation.
\section{Discussion and Conclusions}
\label{sec:conclusion}
We have studied the energy dependence of NS surface mode amplitudes
for NS surface temperatures of $k_{\rm B}T_c=3\ {\rm keV}$
and compared this with burst oscillations. The observations follow our
calculated trend of a linear amplitude for photon energies
$\gtrsim k_{\rm B}T_c$ and becoming flatter when the photon energy is
$\approx k_{\rm B}T_c$.
Unfortunately, there are currently no data for burst oscillations below an energy
of $k_{\rm B}T_c$. Measuring the amplitude at such an energy is crucial for
fully testing the mode explanation of burst oscillations. One must be cautious
of interpretations for $E'\lesssim1\ {\rm keV}$ because limb darkening
begins to depend on energy in this range, so that equation (\ref{eq:result})
is no longer applicable. However, this also raises the
possibility for surface pattern identification
at low energies
\citep[analogous to what is done for pulsating WDs,][]{kep00}.
The agreement between our analytic result and the observations is promising
for explaining burst oscillations as modes, but frustrates the ability of using burst
oscillations as a tool to learn about these NSs. As long as the amplitude of the surface
perturbation caused by the mode is unknown, it will be difficult to constrain NS
properties. Future theoretical studies
should work to address this unanswered question. We expect all bursting NS modes
to show the energy dependence we present here, including the oscillation seen
in the 4U $1636-53$ superburst \citep{sm02}, for which this energy dependence
was not determined.
However, other types of NS pulsations need not match this, for example millisecond
accreting pulsars in their persistent emission. This raises the critical question
of what is the energy dependence of the burst oscillations from these systems
(SAX J$1808.4-3658$ [Chakrabarty et al. 2003] and
XTE J$1814-338$ [Strohmayer et al. 2003]).
\citet{pb05b} describe a number of differences
between the properties of burst oscillations from pulsars and nonpulsars.
These differences may simply be due to deviations in magnetic field strength, but
they could instead indicate that the pulsar burst oscillations are due to a completely
different mechanism. Measuring their energy dependence would help to settle
this unresolved issue.
One topic we have not addressed is the phase lag observed for high energy
photons in burst oscillations (MOC03). We focused on the amplitude relation
because of its stronger statistical evidence in the observations.
For individual oscillation trains measured by MOC03, only 13 out of 51 exhibit
phases that vary as a function of energy at 90\% confidence. In comparison,
34 of these exhibit some dependence with energy at 90\% confidence (with the
remaining typically having less counts in their folded profile).
Nevertheless, it is somewhat troubling that our calculations find the reverse
phase dependence due to Doppler shifts, just as was found in previous studies of
pulsed emission from NSs \citep[e.g.,][]{wml01,hey05}.
This inconsistency has been cited by many as evidence that a Comptonizing
corona exists around a NS during an X-ray burst. Without further theoretical
studies of the physical limits of such a Comptonizing corona or further investigations
on how robust of a property this observed hard phase lag is, it is not presently clear how
dire this inconsistency is for the mode explanation of burst oscillations.
\citet{ls05} have
found some parameter space in $R$ and $i$ that exhibit hard lags, which
may be promising to pursue further. Though
their result is for a different eigenfunction than we consider here, it does not affect
our main conclusions since the energy dependence will still be independent of the
specific mode.
\acknowledgements
We appreciate the help of Mike Muno for providing us with the pulsed amplitude data
and Philip Chang for reading and contributing comments on a previous draft.
We thank Fiona Harrison for providing us with the effective area response of
the {\it NuSTAR} satellite.
We also thank Deepto Chakrabarty and Andrew Cumming for many helpful
discussions. This work was supported by the
National Science Foundation under grants PHY99-07949 and AST02-05956,
and by the Joint Institute for Nuclear Astrophysics through NSF grant PHY02-16783.
\section{Introduction}
\label{sec:introduction}
In the early stages of their formation protoplanets are still
embedded in the disk from which they form. Not only will the protoplanet accrete
material from the disk and increase its mass,
but it will also interact gravitationally with it.
Planet-disk interaction is an important aspect of planet formation
because it leads to a change in the planetary orbital elements.
Already before the discovery of extrasolar planets
the interaction of an embedded object with a disk
has been studied for small perturber masses by linear analysis
\citep[eg.][]{1980ApJ...241..425G, 1984ApJ...285..818P, 1986Icar...67..164W},
and in more recent years also for massive planets through detailed
numerical simulations in two and three dimensions
\citep[eg.][]{1999ApJ...514..344B,1999MNRAS.303..696K,2001ApJ...547..457K,
2002A&A...385..647D,2003MNRAS.341..213B}.
In all these simulations the planet has been held fixed on a circular orbit and
its influence onto the disk has been analyzed. The back reaction of the disk
in terms of migration rate
or eccentricity change can be calculated by summing over the force contribution
of each disk element.
The full evolution of a single planet embedded in a disk has been
followed for example by \citet{2000MNRAS.318...18N}. In a later
study by \citet{2001A&A...366..263P} numerical simulations have been
performed for a range of planetary masses with emphasis on the
eccentricity evolution of the planets. It has been found that massive
planets create an eccentric disturbance in the outer disk which in
turn may back-react on the planet and increase its eccentricity.
However, only for planets larger than 10-20 Jupiter masses has a visible
increase up to $e=0.20$ been found. These values are
significantly below the observed eccentricities for extrasolar planets
which average at about $e=0.3-0.4$ for planetary masses between 1 and
10 $M_{Jup}$. Also for smaller planet masses an average
eccentricity of about $e = 0.3$ is observed. This may be due to
planet-planet interactions, but these interactions are more effective
with increasing planet mass.
For a recent overview of planetary properties see
\citet{2005PThPS.158...24M} and
the {\it Extrasolar Planets Encyclopedia} ({\tt http://www.obspm.fr/encycl/encycl.html})
maintained by J.~Schneider.
The distribution of eccentricities does not show a strong dependence
on $m \sin{i}$ nor on the distance from the central star.
As one possible scenario to explain the origin of the observed high eccentricities
the aforementioned interaction of a planet with the protoplanetary disk
has been suggested.
In particular, \citet{2003ApJ...585.1024G,
2004ApJ...606L..77S} estimate that Lindblad resonances may lead to eccentricity
growth under reasonable assumptions. Numerical simulations tend to show the
opposite: for Jupiter-mass planets the eccentricity is typically damped
on short time scales of $\approx 100$ orbits, and only for
massive planets has at least transient growth been seen
\citep{2000MNRAS.318...18N}.
This last result may be related to the back reaction of an eccentric disk
onto the planet, where the disk's eccentricity has been induced by the presence
of the massive planet.
Additionally, the mass of the embedded planet has also profound consequences for the
mass accretion rate onto it, i.e. its growth-time.
As the induced gap in the disk becomes wider and deeper upon increasing $M_{p}$
the accretion rate diminishes and essentially limits the growth for masses
beyond 5 $M_{Jup}$ \citep{1999ApJ...514..344B, 1999ApJ...526.1001L}.
However, those calculations covered only a couple of hundred orbits of
the planet which is much smaller than the viscous time scale.
Consequently, no equilibrium structure has been reached.
Here we follow this line of thought and investigate the influence a massive
embedded planet has on the structure of the ambient protoplanetary disk.
We use a hydrodynamical description to follow the evolution of the disk, where
the planet is fixed on a circular or a slightly eccentric orbit.
All simulations are run until a quasi-stationary equilibrium has been reached
and overall values of mass and energy in the computational domain remain
unchanged.
We vary the mass of the planet, the temperature and the disk viscosity, and analyze their
influence on the structure of the disk, in particular on its eccentricity.
Indeed, we find that (for a given viscosity) there appears to be a clear
transition in the disk from a circular state into an eccentric state.
We analyze the magnitude of the induced disk eccentricity and estimate its
influence on the accretion rate of the planet.
In particular, we find that for sufficiently massive planets the disk becomes
eccentric, where the critical minimum mass depends on the value of the
viscosity coefficient.
For a viscosity of $\alpha = 4 \times 10^{-3}$, a reasonable
value for protoplanetary disks, the disk becomes eccentric already
for planets of 3 Jupiter masses. At the same time the mass accretion rate
onto the planet increases strongly for an eccentric disk.
In the next section we describe our model assumptions, in section 3 we present
our results followed by theoretical analysis and conclusions.
\section{The Standard Hydrodynamical Model}
\label{sec:hydro-model}
The models presented here are calculated
basically in the same manner as those described previously in
\citet{1998A&A...338L..37K, 1999MNRAS.303..696K}.
The reader is referred to those papers
for details on the computational aspects of this type of simulations.
Other similar models, following explicitly the motion of single
planets in disks, have been presented by
\citet{2000MNRAS.318...18N}, \citet{2000ApJ...540.1091B}.
We use cylindrical coordinates ($r, \varphi, z$) and
consider a vertically averaged, infinitesimally thin disk located
at $z=0$. The origin of the coordinate system, which is co-rotating
with the planet, is either
at the position of the star or in the combined
center of mass of star and planet.
Since in the first case the coordinate system is accelerated and
rotating, care has to be taken to include also the indirect terms of
the acceleration \citep{1998A&A...338L..37K}.
The basic hydrodynamic equations (mass and momentum conservation)
describing the time evolution of such a viscous
two-dimensional disk with embedded planets have been stated frequently and are
not repeated here \citep[see][]{1999MNRAS.303..696K}.
In the present study we restrict ourselves to the situation where the
embedded planet is on a fixed orbit, i.e. the gravitational
back reaction of the disk on the planet is not taken into account.
\subsection{Initial Setup}
The two-dimensional ($r - \varphi$) computational domain consists of a
complete ring of the protoplanetary disk.
The radial extent of the computational domain
(ranging from $r\subscr{min}$ to $r\subscr{max}$) is taken such that there is
enough space on both sides of the planet, although, as we shall see later,
the effect we are analyzing appears to occur only in the outer disk.
Typically, we assume $r\subscr{min}=0.40$ and for $r\subscr{max}$ we take
two different values 2.5 and 4.0, in units where the planet is located at $r=1$.
In the azimuthal direction for a complete annulus we have $\varphi\subscr{min} =0$ and
$\varphi\subscr{max} = 2 \pi$.
The initial hydrodynamic structure of the disk (density, temperature, velocity)
is axisymmetric with respect to the location of the star.
The surface density is constant ($\Sigma=1$ in
dimensionless units) over the entire domain with no initial gap.
To make sure that only little disturbances or numerical artifacts
arise upon immersion of the planet, its
mass will be slowly turned on from zero to the final required mass (eg. 5 Jupiter masses)
over a time span of typically 50 orbital periods.
The initial velocity is pure Keplerian rotation ($u_r=0,
u_\varphi = \Omega_K r = (G M_*/r)^{1/2}$), and
the temperature stratification is always given by
$T(r) \propto r^{-1}$ which follows from an assumed
constant vertical height $H/r$.
For these isothermal models the temperature profile is left unchanged
at its initial state throughout the computations.
For our standard model we use a constant kinematic viscosity coefficient $\nu$
but present additionally a sequence of $\alpha$-disk models.
\subsection{Boundary conditions}
To ensure a most uniform environment for all models and
minimize disturbances (wave reflections) from the outer
boundary we impose at $r\subscr{min}$ and $r\subscr{max}$ damping boundary
conditions where the density and both velocity components
are relaxed towards their initial values as
\begin{equation}
\frac{d X}{d t} = - \frac{ X - X(t=0)}{\tau\subscr{damp}} \, R(r)^2
\end{equation}
where $X \in \{\Sigma, u_r, u_\varphi\}$,
$\tau\subscr{damp} = 1/\Omega\subscr{K}(r\subscr{boundary})$ and $R(r)$
is a dimensionless linear ramp-function rising from 0 to 1 from
$r\subscr{damp}$ to $r\subscr{boundary}$. Here,
$r\subscr{boundary}$ is either $r\subscr{min}$ or $r\subscr{max}$, depending
on which edge of the disk is considered.
The initial radial velocity vanishes, and the boundary conditions
ensure that no mass flows through the radial boundaries at
$r\subscr{min}$ and $r\subscr{max}$. However, the total mass in the system may
nevertheless vary due to the applied damping.
In the azimuthal direction, periodic boundary conditions for all
variables are imposed.
These specific boundary conditions allow, upon long-term evolution,
for a well-defined quasi-stationary state
if there is no back-reaction of the disk on the orbital elements
of the planet.
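
As an illustration, the damping prescription above can be applied as a
simple relaxation update each timestep (a minimal Python sketch with our
own array layout; the operator-split implementation in the actual codes
may differ):
\begin{verbatim}
import numpy as np

def damping_ramp(r, r_damp, r_boundary):
    """Linear ramp R(r), rising from 0 at r_damp to 1 at
    r_boundary; the same formula serves both disk edges."""
    ramp = (r - r_damp) / (r_boundary - r_damp)
    return np.clip(ramp, 0.0, 1.0)

def apply_damping(X, X0, r, dt, r_damp, r_boundary, GM_star=1.0):
    """Relax X in {sigma, u_r, u_phi} (shape Nr x Nphi) towards its
    initial value X0: dX/dt = -(X - X0)/tau_damp * R(r)^2 with
    tau_damp = 1/Omega_K(r_boundary).  First-order explicit update."""
    tau = 1.0 / np.sqrt(GM_star / r_boundary**3)
    rate = damping_ramp(r, r_damp, r_boundary) ** 2 / tau
    return X - dt * rate[:, None] * (X - X0)
\end{verbatim}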
\subsection{Model parameters}
The computational domain is covered by 128 $\times$ 384 ($N_r \times N_\varphi$)
grid cells for the smaller models $[0.4,2.5]$ and
$200 \times 384$ for the larger $[0.4,4.0]$ ones.
The grid is spaced equidistant in both radius and azimuth.
The inner radius beyond which the damping procedure defined above gradually sets in
is given by $r\subscr{damp} = 0.5$, while the outer damping radius is given by
$R\subscr{damp} = 0.84 \, r\subscr{max}$.
The star has a mass of 1 $M_\odot$, and the
mass of the planet in the different models
ranges from one to five Jupiter masses.
The planet is held on a fixed circular orbit.
For the viscosity a value of $\nu = 1.0 \cdot 10^{-5}$ (in units of
$\Omega_p r_p^2$) is used for our
standard models, which is equivalent to a value of $\alpha = 0.004$
for the standard $H/r = 0.05$. This is a typical value for the
effective viscosity in a protoplanetary disk.
To achieve a more detailed calculation of the observed phenomena we
refined some calculations to the higher resolution of $260 \times 760$
($N_r \times N_\varphi$) by interpolating the data from coarser
calculations. As the relaxation time for the system is very long ($>$
1000 orbits) it would be too time-consuming to complete the whole
calculation on the high-resolution grid. These higher resolution
simulations yield identical results.
To study the influence of physical parameters such as viscosity and pressure, we vary
$\nu$ and $H/r$ in some models.
In addition, we analyze the influence of several numerical parameters on the results.
\subsection{A few remarks on numerical issues}
We use two different codes for our calculations, {\tt RH2D} and {\tt
NIRVANA}. The numerical method used in both codes is a staggered-mesh,
spatially second-order finite difference scheme, in which
advection is handled by the monotonic transport algorithm
\citep{1977JCoPh..23..276V}. Due to operator splitting the codes are
semi-second order in time. The computational details of {\tt RH2D}
which can be used in different coordinate systems have been described
in general in \citet{1989A&A...208...98K}, and specifically for planet
calculations in \citet{1999MNRAS.303..696K}. The details of the {\tt
NIRVANA} code have been described in \citet{1998CoPhC.109..111Z}.
The use of a rotating coordinate system requires special treatment
of the Coriolis terms to ensure angular momentum conservation
\citep{1998A&A...338L..37K}.
Especially for the long-term calculations presented here, this is an
important issue.
In calculating the gravitational potential of the planet we use a
smoothed potential of the form
\begin{equation}
\Phi_P = - \frac{ G M\subscr{p}}{\sqrt{s^2 + \epsilon^2}}
\end{equation}
where $s$ is the distance from the planet.
For the smoothing length of the potential we choose $\epsilon = 0.4 R\subscr{Hill}$.
The viscous terms, including all necessary tensor components,
are treated explicitly.
To ensure stability in the gap region with very strong gradients in the
density an artificial bulk viscosity has been added, with a
coefficient $C\subscr{art} = 1.0$. For the detailed formulation
of the viscosity related issues and tests see \citet{1999MNRAS.303..696K}.
As the mass ratio $M\subscr{p}/M_*$ of the planet can be very large
we have found it preferable to work with a density floor, where the density
cannot fall below a specified minimum value $\Sigma\subscr{min}$.
For our purpose we use a value of $\Sigma\subscr{min} = 10^{-8}$ in dimensionless values,
where the initial density is of ${\cal{O}}(1)$.
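
A minimal sketch of these two ingredients (Python/NumPy; function names
and signatures are our own):
\begin{verbatim}
import numpy as np

def planet_potential(x, y, xp, yp, m_p, r_hill, G=1.0, eps_frac=0.4):
    """Smoothed point-mass potential of the planet,
    Phi_p = -G M_p / sqrt(s^2 + eps^2), with eps = 0.4 R_Hill."""
    s2 = (x - xp) ** 2 + (y - yp) ** 2
    eps = eps_frac * r_hill
    return -G * m_p / np.sqrt(s2 + eps ** 2)

SIGMA_MIN = 1.0e-8      # density floor; the initial density is O(1)

def enforce_density_floor(sigma):
    """Prevent Sigma from dropping below Sigma_min, stabilising the
    deep gap for large mass ratios M_p/M_*."""
    return np.maximum(sigma, SIGMA_MIN)
\end{verbatim}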
\section{The dual-state disk}
We first consider our standard model as described above
using a planetary mass ranging from 1 to 5 $M_{Jup}$, i.e.
a mass ratio of $q = 10^{-3}$ to $5 \cdot 10^{-3}$.
The other physical parameters are identical for all models.
Due to the nature of the damped boundary conditions and a non-zero
physical viscosity we might expect after a sufficiently long evolution
time a convergence towards an equilibrium state where the density
structure and the total amount of mass in the disk remain
constant in time, at least in the co-rotating frame.
Indeed, for small planetary masses,
$M_{p} < 3 M_{Jup}$ we find a circular stationary state
which displays the typical features of embedded planets in disks: a deep,
circular depression of density at the location of the planet (the gap), spiral
arms in the inner and outer disk.
This state is shown in the top graph of Fig.~\ref{fig:equil},
which shows the surface density of
the obtained equilibrium state at an evolutionary time of $t=2000$ orbits.
However, if the planetary mass reaches $M_{p} \geq 3 M_{Jup}$ we
surprisingly do not reach a stationary equilibrium state
anymore. Instead we find after a very long time ($> 1000$ orbits) a
new periodic state which has approximately the same period as the
orbital period of the planet. In this state the disk is clearly
eccentric with an extremely slow precession rate such that the
eccentric pattern appears to be nearly stationary in the inertial
frame. This eccentric quasi-equilibrium state for $M\subscr{p} = 5
M_{Jup}$ is shown in the bottom graph of Fig.~\ref{fig:equil}.
\begin{figure}[ht]
\begin{center}
\resizebox{0.98\linewidth}{!}{%
\includegraphics{3914fg1a.eps}}
\resizebox{0.98\linewidth}{!}{%
\includegraphics{3914fg1b.eps}}
\end{center}
\caption{ Logarithmic plots of the surface density $\Sigma$ for the
relaxed state after 2000 orbits for two different masses of the
planet which is located at $r=1.0$ in dimensionless units. {\bf
Top}) $q = 3.0 \cdot 10^{-3}$ and {\bf Bottom}) $q = 5.0 \cdot 10^{-3}$
calculated with NIRVANA. The inner disk stays circular in
both cases but the outer disk only in the lower mass case. For $q =
5.0 \cdot 10^{-3}$ it becomes clearly eccentric with some visible fine
structure in the gap. For illustration, the drawn ellipse (solid
line in the lower plot) has one focus at the stellar location and an
eccentricity of 0.20. }
\label{fig:equil}
\end{figure}
\subsection{The eccentric disk}
A measure of the eccentricity of the disk is calculated as follows:
For a ring at radius $r_i$ we calculate the eccentricity for every
cell in the ring from the velocity and position vector of that cell by
assuming the fluid element is a particle moving freely in the central
potential of the star, feeling no pressure forces. The average over
all cells in the ring is then defined as the eccentricity of the disk
at that radius $r_i$. This value is plotted for different masses in
Fig.~\ref{fig:ecc1-mp} at the evolutionary time of $t=2500$ orbits;
only the $M_p = 3 M_{Jup}$ model is shown at $t=3850$ orbits.
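
A sketch of this cell-wise estimate, using the standard two-body
relation between eccentricity, specific energy and specific angular
momentum (our own minimal implementation of the procedure just
described):
\begin{verbatim}
import numpy as np

def cell_eccentricity(r, u_r, u_phi, GM_star=1.0):
    """Eccentricity of a fluid element treated as a free particle in
    the stellar potential: e = sqrt(1 + 2 E L^2 / (G M_*)^2), with
    specific energy E and specific angular momentum L."""
    E = 0.5 * (u_r ** 2 + u_phi ** 2) - GM_star / r
    L = r * u_phi
    ecc2 = 1.0 + 2.0 * E * L ** 2 / GM_star ** 2
    return np.sqrt(np.maximum(ecc2, 0.0))   # clip round-off negatives

def disk_eccentricity_profile(r, u_r, u_phi):
    """Average the cell values over each ring (axis 1 = phi),
    giving the e(r_i) profiles shown in the figures."""
    return cell_eccentricity(r[:, None], u_r, u_phi).mean(axis=1)
\end{verbatim}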
\begin{figure}[ht]
\begin{center}
\resizebox{0.98\linewidth}{!}{%
\includegraphics{3914fg2.eps}}
\end{center}
\caption{Disk eccentricity as a function of radius for the several models with
$q = 0.001$ up to $q = 0.005$ at $t=2500$ orbits, for the $q=0.003$ model at
$t=3850$.
For the two lower curves $q = 0.001$ and $q = 0.002$,
the outer edge of the computational domain lies at $r_{max} = 2.5$.
}
\label{fig:ecc1-mp}
\end{figure}
\begin{figure}[ht]
\begin{center}
\resizebox{0.98\linewidth}{!}{%
\includegraphics{3914fg3.eps}}
\end{center}
\caption{Azimuthally averaged radial profiles of the surface density
for different planet masses,
for the same models and times as in Fig.~\ref{fig:ecc1-mp}.
The width of the gap increases with planetary mass.
}
\label{fig:sig1-mp}
\end{figure}
For planetary masses below around $M_p \approx 3 M_{Jup}$, the maximum
eccentricity of the disk is about 0.10, and is strongly peaked at $r
\approx 1.2$. For the larger planetary masses the eccentricity of the
disk nearly doubles and reaches 0.22 for $M_p = 5 M_{Jup}$. In
addition, a much larger region of the disk has become eccentric, which
has been seen clearly already in the surface density distribution in
Fig.~\ref{fig:equil}, bottom, where the ellipse indicates an
eccentricity of 0.20 with one focus at the stellar position. The
precession rate $\dot{\varpi}$ of the eccentric disk is very small and
typically prograde. From our longest runs (over several thousand
orbits) we estimate $\dot{\varpi} \approx 10^{\circ}$ per 1000 orbits. In
Fig.~\ref{fig:ecc1-mp} the curves for the lower planet masses end at
$r = 2.5$ because this is the outer boundary for those low mass
models.
In Fig.~\ref{fig:sig1-mp} the azimuthally averaged density profile is
plotted for different planetary masses for the same models
as in Fig.~\ref{fig:ecc1-mp}. Clearly the gap width increases for the larger
planet mass, as expected due to the stronger gravitational torques.
For the lowest mass
$q=0.001$ model (solid line) the gap is not completely cleared.
\begin{figure}[ht]
\begin{center}
\resizebox{0.98\linewidth}{!}{%
\includegraphics{3914fg4a.eps}}
\resizebox{0.98\linewidth}{!}{%
\includegraphics{3914fg4b.eps}}
\end{center}
\caption{
Surface density and eccentricity profile for models using
$q = 0.004$ at a time of 2500 orbits,
the high-resolution model (short-dashed line) at 1750 orbits.
Plotted are results for different models varying the numerical
setup.
}
\label{fig:sigecc-num}
\end{figure}
\subsection{Dependencies on numerical parameters}
The threshold mass where the transition from circular to eccentric occurs
apparently depends on the width and shape of the gap, and parameters that
will change the gap structure will also change this threshold mass.
Before we analyze physical influences we display
in Fig.~\ref{fig:sigecc-num} the surface density profile
and the disk eccentricity for models
using different numerical parameters but all with same physical setup
for $q = 0.004$, and
at the same evolutionary time of 2500 orbits (the high resolution
model at $t=1750$ orbits).
The solid line refers to the basic reference model (as in
Fig.~\ref{fig:sig1-mp}, $4 M_{Jup}$ model). We first find that the
mass value where the transition occurs may depend on the location of
the outer boundary $r\subscr{max}$. If the stand-off distance of the
planet to the outer boundary is too small the damping boundary
conditions, which tend to circularize the disk, prevent the disk from
becoming eccentric. The simulation using a $4 M_{Jup}$ planet and a
smaller $r\subscr{max}$ clearly shows this effect. For this mass of
the planet the disk no longer becomes eccentric for
$r\subscr{max} = 2.5$ (dotted curve). Hence, to properly study this
effect a sufficiently large $r_{max}$ has to be chosen. An extended
domain with $r\subscr{max} = 10$ (short-dashed-dotted) does not alter
the eccentricity behavior of the disk. The inner disk remains
circular for all planet masses because of the strong damping
introduced by the boundary condition.
A higher resolution (200 $\times$ 500, short-dashed line), and running
the model in the inertial frame (long-dashed) have no significant
influence on the density distribution and the occurrence and magnitude
of the disk eccentricity. A lower resolution model
(long-dashed-dotted) using 128 $\times$ 128 grid cells, results in a
slightly lower eccentricity due to a larger (numerical) damping. In
addition, we have compared results with two different numerical codes
({\tt RH2D} and {\tt NIRVANA}) and again found good agreement. Hence,
we conclude that the eccentric disk state is a robust,
reproducible physical phenomenon.
\begin{figure}[ht]
\begin{center}
\resizebox{0.98\linewidth}{!}{%
\includegraphics{3914fg5a.eps}}
\resizebox{0.98\linewidth}{!}{%
\includegraphics{3914fg5b.eps}}
\end{center}
\caption{
Surface density and eccentricity profile for models using
$q = 0.004$ at a time of 2500 orbits.
Plotted are results for different models varying the physical
setup.
}
\label{fig:sigecc-phy}
\end{figure}
\subsection{Dependencies on physical parameters}
In Fig.~\ref{fig:sigecc-phy} we display the surface density profile
and the disk eccentricity for models with $q = 0.004$ using different
physical parameters. If the dimensionless viscosity $\nu$ is enlarged
to $3 \times 10^{-5}$ (dotted line) the gap width and depth is reduced
and the disk will no longer become eccentric for the planet mass of $q
= 0.004$ (and also not for $q = 0.005$). Similarly, an increased
$H/r$ (long-dashed line) also leads to a narrower gap and a smaller disk
eccentricity. If, on the other hand, the viscosity is lowered by a factor
of three (short-dashed), or $H/r$ is reduced we find that the disk
reaches about the same eccentricity as before.
The last model (dashed-dotted line) refers to a planet on an eccentric orbit
with $e_p=0.05$ and a 3 times higher viscosity than the basis model.
As can be seen, the disk remains circular for these parameters.
This model demonstrates that it is not the planetary eccentricity which is
responsible for producing the disk eccentricity but that it is rather
a genuine instability. This conclusion is confirmed by a model
with $M_{p} =2 M_{Jup}$ and $e_p = 0.05$ which (for the standard viscosity)
does not produce an eccentric disk.
\subsection{The two equilibrium states for an $\alpha$ type viscosity}
\label{subsect:alpha}
To illustrate the effect under different physical conditions we
present additional simulations using a slightly different setup.
Here, we consider a planet moving inside a disk at a radius of 0.35
AU, assuming that the inner disk has been cleared already. The outer
radius of the computational domain lies at 1.2 AU, and the inner one
at 0.25 AU. The scale height of the disk is $H/r = 0.05$, and for the
viscosity we use here as an alternative an $\alpha$-prescription, with
a constant value of $\alpha = 0.01$. In these models we have used a
planetary eccentricity of $e_p = 0.01$ which is typically found in
models of embedded planets that follow the orbital evolution. As shown
above this value of $e_p$ has no influence on the transition to the
eccentric disk state. The remaining setup is similar to the models
described above. The viscosity may be on the large side for
protoplanetary disks but has (in combination with the lack of the
inner disk) the clear advantage of speeding up the simulations
considerably, which allows us to reach, with reasonable computational
effort, the quasi-equilibrium states in which global quantities such
as mass and energy no longer vary in time. This alternative setup
has been used recently in a paper modeling the resonant system GJ 876
and it is described in more detail in
\citet{2005A&A...437..727K}. Here we describe additional results
concerning details of the eccentric disk state.
\begin{figure}[ht]
\begin{center}
\resizebox{0.98\linewidth}{!}{%
\includegraphics{3914fg6.eps}}
\end{center}
\caption{
The dependence of
the accretion rate onto the planet
(in dimensionless units) on the planetary mass
for relaxed quasi-equilibrium configurations.
Results are displayed for
models using an $\alpha = 0.01$ viscosity.
}
\label{fig:mdisk-mp}
\end{figure}
For these $\alpha$-models we vary the planet-star mass ratio $q$ from
$1 \cdot 10^{-3}$ to about $7 \cdot 10^{-3}$. In all cases the models
are evolved until a quasi-stationary state has been reached. As
already seen above for the constant viscosity case, also in this case
the disk changes its structure from circular for small planetary
masses to eccentric for large planetary masses. Here the transition
occurs at a larger planetary mass because of the higher effective
viscosity.
In Fig.~\ref{fig:mdisk-mp} we display the mass
accretion rate onto the planet as a function of the planet mass.
There is a strong jump in the magnitude of the accretion rate at a
critical planetary mass $q_{crit} \approx 5.25 \cdot 10^{-3}$,
exactly at the point where the
disk switches from circular to eccentric.
For small planetary masses $q < q_{crit}$ the mass accretion rate falls
off with increasing planetary mass, because upon increasing $M_{p}$
the stronger gravitational torques will deepen the gap and reduce the
accretion rate \citep{1999ApJ...514..344B, 1999ApJ...526.1001L}.
However, when the disk turns eccentric the gap edge periodically
approaches the planet and it may even become engulfed in the disk
material for sufficiently large eccentricity (see
Fig.~\ref{fig:sigma2d}). Consequently, the mass accretion rate onto
the planet is strongly increased allowing for more massive planets.
This sudden change in the accretion rate is reminiscent of a {\it phase
transition} where the ordering parameter is given here by the
planetary mass. Test simulations have shown that the obtained
equilibrium structure does not depend on the initial configuration
(e.g. density profile, initial mass in the disk) but is solely given by
the chosen physical parameters. As shown above the transition from
the non-eccentric state to the eccentric state, which is here a
function of only the planetary mass, also depends on the viscosity and
temperature of the disk, which we have held fixed in this model
sequence.
Similarly to the accretion rate, the total disk mass contained in the system
also changes abruptly at $q_{crit}$ as
a consequence of the applied boundary conditions
at $r_{max}$. These are chosen such that the disk relaxes towards
its initial conditions at the outer boundary, i.e. the value of the surface
density is fixed at that point. Upon increasing the planet mass
the gap becomes more pronounced and disk mass is pushed towards the outer
boundary, increasing the density there. At the onset of the eccentric
state this relation changes abruptly.
\begin{figure}[ht]
\begin{center}
\resizebox{0.98\linewidth}{!}{%
\includegraphics{3914fg7a.eps}}
\resizebox{0.98\linewidth}{!}{%
\includegraphics{3914fg7b.eps}}
\end{center}
\caption{
Gray scale plots of the surface density $\Sigma$ for the relaxed state
for two different planetary masses: {\bf a}) $q = 4.5 \cdot 10^{-3}$
and {\bf b}) $q = 5.9 \cdot 10^{-3}$ calculated with RH2D.
Due to the higher planetary mass much stronger wave-like disturbances are
created in the density.
}
\label{fig:sigma2d}
\end{figure}
The existence of the two equilibrium states of the disk is further
illustrated in Fig.~\ref{fig:sigma2d} where we display gray scale
plots of the surface density $\Sigma$ for the relaxed state
for two different mass ratios ($q = 4.5$ and $5.9 \cdot
10^{-3}$) in a $r - \varphi$ representation. While for the lower mass
case ($q = 4.5 \cdot 10^{-3}$) the disk structure remains quite
regular, the second high mass case ($q = 5.9 \cdot 10^{-3}$) shows a
strongly disturbed disk which has gained significant eccentricity ($e
= 0.2$), and where the gap edge also becomes highly deformed (compare to
Fig.~\ref{fig:equil}).
\begin{figure}[ht]
\begin{center}
\resizebox{0.98\linewidth}{!}{%
\includegraphics{3914fg8a.eps}}
\resizebox{0.98\linewidth}{!}{%
\includegraphics{3914fg8b.eps}}
\end{center}
\caption{
{\bf a)} The time dependence of the total radial kinetic energy of the disk
in the computational domain for four different planet masses.
{\bf b)} the growth rate of the eccentric disk mode as a function of
the planetary mass. The superimposed straight line has a slope
of $\tau \propto {M_{p}}^{-2.4}$.
}
\label{fig:ekingrow}
\end{figure}
\subsection{Eccentricity growth rates}
The growth of the eccentricity of the disk depends primarily on the
mass of the planet. To measure the speed of the increase we analyze
the time dependence of the total radial kinetic energy $E_{kin,rad}$
in the models, because this is a quantity most readily available. In
the top panel of Fig.~\ref{fig:ekingrow} we display the $E_{kin,rad}
(t)$ for four different planet masses. For a low mass of $M_{p} =2
M_{Jup}$ no growth is visible but for larger planets the growth time
shortens upon increasing $M_{p}$. From the growth of $E_{kin,rad}
(t)$ we estimate visually the growth-times $\tau$ as a function of
planetary mass (lower panel of Fig.~\ref{fig:ekingrow}). Clearly, for
more massive planets the disk will turn eccentric much faster. From
the plot we may estimate a growth rate $\gamma = 1/\tau \propto
{M_{p}}^{2.4}$, a relation which is indicated by the additional
straight line in the graph. This dependence on planetary mass is
somewhat stronger than that estimated on theoretical grounds
\citep{2001A&A...366..263P}.
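
The growth times quoted above were estimated visually from these
curves; an automated alternative is a least-squares fit to $\log
E_{kin,rad}$ over the exponential window (an illustrative sketch; the
factor of two reflects the kinetic energy growing at twice the rate of
the mode amplitude):
\begin{verbatim}
import numpy as np

def growth_rate(t, e_kin, t_start, t_end):
    """Fit E_kin(t) ~ exp(2 t / tau) over [t_start, t_end] and
    return gamma = 1/tau of the eccentric mode."""
    mask = (t >= t_start) & (t <= t_end)
    slope, _ = np.polyfit(t[mask], np.log(e_kin[mask]), 1)
    return slope / 2.0
\end{verbatim}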
\begin{figure}[ht]
\begin{center}
\rotatebox{270}{
\resizebox{0.98\linewidth}{!}{%
\includegraphics{3914fg9.eps}}}
\end{center}
\caption{ The strength of several modes in the disk as a function of
time. The solid line refers to the global disk eccentricity
$S_{1,0}$. In the exponential growth regime the time derivative of
the eccentricity (dotted line) is proportional to the (2,1) wave
mode (short-dashed line). }
\label{fig:res}
\end{figure}
\subsection{Theoretical analysis}
The observed growth of the disk eccentricity in our simulations resembles
that found by \citet{2001A&A...366..263P} for very massive planets
with $M_{p} \ga 10 \, M_{Jup}$. The effect can be explained by a tidally
driven eccentricity through resonant interaction of the disk with
particular components of the planet's gravitational potential
\citep{1991ApJ...381..259L}. Using cylindrical coordinates $(r,
\varphi)$ we decompose the potential of the planet, which is on a {\it
circular orbit}, in the form
\begin{equation}
\label{eq:potent}
\Phi_p (r, \varphi) = \sum_{m=0}^{m=\infty} \phi_m(r)
\cos [ m ( \varphi - \Omega_p t) ]
\end{equation}
where $\Omega_p$ is the angular frequency of the planet.
The response of the disk has the form
\[
\propto \exp [ i ( k \varphi - l \Omega_p t) ]
\]
The planetary potential produces tides in the disk, which interact
with an initially small eccentric disk. The $m$-th Fourier component
of the potential (in Eq.~\ref{eq:potent}) excites an eccentric
Lindblad resonance in the {\it outer disk} where the rotation period
of the disk is $\Omega = {m\over{m+2}} \Omega_P$ which corresponds to
the mode $(k,l) = (m+1,m)$ \citep{1991ApJ...381..259L}. Hence, for an
eccentric ($m=1$) perturbation the radial location lies at the outer
1:3 resonance at $r = 2.08$. As the mass of the planetary companion
increases, the gap it opens in the disk will be deeper and
wider. Already in \citet{1992PASP..104..769A} it was suggested that
for sufficiently wide gaps, eccentricity growth can be induced by
interaction at the 1:3 resonance in the outer disk, However, for
smaller planet masses this is damped by other resonances which are
listed in \citet{2003ApJ...585.1024G, 2004ApJ...606L..77S}. The main
contributing eccentricity-damping resonances are the co-orbital
resonances and the resonances located at the outer 1:2 resonance.
Only if the gap is deep and wide enough these two resonances can no
longer cancel the eccentricity-exciting effect of the interaction at
the 1:3 resonance. The radial surface density profiles for
simulations with different planet masses at 2500 orbits have been
displayed in Fig.~\ref{fig:sig1-mp}. As can be seen, only for planet
masses larger than approximately 3 $M_{Jup}$ the gap is sufficiently
cleared at the 1:2 resonance ($r \approx$ 1.58) to allow for an
eccentricity increase of the disk.
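
For reference, the quoted resonance radii follow directly from
Keplerian rotation, $\Omega \propto r^{-3/2}$:
\[
\frac{r_m}{r_p} = \left( \frac{\Omega_p}{\Omega} \right)^{2/3}
= \left( \frac{m+2}{m} \right)^{2/3},
\]
so that $m=1$ gives $r = 3^{2/3} \approx 2.08$ (the 1:3 resonance),
while the outer 1:2 resonance ($\Omega = \Omega_p/2$) lies at
$r = 2^{2/3} \approx 1.587$, consistent with the values quoted above.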
Theoretical analysis in \citet{1991ApJ...381..268L} defines the total mode
strength $S_{k,l}$ as
\[
S_{k,l} = (S^2_{cos,cos,k,l} + S^2_{cos,sin,k,l}
+ S^2_{sin,cos,k,l} + S^2_{sin,sin,k,l})^{1/2}
\] with $S_{f,g,k,l}$, defined in the inertial frame, given by
\begin{eqnarray*}
S_{f,g,k,l}& = & {2\over{\pi M (1+\delta_{k,0})(1+\delta_{l,0})}}\int\limits_t^{t+2\pi}dt'\int dr \int\limits_0^{2\pi} r d\theta \\ & & \times \Sigma (r,\theta,t') f(k\theta)g(lt')
\end{eqnarray*}
In his analysis it is shown that the time derivative of the
$k=1, l=0$ mode $S_{1,0}$ is given by the
$k=2, l=1$ component:
\begin{equation}
\label{eq:proport}
\frac{d S_{1,0}}{d t} \propto S_{2,1} \cdot S_{1,1}
\end{equation}
The evolution of the relevant mode strengths for a model with
$q=0.005$ is displayed in Fig.~\ref{fig:res}. The amplitude of the
global eccentric mode ($k=1, l=0$) shows exponential growth (see
Fig.~\ref{fig:res}, solid line). Furthermore, Eq.~(\ref{eq:proport})
is confirmed directly by comparing the $S_{2,1}$-mode (short-dashed
line) and the numerically obtained derivative of the eccentricity,
i.e. $d S_{1,0}/d t$ (dotted curve). As can be seen from the plot,
$S_{1,1}$ is constant as suggested by the theoretical analysis of
\citet{1991ApJ...381..259L}.
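
As an illustration of how these mode strengths can be evaluated from a
sequence of surface density snapshots, the integrals may be replaced by
discrete sums (a minimal Python sketch with our own array conventions;
the snapshot window is assumed to span one orbital period, as in the
definition):
\begin{verbatim}
import numpy as np

def mode_strength(sigma_t, r, theta, t, k, l, disk_mass):
    """S_{k,l} from snapshots sigma_t[i_t, i_r, i_theta]; the four
    cos/sin components are combined in quadrature as in the text."""
    dr, dth, dt = r[1] - r[0], theta[1] - theta[0], t[1] - t[0]
    norm = 2.0 / (np.pi * disk_mass
                  * (1 + (k == 0)) * (1 + (l == 0)))
    s2 = 0.0
    for f in (np.cos, np.sin):          # angular factor f(k theta)
        for g in (np.cos, np.sin):      # temporal factor g(l t')
            integ = np.einsum("trh,h,r,t->", sigma_t,
                              f(k * theta), r, g(l * t))
            s2 += (norm * integ * dr * dth * dt) ** 2
    return np.sqrt(s2)
\end{verbatim}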
The good agreement of our results with theoretical expectations
supports our conclusion that the mechanism for eccentricity growth is
that described by \citet{1991ApJ...381..259L} and
\citet{2001A&A...366..263P}. In our simulations growth will start
after the disk has settled sufficiently and the gap has been cleared,
a process which occurs on viscous time scales. Our numerical growth
rates during the eccentricity increase have been estimated from the
time evolution of total radial kinetic energy
(Fig.~\ref{fig:ekingrow}).
\section{Conclusions}
We have performed numerical time dependent hydrodynamical calculations
of embedded planets in viscous accretion disks. During the evolution
the planet is held fixed on a circular orbit, and the whole system is
evolved in time until a quasi-equilibrium state has been reached. In
contrast to previously existing simulations of this problem we have
extended the evolutionary time to several thousand orbits of the
embedded planet for a whole range of different planetary masses.
We find that beyond a certain critical mass of the planet the
structure of the disk changes from a circular to an eccentric
state. For typical viscosities in protoplanetary disks $\nu =
10^{-5}$ (or $\alpha \approx 0.004$) the transition to the eccentric
case occurs already for critical masses of $M_{p} = 3 M_{Jup}$.
Through a modal analysis we demonstrate that the eccentric ($k=1,
l=0$) mode in the disk is indeed driven by the ($k=2, l=1$) wave mode
which is excited at the outer 1:3 Lindblad resonance. The numerically
inferred growth rate of the unstable eccentric disk mode is roughly
proportional to ${M_p}^{n}$ with $n=2.4$, which is slightly larger
than the predicted value of $n=2.0$
\citep{1991ApJ...381..259L,2001A&A...366..263P}. The discrepancy is
most likely due to a change in the density structure of the gap for
different planetary masses. For small masses $M_{p} = 2 M_{Jup}$ no
eccentricity growth has been found. Here the damping effects of disk
viscosity and pressure keep the disk in the circular state.
Upon increasing the planetary mass the eccentricity eventually
saturates at a value of $e \approx 0.25$.
The excitation of eccentric disk modes by massive companions has been
studied within the framework of Cataclysmic Variable stars as an
instability of the inner disk
\citep{1991ApJ...381..259L,1991ApJ...381..268L}. In those cases the
change in viscous dissipation induced by the slow precession of the
disk is presently the preferred mechanism to explain the observed
superhumps in the light curve of some systems. That the same process
is also applicable to (outer) disks around an embedded protoplanet has
been confirmed by \citet{2001A&A...366..263P} in their study of very
massive planets.
In their simulations a much larger threshold mass
($\approx 10-20 {\rm M\subscr{Jup}}$) has been found. However, their
simulations were run only for 800 planetary orbits or less,
which is not sufficient to see growth for small mass planets
considering the long growth time of
the eccentric mode.
The change in the state of the disk has significant consequences for
the mass accretion rate onto the planet. For circular disks the
width of the gap widens upon an increase in the planetary mass, which
eventually shuts off further accretion of disk material. The maximum
mass a planet may reach by this process is around $5 {\rm M\subscr{Jup}}$
\citep{1999ApJ...514..344B, 1999ApJ...526.1001L}. We suggest that
through the excitation of the eccentric mode in the disk the planet
can reach larger masses more readily, as there are quite a few systems
with (minimum) planetary masses larger than $5 M_{Jup}$. The
influence an eccentric disk might have on the evolution of a pair of
planets engaged in a 2:1 resonance has been analyzed recently by
\citet{2005A&A...437..727K}. Here, changes in the libration amplitude
of the resonant angles are to be expected.
It has been suggested that the gravitational back reaction of an
embedded planet with the surrounding disk can lead to an increase in
the orbital eccentricity of the planet \citep{2003ApJ...585.1024G,
2003ApJ...587..398O}, and may serve as a possible mechanism to explain
the observed high eccentricities in extrasolar planetary systems. In
the present work the gravitational back reaction of such an eccentric
disk on the planetary orbit has not been analyzed, and remains to be
studied in the future. The magnitude of the reachable eccentricity
depends on the absolute physical mass of the ambient disk. Through
numerical simulations \citet{2001A&A...366..263P} find that a
significant increase in planetary eccentricity is only seen for a
planet mass above $10 \, M_{Jup}$. However, even in this case the
maximum eccentricities do not increase beyond $e=0.25$. Additionally,
the evolution time of the models was very short and did not allow a
study of the long-term evolution of the eccentricity.
As the effect of disk eccentricity scales with planet mass at least as
$\propto {M_{p}}^{2.4}$ the effect is most pronounced for very massive
planets. However, in that case it is also more difficult to induce
high planetary eccentricities. Hence, it is very questionable if the
back reaction of the disk can produce the observed high eccentricities
found in the surveys.
The present study is only two dimensional and has not included any
thermal effects such as radiative cooling or transport. Since in two
dimensional calculations the gravitational effect between planet and
disk tends to be over-estimated (as the disk is confined to the
equatorial plane) one might expect a reduced effect in full
three-dimensional simulations. But the very low value of the critical
transition mass leaves sufficient room for this effect to play an
important role in the growth of extrasolar planets.
\begin{acknowledgements}
We would like to thank Stephen Lubow, Doug Lin and Richard Nelson
for stimulating discussions during the course of this project.
The work was sponsored by the EC-RTN Network {\it The Origin of Planetary Systems}
under grant HPRN-CT-2002-00308.
\end{acknowledgements}
\bibliographystyle{aa}
\section{Introduction}
NGC 6822 is a typical Magellanic dwarf Irregular galaxy located
at 500 kpc from the Sun, and after the Magellanic Clouds, is our closest
neighbor more luminous than M$_V$ = --16.
The optical appearance of this galaxy is dominated by a bar, mapped
by
\citet{hod77}, about 8$'$ long and with a position angle (PA)of 10$^\circ$
while its HI content forms a huge disk at a PA $\sim$ 130$^\circ$ \citep{wel03}.
Dwarf Irregular galaxies (dIrrs) and dwarf spheroidal/elliptical
galaxies (dSph/dEs)
differ primarily by their baryonic content, with the virtual
absence of a stellar population younger than a crossing time
and no significant HI disk in dSph/dEs; the known exceptions, such as NGC 205,
bear marks of a tidally disturbed past.
dIrrs, on the other hand,
are marked by a photometrically dominant population of young stars
unevenly distributed over the inner portions of a pronounced HI disk that
often extends (at 10$^{19}$ cm $^{-2}$) well beyond their Holmberg radii.
In terms of their surrounding environment the dSph/dEs are found concentrated
toward regions of high galaxy density in clusters where mean times to
harassing encounters are significantly less than a Hubble time. The dIrrs
are found preferentially in isolated environments where the mean time for
a tidally harassing encounter is long, so that for their intrinsic observed
velocities the likelihood of encountering a ``neighbor'' of comparable or
greater mass in less than a Hubble time is estimated to be minute.
A currently fashionable scenario envisions these two classes as sharing a
common origin, with transitions of dIrrs and small spirals into
dSph/dEs types arising from ram
pressure stripping \citep{fab83} or galaxy harassment \citep{moo98}
by more massive neighbors.
With the long relaxation time of dwarf galaxies, the former mechanism
does not significantly modify the disk angular momenta while the latter
may do so when a sufficient number of harassments accumulates.
It is worth contrasting this view with an alternate relying on
numerical simulations initiated by Alar and Juri Toomre
\citep{too72}. These demonstrate that tidally generated
tails of interacting galaxies may leave debris mixtures of some variety.
The internal dynamical signatures of this debris, such as spin,
velocity dispersion and orbital
angular momentum would almost certainly be distinct
from those of dwarf systems of primordial
origin. Observations suggesting the current formation of such stellar
debris during a tidally disruptive encounter are well known
\citep{sch83,bar92}.
Observational properties of a dwarf galaxy are not unambiguously
indicative of which formation mechanism is relevant.
Consequently it becomes risky to
propose cosmological inferences from dynamical traits observed in dwarf
galaxies
whose dynamical histories are no longer uniquely determined. The most
irritating aspect of this conundrum is the difficulty in accounting for
the presence or absence of system spin.
Recent surveys, over large angular areas, of Local Group galaxies have
revealed that these galaxies, be they spirals or dIrrs,
are much bigger than previously thought. M31's halo and disk have recently
expanded \citep{gah05, iba05}, the disk of NGC 300
has been detected up to
10 scale lengths \citep{bla05}. Our Local Group carbon
survey reveals the existence of two distinct scenarios: 1)
a stationary environment, i.e., WLM, NGC 3109, NGC 185
\citep{bat04a,dem03,bat04b}
where all the C stars lie within
a few scale lengths from the center; 2) scenarios where we see
dynamical violence in their past history. The
primary examples of the latter are the Magellanic Clouds \citep{irw91},
NGC 6822
\citep{let02} and IC10 \citep{dem04}
with carbon stars found beyond seven scale
lengths. The LMC is quite extended with fragments of the disk seen
by \citet{gal04} 7$^\circ$ to the north.
For the last twelve years we have identified C stars in several
Local Group galaxies (see \citet{bat05} for a summary)
for eventual
use as dynamical test particles in the outer parts of galaxies
\citep{kun97a,kun97b,kun00},
to facilitate
tracking of angular momentum.
\section{Observations}
The photometric data set, discussed in this letter, corresponds to r$'$ and
i$'$ images taken with MegaPrime/Megacam in Queue
Observing mode in May and June 2004 on the Canada-France-Hawaii Telescope.
The wide field imager Megacam consists of 36 2048 $\times$ 4612 pixel
CCDs, covering nearly a full 1$^\circ \times 1^\circ$ field.
It offers a resolution of 0.187 arcsecond per pixel.
Four slightly overlapping fields were observed, the galaxy located
at the common corners, to cover essentially a 2$^\circ \times 2^\circ$ area
centered on NGC 6822.
The data distributed by the CFHT have been pre-reduced, corrected for
bias, flat fielded, etc.
The photometric reductions were done by Terapix, the data reduction
center dedicated to the processing of extremely large data flow. The Terapix
team, located at the Institut d'Astrophysique de Paris, matches and stacks all
images taken with the same filter and, using SExtractor \citep{ber96},
provided magnitude calibrated
catalogues of objects in each of the combined images.
The spectroscopic data consist of two sets of observations. Spectra
of carbon stars (identified by \citet{let02})
were obtained with two telescopes.
Fifty stars were observed with the WFCCD spectrograph, in
its \'echellette mode,
at the du Pont 2.5 telescope of Las Campanas Observatory in August 2002.
The FWHM resolution is 1.7\AA\ over the spectral region 7850 to 8760 \AA.
Sixty stars were observed with DOLORES multi-object spectrograph
(FWHM = 3.1 \AA) attached to the
Telescopio Nazionale Galileo located on Cerro de Los Muchachos, La Palma.
\section{Isodensity map}
The surface density of stars in the direction of NGC 6822 is high
because of its relatively low Galactic latitude (b = --18.4$^\circ$).
In order to increase the
contrast between its low density periphery and the foreground field,
we selected only stars with (r$'$ -- i$'$)$_0$ colors and magnitudes
corresponding to the NGC 6822 red giant branch (RGB) stars. The reddening
of the entire field has previously been mapped to deredden each star
\citep{dem05}. This giant star selection yields
some 150,000 stars, most of them members of NGC 6822.
The whole field is then covered by a 50 $\times$ 50 pixel wide grid
and stars are counted over a circular 500 pixel sampling area, centered
on each intersection of the grid. This is done to smooth out major
irregularities (mainly due to bright foreground stars that locally prevent
the detection of fainter members of NGC6822). This density map is
transformed into a density image that is analyzed with
IRAF/STSDAS/analysis/isophote/ellipse
to fit isodensity ellipses, determine their position angles and
ellipticities. This technique has previously been employed by us to
map the structure of IC 10 \citep{dem04}.
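
A sketch of this counting scheme (Python with NumPy/SciPy; the
aperture radius is illustrative, since only a ``circular 500 pixel
sampling area'' is specified):
\begin{verbatim}
import numpy as np
from scipy.ndimage import convolve

def density_map(x_pix, y_pix, shape, grid_step=50, ap_radius=250):
    """Count RGB candidates (integer pixel coordinates x_pix, y_pix)
    inside a circular aperture centred on every node of a coarse
    grid: convolve the star image with a circular top-hat kernel,
    then sub-sample every grid_step pixels."""
    image = np.zeros(shape)
    np.add.at(image, (y_pix, x_pix), 1.0)     # one count per star
    yy, xx = np.mgrid[-ap_radius:ap_radius + 1,
                      -ap_radius:ap_radius + 1]
    kernel = (xx**2 + yy**2 <= ap_radius**2).astype(float)
    counts = convolve(image, kernel, mode="constant")
    return counts[::grid_step, ::grid_step]   # smoothed density image
\end{verbatim}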
Figure 1 presents the isodensity contours of the RGB stars. Contours
corresponding to 3,10,20,30 sigmas above the average count level are shown.
The inner ellipse fits the 3$\sigma$ contours while the larger one
corresponds to 1.2$\sigma$ and is the outermost one identified by the
IRAF ELLIPSE task; it has a semi-major axis of 36$'$.
One can see that the major axis of the ellipse is nearly orthogonal
to the HI disk, represented by the dashed line.
This spheroid density profile is well fitted by a
two-exponential law with scale
lengths of 3.8$'$ and 10$'$, which are interpreted as follows.
Beyond the Freeman radius
of roughly 5 scale lengths (of 3.8$'$) we perceive a change in slope,
a flattening,
that very roughly corresponds to the radius where tidal deformations
from interactions might be expected. N-particle simulations should
eventually clarify this interpretation.
The two ellipses in Fig. 1
are characteristic of the whole family of ellipses of various
major axes than can be traced. Indeed, from 10$'$ to 35$'$ the
PA of the major axis varies from 80$^\circ$ to 65$^\circ$ while
the ellipticity range from 0.24 to 0.38. More details can be found
in our forthcoming paper \citep{dem05}.
This figure shows that the bulk of the spheroidal population stars in NGC 6822,
surrounding its bright central bar is comparable in size to the HI disk,
mapped by \citet{deb00}, but oriented quite differently.
\section{Kinematics}
Radial velocities of 110 carbon stars observed within 15$'$ of the HI major
axis from an earlier survey \citep{let02} are described here. The
spectra covered the spectral domain from 7500 to roughly 9000 \AA, relying on
roughly 50 night sky lines for wavelength calibration, and four template
carbon stars from \citet{tot98}; velocity variability affect
these N-type carbons, limiting the system precision to $\pm$15 km s$^{-1}$.
Their
mean distribution, referred to TI 0357+0908, lies between +10 and
--70 km s$^{-1}$. Figure 2 shows the radial velocities plotted as a function
of position along the spheroid major axis.
Even so, carbon stars as kinematic ``test particles'' afford the advantage
of freedom from contamination the Galactic foreground might otherwise
impose. Telluric absorption features (from primarily H$_2$O and O$_2$) were
used to compensate for possibly uneven illumination of the slit masks that
might otherwise introduce an arbitrary velocity shift.
The spatial distribution of these carbon stars fits the spheroid
better than the HI. Quite apart from this spatial distribution, however,
it was found that the coordinate system x', y' that yielded the minimal
residuals in rotation velocities is directed toward a PA of between 63 and
67 degrees, and so places the rotation axis of the system of carbon stars
at very nearly right angles to that of the HI disk and close to the minor
axis of the spheroid. Figure 3 shows the variation of the dispersion
residuals with trial orientations of the x', y' frame. Our conclusion is
that the carbon stars (1) show no preference for the HI disk and (2)
demonstrate that they form part of a stellar population rotating at nearly
right angles to the HI disk, leading to the suggestion that the HI disk is
a structure more reminiscent of a polar ring.
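
The search for this kinematic major axis can be sketched as follows (a
minimal Python implementation of the scan over trial position angles;
the linear rotation law is our simplifying assumption):
\begin{verbatim}
import numpy as np

def best_rotation_axis(x_east, y_north, v_rad, pa_grid_deg):
    """Scan trial PAs (from north through east); at each PA project
    positions onto the trial major axis x', fit v = v0 + a * x'
    and record the rms of the residuals.  The PA minimising the
    rms defines the kinematic major axis."""
    best = None
    for pa in pa_grid_deg:
        th = np.radians(pa)
        x_prime = x_east * np.sin(th) + y_north * np.cos(th)
        coeff = np.polyfit(x_prime, v_rad, 1)
        rms = (v_rad - np.polyval(coeff, x_prime)).std()
        if best is None or rms < best[1]:
            best = (pa, rms)
    return best                         # (PA in degrees, rms)
\end{verbatim}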
\section{DISCUSSION}
The close similarity between the morphology and kinematics of NGC 6822 and
classical polar ring galaxies (PRG's) suggests analogous formation
scenarios, which we explore briefly (with our thanks to the referee).
Foremost among these similarities are the nearly perpendicular
orientations of two systems of angular momenta. Also, investigators of
PRG's have commented that almost all such systems show indications of
recent formation, possibly within the last two Gyr \citep{whi90,iod03}.
Last, and from our perspective the most significant,
is the severe dissimilarity between the population types of these two
components: one being virtually gas-free, while the other shows little
evidence for the existence of an older component (represented by an RGB).
More explicitly, the absence of carbon stars in the HI disk on the one
hand, and simultaneously the absence of gas and a young stellar population
in the spheroid on the other carries important connotations for time
scales.
The presence of carbon stars in a spheroidal component of NGC 6822
dominated by RGB stars bears similarities with other
spheroids of the Local Group \citep{bat05} even though
major differences must be noted. No other spheroid of the Local
Group shows comparable spin; for NGC 205 (M$_V$ = --16.3) the spin found by
\citet{geh05} is one seventh that detected here, and a comparison
with dwarf ellipticals of the Virgo Group by \citet{geh03} places the
spheroid of NGC6822 far above the upper limit of their sample (their Fig.
4). Comparing the luminosity function of the spheroid's RGB with that of
Fornax \citep{dem94} yields M$_V$ = --14.6, or M$_I$ = --15.8, which
then locates the spin of NGC6822's spheroid
within one standard deviation of the infrared Tully-Fisher relation as
given by \citet{gio97}. This suggests the interpretation that, instead of a spheroid,
we are seeing a
dwarf disk that has lost its original gaseous and young population
component.
An inspection of the
distribution of the young population associated with the HI disk may help
to put constraints on possible scenarios. These stars fill
a narrow elongated ellipse close to the outer HI periphery \citep{bat03}.
The outer
envelope of this distribution is the same, and comparably populated, for
stars of $\sim$ 500 Myr age (as determined from isochrone fittings)
as is found
for extremely young stars of $\sim$ 50 Myr, suggesting that the last traces
of transient tidal phenomena had extinguished completely at some prior epoch,
leading to the conclusion that whatever tidal event produced the polar ring
phenomenon must be significantly older.
In their descriptions of the classical PRG's \citet{whi90} comment
on the population dissimilarity between the two components. We note also
an observation from \citet{too72} that strikes us as crucially
relevant. To start a merger or accretion process that initiates a hiatus or
break of duration longer than a crossing time of the spheroid or the polar
ring without going to completion requires incredibly finely tuned
dynamical starting conditions of low orbital energies. Such low
orbital energy differences between two neighboring galaxies in a cluster
such as where S0 galaxies are observed is an exceedingly unlikely
occurrence.
More problematic, if what is seen are incomplete mergers,
why are such detained mergers observed only when one member (generally the
more massive) is an almost gas-free system, not far in the Hubble sequence
from the S0 types, and the other always gas rich? If the orbital
energetics of such finely tuned merging encounters are the determining
factor for the formation of PRG's, then all Hubble types should be
represented, and this is not seen.
From our work of carbon stars as dynamical test particles in interacting
disruptive encounters between the Magellanic Clouds, another formation
scenario for PRG's, not treated by \citet{bou03} in their
exploration of likely formation processes, strikes us as capable of
generating ``gentle'' merger conditions thought significant by the Toomre's
and simultaneously to account for the population dissimilarity between the
two components. In this concept a disk galaxy striking the outer envelope
of a more massive perturber experiences a purely gravitational though
relatively mild impulse to its stellar component, but additionally another
incremental impulse to its gaseous component.
In the case of the
SMC traversing the LMC disk, the passage decreased the orbital angular
momentum of the gaseous component more than the purely gravitational
tidal impulse alone, leaving the gas lagging behind the stellar
component by almost 2 kpc some 300 Myr later
\citep{kun00}; other manifestations separating the gas component
trajectories from those of the stellar are apparent in much of the region
between the Magellanic Clouds. We recall the case of UGC7636, a dwarf
galaxy which, after interacting with the elliptical NGC 4472 left its
entire gas component several system diameters behind in orbit \citep{san87}
a more aggravated interaction of similar sort. Without
asking with which of the two separated components of UGC7636 a DM halo
might remain, we note that as an instance scaled somewhere in between the
SMC episode and that of UGC7636, the PRG's syndrome would appear best
represented by an incomplete tidal disruption in which the PRG hiatus is
now more easily placed into context phenomenologically, accounting
entirely for the dissimilarity between the population contents of the two
components.
We conclude by noting that whichever scenario for PRG formation one prefers,
encounters with massive neighbors are always needed. This is quite a puzzling
circumstance since NGC 6822 is considered to be a typical isolated dwarf galaxy.
Further investigations are needed to solve the riddle of the missing culprit.
\acknowledgments
This research is funded in parts (S. D.) by the Natural
Sciences and Engineering Research Council
of Canada.
{\it Facilities:} \facility{CFHT (Megacam)}, \facility{Du Pont (WFCCD)},
\facility{TNG (DOLORES)}
\section{Introduction}
Along with the significant growth of computing power, complicated models have become available for problems with many degrees of freedom, a trend which, in recent years, has been further popularized by the progress in deep learning research.
People are generally interested in analyzing large-scale high-order tensors and in discussing their capability to capture complex relations.
Efficient tensor models are desired to solve real-life problems with fewer adaptive parameters.
Despite the proliferation of theoretical work on tensors, there has long been a misunderstanding about the modeling power of tensors.
Given a tensor model, there are two related yet completely different quantities: its tensor complexity and its model capacity.
While the latter is the concern of most research, studies frequently analyze the former in its place.
Briefly speaking, a tensor with large complexity is not guaranteed to be a model with sufficient capacity.
This confusion motivates the current work, which delivers a comparison between these two perspectives, including problem setup, theoretical analysis, and related techniques.
More specifically, in the field of tensor analysis, people are generally interested in efficient low rank representations of high order tensors. There are mainly two different sets of problems:
\begin{enumerate}
\item
One does not try to approximate a specific tensor, but is more interested in finding an efficient model space.
\item
One aims at approximating a specific tensor, given exact information about tensor entries.
\end{enumerate}
For the first set of problems, one focuses more on tensor model structures rather than on an algorithm to find the optimal approximation; therefore, the model capacity is the foremost concern. In the second set of problems, the ultimate goal is to find the best (or a sub-optimal) estimate from direct information on entries; therefore one composes an explicit algorithm for higher order tensor decomposition, which produces a resulting model structure (e.g. Tucker decomposition producing the Tucker format, and sequential SVD producing the tensor-train format).
Due to the above difference, the theoretical analysis associated with the two sets of problems also differs a lot. On the one hand, to capture the model capacity, one popular choice is the so-called canonical polyadic (CP) rank: a higher CP-rank is usually regarded as a sign of higher capacity. On the other hand, the task of tensor approximation requires that the model class form a closed set, which rules out most model structures containing loops, leaving tree tensor models, which can be optimized with generalized versions of the SVD, more popular in the community.
The major goal of this work is to
clarify a proper analysis scheme for investigating tensor model capacity, and therefore to provide further insight for designing efficient models.
However, we would start by arguing that the CP-rank is not a proper language for this purpose, as tensor complexity and model capacity are conceptually different. Instead, we apply the idea of truncating small-weight Schmidt components, and clarify the assumptions of ``separability'' implied by any low order tensor model structure.
To achieve this, we start by introducing the generalized Schmidt decomposition and finite rank truncation, along with some popular algorithms which help to find a quasi-optimal solution.
Then we introduce the definition of the CP-rank as a generalized version of the matrix rank, which can be used for tensor complexity analysis. We continue by clarifying the difference between tensor complexity and model capacity, and provide a more natural capacity measure: the \emph{separability scaling behavior} (SSB). With this measure, different existing tensor model structures are compared and further insights can be derived for model design in black-box modeling tasks.
\section{Generalized Schmidt Decomposition and Relevant Algorithms}
The problems of multivariate function tensorization (MFT) and tensor approximation (TA) are similar to each other. For MFT, one is usually given a function defined on $\mathbf{\mathcal{I}}\subset\mathbb{R}^L$: $f(x_1, x_2, \cdots, x_L)$, and attempts to decompose it into a product of single variate orthonormal basis functions. The TA problem, on the other hand, aims at decomposing a high order tensor $\mathcal{A}_{s_1s_2\cdots s_L}$ into a product of low order tensors, where $s_i\in [1, D]$.
In TA problems, one needs $D^L$ entries to specify a tensor, while in MFT one instead requires $P^L$, where $P$ represents the number of single variate orthonormal basis functions and in general could be $\infty$.
In general, as the complexity (for both storage and computation) increases exponentially with the ``order'', one is interested in a ``low order decomposition'' to approximate a tensor/function, where the term ``order'' means the number of modes/variables in tensors/functions.
\subsection{General Schmidt Decomposition}
In both problems, the Schmidt decomposition plays the key role, which reveals the interplay (entanglement) between different modes/variables. More precisely, consider any bipartition of modes/variables which leads to a matricization of the original tensor/function:
\begin{align}
\mathcal{A}_{s_1s_2\cdots s_L} &= A_{s_a, s_b}; \nonumber \\
f(x_1, x_2, \cdots, x_L) &= f(x_a, x_b).
\end{align}
One could then apply a Schmidt decomposition on the two separated parts, by finding the left and right singular vectors/functions defined independently on the two parts:
\begin{align}
\quad u^{\alpha}_{s_a}\;\; &\quad ,\quad\;\;\: v^{\alpha}_{s_b} \nonumber \\
\psi^{\alpha}(x_a) &\quad ,\quad \phi^{\alpha}(x_b) \qquad \forall\alpha\in[1, R]
\end{align}
where $u^{\alpha}$ and $v^{\alpha}$ are the left and right singular vectors of the matrix $A$, whose indices are labeled as $s_a$ and $s_b$, running from 1 to $d_a = D^{l_a}$ and $d_b = D^{l_b}$, respectively; while $\psi^{\alpha}(x_a)$ and $\phi^{\alpha}(x_b)$ are the left and right singular functions of the bi-variable function $f(x_a, x_b)$, whose variables are labeled as $x_a$ and $x_b$ respectively (taking values in a bounded region). The index $\alpha$ labels different singular vectors/functions, of which $R$ exist in total: $R$ is bounded by $\min(d_a, d_b)$ in the matrix case used for TA and in general reaches $\infty$ in the function case used for MFT. The resulting Schmidt decomposition then follows:
\begin{align}\label{schmidt}
A_{s_a, s_b} &= \sum_{\alpha=1}^{R} \sqrt{\lambda_{\alpha}}\cdot u^{\alpha}_{s_a} \cdot v^{\alpha}_{s_b}; \nonumber \\
f(x_a, x_b) &= \sum_{\alpha=1}^{R} \sqrt{\lambda_{\alpha}}\cdot \psi^{\alpha}(x_a) \cdot \phi^{\alpha}(x_b),
\end{align}
where $\lambda_{\alpha}$ is called singular values, which is assumed to be arranged in a descending order.
An important feature of this decomposition is that the components are orthogonal to each other, and therefore each spans an independent dimension (subspace). With a properly defined inner product (the dot product for vectors, and the $L_2$-integral on a bounded region for functions), the distance between two arbitrary tensors/functions can then be expressed with merely the coefficients $\lambda_{\alpha}$, which provides a necessary criterion to discuss tensor/function approximation.
Going back to Eq.~\eqref{schmidt}, instead of using all $R$ components, which leads to an exact expression, one could use a truncated expression with only the first $r$ components as an approximation, given that the coefficients (singular values) are in descending order.
The resulting squared $L_2$-error in both cases is then:
\epsilon = \sum_{\alpha=r+1}^{R} \lambda_{\alpha}.
\end{align}
One is usually interested in a low rank approximation, which corresponds to a small number $r$. To achieve an efficient low rank approximation, the descending sequence $\{\lambda_{\alpha}\}$ must vanish fast enough. We could consider the two extreme cases:
\begin{itemize}
\item in the worst scenario, all $\lambda_{\alpha}$'s are equal without vanishing, which leads to the worst possible approximation (corresponding to the ``maximally entangled'' case);
\item in the best scenario, all but the first $\lambda_{\alpha}$ are zero, meaning the tensor/function can be written as a product of two tensors/functions defined in orthogonal spaces (corresponding to the ``disentangled'' case).
\end{itemize}
In more general cases, given an error acceptance threshold $\epsilon$, the minimum number $r$ of components that must be kept to achieve the error threshold suggests the difficulty of approximating a tensor/function: the larger the required $r$, the more difficult the tensor/function approximation is.
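
As a concrete illustration (Python/NumPy), one can matricize a small
tensor along a bipartition, truncate its SVD at rank $r$, and verify
that the squared error equals the sum of the discarded $\lambda_\alpha
= \sigma_\alpha^2$:
\begin{verbatim}
import numpy as np

A = np.random.rand(4, 4, 4, 4)            # toy order-4 tensor
M = A.reshape(16, 16)                     # bipartition (s1 s2 | s3 s4)
U, s, Vt = np.linalg.svd(M, full_matrices=False)

r = 5                                     # kept Schmidt components
M_r = U[:, :r] @ np.diag(s[:r]) @ Vt[:r, :]

err2 = np.linalg.norm(M - M_r, "fro") ** 2
print(np.isclose(err2, np.sum(s[r:] ** 2)))   # True
\end{verbatim}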
The above discussion implies that the set of singular values $\{\lambda_{\alpha}\}$ (also called the entanglement spectrum in physics) actually captures \textbf{the ``separability'' of two parts given a bipartition.} The information contained in the spectrum can be extracted at multiple levels:
\begin{itemize}
\item the number of non-zero $\lambda_{\alpha}$'s, i.e. matrix rank, which also relates to the zeroth order R\'enyi entropy;
\item the distribution of $\{\lambda_{\alpha}\}$, which can be further captured by the $L_n$-distance from a uniform distribution (all $\lambda_{\alpha}$'s equal), related to the $n$-th order R\'enyi entropy.
\end{itemize}
This motivates one to use the entanglement entropy to categorize different problems.
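
Both levels of information can be extracted from the singular values
in a few lines (a minimal sketch; here $\lambda_\alpha =
\sigma_\alpha^2$ is normalized to a probability distribution):
\begin{verbatim}
import numpy as np

def renyi_entropy(s, n):
    """n-th order Renyi entropy of the normalised Schmidt spectrum
    p_a = lambda_a / sum(lambda); n = 1 gives the von Neumann
    (entanglement) entropy, n = 0 the log of the rank."""
    p = s ** 2                        # lambda_a from singular values
    p = p[p > 0] / p.sum()
    if n == 1:
        return -np.sum(p * np.log(p))
    return np.log(np.sum(p ** n)) / (1.0 - n)
\end{verbatim}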
\subsection{The (Quasi-)Optimal Approximation of Low Order Decompositions}
\textcolor{black}{The above introduced Schmidt decomposition in general setups not only provides a way to analyze the error in this specific approach, but also implies a method to achieve a quasi-optimal (if not the best) approximation with a low order decomposition. The capability for achieving the "quasi-optimal" approximation is rooted in two facts: the orthogonality of different components, and the exact error expression in Eq\eqref{error_general}. Indeed, \textbf{given rank $r$ for a bipartition approximation (express a tensor/function using two lower order tensors/functions), the best approximation minimizing the Frobenius norm criteria is achieved by truncating the subspace expanded by components indexed higher than $r$ in the Schmidt decomposition.}}
\textcolor{black}{To put it more systematically, low order decompositions involve connecting (contracting) pieces of low order components, i.e. lower order tensors in TA and functions with fewer variables in MFT. There is no algorithm that could find the best, or even the quasi-optimal, approximation (global optima) for generic tensor models. For example, both the CP-decomposition and tensor networks containing closed cycles consist of a tensor set that is not closed, which renders the problem of finding a best approximation ill-posed. However, for tree tensor networks, there is indeed a general recipe to find at least the quasi-optimal approximating tensor: a sequential Schmidt Decomposition (SSD), which corresponds to the high-order singular value decomposition (HOSVD) in tensor analysis field.}
\textcolor{black}{There are different versions of SSD that are associated with different tensor structures. \textbf{Generally speaking, a SSD consists of a sequence of Schmidt decompositions acting on different modes/variables. Each Schmidt decomposition slices out a lower order components that connects with the rest modes/variables through a single-leg tensor contraction.}}
\textcolor{black}{For example, the quasi-optimal approximation of Tensor-Train models can be achieved by slicing one mode/variable each time through a Schmidt decomposition; the quasi-optimal approximation of Tucker-format tensor models can be achieved by applying Schmidt decomposition on each single mode/variable individually; The Hierarchical-Tucker models can be quasi-optimally approximated through a root-to-leaves sequence of Schmidt decompositions.}
\textcolor{black}{The error analysis of these algorithms differs between the tensor and function cases: in MFT problems, the error can be bounded by a certain constant (usually a function of the number of variables and the tensor ranks); while in TA problems, given the tensor ranks, the error can only be bounded by the minimum error of the model structure itself multiplied by a constant (usually a function of the tensor order). Briefly speaking, this is because in MFT there are usually extra assumptions on the smoothness of the function, while in TA, for high order tensors with a finite number of entries, there are no such constraints.}
\section{Tensor-Complexity Analysis: Canonical Polyadic Rank}
The TA problem discussed above describes the case where direct information (the entries) about the target tensor is given, and one is interested in finding a low order tensor approximation. The general procedure contains a sequence of SVDs that subsequently slice off mode clusters, and the final approximation is composed of a series of virtual index contractions.
Beyond the general recipe, another important question is the complexity of a tensor, which determines the difficulty of the approximation in practice. For higher order tensors, the most popular criterion describing tensor complexity is the canonical polyadic rank, which can be viewed as a generalization of the matrix rank to higher order cases. Basically, a tensor with a larger canonical polyadic rank is associated with a higher complexity.
In this section, we introduce this widely used concept.
\subsection{Canonical Polyadic Rank}
The Canonical Polyadic (CP) rank can be regarded as a generalized version of the matrix rank in the case of higher order ($>2$) tensors. We can discuss the matrix rank using the Schmidt decomposition form:
\begin{align}\label{mat_schmidt}
A_{s_a, s_b} &= \sum_{\alpha=1}^{R} \sqrt{\lambda_{\alpha}}\cdot u^{\alpha}_{s_a} \cdot v^{\alpha}_{s_b}, \qquad s_{a, b}\in [1, d_{a,b}].
\end{align}
The number $R$ is the rank of the matrix $A$. As mentioned before, the rank captures the separability of the two parts of a bipartition of modes. For higher order tensors, the CP rank is defined in a similar way, with the bipartition replaced by a multi-partition:
\begin{align}\label{cp}
\mathcal{A}_{s_1s_2\cdots s_L} &= \sum_{\alpha=1}^{R} \sqrt{\lambda_{\alpha}}\cdot v^{\alpha}_{s_1} v^{\alpha}_{s_2}\cdots v^{\alpha}_{s_L},
\end{align}
and $R$ is then called the CP-rank of the tensor $\mathcal{A}$. As the system is now partitioned into multiple modes, it is no longer clear how to define separability in this form. In fact, \textbf{the CP rank is equivalent (up to an exp/log function) to the Schmidt measure of multipartite entanglement.}
In the quantum information field, it is well known that the Schmidt measure cannot distinguish between truly multipartite entanglement and bipartite entanglement.
\subsection{Upper Bound Nature of CP Rank}
Furthermore, we would like to relate the CP rank to the matrix rank. As used above, a popular trick for higher order tensors is matricization through a mode bipartition, i.e.
\begin{align}
\mathcal{A}_{s_1s_2\cdots s_L} \quad \longrightarrow \quad A_{s_a, s_b}.
\end{align}
One could easily prove that \textbf{CP rank $R$ is an upper bound of matrix ranks for all possible matricizations.}
Basically, we prove $rank\big[A^{(a,b)}\big]\leq R$ for any matricization with mode bipartition $(s_a, s_b)$, where $A^{(a,b)}$ is the matricization partitioning the modes $(s_1, s_2, \cdots s_L)$ into $(s_a, s_b)$. For each component in a CP decomposition, the matrix rank is 1, since it is a direct product state and purely separable. Then, due to the linear nature of the matricization operation:
\begin{align}
A^{(a,b)} :=& \bigg[\sum_{\alpha=1}^{R} \sqrt{\lambda_{\alpha}}\cdot v^{\alpha}_{s_1} v^{\alpha}_{s_2}\cdots v^{\alpha}_{s_L}\bigg]^{(a,b)} \nonumber \\
=& \sum_{\alpha=1}^{R} \big[\sqrt{\lambda_{\alpha}}\cdot v^{\alpha}_{s_1} v^{\alpha}_{s_2}\cdots v^{\alpha}_{s_L}\big]^{(a,b)},
\end{align}
we therefore have:
\begin{align}
rank\big[A^{(a,b)}\big] &= rank\bigg[\sum_{\alpha=1}^{R} \big[\sqrt{\lambda_{\alpha}}\cdot v^{\alpha}_{s_1} v^{\alpha}_{s_2}\cdots v^{\alpha}_{s_L}\big]^{(a,b)}\bigg] \nonumber \\
&\leq \sum_{\alpha=1}^{R} rank\bigg[\big[\sqrt{\lambda_{\alpha}}\cdot v^{\alpha}_{s_1} v^{\alpha}_{s_2}\cdots v^{\alpha}_{s_L}\big]^{(a,b)}\bigg]\nonumber \\
&= R.
\end{align}
Thus the CP rank $R$ is an upper bound on the matrix rank over all possible matricizations.
Therefore, although the CP-rank cannot be directly interpreted as "separability", it at least captures the upper bound over all possible matrix ranks. In other words, \textbf{the CP-rank describes the separability of the most inseparable mode bipartition.} In this sense, it can indeed serve as a tensor complexity measure, although the finer details of the interactions (entanglement) among different modes are absent.
\section{Tensor Model Capacity Analysis: Coordinate Separability}
Above we introduced two important concepts:
\begin{itemize}
\item The sequential Schmidt decomposition provides both a class of algorithms for constructing lower order (tree) tensor approximations of high order tensors and the associated error analysis;
\item The CP-rank describes the complexity of a tensor, which is usually the target tensor to be approximated.
\end{itemize}
With the help of the above discussion, we now study the problem that is more crucial in practice: constructing a high-capacity model. Most deep learning research falls into this category: since deep neural network models are usually optimized using gradient-based methods, the goal of changing the model structure is not to design a model that can be cheaply optimized by a novel algorithm, but purely to find a novel model which, by itself, can capture the desired features and dynamics of the downstream task. A meaningful criterion for evaluating model capacity is hence desired.
In many previous studies, the same CP-rank has been used to describe model capacity. We argue that this may not be the best criterion for model analysis; instead, we return to the separability property and describe model capacity using the scaling behavior of bipartition matrix ranks.
\subsection{Difference between Model Capacity and Tensor Complexity}
Firstly, we would like to clarify the difference between the complexity of a tensor and the capacity of a model.
As we discussed earlier, the technique of using lower order pieces to construct higher order tensors relies on separability. In the case where all modes are completely separable, only a linear number of basis vectors (rank-1 tensors) is required; in the case where any two parts are inseparable, it is quite difficult to construct a lower order representation. Therefore, for both tensor complexity and model capacity, we discuss the separability issue.
On the one hand, tensor complexity indicates how difficult it is to describe a tensor. \textbf{The greatest difficulty appears in the bipartition with the highest matrix rank, which in the lower order representation results in a contraction with a higher virtual dimension.}
On the other hand, model capacity should be evaluated by considering the "weaknesses" of the structure. Given a tensor model, one can also consider different mode bipartitions. In this scenario, however, it is the bipartitions associated with lower matrix ranks that matter: \textbf{by applying the corresponding model, one has \emph{assumed} strong separability across these mode bipartitions.}
As proved above, the CP-rank provides an upper bound on the bipartition matrix ranks, and can therefore serve as a primitive and basic description of tensor complexity. For model capacity, however, the CP rank may not be a proper criterion, as it does not contain information about the mode bipartitions with lower matrix ranks.
\subsection{Black-Box Tensor Modeling Problems}
Most generally, we can pose a more straightforward question: given a separability assumption on an $L$-order target tensor $\mathcal{A}_{s_1s_2\cdots s_L}$, what is the minimum universal virtual-leg dimension $R$ that guarantees the target tensor is well-approximated, when using different model structures?
A black-box modeling procedure can be performed to approximate a target tensor under a given separability assumption (or, say, given that information). Given a tensor model structure $\mathcal{M}$, different mode bipartitions can capture interplay (entanglement) of different complexity. However, note that in black-box modeling one in general cannot arrange the different modes/variables so that the target tensor complexity and the model complexity match each other. Instead, to guarantee that a solution can be found, the following relation should hold:
\textcolor{black}{
\begin{align}\label{minmax}
\max_{a\in \mathcal{P}_m}{rank\big[A^{(a, \Bar{a})}\big]} \leq
\min_{a'\in \mathcal{P}_m}{rank\big[A^{(a', \Bar{a}')}_{\mathcal{M}}\big]}, \quad \forall m\in [1, L/2],
\end{align}
where $A^{(a, \Bar{a})}$ is the matricization of the target tensor $\mathcal{A}$ associated with the mode partition $(s_1, s_2, \cdots, s_L) = s_a\cup s_{\Bar{a}}$, and $A^{(a', \Bar{a}')}_{\mathcal{M}}$ is the matricization of the model tensor $\mathcal{A}_{\mathcal{M}}$ associated with the mode partition $(s_1, s_2, \cdots, s_L) = s_{a'}\cup s_{\Bar{a}'}$. Here $\mathcal{P}_m$ is the set of mode bipartitions whose smaller part contains $m$ modes.}
With the above discussion, it becomes quite clear that the model capacity, which is captured on the right hand side, is related to the lower bound of matricization ranks.
We term the relation in Eq.~\eqref{minmax} the \emph{Cannikin's law of tensor modeling}.
\subsection{Strong Separability Assumption in Popular Tensor models}
From the above analysis it is clear that, to analyze the capacity of any given model structure, one should pay particular attention to the mode bipartitions associated with lower matrix ranks. We now analyze some popular tensor models as a further demonstration.
\subsubsection{Tensor-Train Model}
The Tensor-Train models (TT)~\cite{yu} construct higher order tensors in the following form:
\begin{align}
\mathcal{A}_{s_1s_2\cdots s_L} = \sum^{r}_{\{\alpha_i\}} M_{s_1}^{\alpha_1} M_{s_2}^{\alpha_1,\alpha_2}\cdots M^{\alpha_{L-1}}_{s_L},
\end{align}
where each $M$ is an order-3 tensor (except the two boundary tensors, which are of order 2). For simplicity, and w.l.o.g., we consider the case where all virtual bonds have the same dimension $r$.
Firstly, consider any bipartition separating the sequence $(s_1s_2\cdots s_L)$ with one cut, i.e. $(s_1\cdots s_m)\cup(s_{m+1}\cdots s_L), \forall m\in[1, L-1]$, where we label the two disjoint sets as $s_a = (s_1\cdots s_m)$ and $s_{\Bar{a}} = (s_{m+1}\cdots s_L)$. The resulting matrix rank can be calculated as:
\begin{align}
&\; rank\big[A^{(a, \Bar{a})}\big] \nonumber \\
= &\; rank\bigg[\sum^r_{\alpha_m} \bigg(\sum_{\{\alpha_i\}}M_{s_1}^{\alpha_1}\cdots M_{s_m}^{\alpha_{m-1},\alpha_m}\bigg)\cdot \bigg(\sum_{\{\alpha_j\}}M_{s_{m+1}}^{\alpha_{m},\alpha_{m+1}}\cdots M^{\alpha_{L-1}}_{s_L}\bigg)\bigg] \nonumber \\
= &\; rank\bigg[\sum^r_{\alpha_m}U_{s_1\cdots s_m}^{\alpha_m}U_{s_{m+1}\cdots s_L}^{\alpha_m}\bigg] \nonumber \\
= &\; rank\bigg[U_a \cdot U_{\Bar{a}}\bigg],
\end{align}
where the two matrices $U_a$ and $U_{\Bar{a}}$ are contracted through an inner product over the virtual bond $\alpha_m$, which has dimension $r$. We can therefore conclude that such bipartitions produce matrix ranks upper-bounded by $r$.
More generally, each additional cut raises the upper bound on the matrix rank by a factor of $r$. Therefore, the bipartitions associated with the lowest possible matrix rank are those created by one single cut.
Now we interpret the meaning of such mode bipartitions. As shown above, in the matrix expression:
\begin{align}
A^{(a, \Bar{a})}
= \sum^r_{\alpha_m}U_{s_1\cdots s_m}^{\alpha_m}U_{s_{m+1}\cdots s_L}^{\alpha_m},
\end{align}
the virtual bond contraction involves only $r$ terms. This is equivalent to assuming that the interplay between the mode clusters $(s_1\cdots s_m)$ and $(s_{m+1}\cdots s_L)$ can be efficiently captured by only $r$ terms. In the case $r\rightarrow \infty$, this decomposition can be exact, without any error introduced. In practice, however, we are interested in the finite-$r$ case, with the hope that the truncation in $r$ is acceptable given an error threshold $\epsilon$.
More precisely, w.l.o.g., we assume $m\leq \frac{L}{2}$ (hence $D^m\leq D^{L/2}$), and that each original mode $s_i$ can take $D$ different possible values. The dimension of the space spanned by $(s_1\cdots s_m)$ is therefore $D^m$. The $r$-term summation is sufficient to capture the interplay between the two clusters when $r\geq D^{m}$; if $D^m > r$, which is very likely in high order tensor problems, then the $r$-term summation in general misses some of the interplay between the mode clusters (i.e., the truncated terms in the full summation), and eventually introduces errors. \textbf{In other words, by implementing a TT-model with finite virtual dimension $r$, one assumes that each $m$-mode cluster (starting from one end) interacts with the remaining modes through an $r$-term summation.}
To restore an arbitrary $L$-order tensor, the required universal virtual dimension of a TT model would then be:
\begin{align}
R_{TT} = D^{\frac{L}{2}}.
\end{align}
\subsubsection{Hierarchical-Tucker Model}
An $H$-level Hierarchical-Tucker (HT) model constructs higher order tensors in the following form:
\begin{align}
\phi_{\alpha}(h+1, j) = \sum_{\beta_1, \beta_2} \Lambda^{\beta_1, \beta_2}_{\alpha}(h+1,j)
\cdot\bigg[\phi_{\beta_1}(h, 2j-1)\cdot\phi_{\beta_2}(h, 2j)\bigg]. \nonumber
\end{align}
Each $(h, j)$ is a coordinate in the quasi-two-dimensional tree structure, with $h\in[0, H]$ the layer index in the tree and $j\in[1, l_h]$ the translational index within the layer.
The above form represents a bi-HT (binary HT) model: each higher layer tensor is obtained from only two lower layer tensors through an order-3 coefficient tensor $\Lambda(h, j)$. In this case, one easily derives that:
\begin{align}
H = \log_2{L}; \qquad l_h = \frac{L}{2^{h}}.
\end{align}
The zero-th layer mode tensors have the following form:
\begin{align}
\phi_{s_i}(0, i), \qquad \forall i\in[1, L],
\end{align}
which, together with all the order-3 coefficient tensors $\Lambda(h, j)$, determines the large $L$-order tensor. The last layer coefficient tensor can be an order-2 tensor: $\Lambda^{\beta_1,\beta_2}(H, 1)$.
Similar to the analysis of TT-models, we again take all virtual dimensions to be the same, i.e. all three indices of $\Lambda$ run within $[1, r]$ (except at the zeroth layer). The bipartitions with only one single cut then correspond to a relatively large truncation. In fact, a cut slicing off $m=2^h$ modes from one end in general requires virtual dimension $r=D^m$. The cut at the last layer requires $r=D^{L/2}$ for an exact representation of an arbitrary $L$-order tensor, and is also the one that may introduce the largest errors when a finite-dimension truncation is applied. \textbf{In other words, by implementing an HT-model with finite virtual dimension $r$, one assumes that each $2^h$-mode cluster (starting from one end) interacts with the remaining modes through an $r$-term summation.}
To restore an arbitrary $L$-order tensor, the required universal virtual dimension of an HT model would then be:
\begin{align}
R_{HT} = D^{\frac{L}{2}}.
\end{align}
\subsection{Weaker Separability Assumptions}
The above study has clarified the separability assumptions implied by popular tensor model structures. \textbf{Generally, when constructing higher order tensors from lower order ones through virtual bond contractions with finite bond dimension $r$, the model implicitly assumes that the interplay between two mode clusters can be well-approximated by $r$ terms.} This assumption becomes harder to satisfy as the two contracted clusters span a larger space.
We call a separability assumption "\emph{strong}" when the number of summed terms capturing the mode interplay is much smaller than the total dimension of the interplay space, i.e. when the \emph{truncatable} space is large.
In general, a tensor model should avoid strong separability assumptions, unless the target tensor satisfies certain special conditions (e.g. area-law functions in low-energy states of strongly correlated systems). The popular tensor models analyzed above, however, when using a finite tensor rank $r$, have implicitly imposed many strong separability assumptions on the cuts that separate two large clusters.
When no specific information is provided about the target tensor, e.g. when modeling with deep neural networks, one in general expects the interplay between two clusters to become more complicated as the cluster dimension increases. Therefore, when the two separated clusters have a higher dimension, the model structure should also involve more summed terms, which, given a universal virtual bond dimension, implies a larger number of contracted bonds.
More specifically, one aims to find tensor model structures that contain more bond contractions when the separated mode clusters span larger spaces. Suppose the smaller cluster of a bipartition involves $m$ modes; then the number of contracted indices $n$ should increase as $m$ increases.
There can be different scaling behaviors of $n$ as a function of $m$, which in general depend on the targeted tensors. The worst scenario corresponds to the case where $n(m)$ is an exponential function; in this case, however, it is impossible to construct low rank tensor models, as any truncation results in large errors, and we call this an "irreducible problem". The best scenario requires only a constant number $n$ of contracted indices, which is the assumption made by both TT and HT models; however, this is apparently an extremely strong assumption that is difficult to satisfy. \textbf{One is hence interested in model structures with weaker separability assumptions, such that $n(m)$ is a monotonically increasing function that does not grow exponentially.} We use the term \emph{separability scaling behavior} (SSB) for the functional form of $n(m)$. The TT and HT models therefore have constant separability scaling, and the irreducible case has exponential separability scaling.
Two typical intermediate behaviors of $n(m)$ are power-law, $n\sim m^{\alpha}$, and logarithmic, $n\sim \log{m}$.
Given a model structure, if \emph{every} mode bipartition satisfies \emph{at least}:
\begin{itemize}
\item an exponential separability scaling, then we call the model an exponential separable model;
\item a power-law separability scaling with exponent $\alpha$, then we call the model a power-$\alpha$ separable model;
\item a logarithmic separability scaling, then we call the model a logarithm separable model;
\item a constant separability scaling, then we call the model a constant separable model.
\end{itemize}
Among all these classes, the exponential separable models are irreducible, i.e. there does not exist an efficient low rank representation.
Importantly, in the above definition, the term \emph{"at least"} reflects the fact that there may exist bipartitions with more complicated separability scaling behaviors, and emphasizes that in general one should be concerned with the lower bound over all possible scaling behaviors. \textbf{This contrasts with the CP rank analysis of tensor complexity, due to the difference we emphasized earlier: the complexity analysis of a target tensor depends on the most complicated interplay among different modes, which corresponds to the upper bound of the bipartition matricization ranks; while the capacity analysis of a model structure depends on the strongest assumption made about "separability", which corresponds to the lower bound of the separability scaling.}
\section{Tensor Models with Weak Separability Assumptions}
In this section, we introduce a tensor model structure that implies a weaker separability assumption, one that is easier to satisfy in practice than those of the TT or HT models. The model is the Multiscale Entanglement Renormalization Ansatz (MERA), and it belongs to the logarithm separable model category.
\subsection{Multiscale Entanglement Renormalization Ansatz}
MERA was proposed in the field of quantum information to capture more complicated quantum states beyond the area-law scaling of entanglement entropy (EE).
We briefly review MERA here; more details can be found in Evenbly and Vidal~\cite{evenbly}, among others.
The general idea of MERA, different from other TNs, is to use a $(d+1)$-dimensional TN to represent a $d$-dimensional system, where the extra dimension physically represents the flow of the renormalization group (RG). It has been noticed before that the MERA structure is quite similar to a CNN~\cite{yahui}.
There are essentially two major types of tensor blocks in MERA: disentangler tensors and isometry tensors.
A MERA representation of a general high-order tensor follows a hierarchical structure.
Taking the spatial dimension to be 1 and setting the original tensor order to $L=4$, the MERA representation of the tensor can be written as:
\begin{align}
\mathcal{A}_{s_1s_2s_3s_4} &\simeq \sum_{\{q_i, r_j\}}\tilde{\tilde{V}}_{q_1q_2}\tilde{V}^{q_1}_{r_1r_2}\tilde{V}^{q_2}_{r_3r_4}\hat{V}^{r_1r_2}_{s_4s_1}\hat{V}^{r_3r_4}_{s_2s_3},
\end{align}
where each $\hat{V}$ is an order-4 tensor, termed a disentangler, and each $\tilde{V}$ is an order-3 tensor, termed an isometry tensor.
The top tensor $\tilde{\tilde{V}}$ is always of order 2 and can be viewed as a coefficient tensor.
For higher orders with larger $L$, the construction generalizes easily in a hierarchical fashion.
For 1d cases, MERA describes systems whose EE scales with a $\log$-correction, which enters MERA as a result of the spatial coarse-graining in RG.
MERA is hence a natural candidate for $\log$-correction problems, which are more complicated than area-law ones.
\subsection{Separability Assumption in MERA}
Supposing a universal virtual bond dimension, we are interested in the mode bipartition associated with the smallest number of bond contractions, which corresponds to the strongest separability assumption implied by the structure. For a mode sequence $(s_1, s_2, \cdots s_L)$, a bipartition at position $m=2^h$ cuts \emph{at least} $n(m)\propto \log_2{m}$ virtual bonds (either isometry bonds or reshaped disentangler bonds). Since a reshaped disentangler bond has virtual dimension $r^2$, below we can focus on the conservative case where all cut bonds are isometry bonds with virtual dimension $r$ only.
Again, as $m$ increases, the total dimension of the (smaller) part of the bipartition increases as $D^m$. Since, for any form of bipartition, there are at least $\log_2{m}$ cut bonds, each of which contracts $r$ virtual dimensions, the model structure guarantees at least $r^{\log_2{m}}$ terms in the summation capturing the interplay between any two parts. If no truncation is allowed, the required virtual dimension would then be:
\begin{align}
r(m) = D^{\frac{m}{\log_2{m}}}.
\end{align}
To restore an arbitrary $L$-order tensor, the required universal virtual dimension of a MERA model would then be:
\begin{align}\label{mera_max}
R_{MERA} = D^{\frac{L}{2(\log_2{L}-1)}}.
\end{align}
Compared with TT and HT models, the required universal virtual dimensions have the following relation:
\begin{align}
R_{HT} = R_{TT} = \big[R_{MERA}\big]^{\log_2{L}-1}.
\end{align}
\subsection{Black-Box Modeling with Given Separability Assumptions}
Now we provide a quantitative analysis of the question raised in the earlier sections: the black-box tensor modeling problem.
Firstly, we should clarify a reasonable form for the separability assumption about a target tensor. As discussed earlier, without further information one in general expects the interplay between two parts to become more complicated as the size $m$ of the two parts (or of the smaller one, which determines the matricization rank) increases. The complexity of the interplay can be captured by the number of Schmidt components; \textbf{the separability assumption can thus be represented by the maximum number of Schmidt components $N(m)$, a monotonically increasing function of $m$.}
Now we derive the required universal virtual dimension $\chi$ for different models given $N(m)$. For both HT and TT models, according to Eq.~\eqref{minmax}, we require:
\begin{align}
N(m) \leq \chi_{{}_{HT, TT}}, \quad \forall m\in [1, L/2].
\end{align}
As the above inequality should hold for every value of $m$, and recalling the monotonically increasing nature of $N(m)$, the minimal admissible choice is:
\begin{align}\label{chi_ttht}
\chi_{{}_{HT, TT}} = N\bigg(\frac{L}{2}\bigg).
\end{align}
This is in general a large number, due to a weakness in the structure of both TT and HT models: there exist single cuts that bipartition the tensor into two parts with large dimensions.
On the other hand, the situation for MERA is improved, as it only requires:
\begin{align}
N(m) \leq \chi_{{}_{MERA}}^{\log_2{m}}, \quad \forall m\in (1, L/2],
\end{align}
which eventually requires a universal virtual dimension:
\begin{align}\label{chi_mera}
\chi_{{}_{MERA}} = \max_{m\in(1, \frac{L}{2}]}\big[N(m)\big]^{\frac{1}{\log_2{m}}}.
\end{align}
It is obvious that Eq.\eqref{mera_max} is a special case of the above expression when $N(m)$ is an exponential function $D^m$, i.e. the irreducible problems. Comparing Eq.\eqref{chi_ttht} and Eq.\eqref{chi_mera} by taking the (base-2) logarithm on both expressions, we have:
\begin{align}
\log_2{\chi_{{}_{MERA}}} &= \max_{m\in(1, \frac{L}{2}]}\log_2{\bigg(\big[N(m)\big]^{\frac{1}{\log_2{m}}}\bigg)} \nonumber \\
&= \max_{m\in(1, \frac{L}{2}]}\frac{\log_2{N(m)}}{\log_2{m}} \nonumber \\
&\leq \max_{m\in(1, \frac{L}{2}]}\log_2{N(m)} \nonumber \\
&= \log_2{N\bigg(\frac{L}{2}\bigg)}\nonumber \\
&= \log_2{\chi_{{}_{HT, TT}}},
\end{align}
where the inequality uses the fact that $m>1$ in general, as the interplay between a single mode and the remaining parts should rarely be the most complicated one.
The above inequality demonstrates the advantage of MERA compared with HT and TT structures.
\section{Discussion}
We have discussed the problem of tensor model capacity and clarified the difference between tensor complexity and model capacity.
Importantly, a model built around a tensor of large complexity does not necessarily have sufficient capacity if its structure implies a strong separability assumption on the targeted problem.
A Cannikin's law of modeling is also proposed, which states that in the black-box modeling scenario, to ensure a full description of the real-world mechanism, the weakest interaction in the model should be stronger than the most complicated interaction in the task.
The concept of entanglement is introduced into the discussion of tensor analysis, which establishes a natural connection between quantum information and tensor analysis.
Based on the proposed separability criterion, new tensor models may be developed in future studies.
Image inpainting (a.k.a. image completion), which aims
to fill missing regions of an image, has been an active research topic of computer vision for decades.
Despite the great progress made in recent years~\cite{lahiri2020prior,suin2021distillation,zhou2021transfill,yi2020contextual,nazeri2019edgeconnect,iizuka2017globally,liu2018image,xiong2019foreground,ren2019structureflow,liao2021image,xiao2019cisi,yu2020region,yang2020learning,yang2017high,wangimage,pathak2016context,song2018contextual,ren2019structureflow}, image inpainting remains a challenging problem due to its inherent ambiguity and the complexity of natural images. Therefore, various guided inpainting methods have been proposed that exploit external guidance information such as examplar~\cite{kwatra2005texture,zhao2019guided,zhou2021transfill}, sketches~\cite{liu2021deflocnet,yang2020deep,jo2019sc,portenier2018faceshop,yu2019free}, label maps~\cite{ardino2021semantic},~\etc. However, previous work on image inpainting mainly focuses on inpainting background or partially missing objects. The problem of inpainting an entire missing object is still unexplored. In this paper, we study a new guided inpainting task,~\ie shape-guided object inpainting, where the guidance is implicitly given by the object shape. As shown in Fig.~\ref{teaser}, given an incomplete input image, the goal is to generate a new object to fill the hole. It can be used in various practical applications such as object re-generation, object insertion, and object/person anonymization.
This task has a similar input and output setup to the traditional image inpainting task; both take an incomplete/masked image and the hole mask as input to produce a complete image as output. However, previous methods are mainly designed for background inpainting and are not suitable for this object inpainting task.
Early patch-based synthesis methods borrow content from the remaining image to fill the hole. These methods are hardly fit for this task, as they cannot generate novel content.
Recent deep generative inpainting methods should, in principle, be able to inpaint both background and objects, but in practice they still have a strong bias towards background generation~\cite{katircioglu2020self}.
The reason lies in both the training strategy and the model architecture of previous deep learning based approaches.
First, previous methods synthesize training data by simply masking images at random positions, with different regions masked with equal probability. Since the appearance of background patches is usually similar to their surroundings, it is easier to learn to extend the surrounding background to fill a hole than to generate objects.
Second, previous methods formulate image inpainting as a bottom-up context-based process that uses stacked convolution layers to propagate context information from the known region to the missing regions. However, object generation is essentially a top-down process: it starts from a high-level concept of the object and gradually hallucinates the concrete appearance centered around that concept. Without any top-down guidance, it is hard to generate a reasonable object with consistent semantic meaning.
Therefore, in order to find a better solution, we design a new data preparation method and a new generative network architecture for the object inpainting task. On the data side, to overcome the bias towards the background, we incorporate object prior by using object instances as holes in training. For the network architecture, we consider three important goals of object inpainting:
(1) visual coherency between the appearance of generated and existing pixels;
(2) semantic consistency within the inpainted region,~\ie the generated pixels should constitute a reasonable object;
(3) high-level coherency between the generated objects and the context.
To achieve these goals, we propose a contextual object generator (CogNet) with two-stream network architecture.
It consists of a bottom-up and top-down stream that models a bottom-up and top-down generation process, respectively.
The bottom-up stream resembles a typical framework used by previous approaches to achieve appearance coherency. It takes the incomplete image as input and fills the missing region based on contextual information extracted from the existing pixels.
The bridge between the two streams is a predictive class embedding (PCE) module, which predicts the class of the missing object based on features from the bottom-up stream to encourage high-level coherency.
The top-down stream is designed inspired by semantic image synthesis~\cite{isola2017image,park2019semantic} and has a similar framework to it.
It aims to hallucinate class-related object features based on a semantic object map obtained by combining the predicted class and the hole mask. Since the features at all object pixels are generated from the same class label, their semantic consistency can be ensured.
In summary, our contributions are as follows:
\begin{itemize}
\item We explore a new guided image inpainting task,~\ie shape-guided object inpainting.
\item We propose a new data preparation method and a novel Contextual Object Generator (CogNet) model for object inpainting.
\item Experiments demonstrate that the proposed method is effective for the task and achieves superior performance against state-of-the-art inpainting models finetuned for the task.
\end{itemize}
\section{Related Work}
\subsection{Image Inpainting}
Conventional image inpainting methods fill the holes by borrowing existing content from the known region. Patch-based methods search well-matched patches from the known part in the input image as replacement patches to fill in the missing region. Efros~\etal~\cite{efros1999texture} propose a non-parametric sampling method for texture synthesis method that can synthesize images by sampling patches from a texture example. It can be applied for hole-filling through constrained texture synthesis. Drori~\etal~\cite{drori2003fragment} propose to iteratively fill missing regions from high to low confidence with similar patches. Barnes~\etal~\cite{barnes2009patchmatch} propose a randomized algorithm for quickly finding matched patches for filling missing regions in an image. Diffusion-based methods propagate local image appearance surrounding the missing region based on the isophote direction field. Bertalmio~\etal~\cite{10.1145/344779.344972} propose to smoothly propagate information from the surrounding areas in the isophotes direction to fill the missing regions. Ballester~\etal~\cite{ballester2001filling} propose to jointly interpolate the image gray-levels and gradient/isophotes directions to smoothly extend the isophote lines into the holes.
These methods cannot generate entirely new content that does not exist in the input image.
In recent years, driven by the success of deep generative models, extensive research efforts have been put into data-driven deep learning based approaches. This branch of work usually formulates image completion as an image generation problem conditioned on the existing pixels in known regions.
They can generate plausible new content and have shown significant improvements in filling holes in complex images.
The first batch of deep learning based approaches only works on square holes. Iizuka~\etal~\cite{iizuka2017globally} propose to use two discriminators to train a conditional GAN to make the inpainted content both locally and globally consistent. Yu~\etal~\cite{yu2018generative} propose contextual attention to explicitly utilize surrounding image features as references in the latent feature space. Zeng~\etal~\cite{zeng2019learning} propose to use region affinity from high-level features to guide the completion of missing regions in low-level features. Later on, research effort shifted to image completion with irregular holes. Liu~\etal~\cite{liu2018image} collect estimated occlusion/dis-occlusion masks between consecutive frames of videos, use them to generate holes, and propose partial convolution to exploit information from the known region more efficiently. Yu~\etal~\cite{yu2019free} generate free-form masks by simulating random strokes. They generalize partial convolution to gated convolution, which learns to select features for each channel at each spatial location across all layers. Zeng~\etal~\cite{zeng2020high} use object-shaped holes to simulate real object removal cases and propose an iterative inpainting method with a confidence feedback mechanism.
The above deep learning based methods mainly focus on background inpainting. In training, images are masked at random positions, resulting in a bias towards background as background is usually more predictable in most images. In addition, some methods use attention mechanisms to explicitly borrow patches/features from known regions~\cite{yu2018generative,yu2019free,zeng2019learning,zeng2020high,zhang2019residual,liu2019coherent} as in the conventional methods, which can be seen as background prior and will further encourage the tendency to generate background. Some previous works on deep learning based inpainting have touched on topics related to object inpainting. Xiong~\etal~\cite{xiong2019foreground} propose a foreground-aware image inpainting system by predicting the contour of salient objects. Ke~\etal~\cite{ke2021occlusion} propose an occlusion aware inpainting method to inpaint partially missing objects in videos. These methods mainly focus on inpainting partially missing objects.
\subsection{Guided Image Inpainting}
Some works attempt to allow users to provide more guidance to reduce the ambiguity of image inpainting and improve the results. Many types of guidance have been explored, such as examplar images, sketches, label maps, text.
Yu~\etal~\cite{yu2019free} propose DeepFillV2, which can perform sketch-guided image inpainting of general images as well as face images.
Park~\cite{jo2019sc} explore face inpainting with sketch and color strokes as guidance.
Zhang~\etal~\cite{zhang2020text} propose to inpaint the missing part of an image according to text guidance provided by users. Ardino~\etal~\cite{ardino2021semantic} propose to use label maps as guidance for image inpainting. Although the guided inpainting methods \cite{zhang2020text} and \cite{ardino2021semantic} might be able to generate an entire new object if the text or label map about the object are given as guidance, they require the users to provide the external guidance explicitly. In comparison, our method only takes the incomplete image and hole mask as input.
\subsection{Semantic Image Synthesis}
Semantic image synthesis is a sub-class of conditional image generation which aims to generate photo-realistic images from user-specified semantic layouts. It was first introduced by Isola~\etal~\cite{isola2017image}, who proposed an image-to-image translation framework, called Pix2Pix, to generate images from label maps or edge maps.
Zhu~\etal~\cite{zhu2017unpaired} propose CycleGAN to allow training an image translation model on unpaired data with a cycle consistency constraint. Park~\etal~\cite{park2019semantic} propose spatially-adaptive normalization for semantic image synthesis, which modulates the activations using semantic layouts to propagate semantic information throughout the network. Chen~\etal~\cite{chen2017photographic} propose cascaded refinement networks and use perceptual losses for semantic image synthesis. Wang~\etal~\cite{wang2018high} propose Pix2PixHD which improves the quality of synthesized images using feature matching losses, multiscale discriminators and an improved generator. Our method takes inspiration from semantic image synthesis methods to design the top-down stream of the contextual object generator. Unlike semantic image synthesis, where the semantic layouts or label maps are known, our semantic object maps are derived by combining the predicted class and the hole mask.
\subsection{Background-based Object Recognition}
Object recognition is a task to categorize an image according to the visual
contents. In recent years, the availability of large-scale datasets and powerful computers made it possible to train deep CNNs, which achieved a breakthrough success for object recognition~\cite{krizhevsky2012imagenet}.
Normally, an object recognition model categorizes an object primarily by recognizing the visual patterns in the foreground region. However, recent research has shown that a deep network can produce reasonable recognition results with only the background available. Zhu~\etal~\cite{zhu2016object} find that an AlexNet model~\cite{krizhevsky2012imagenet} trained on pure background without objects achieves highly reasonable recognition performance that beats human recognition in the same situations.
Xiao~\etal~\cite{xiao2020noise} analyze the performance of state-of-the-art architectures on object recognition with foreground removed in different ways. It is reported that the models can achieve over 70\% test accuracy in a no-foreground setting where the foreground objects are masked. These works aim to predict only the class of an object from background. In this paper, we show that the entire object can be generated based on the background.
\section{Method}
Given an input image with missing regions, our goal is to fill the missing regions with generated objects. We take a data-driven approach based on generative adversarial networks (GANs)~\cite{goodfellow2014generative,radford2015unsupervised,karras2017progressive,brock2018large,karras2019style,karras2020analyzing}. A contextual object generator is designed to generate, based on the context, objects that not only fit the known region but also carry reasonable semantic meanings.
The generator is jointly trained with a discriminator on a synthetic dataset obtained by masking object regions in real images. We use the discriminator proposed in \cite{karras2019style,karras2020analyzing}. In what follows, we introduce our data acquisition approach in Sec.~\ref{sec:data} and network architecture of the generator in Sec.~\ref{sec:gen}.
\subsection{Data Preparation}
\label{sec:data}
\begin{figure}
\begin{center}
\centering
\includegraphics[width=\textwidth]{fig0.pdf}\\
\scriptsize{\hfill\hfill {Previous approaches} \hfill\hfill {\textcolor{white}{ious app}Ours\textcolor{white}{roaches}} \hfill\hfill}
\caption{Top: input. Bottom: original image and ground-truth. Previous deep learning based inpainting methods generate training data by masking at random positions, which results in a bias towards background generation. We propose to incorporate object prior into training data by masking object instances. }
\label{fig0}
\vspace{-10pt}
\end{center}%
\end{figure}
Most deep learning based image inpainting methods prepare data by masking images at random positions using synthetic masks obtained by drawing random rectangles~\cite{zeng2019learning,yu2018generative,yang2017high}, brush strokes or from a fixed set of irregular masks~\cite{liu2018image,zeng2020high,liu2020rethinking}. Paired training data $\{(x',m),x\}$ can be formed by taking the masked image $x'=x\odot m$ and mask $m$ as input with the original image $x$ as ground-truth.
This data synthesis pipeline can generate a very large dataset for training a powerful deep model capable of completing large holes and dealing with complex scenes.
Although this random masking process produces diverse data with masks on both background and object regions, the trained model often has a strong tendency to generate background, as background is more common and easier to predict than objects~\cite{joung2012reliable,katircioglu2019self}.
In this work, since we aim to train an image completion model to generate objects, the random masking process is not suitable. Therefore, we design a new data synthesis method that incorporates the object prior into training data by using object instances as holes.
For an image $x$, its instance segmentation $\{m^i, y^i\}_{i=1}^c$ can be obtained by manual annotation or using segmentation models, where $m^i, y^i$ are the mask and class of each object instance, $c$ denotes the number of instances. Then $c$ training samples $\{(x'^i,m^i),x\}_{i=1}^c$ can be constructed by masking the image $x$ with each instance mask: $x'^i=x\odot m^i$.
There exist datasets such as COCO~\cite{lin2014microsoft} with manually annotated segmentation masks, which can be used to construct high-quality training samples for object-based image completion. However, these datasets are limited in size and insufficient for representing the complexity of objects in natural images. To obtain larger and more diverse training data, we use instance segmentation models to automatically label a larger dataset with instance masks, complementary to the manually annotated segmentation datasets. Although the automatically annotated masks are less accurate, they still cover most object regions and can thus provide a reasonable object prior.
Fig.~\ref{fig0} compares our training samples, which use object instances as holes, with the randomly generated training samples used in previous approaches.
\subsection{Network Architecture}
\label{sec:gen}
\begin{figure}
\begin{center}
\centering
\includegraphics[width=\textwidth]{fig1.pdf}
\caption{Illustration of the two-stream network architecture. It consists of a bottom-up stream and a top-down stream. The bottom-up stream models the standard image inpainting process, which takes an incomplete image as input to produce a complete image. The predictive class embedding (PCE) predicts the object class label based on features from the bottom-up stream and embeds it into an embedding vector. The top-down stream generates an image conditioned on the semantic object map. The two streams share the same generator. }
\label{fig1}
\vspace{-10pt}
\end{center}%
\end{figure}
In this section, we present the network architecture of the proposed contextual object generator (CogNet).
Unlike the traditional image completion task, which focuses only on the consistency between the inpainted region and the context, object-based image completion also requires the inpainted content to be an object with a clear semantic meaning.
Previous network architectures for image completion are mainly designed as a bottom-up process that propagates information from known regions to missing regions. The generated content can blend naturally into the context but rarely resembles an object, due to the lack of top-down guidance.
To solve this problem, we design a two-stream architecture that combines the traditional image inpainting framework with a top-down object generation process inspired by the semantic image synthesis task~\cite{isola2017image,park2019semantic}. The overall structure is shown in Fig.~\ref{fig1}. Each stream has an independent encoder that takes input from the corresponding domains and interacts with each other through the shared generator.
\subsubsection{Bottom-up Process}
The bottom-up stream $g^b$ follows the standard design of an image inpainting model. It takes an incomplete RGB image $x' \in X$ and the hole mask $m$ as input and produces an inpainted RGB image $\hat{x} \in X$,~\ie $g^b:X\rightarrow X$.
Given an incomplete input image, the encoder extracts hierarchical features from the raw pixels of the known region. It consists of a sequence of $L$ convolutional blocks with a $2\times$ downsampling operator between every two consecutive blocks. For an input of size $N\times N$, the encoder produces a series of feature maps $\{ f^{b,l} \}_{l=0}^{L-1}$ at various scales, where each feature map $f^{b,l}$ has spatial size $\frac{N}{2^l}$. The multi-scale feature maps $\{ f^{b,l} \}$ are then used to modulate the generator features at the corresponding scales through the spatial-channel adaptive instance normalization (SC AdaIN) layers.
\subsubsection{Predictive Class Embedding}
The bottom-up stream can capture the environmental factors that affect the object's appearance, such as color, illumination, and style. However, the class-related information is still missing.
As recent studies~\cite{xiao2020noise,zhu2016object} have indicated, models can achieve reasonable object recognition performance by relying on the background alone. Based on this observation, we propose a predictive class embedding module that maps background features into object class embeddings by learning a background-based object recognition model.
First, the feature $f^{b,L-1}$ from the last block of the encoder is reshaped and transformed by a fully connected layer into a feature vector $h$. Then a linear classifier is trained to predict the object class given $h$ as input by minimizing $\mathcal{L}_c$:
\begin{equation}
\label{eq:loss_cls}
\mathcal{L}_c = \sum_i -t_i \log \hat{t}_i, \mbox{ where } \hat{t} = \sigma (W^c h),
\end{equation}
where $t$ is the one-hot encoding of the true class label; $W^c$ is the weight of the linear classifier; $\sigma$ represents the softmax function; $\hat{t}$ represents the predicted class label. $h$ can be seen as an embedding of the predicted class and is also passed into the SC AdaIN layers.
\subsubsection{Top-down Process}
In most images, the appearance of objects is less predictable from the context than that of the background. Hence the bottom-up process is less effective for object-based image completion.
Therefore, we design a top-down stream that allows the model to hallucinate appearance features from semantic concepts for object generation.
The top-down stream $g^t:Y\rightarrow X$ is designed following the spirit of semantic image synthesis methods,~\ie generating image content from a semantic layout.
Unlike standard semantic image synthesis, where the label maps are known, the top-down stream generates an RGB image based on semantic object maps derived from the predicted class.
More specifically, given the predicted class $\hat{t}$, a semantic object map $y \in Y$ can be derived by combining the predicted class and the hole mask $m$:
\begin{equation}
y_i = \hat{t}_i \cdot m,
\end{equation}
where $y_i$ represents the channel of the semantic object map corresponding to the $i$-th class. Then an $L$-layer encoder with a structure similar to the one in the bottom-up stream encodes the semantic object maps into multi-scale feature maps $\{f^{t,l} \}_{l=0}^{L-1}$. These feature maps are used to modulate the generator feature maps through the SC AdaIN layers to provide spatially aware class-related information to the generator.
\subsubsection{SC AdaIN}
\begin{figure}
\begin{center}
\centering
\includegraphics[width=\textwidth]{fig2.pdf}
\caption{Illustration of the spatial-channel adaptive instance normalization module. It consists of two steps of normalization and modulation in the channel and spatial dimensions, respectively. }
\label{fig2}
\vspace{-10pt}
\end{center}%
\end{figure}
Given the environmental features and the class inferred from the background, there can still be many possible object appearances.
To model the uncertainty in object generation while preserving the information propagated from the encoders, we design the spatial-channel adaptive instance normalization (SC AdaIN) module. Fig.~\ref{fig2} illustrates the structure of an SC AdaIN module.
Given an input image, we obtain the multi-scale feature maps $\{f^{b,l}\}, \{f^{t,l}\}$ from the encoders and sample a random latent code $z \sim \mathcal{N}(0,1)$. The latent code is then transformed by a fully connected network as in~\cite{karras2020analyzing,karras2019style} and concatenated with the class embedding $h$ to form a style code $w$.
For each scale $l$, we normalize and modulate the generator feature map channel-wise using the style code $w$, and position-wise using the encoder features.
Let $X^l$ denote the generator feature map at scale $l$; the modulated feature map $\hat{X}^l$ is produced as follows,
\begin{equation}
\bar{X}^l_{c,x,y} = \frac{X^l_{c,x,y}-\mu^l_{c}}{\sigma^l_c}\cdot \gamma^l(w)_c + \beta^l(w)_c
\end{equation}
\begin{equation}
\hat{X}^l_{c,x,y} = \frac{\bar{X}^l_{c,x,y}-\bar{\mu}^l_{x,y}}{\bar{\sigma}^l_{x,y}}\cdot \bar{\gamma}^l(f^{b,l}+f^{t,l})_{c,x,y} + \bar{\beta}^l(f^{b,l}+f^{t,l})_{c,x,y}
\end{equation}
where $\mu^l_c, \sigma^l_c$ are the mean and standard deviation of $X^l$ in channel $c$; $\bar{\mu}^l_{x,y}, \bar{\sigma}^l_{x,y}$ are the mean and standard deviation of $\bar{X}^l$ at position $(x,y)$; and $\gamma^l(w), \beta^l(w)$ and $\bar{\gamma}^l(f^{b,l}+f^{t,l}), \bar{\beta}^l(f^{b,l}+f^{t,l})$ transform the style code $w$ and the summed encoder feature maps into the modulation parameters at scale $l$.
\section{Experiment}
\subsection{Implementation Details}
We implement our method and train the model using Python and PyTorch~\cite{NEURIPS2019_9015}. The contextual object generator is trained with the perceptual loss~\cite{johnson2016perceptual}, the GAN loss~\cite{gulrajani2017improved}, and the classification loss in Eqn.~\ref{eq:loss_cls}. The detailed network architectures can be found in the supplementary material. The code will be made publicly available after the paper is published.
The model is trained on two A100 GPUs. It takes about a week for training. The inference speed at $256\times256$ resolution is 0.05 seconds per image.
We compare with three state-of-the-art image inpainting methods: DeepFillV2~\cite{yu2019free}, CoModGAN~\cite{zhao2021comodgan}, and RFR~\cite{li2020recurrent}. Since the original models of the compared methods are trained using random masks, it is not appropriate to directly apply the pretrained models to object inpainting. Therefore, to compare with these methods, we retrain each model on the corresponding dataset using the mask synthesis method described in Sec.~\ref{sec:data}. We evaluate performance using the FID~\cite{heusel2017gans} and LPIPS~\cite{zhang2018perceptual} metrics, as they are the most commonly used metrics for assessing the quality of generative models~\cite{lucic2018gans} and image-conditional GANs~\cite{albahar2019guided,huang2018multimodal,shen2019towards}.
\subsection{Datasets}
We train and evaluate our model on three datasets, COCO~\cite{lin2014microsoft}, Cityscapes~\cite{Cordts2016Cityscapes} and Places2~\cite{zhou2017places}, which are commonly used in image inpainting, semantic segmentation, and semantic image synthesis. Note that the segmentation maps are required only during training. At inference time, only an input image and a hole mask are needed.
We use the official training split to train the models and evaluate them on the official validation split. All images are cropped into $256\times 256$ patches during training and evaluation.
The Cityscapes dataset contains segmentation ground truth for objects in city scenes such as roads, lanes, vehicles, and objects on roads. It covers 30 classes collected under different environmental and weather conditions in 50 cities, and provides dense pixel-level annotations for 5,000 images pre-split into training (2,975), validation (500) and test (1,525) sets. Since Cityscapes provides accurate segmentation ground truth, it can be used directly for training our model.
The COCO dataset is a large-scale dataset designed to represent a vast collection of common objects. It is split into a training split of 82,783 images, a validation split of 40,504 images, and a test split of 40,775 images, and contains 883,331 segmented object instances.
The object masks in COCO are given as polygons. To obtain more accurate object masks, we preprocess them using a segmentation refinement method~\cite{cheng2020cascadepsp}.
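To illustrate the data preparation (the full pipeline is described in Sec.~\ref{sec:data}), a COCO instance annotation can be rasterized into an object-shaped hole mask with \texttt{pycocotools}; the annotation path below is a placeholder, and the snippet assumes the image contains at least one instance.
\begin{verbatim}
import numpy as np
from pycocotools.coco import COCO

coco = COCO("annotations/instances_train2017.json")  # placeholder path
img_id = coco.getImgIds()[0]
anns = coco.loadAnns(coco.getAnnIds(imgIds=img_id, iscrowd=False))

# Union of the instance masks; each instance mask can serve as an
# object-shaped hole when constructing a training pair.
masks = [coco.annToMask(a) for a in anns]
hole = np.clip(np.sum(masks, axis=0), 0, 1).astype(np.uint8)
\end{verbatim}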
The Places2 dataset is a large-scale dataset for scene recognition, containing about 10 million images covering more than 205 scene categories.
Since Places2 provides no segmentation ground truth, we generate object masks using a segmentation method~\cite{li2021fully}.
\subsection{Comparison with State-of-the-art Methods}
\subsubsection{Qualitative evaluation}
Fig.~\ref{fig_results} shows the object inpainting results of the proposed method and state-of-the-art methods. Fig.~\ref{fig_diverse} shows the multiple diverse results produced by our method for the same input images.
Existing deep-learning-based inpainting methods mainly focus on appearance coherency between inpainted and known regions and model only the bottom-up generation process, so they do not perform well for object inpainting. Even when trained on the object datasets, their results remain far from satisfactory. As the results show, DeepFillV2 usually generates a colored shape that hardly resembles an object. Benefiting from the powerful StyleGAN architecture, CoModGAN can produce relatively more object-like results, but often without a consistent semantic meaning,~\eg, the horse with giraffe patterns shown in the right column of the third row.
In comparison, our method combines the bottom-up and top-down generation processes to achieve both low-level and high-level coherency between the generated content and its surroundings.
It generates objects that blend naturally into the context in terms of both appearance and semantics: the object appearance is consistent with the environment (\eg, lighting, color, and style) and is well aligned with the corresponding semantic class.
\begin{figure}[t]
\begin{center}
\centering
\includegraphics[width=\textwidth]{results.pdf}\\
\scriptsize{\hfill{Input} \hfill\hfill {DeepFillV2} \hfill\hfill {CoModGAN} \hfill\hfill {Ours} \hfill\hfill {Input} \hfill\hfill {DeepFillV2} \hfill\hfill {CoModGAN} \hfill\hfill {Ours} \hfill}
\caption{Object inpainting results of our method and state-of-the-art methods. Our method can generate objects coherent with the context in terms of both appearance and semantic meanings, while the generated contents of previous approaches seldom resemble reasonable objects.
}
\label{fig_results}
\vspace{-10pt}
\end{center}%
\end{figure}
\subsubsection{Quantitative Evaluation}
Table~\ref{table_lpips} reports quantitative evaluation results on the COCO, Places2, and Cityscapes datasets. Our method outperforms the state-of-the-art methods on all metrics, with an especially large margin in FID.
Since FID measures the distance between the distributions of deep features of generated and real images, the lower FID scores imply that the objects generated by our model follow a distribution closer to that of natural objects.
This further demonstrates the superiority of our method for object inpainting.
\begin{table}[t]
\caption{\small Quantitative evaluation results. }
\vspace{-0pt}
\label{table_lpips}
\small
\begin{center}
\begin{tabular}{c||cc|cc|cc}
\hline
& \multicolumn{2}{c|}{COCO} &\multicolumn{2}{c|}{Places2} &\multicolumn{2}{c}{Cityscapes}\\
Method&FID &LPIPS &FID &LPIPS &FID &LPIPS\\
\hline
CoModGAN &7.693&0.1122 &7.471&0.1086 &8.161&0.0491\\
DeepFillV2 &10.56&0.1216 &8.751&0.1201 &10.56&0.0542\\
RFR &13.38&0.1141 &14.22&0.1125 &15.92&0.0497\\
Ours &\textbf{4.700}&\textbf{0.1049} &\textbf{3.801}&\textbf{0.0928} &\textbf{7.411}&\textbf{0.0458}\\
\hline
\end{tabular}
\end{center}
\vspace{-0pt}
\end{table}
\subsection{Ablation Study}
In this section, we discuss the effect of each component. First, unlike previous work on image inpainting, which generates training data using random masks, we construct specialized training data for object inpainting to incorporate an object prior. Without this prior, the trained inpainting model is usually biased towards background generation and does not generate objects when filling a missing region, as shown in Fig.~\ref{fig_ablation} (b). The predictive class embedding (PCE) extracts class-related information from the context. Without this module, the model trained on object data may still produce object-like content; however, it is challenging to generate a semantically reasonable object without knowing the object's class. As shown in Fig.~\ref{fig_ablation} (c), the appearance of the generated objects is then usually simply taken from nearby regions. For instance, in the second row, the model without PCE generates an object of zebra shape but with the texture of a nearby giraffe.
The top-down stream takes the semantic object mask as input, which provides stronger spatial semantic guidance for object generation. Without this information, the model can only access class-related information from PCE, which is insufficient for hallucinating object appearance, so it still relies on the appearance of the surrounding area. As shown in Fig.~\ref{fig_ablation} (d), although the model without the top-down stream can produce some zebra stripes, the color of the zebra appears to be taken from the surrounding background. Table~\ref{table_ablation_score} reports FID and LPIPS scores with and without each component. The predictive class embedding and the incorporation of the top-down stream significantly reduce the FID by providing class-related information.
\begin{figure}[t]
\begin{center}
\centering
\includegraphics[width=.9\textwidth]{ablation.pdf}
\caption{From left to right are: (a) input, (b) without object training data, (c) without predictive class embedding, (d) without top-down stream, (e) full model. }
\label{fig_ablation}
\vspace{-10pt}
\end{center}%
\end{figure}
\begin{figure}[t]
\begin{center}
\centering
\includegraphics[width=.9\textwidth]{d1.pdf}\\
\includegraphics[width=.9\textwidth]{d2.pdf}\\
\includegraphics[width=.9\textwidth]{d3.pdf}\\
\includegraphics[width=.9\textwidth]{d4.pdf}\\
\includegraphics[width=.9\textwidth]{d5.pdf}\\
\caption{Our method can produce multiple diverse object inpainting results for the same input image by using different random latent codes $z$. }
\label{fig_diverse}
\vspace{-10pt}
\end{center}%
\end{figure}
\begin{table}[t]
\caption{\small Effect of each component in terms of FID and LPIPS. }
\label{table_ablation_score}
\small
\begin{center}
\begin{tabular}{cccc||cc}
\hline
& Object Data & PCE & Top-down &FID&LPIPS\\
\hline
& $\surd$ & & &6.144 &0.1066\\
& $\surd$ & $\surd$ & &5.434 &0.1081\\
& $\surd$ & $\surd$ & $\surd$ &4.700 &0.1049\\%769
\hline
\end{tabular}
\end{center}
\vspace{-0pt}
\end{table}
\section{Conclusion and Future Work}
We study a new image inpainting task, \ie, shape-guided object inpainting. We find that existing image inpainting methods are not suitable for object inpainting due to a bias towards the background and a lack of top-down guidance. Therefore, we design a new data preparation method that incorporates object priors by using object instances as holes, and propose a Contextual Object Generator (CogNet) with a two-stream network architecture that combines the bottom-up image completion process with a top-down object generation process.
Experiments demonstrate that the proposed method can generate realistic objects that fit the context in terms of both visual appearance and semantic meanings.
The proposed method can be easily extended to inpaint partially missing objects by using partial instance masks in training. This is an interesting topic for future work.
\clearpage
\bibliographystyle{splncs04}
\section{Introduction}
For over 25 years now,
there has been an intuition in the world
of security that formal theories of knowledge and belief should have
something interesting to say about security protocols.
Many logics have been designed that embody this intuition.
One of the earliest and the most discussed is BAN logic
\cite{r:burrows90}.
While BAN logic has been the subject of many (quite legitimate!) criticisms,
we believe that there are important features of the BAN approach that have been lost in
more recent approaches such as model checking \cite{r:lowe98,r:mitchell97},
inductive-assertions methods \cite{r:paulson98}, strand spaces
\cite{r:thayer99}, and process calculi \cite{r:gordon99}:
namely, the ability to express intuitions of protocol
designers regarding notions such as belief, trust, freshness, and
jurisdiction. Such high-level abstractions play a significant role in
informal reasoning about security protocols. It would be desirable
for such intuitive ideas concerning the protocol specifications to
be reflected in formal specifications, and for informal arguments
concerning such notions to be reflected in formal proofs.
In this paper, we argue that a modal logic with standard notions of
knowledge, probability, and time, together with atomic predicates that
capture messages {\em sent} and {\em received} by an agent, and a
predicate that we call $\mathsf{extract}$ which characterizes an agent's
ability to extract information from messages, can capture most of the
higher-level abstractions that we seem to need. We show that such a
logic is able to handle issues that have often been swept under the
rug by other approaches, and is flexible enough to capture the
higher-level security notions that appear in BAN logic. We do this by
providing a translation of the BAN operators into our logic, capturing
belief by a form of probabilistic knowledge, and showing that the
translation satisfies the BAN inference rules.
The translation highlights some subtleties in the BAN framework,
including some that were missed by earlier authors.
Logics in the BAN tradition have long struggled to reconcile
the information-theoretic semantics of logics of knowledge and belief
with the computational aspects of cryptography, which raise a version of
the \emph{logical omniscience problem}.
Suppose that $i$ sends $j$ the message $\mathbf{m}'$, where $\mathbf{m}'$ is
$\mathbf{m}$ encrypted by a shared
key $k$. Does $j$ know that $i$ has sent $\mathbf{m}$
encrypted by $k$? If $j$ does
not have the key $k$, then, intuitively, the answer is no; agent $j$ has
no way of knowing that $\mathbf{m}'$ is the result encrypting $\mathbf{m}$ by $k$.
Of course, if $j$ were not computationally bounded, then $j$ could
figure out that $\mathbf{m}'$ is indeed $\mathbf{m}$ encrypted by $k$. Standard
approaches to modeling knowledge treat agents as computationally
unbounded; in particular, agents are assumed to know all valid formulas.
Since (given a fixed encryption framework, and assuming unique
encryptions) the fact that $\mathbf{m}'$ is the result of encrypting $\mathbf{m}$ by
$k$ is a valid mathematical statement, all agents will know it. This is
the logical omniscience problem. There have been attempts to overcome this problem:
Cohen \cite{cohenthesis}, for instance, deals with it using what seems to us a rather
complicated
semantics for knowledge involving permutations (see
Section~\ref{sec:related}
for details).
We propose a simpler and arguably far more intuitive approach that
allows us to retain the standard semantics for knowledge.
It has been common in the literature on authentication logics to represent the
complex message that is the result of encrypting a message $\mathbf{m}$ by a
key $k$ using the {\em term} $\encr{\mathbf{m}}{k}$ in both the syntax and semantics
of the logic.
We depart from this approach by distinguishing two views of messages.
The first views a message simply as a string of symbols; the second
views the message
as
a term with structure. When $j$ receives the message
$\mathbf{m}'$, $j$ knows that \agpr received (the string) $\mathbf{m}'$. What $j$ does
not know is that \agpr received $\mathbf{m}$ encrypted by $k$; $j$ considers it
possible that $\mathbf{m}'$ is $\mathbf{m}''$ encrypted by $k''$, or that $\mathbf{m}'$
is not the encryption of any message. To model this, we consider both
strings and terms. What is sent or received are strings; we use the
notation $\mathsf{s} = \intn{\mathbf{m}}$ to denote that $\mathsf{s}$ is the string
that represents the message (term) $\mathbf{m}$. We also allow for
``impossible'' runs where $\mathsf{s} = \intn{\mathbf{m}'}$; that is, we allow the
agent to be uncertain as to what message is represented by the string
$\mathsf{s}$ (even when it is a mathematical fact that $\mathsf{s}$
represents $\mathbf{m}$). Using such impossible runs, we can easily model the fact
that $i$ may know that $\mathsf{s}$ represents the encryption of some
message, even though $i$ does not know which message it is the
encryption of (in all runs that $i$ considers possible, $\mathsf{s} =
\intn{\encr{\mathbf{m}'}{k'}}$ for some message $\mathbf{m}'$ and key $k'$) or that
$i$ knows that encryptions are unique, or that $\mathsf{s}$ represents the
encryption of a message of length at most 20. We believe that this
approach to dealing with logical omnisicience should be
useful beyond the context of this paper.
\section{A Logic for Security Properties}\label{s:logic}
\subsection{Syntax}\label{s:syntax}
We use a modal
logic for
reasoning about security protocols. We assume a finite set of
principals that for simplicity we
represent
by integers,
a set $\mathcal{K}$ of keys,
a set $\mathcal{N}$ of nonces, a set $\mathcal{T}$
of plaintexts,
and a set $\Phi$ of
(application-specific) primitive propositions.
We assume that $\mathcal{K}$ contains both symmetric keys (used in shared-key
cryptography) and asymmetric keys (used in public-key
cryptography), and that they can be distinguished. We also assume that
keys, nonces, and plaintexts can be distinguished, so that $\mathcal{K}$,
$\mathcal{N}$, and $\mathcal{T}$ are disjoint sets, and that encrypted messages can be
distinguished from unencrypted messages.
Like Abadi and Tuttle \citeyear{AT91} (AT from now on)
and other BAN successors,
we assume that
formulas can state properties of messages, and can also be used in
messages.
Thus, we define formulas and messages
simultaneously
as follows, where we use $p$ for a generic element of $\Phi$, $i$
for a generic principal (or agent), $\mathbf{m}$ for a generic message, $t$
for a generic plaintext, $k$ for a generic key, $n$ for a generic
nonce,
$\alpha$ for a generic real number in $[0,1]$,
$\mathbf{s}$ for a generic term of type string,
$\mathsf{s}$ for a generic concrete string,
$x$ for a generic variable ranging over strings,
and $\phi$ for a generic formula.
As usual, a concrete string is a sequence of symbols from some
alphabet $\Sigma$.
We view messages both as strings and as terms with
structure. When we write $\mathbf{m}$, we are thinking of
the message as a term
with structure, as is made clear in the following grammar:
\begin{eqnarray*}
\mathbf{s} &::= & \mathsf{s} ~|~ x \\
\mathbf{m} &::= & t ~|~ k ~|~ n ~|~ i ~|~ (\mathbf{m}_1,\mathbf{m}_2) ~|~
\encr{\mathbf{m}}{k} ~|~ \fmla{\phi} \\
\phi & ::= & p ~|~ \send{i}{\mathbf{s}} ~|~ \recv{i}{\mathbf{s}} ~|~
\extract{i}{\mathbf{m}} ~|~ \neg\phi ~|~ \phi_1\wedge\phi_2 ~|~ K_i\phi ~|~
\mbox{{\small $\bigcirc$}}\phi ~|~ \\
& & \NCirc\phi ~|~ \Box\phi ~|~
\NBox\phi ~|~ \Pr_i(\phi)\ge\alpha ~|~ \exists x \, \phi ~|~
\intn{\mathbf{m}} = \mathbf{s} ~|~ \mathbf{s} \sqsubseteq \mathbf{s}' \,.
\end{eqnarray*}
Besides the application-specific primitive propositions, we also have
``built-in''
primitive propositions of the form $\send{i}{\mathbf{s}}$, $\recv{i}{\mathbf{s}}$,
and $\extract{i}{\mathbf{m}}$.
Note that agents send and receive
strings, not message terms.
The proposition $\extract{i}{\mathbf{m}}$ holds if $i$ can ``extract'' the
message $\mathbf{m}$ from
strings
it has received (and other information at
its disposal).
Exactly what $\mathsf{extract}$ means depends on the application,
the
capabilities of principals, and the protocol they are running.
For now, we make no assumptions, viewing it as a black box.
(In \secref{s:dy}, we give a concrete implementation
of $\mathsf{extract}$ capturing the Dolev-Yao capabilities.)
The knowledge operator $K_i\phi$ states that agent $i$ knows the fact
$\phi$.
The temporal operator $\mbox{{\small $\bigcirc$}}\phi$ states that $\phi$ is true at the
next time step, while $\NCirc\phi$ states that $\phi$ was true at the
previous time step,
if any.
We will use the
abbreviations $\mbox{{\small $\bigcirc$}}^l\phi$ and $\NCirc^l\phi$ (for
$l\in\mathbb{N}$)
for the
$l$-fold application of $\mbox{{\small $\bigcirc$}}$ and $\NCirc$, respectively, to $\phi$.
The temporal operator $\Box\phi$ states that $\phi$ is true at the
current time, and all subsequent times.
Similarly, $\NBox\phi$ states that $\phi$ is true at the current time,
and all previous times.
The formula
$\Pr_i(\phi)\geq\alpha$ says that the formula $\phi$
holds with probability at least $\alpha$, according to agent $i$.
The range of quantification is strings: the formula $\exists x\, \phi$ says that there exists a
string $x$ such that $\phi$ holds. The construction $\intn{\mathbf{m}}= \mathbf{s}$ says that
the string $\mathbf{s}$ is the encoding of the message $\mathbf{m}$.
That is, we assume that every message is represented as a string.
We also assume that there is a pairing
function that maps pairs of strings to strings, and
an encryption function that maps strings and keys to strings.
We discuss this in more detail below.
Finally, $\mathbf{s} \sqsubseteq \mathbf{s}' $ says that the string
$\mathbf{s}'$ can be constructed from $\mathbf{s}$ and other strings
using the pairing and encryption functions described in
Section~\ref{semantics}.
\begin{tarkin}
We use the usual abbreviations, and write $\sprev\phi$ for $\NCirc\phi \land \neg \NCirc \textbf{false}$
($\phi$ was true
at the previous step, and there was a previous step).
\end{tarkin}
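For readers who prefer a computational reading of the grammar, the sketch below (our own illustration, not part of the logic) renders message terms and a small fragment of the formula language as algebraic datatypes in Python; atoms such as keys, nonces, and agent names are represented simply as strings.
\begin{verbatim}
from dataclasses import dataclass
from typing import Union

@dataclass(frozen=True)
class Pair:              # (m1, m2)
    left: "Msg"
    right: "Msg"

@dataclass(frozen=True)
class Enc:               # {m}_k
    body: "Msg"
    key: str

@dataclass(frozen=True)
class Quote:             # a formula used as a message
    formula: "Fml"

Msg = Union[str, Pair, Enc, Quote]

@dataclass(frozen=True)
class Send:              # send_i(s)
    agent: int
    string: str

@dataclass(frozen=True)
class Knows:             # K_i phi
    agent: int
    body: "Fml"

Fml = Union[Send, Knows]  # remaining constructors are analogous
\end{verbatim}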
\subsection{Semantics}
\label{semantics}
A \emph{multiagent system} \cite{r:fagin95} consists of $n$ agents
and an environment,
each of which is in some
\emph{local state} at a given point in time.
We briefly review the relevant details here.
We assume that an agent's local
state encapsulates all the information to which the agent has
access. In the security setting, the local state of an agent might
include some initial information regarding keys, the messages \agpr has
sent and received, and perhaps the reading of a clock.
The \emph{environment state} describes information relevant to the analysis
that may not be in any agent's state.
A \emph{global state} has the form $(\mathit{st}_e,\mathit{st}_1,\ldots, \mathit{st}_n)$, where
$\mathit{st}_i$ is agent $i$'s state, for $i = 1, \ldots , n$, and $\mathit{st}_e$ is the
environment state.
In general,
the actual form of
these
local states depends on the application.
\begin{tarkin}
We define a \emph{run} to be a function from time to global states. A
\emph{point} is a pair $(r, m)$ consisting of a run $r$ and a time
$m\in {\bf N}$. At a point $(r, m)$, the system is in some global
state $r(m)$.
If $r(m) = (\mathit{st}_e, \mathit{st}_1, \ldots , \mathit{st}_n)$, then we take $r_i(m)$ to be
$\mathit{st}_i$, agent $i$'s local state at the point $(r, m)$, and $r_e(m)$ to
be $\mathit{st}_e$, the environment state. We formally define a {\em system} to
consist of a set $\mathcal{R}$ of runs.
\end{tarkin}
For simplicity,
we restrict attention to a specific
class of systems, suited to modeling security protocols. These are
message-passing systems in which one (or more) of the agents is an
adversary with the capacity to monitor and control message
transmission.
Messages have compositional structure, but are transmitted as strings.
We assume that the local state of an agent
at time $m$
is a sequence of
the form $\<e_0, e_1, \ldots , e_m\>$, where $e_0$ is the initial state
(which typically contains the keys and nonces that $i$ is initially
aware of),
and $e_k$ for $k \ge 1$ is
a set of events of the form
$\sendE{j,\mathsf{s}}$ or
$\recvE{\mathsf{s}}$ where $\mathsf{s}$ is a
string
and
$j$
is an agent.
We assume that messages (as strings) are sent or received during a \emph{round},
where round $m$ takes place between times $m-1$ and $m$.
Event $\sendE{j,\mathsf{s}}$ is in $r_i(m)$ if $i$ sends string $\mathsf{s}$
in round $m$ of run $r$, intending that it be delivered to agent $j$,
while event $\recvE{\mathsf{s}}$ is in $r_i(m)$ if $i$ receives string
$\mathsf{s}$ in round $m$ of run $r$. Note that the sender is not
included in $\recvE{\mathsf{s}}$. Intuitively, this is because the
receiver may not be able to determine the sender of a message it
receives.
For an event $x$, we abuse notation and write $x\in r_i(m)$ to denote
that $x\in e_k$ for
some $k\leq m$.
The initial state
represents information such as the public and private keys to be used
during the run, and other values such as nonces to be used by the agent
during the run.
As we said, we distinguish between strings and message terms, and we
allow agents to be ``confused'' about what term a string represents.
We use the initial environment state of a run to encode the relationship
between terms and strings. Specifically, we take $r_e(0)$ to include a
collection of equations of the form $\intn{\mathbf{m}} = \mathsf{s}$,
with exactly one such equation for each message $\mathbf{m}$.
We write $\int{\mathbf{m}}{r}$ to denote the string $\mathsf{s}$ such that
$\intn{\mathbf{m}} = \mathsf{s}$ is in $r_e(0)$.
There
may be
some constraints on the relationship between strings and terms. For
example, in contexts where all messages are commonly known to be encoded
by unique strings, we would require that there is no run $r$ and no
message terms $\mathbf{m} \ne \mathbf{m}'$ such that
$\int{\mathbf{m}}{r} = \int{\mathbf{m}'}{r}$.
We now discuss some assumptions that we make for the purposes of this paper;
others are discussed in \secref{a:soundness}:
\begin{itemize}
\item Keys $k$ and their inverses $k^{-1}$ are strings,
and represent
themselves in all runs; that is, $\int{k}{r} = k$ and $\int{k^{-1}}{r} =
k^{-1}$ for all keys $k$ and runs $r$. Similarly, principals (agents),
plaintexts, nonces, and
messages in the form of formulas are also represented as strings, and
represent themselves.
\item There is a pairing function on strings, so that if $\mathsf{s}$,
$\mathsf{s}'$ are strings, then there is another string that we denote
$(\mathsf{s},\mathsf{s}')$.
Moreover, we assume that
the string representing $(\mathbf{m}_1,\mathbf{m}_2)$ is the
pairing of the strings representing $\mathbf{m}_1$ and $\mathbf{m}_2$; that
is, $\int{(\mathbf{m}_1,\mathbf{m}_2)}{r} = (\int{\mathbf{m}_1}{r},\int{\mathbf{m}_2}{r})$ for
all runs $r$.
\item There is a \emph{run-dependent} encryption function on strings.
That
is, given a string $\mathsf{s}$ and a key $k$, there is another string that
we denote $\int{\encr{\mathsf{s}}{k}}{r}$ that we can think of as the result
of encrypting $\mathsf{s}$ by $k$ in run $r$.
We do \emph{not} assume that
$\int{\encr{\mathsf{s}}{k}}{r} = \int{\encr{\mathsf{s}}{k}}{r'}$ for all runs
$r$ and $r'$.
An agent may be ``confused'' about how $\mathsf{s}$ is encrypted.
\item{}
We define
$\int{\encr{\mathbf{m}}{k}}{r} = \int{\encr{\mathsf{s}}{k}}{r}$ if
$\int{\mathbf{m}}{r} = \mathsf{s}$.
That is,
agents ``understand'' that a message is encrypted by means of an
operation on the string that encodes the message.
\item Encryption is unique: if $\int{\encr{\mathbf{m}}{k}}{r}
= \int{\encr{\mathbf{m}'}{k'}}{r}$, then
$\int{\mathbf{m}}{r} = \int{\mathbf{m}'}{r}$ and $k = k'$ for all runs $r$.
(This assumption is critical in
the use of encryption as an authentication mechanism, and
is typically assumed in the literature on authentication logics.)
Moreover, $\int{\encr{\mathbf{m}}{k}}{r}$ is distinct from any plaintext,
nonce, key, agent name, or pairing.
\end{itemize}
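Using the \texttt{Pair} and \texttt{Enc} datatypes from the sketch in \secref{s:syntax}, the run-dependent interpretation $\int{\cdot}{r}$ can be pictured as a recursive encoding in which only the encryption case consults a per-run table; the tagged-concatenation pairing and the table representation are illustrative assumptions of ours. The uniqueness of encryption corresponds to each run's table being injective.
\begin{verbatim}
def encode(m, enc_table):
    # Map a message term to its string in a fixed run r.  Atoms
    # (keys, nonces, agents, plaintexts) encode as themselves;
    # pairing is a fixed injective operation on strings; enc_table
    # maps (plaintext-string, key) to the run's ciphertext string,
    # and different runs may carry different tables.
    if isinstance(m, str):
        return m
    if isinstance(m, Pair):
        return "(%s,%s)" % (encode(m.left, enc_table),
                            encode(m.right, enc_table))
    if isinstance(m, Enc):
        return enc_table[(encode(m.body, enc_table), m.key)]
    raise TypeError("not a message term: %r" % (m,))
\end{verbatim}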
Because we want to reason about probabilities, we work with
\emph{interpreted (probabilistic) systems}
of the form $\mathcal{I} = (\mathcal{R},\pi, \mathscr{C}, \{\mu_C\}_{C \in \mathscr{C}})$,
where $\mathcal{R}$ is a system, $\pi$ is an interpretation for the propositions in $\Phi$ that assigns truth
values to the primitive propositions at the global states,
$\mathscr{C}$ is a partition of the runs in $\mathcal{R}$ into cells, and for each cell $C \in \mathscr{C}$,
$\mu_C$ is a probability distribution on the runs in $C$.
The assumption is that agents are using a possibly randomized
protocol, while the adversary is using a protocol that combines
possibly non-probabilistic choices (such as choosing an agent to
attack) with probabilistic moves. Cell $C$ ``factors out'' the
nonprobabilistic choices, so that, in all the runs in $C$, only
probabilistic choices are made. This allows us to put a probability
$\mu_C$ on the runs in $C$.
We do not assume a single
probability distribution on $\mathcal{R}$, since that would require us to put a
probability on the possible protocols that the adversary is using.
(See \cite{HT} for further discussion of this approach.)
We restrict the possible interpretations $\pi$
so as
to fix a particular
interpretation for the primitive propositions $\recv{i}{\mathsf{s}}$ and
$\send{i}{\mathsf{s}}$.
Specifically, we require that
\begin{itemize}
\item $\pi(r(m))(\recv{i}{\mathsf{s}})=\textbf{true}$ iff $\recvE{\mathsf{s}}\in r_i(m)$,
\item $\pi(r(m))(\send{i}{\mathsf{s}})=\textbf{true}$ iff $\sendE{j,\mathsf{s}}\in
r_i(m)$, for some $j$.
\end{itemize}
Given our interpretation of extraction as a black box, we put no
constraints here on how $\pi$ interprets $\extract{i}{\mathbf{m}}$;
however, we do assume that extraction is
monotonic, in the sense that
once an
agent is able to extract a message, it will be able to extract it at all
future times:
\begin{itemize}
\item If $\pi(r(m))(\extract{i}{\mathbf{m}})=\textbf{true}$ and $m \leq n$, then
$\pi(r(n))(\extract{i}{\mathbf{m}})=\textbf{true}$.
\end{itemize}
As we would expect,
\begin{itemize}
\item $\pi(r(m))(\intn{\mathbf{m}} = \mathsf{s}) = \textbf{true}$ if
$\int{\mathbf{m}}{r} = \mathsf{s}$
(i.e., if $\intn{\mathbf{m}} = \mathsf{s}$ is in $r_e(0)$).
\end{itemize}
Roughly speaking, the formula $\mathsf{s} \sqsubseteq \mathsf{s}'$
says that $\mathsf{s}'$ can be constructed from
$\mathsf{s}$ and other strings using pairing and encryption. Since how
encryption works on strings is run-dependent,
we first define,
for each run $r$, a relation
$\sqsubseteq_r$ on strings as the smallest reflexive
and transitive relation such that
\begin{ccsin}
$\mathsf{s} \sqsubseteq_r (\mathsf{s},\mathsf{s}')$,
$\mathsf{s}' \sqsubseteq_r (\mathsf{s}',\mathsf{s})$, and
$\mathsf{s} \sqsubseteq_r \int{\encr{\mathsf{s}}{k}}{r}$.
\end{ccsin}
We now take
\begin{itemize}
\item $\pi(r(m))(\mathsf{s} \sqsubseteq \mathsf{s}') = \textbf{true}$ if
$\mathsf{s} \sqsubseteq_r \mathsf{s}'$.
\end{itemize}
The reader may wonder why we defined the $\sqsubseteq_r$ relation on
strings, rather than defining it
as the subterm relation
on
messages.
This formulation allows us to
model a situation where
we have
$\mathsf{s} \sqsubseteq_r \mathsf{s}'$ because $\mathsf{s} =
\int{\mathbf{m}}{r}$ and $\mathsf{s}' = \int{\encr{\mathbf{m}}{k}}{r}$, but the agent
does not know this, since \agpr does not realize that $\mathsf{s}' =
\int{\encr{\mathbf{m}}{k}}{r}$. Thus, \agpr considers a run $r'$ possible where
$\mathsf{s} \, {\not\sqsubseteq}_{r'} \,\mathsf{s}'$ (because, for example, we
may have
$\mathsf{s}' \ne \int{\encr{\mathbf{m}}{k}}{r'}$ even if $\mathsf{s} =
\int{\mathbf{m}}{r'}$).
As usual \cite{r:fagin95,r:hintikka62}, we say that agent $i$ knows a
fact $\phi$ at a point $(r,m)$ if $\phi$ is true at all points $i$
cannot distinguish from $(r,m)$, and define $i$'s indistinguishability
relation $\sim_i$ by taking $(r,m)\sim_i(r',m')$ if $r_i(m)=r'_i(m')$.
Despite making these standard choices,
we do not suffer from the usual
logical omniscience problems, exactly because of our distinction between
strings and message terms, and the fact that we allow ``impossible''
runs where the string corresponding to a message is not the one that is
mathematically determined.
Given our assumption that $r_i(m)$ has the form $\<e_0, \ldots, e_m\>$,
where $e_0$ is $i$'s initial state
and $e_k$ for $k \ge 1$ is a set of
events of the form $\sendE{j,\mathsf{s}}$ or $\recvE{\mathsf{s}}$ describing the
messages that
$i$ sent and received, it follows that $\mathcal{R}$ is a \emph{synchronous}
system where
agents
have \emph{perfect recall} \cite{r:fagin95}.
Considering synchronous systems with perfect recall makes it relatively
straightforward to give semantics to formulas of the form
$\Pr_i(\phi)\ge\alpha$.
But having a probability on runs does not suffice to give semantics to a
formula
such as $\Pr_i(\phi)\ge\alpha$ at a point $(r,m)$. To do this,
we need to go from probabilities on runs to probabilities on points.
Given a point $(r,m)$, let $C_r$ be the unique cell in $\mathscr{C}$ such that $r
\in C_r$. Let $\mathcal{K}_i(r,m)$ denote the set of points that
agent $i$ cannot distinguish from $(r,m)$,
that is, the set $\{(r',m') ~|~ (r,m)\sim_i(r',m')\}$.
Let $\mathscr{C}(r)$ be the set of points where the runs are
in $C_r$,
that is,
$\mathscr{C}(r)$ consists of all the points of the form $(r',m')$ with $r' \in
C_r$. The
probability $\mu_{C_r}$ on the runs of cell $C_r$ induces a
probability $\mu_{r,m,i}$ on the points in $\mathcal{K}_i(r,m)\cap\mathscr{C}(r)$ in a
straightforward way. If $U \subseteq \mathcal{K}_i(r,m)\cap \mathscr{C}(r)$, define
\[\mu_{r,m,i}(U) = \frac{\mu_{C_r}(\{r': (r',m) \in U\})}{\mu_{C_r}(\{r': (r',m) \in
\mathcal{K}_i(r,m)\cap\mathscr{C}(r)\})}.\]
The
relation of satisfaction
of a formula $\phi$
without free variables
in an interpreted system
$\mathcal{I}=(\mathcal{R},\pi,\mathscr{C},\{\mu_C\}_{C\in\mathscr{C}})$ at point $(r,m)$, written
$(\mathcal{I},r,m)\models\phi$, is defined inductively,
as follows:
\begin{list}{}{\setlength\leftmargin{5pt}}
\item[] $(\mathcal{I},r,m) \models p$ iff $\pi(r(m))(p)=\textbf{true}$
\item[] $(\mathcal{I},r,m) \models \neg\phi$ iff $(\mathcal{I},r,m)\not\models\phi$
\item[] $(\mathcal{I},r,m) \models \phi_1\wedge\phi_2$ iff $(\mathcal{I},r,m)\models\phi_1$ and
$(\mathcal{I},r,m)\models\phi_2$
\item[] $(\mathcal{I},r,m) \models K_i\phi$ iff for all $(r',m')\sim_i(r,m)$,
$(\mathcal{I},r',m')\models\phi$
\item[] $(\mathcal{I},r,m) \models \mbox{{\small $\bigcirc$}}\phi$ iff $(\mathcal{I},r,m+1)\models\phi$
\item[] $(\mathcal{I},r,m) \models \NCirc\phi$ iff $m= 0 $ or $(\mathcal{I},r,m-1)\models\phi$
\item[] $(\mathcal{I},r,m) \models \Box\phi$ iff for all $m'\geq m$,
$(\mathcal{I},r,m')\models\phi$
\item[] $(\mathcal{I},r,m) \models \NBox\phi$ iff for all $m'\leq m$,
$(\mathcal{I},r,m')\models\phi$
\item[] $(\mathcal{I},r,m) \models \Pr_i(\phi)\geq\alpha$ iff
$\mu_{r,m,i}(\{(r',m')~|~(\mathcal{I},r',m')\models\phi\}\cap
\mathcal{K}_i(r,m)\cap\mathscr{C}(r))\geq\alpha$
\item[] $(\mathcal{I},r,m) \models \exists x\, \phi$ if,
for some concrete string $\mathsf{s}$,
$(\mathcal{I},r,m) \models
\phi[\mathsf{s}/x]$, where $\phi[\mathsf{s}/x]$ is the result of
replacing all free occurrences of $x$ in $\phi$ by $\mathsf{s}$.
\end{list}
As usual, we say that $\phi$ is \emph{valid} in $\mathcal{I}$, written $\mathcal{I}\models\phi$,
if $(\mathcal{I},r,m)\models\phi$ for all points $(r,m)$ in $\mathcal{I}$.
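To illustrate the knowledge and probability clauses, here is a toy evaluator over an explicitly enumerated finite set of points; actual systems have infinitely many runs, so this is purely expository. Here \texttt{same\_local(i, p, q)} compares $i$'s local states, \texttt{cell(q)} returns the cell of the point's run, \texttt{mu(q)} assigns a point the probability of its run, and \texttt{sat\_phi(q)} decides $\phi$ at a point.
\begin{verbatim}
def holds_K(i, sat_phi, pt, points, same_local):
    # K_i phi at pt: phi holds at every point that i cannot
    # distinguish from pt.
    return all(sat_phi(q) for q in points if same_local(i, pt, q))

def holds_Pr_geq(i, alpha, pt, points, same_local, cell, mu, sat_phi):
    # Pr_i(phi) >= alpha at pt: condition the run measure on the
    # points of K_i(pt) that lie in pt's cell, as in the definition
    # of mu_{r,m,i} above.
    region = [q for q in points
              if same_local(i, pt, q) and cell(q) == cell(pt)]
    total = sum(mu(q) for q in region)
    mass = sum(mu(q) for q in region if sat_phi(q))
    return total > 0 and mass / total >= alpha
\end{verbatim}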
We define the probabilistic knowledge operator $K_i^\alpha\phi$ as an
abbreviation for $K_i(\Pr_i(\phi)\ge 1-\alpha)$. This operator
means that, no matter which cell $C$ the agent thinks the
current point is in, the probability of $\phi$ according to $i$ in that cell is at
least $1-\alpha$.
As stated above, we do not impose any \emph{a priori} restrictions
on the interpretation of $\extract{i}{\mathbf{m}}$.
Intuitively, the interpretation
of $\mathsf{extract}$ is meant to capture the capabilities of the agents to
take apart messages and construct new messages. While in principle a
principal may be able to extract $\mathbf{m}_1$ from $(\mathbf{m}_1,\mathbf{m}_2)$,
it may not do so at a particular point in a system, because the
protocol that it is using does not break up $(\mathbf{m}_1,\mathbf{m}_2)$.
Similarly, whether $i$ can extract $\mathbf{m}$ from $\encr{\mathbf{m}}{k}$
depends in part on whether $i$ ``has'' key $k$ in some sense, $i$'s
protocol, and $i$'s computational ability.
We provide an interpretation of $\mathsf{extract}$ that captures the Dolev-Yao
adversary in
\secref{s:ban}.
However, we stress that many other interpretations of $\mathsf{extract}$ are
possible, such as interpretations that capture
guess-and-validate adversaries~\cite{r:lowe02}.
\section{Interpreting BAN Logic}\label{s:ban}
One of our claims is that the logic we introduced in
\secref{s:logic} is a good foundation for security protocol
logics.
Burrows, Abadi, and Needham \citeyear{r:burrows90} developed BAN logic
from similar intuitions, taking belief as a primitive rather than
knowledge and probability.
To provide evidence for our claim, we show how we can interpret the
constructs of BAN logic by essentially rewriting them into the simpler
primitives of our
logic.
Although we focus here on BAN logic
as an example,
we believe that we could similarly reconstruct other related logics.
\subsection{Definition of BAN Logic}
We reformulate the syntax of BAN logic, along the lines of AT.
The set of formulas and set of messages are defined by mutual induction,
using the grammar below.
Note that messages are defined just as in \secref{s:syntax}.
\begin{eqnarray*}
\mathbf{m} &::= & t ~|~ k ~|~ n ~|~ i ~|~ (\mathbf{m},\mathbf{m}') ~|~ \{\mathbf{m}^i\}_k ~|~ F\\
F & ::= & i ~\mathbf{believes}~ F ~|~ i ~\mathbf{controls}~ F ~|~ i ~\mathbf{sees}~ \mathbf{m} ~|~
i ~\mathbf{said}~ \mathbf{m} ~|~ i\key{k}j ~|~ \pkey{k}j ~|~
\mathbf{fresh}(\mathbf{m}).
\end{eqnarray*}
The superscript $i$ in
$\encr{\mathbf{m}^i}{k}$
represents a ``from''-field, intended to indicate the
original sender of the message.
The intuitive reading of the formulas is as follows:
$i~\mathbf{believes}~ F$ means that principal $i$ believes formula
$F$; $i~\mathbf{controls}~ F$ means that principal $i$ is an authority on
or has authority or jurisdiction over
$F$;
$i ~\mathbf{sees}~ \mathbf{m}$ means that
$i$ has received a message containing $\mathbf{m}$;
$i~\mathbf{said}~ \mathbf{m}$ means
that principal $i$ at some time sent a message containing $\mathbf{m}$
and, if $\mathbf{m}$ is a formula $F$ that was sent recently, that $i$
believes $F$;
$\mathbf{fresh}(\mathbf{m})$ means that message $\mathbf{m}$ was sent recently;
$i\key{k}j$ means that principals $i$ and $j$ can
use the shared key $k$ to communicate (and that the key is a \emph{good} key;
we discuss what counts as a good key below);
$\pkey{k}j$ means that key $k$ is $j$'s public key (and that the
key is a good key).
\begin{figure*}[t]
\hrule
\medskip
\begin{math}
\begin{array}[t]{l}
\mbox{R1.}\quad
\Rule{i~\mathbf{believes}~ j\key{k}i\quad i~\mathbf{sees}~\{F^l\}_k \quad l\ne i}
{i~\mathbf{believes}~ j~\mathbf{said}~ F} \\
\mbox{R2.}\quad
\Rule{i~\mathbf{believes}~ j ~\mathbf{said}~ (F,F')}{i ~\mathbf{believes}~ j~\mathbf{said}~ F}\\
\mbox{R3.}\quad
\Rule{i~\mathbf{believes}~\mathbf{fresh}(F)\quad i~\mathbf{believes}~(j~\mathbf{said}~ F)}
{i~\mathbf{believes}~ j~\mathbf{believes}~ F}\\
\mbox{R4.}\quad
\Rule{i~\mathbf{believes}~ j~\mathbf{controls}~ F ~~ i~\mathbf{believes}~ j~\mathbf{believes}~ F}
{i~\mathbf{believes}~ F} \\
\mbox{R5.}\quad
\Rule{i~\mathbf{sees}~ (F,F')}{i~\mathbf{sees}~ F}\\
\end{array}
\begin{array}[t]{l}
\mbox{R6.}\quad
\Rule{i~\mathbf{believes}~ j\key{k}i\quad i~\mathbf{sees}~\{F^l\}_k \quad l \ne i}{i~\mathbf{sees}~ F}\\
\mbox{R7.}\quad
\Rule{i~\mathbf{believes}~ \pkey{k}i\quad i~\mathbf{sees}~\{F^l\}_k \quad l \ne i}{i~\mathbf{sees}~ F}\\
\mbox{R8.}\quad
\Rule{i~\mathbf{believes}~ \mathbf{fresh}(F)}{i~\mathbf{believes}~ \mathbf{fresh} ((F,F'))}\\
\mbox{R9.}\quad
\Rule{i ~\mathbf{believes}~ i\key{k}j}{i~\mathbf{believes}~ j\key{k}i}\\
\end{array}
\medskip
\end{math}
\hrule
\caption{BAN inference rules}
\label{f:ban}
\end{figure*}
BAN logic uses inference rules to derive new formulas from
others.
These capture the intended meaning of the primitives.
The most significant rules appear in
Figure~\ref{f:ban}.
\subsection{A Probabilistic Interpretation}\label{s:dy}
\begin{tarkin}
We now define a
translation from BAN formulas to formulas in
our logic.
\end{tarkin}
We write $F^T$ to denote the result of translating the BAN logic
formula $F$ to a formula in our logic.
Since formulas include messages and are messages, we also need
to translate messages;
$\mtrans{\mathbf{m}}$ denotes the translation of a message in the BAN
framework to a message in our framework.
Note that $F^T$ (the translation of $F$ viewed as a formula) is slightly
different from $F^M$ (the translation of $F$ viewed as a message),
in that the former is of type formula, whereas the latter is of type
message.
The translation of messages that are not formulas is
defined inductively in the obvious way:
for a primitive message $\mathbf{m}$, $\mtrans{\mathbf{m}} = \mathbf{m}$, and
$\mtrans{(\mathbf{m}_1,\mathbf{m}_2)} = (\mtrans{\mathbf{m}_1},\mtrans{\mathbf{m}_2})$.
We translate encryptions $\encr{\mathbf{m}^i}{k}$ by treating the
``from''-field as concatenated to the end of the encrypted message;
thus,
$\mtrans{\encr{\mathbf{m}^i}{k}} = \encr{(\mtrans{\mathbf{m}},i)}{k}$.
The translation $F^M$ of a formula $F$ viewed as a message
is $\fmla{\ktrans{F}}$, where
$\ktrans{F}$ is the translation of $F$ viewed as a formula.
Our translation for $\mathbf{believes}$ is based on a definition of belief
due to Moses and Shoham \citeyear{MosesShoham}.
They assume that an agent operates with a set of default assumptions,
expressed as a formula $A$. An agent's belief in $\phi$, relative to
assumptions $A$, can then be captured by the formula $K_i(A \Rightarrow
\phi)$.
That is, the agent believes $\phi$ relative to assumptions $A$ if it
knows that $\phi$ holds under assumptions $A$.
\begin{ccsin}
We use
a probabilistic version of this idea.
\end{ccsin}
Like AT, we use a set of good runs
for the assumptions relative to which the agent reasons.
Intuitively, these are the runs in which undesirable events such as the
adversary guessing a nonce do not occur.
We differ from AT in the way that the set of good runs is obtained.
AT define the good runs by a complicated fixed point construction based
on the original set of beliefs ascribed to the agents by the BAN logic
analysis.
We allow any set of runs to be taken as the good runs, but
typically, the prior probability of the set of good runs would be high
(a fact that can be expressed in the logic) so that agents have reasonable
grounds to trust the conclusions they can draw from the assumption
that a run is good.
Moreover, for the soundness of one axiom, we need agents to always
assign positive probability to a run being good.
The particular choice of good runs used in proving that a protocol
satisfies a BAN logic specification will depend on the details of the
protocol and the system used to model the behaviour of the adversary.
Let $\mathit{good}$ be a primitive proposition that expresses ``the
run is good''.
We take the translation of $i~\mathbf{believes}~ F$ to be
\begin{ccsin}
$$\neg K_i^0 \neg
\mathit{good} \land K_i^0(\mathit{good}\Rightarrow\ktrans{F})~.$$
\end{ccsin}
The second clause says that
$i$ believes $F$ if \agpr
knows with probability $1$ that when a run is good,
$\ktrans{F}$ is true.
The interpretation of belief as knowing with probability $1$
is standard in the economics literature.
We have modified this
so as to
make belief depend only on what happens in
the good runs.
The first clause,
$\neg K_i^0 \neg\mathit{good}$,
requires that the set of good runs has positive
probability
in at least one cell.
This prevents an agent from vacuously believing a fact
just
because
\agpr knows that
the probability of a run being good is 0.
The translation of $\ktrans{(i ~\mathbf{sees}~ \mathbf{m})}$ is
$K_i(\extract{i}{\mtrans{\mathbf{m}}})$.
Thus, agent $i$ ``sees'' $\mathbf{m}$ if \agpr knows that \agpr has extracted it.
We work with this translation here because it helps to satisfy R1, but
it is not the only candidate.
Another reasonable translation, corresponding to the statement
that $i$ has received a string that
he knows contains the encoding of
$\mathbf{m}$, is
$\exists x, y(\intn{\mtrans{\mathbf{m}}} = x\land \recv{i}{y} \land K_i (x
\sqsubseteq y))$.
The difference lies in whether $i ~\mathbf{sees}~ \mathbf{m}$ is meant to imply that $i$
knows how $\mathbf{m}$ is composed.
Roughly speaking, BAN interpret
$i ~\mathbf{said}~ \mathbf{m}$ as ``$\mathbf{m}$ was a
submessage of a message that $i$ sent at some point in the past''. The BAN
reading of $\mathbf{said}$ also involves claims about belief; BAN
assumes that all formulas said recently by $i$ are believed by $i$. We
do not make this assumption in our translation, because we do not view
it as an intrinsic part of $\mathbf{said}$. Rather, we capture it in the
systems for which we prove the translation sound.
Given this, we translate $i ~\mathbf{said}~ \mathbf{m}$ as
$$\exists x,y( \intn{\mtrans{\mathbf{m}}} = x\land \NDiamond \sprev (\neg \send{i}{y}
\land \mbox{{\small $\bigcirc$}} \send{i}{y}\land K_i(x \sqsubseteq y))).$$
Thus, roughly speaking, $i$ said $\mathbf{m}$ if at some
point
strictly
in the past $i$ sent a string
$y=\mathsf{s}'$,
and $i$ knew at the
beginning of the round in which $\mathsf{s}'$ was sent that
$x=\int{\mtrans{\mathbf{m}}}{r}$
was a substring of $\mathsf{s}'$.
(There are some significant subtleties in this translation that relate to a
known error in AT identified in \cite{r:syverson94};
we expand on this in
Section~\ref{sec:subtleties}.)
Capturing that $k$ is a good key between $i$ and $j$ depends
on what we mean by ``good key''.
\begin{ccsin}
There are at least two possible
interpretations. One is that no one
other than possibly $i$ and $j$ has extracted the key.
Accordingly, we
take $\ktrans{(i\key{k}j)}$ to be $\extract{i}{k}\land\extract{j}{k}\land\bigwedge_{i'\not=i,j}\neg
\extract{i'}{k}. $
This translation would not hold in protocols where the key $k$ is
provided to $i$ and $j$ by a key server. To cover this, AT
propose the interpretation ``no one but $i$ and $j$
sends messages encrypted with $k$'' for the length of the protocol
interaction. We could encode this also, as well as other ways to
make explicit the beliefs of the agents about the behaviour of the key server.
(See
Section~\ref{sec:subtleties}
for more discussion of good keys.)
\end{ccsin}
Formula
$\pkey{k}j$ says that $k$ is $j$'s public key,
and that the key is a good key.
The formula is intended to mean that only $j$ knows the key $k^{-1}$.
Thus, its translation is similar in spirit to that of the formula for
shared keys,
and the same comments apply; we take $\ktrans{(\pkey{k}j)}$ to be
$\extract{j}{k^{-1}}\land \bigwedge_{\{i: i\not=j\}} \neg
\extract{i}{k^{-1}} $.
Of course, situations involving key escrow would require a different
translation. Again, the strength of our approach is that it
allows us to easily express such variants.
A message is fresh if it could not have been sent, except possibly
recently, where ``recently'' means ``in the last $l$ steps''. We
leave it to the user to decide
what counts as ``recently'', by choosing a suitable $l$.
Thus, the translation of $\mathbf{fresh}(\mathbf{m})$ is
$$\exists x (\intn{\mtrans{\mathbf{m}}} = x ~\land~
\NCirc^{l}\bigwedge_{i}
\NBox (\neg \exists y ( \neg \send{i}{y} \land
\mbox{{\small $\bigcirc$}} \send{i}{y} \land
x \sqsubseteq y))).
$$
While it may capture a notion relevant to reasoning about replay attacks,
this notion of freshness does not capture what we believe should be
meant by a nonce being ``good''.
Intuitively, this is due to the
requirement for
unpredictability of the nonce; we return
to this issue in
Section~\ref{sec:related}.
We interpret $i ~\mathbf{controls}~ F$
as ``$i$ believes $F$ if and only if $F$ is true''.
Thus, the translation of $i~\mathbf{controls}~F$ is
$K_i^0(\mathit{good}\Rightarrow\ktrans{F})\Leftrightarrow\ktrans{F}$.
This captures, to some
extent, the intuition that $i$ is an authority on $F$.
Roughly speaking, there is no way for $F$ to change without agent $i$
knowing it, so $F$ is in some sense ``local'' to agent $i$.
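The translation composes mechanically. The sketch below (our illustration, with BAN formulas as tagged tuples, output formulas as plain strings, and \texttt{K0\_i} abbreviating $K_i^0$) spells out the $\mathbf{believes}$ and $\mathbf{controls}$ cases; $\mathbf{sees}$, $\mathbf{said}$, freshness, and key goodness follow the definitions given above.
\begin{verbatim}
def translate(F):
    op = F[0]
    if op == "believes":
        _, i, G = F
        body = translate(G)
        return "(~K0_%s(~good) & K0_%s(good -> %s))" % (i, i, body)
    if op == "controls":
        _, i, G = F
        body = translate(G)        # note: body occurs twice below
        return "(K0_%s(good -> %s) <-> %s)" % (i, body, body)
    if op == "atom":
        return F[1]
    raise NotImplementedError(op)  # sees, said, fresh, ... omitted

# Example:
# translate(("believes", "i", ("controls", "j", ("atom", "p"))))
\end{verbatim}
The duplication of \texttt{body} in the $\mathbf{controls}$ case is precisely the source of the exponential blowup noted below.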
This completes the translation. For the language without the
$\mathbf{controls}$ operator, the translation is linear: a BAN formula $F$
is translated to a modal formula whose length is linear in $|F|$.
With $\mathbf{controls}$ in the language, the translation becomes
exponential.
It
is not clear that formulas with nested occurrences of
$\mathbf{controls}$ arise naturally. For the language with no
nested occurrences of $\mathbf{controls}$, the translation is again linear.
One other comment: although we have called our translation a
``probabilistic translation'', we in fact use probability in only
a limited way. The only probabilistic statements that we make are ``with
probability 1'' ($K_i^0$) and ``with probability greater than 0''
($\neg K_i^0 \neg$). To capture this, we could have simply used a standard
belief operator (that satisfies the axioms of the modal logic KD45),
and avoided dealing with probability
altogether. We have used probability here because in the full paper
we consider a more general probabilistic translation, which is also
sound, where we take believing to be knowing with some
probability $\alpha$. The main consequence of this more general
interpretation is that the translation of the BAN inference rules is
more constrained.
For example, if a BAN inference rule involves beliefs in the
antecedent and in the conclusion of the rule, the probability
associated with those beliefs must be related. We leave further
details to the full paper.
\subsection{Evaluating the Interpretation}\label{a:soundness}
To what extent does the translation above capture BAN logic?
The minimum we can ask for is that the translation validates the
inference rules of BAN logic.
This is what we argue in this section.
In order to validate the BAN inference rules, we need to
restrict to systems that satisfy certain properties. Intuitively, these
restrictions are
made implicitly by BAN logic, and must be made explicit in order
to prove the soundness of the translation.
We say that agents \emph{have no additional prior information
beyond guesses} in an interpreted system $\mathcal{I}$ if the initial
states of all agents include all public keys, their own
private keys, the nonces required by their protocol (in the
case of nonadversary agents), a finite set of other keys or
nonces they have guessed,
and nothing else.
We also need to
make precise the intuition that agents tell the truth, since
BAN logic assumes that when a (nonadversary) agent
sends a formula, \agpr believes the formula.
Without this requirement, we cannot ensure the validity of
R3.
Implicit in the notion of honesty is the idea that an agent
does not forge ``from''-fields in messages.
Furthermore, BAN logic assumes that
agents' capabilities of creating and decomposing messages
are those characterized by the Dolev-Yao model.
We capture these capabilities,
together with the assumption that agents not forge ``from''-fields, by
providing a suitable interpretation of $\mathsf{extract}$.
The idea is to define a set of strings $\cancompute{i}(r,m)$, which should
be thought of as the set of strings that $i$ can generate given the
information it has in state $r_i(m)$. Agent $i$ can generate a string
by pairing or encrypting strings that it can already generate, or by
taking strings apart: ``picking apart'' a pair into its components, or
decrypting a ciphertext with a key it can generate.
Suppose that we have a function $\mathit{init}(\mathit{st})$ that, given a local
state $\mathit{st} = \<e_0, e_1, \ldots, e_m\>$, returns the set of strings
contained in
the initial state $e_0$ (roughly speaking, these are the keys and nonces that
$i$ is initially aware of).
Given a point $(r,m)$,
define $\cancompute{i}(r,m)$ to be the smallest set
$S$ of
strings
satisfying the following conditions:
\begin{enumerate}
\item $\{\mathsf{s}: \recvE{\mathsf{s}}\in r_i(m)\} \cup \{\mathsf{s}: \mathsf{s} \in
\mathit{init}(r_i(m))\}
\cup
\{j: \mbox{$j$ is an agent}\}
\cup \{\phi: \mbox{$\phi$ is a formula}\}
\subseteq S$;
\item $(\mathsf{s},\mathsf{s}') \in S$ iff $\mathsf{s}, \mathsf{s}' \in S$;
\item if $\int{\encr{\mathsf{s}}{k}}{r} \in S$ and $k^{-1} \in
S$, then
$\mathsf{s} \in S$ (recall that if $k$ is a symmetric key, then we
identify $k$ and $k^{-1}$);
\item if $\mathsf{s}, k \in S$, then
$\int{\encr{\mathsf{s}}{k}}{r} \in S$.
\end{enumerate}
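The closure $\cancompute{i}(r,m)$ is a least fixed point. Because closing under pairing and encryption yields an infinite set, membership is decided in the standard two-phase way: saturate under decomposition, then check whether the target can be synthesized. The sketch below assumes that encryption in a run is presented as a finite table (as in the earlier encoding sketch) and that ciphertext strings are opaque atoms containing no parentheses or commas.
\begin{verbatim}
def split_pair(s):
    # Return (l, r) if s is a pairing "(l,r)", splitting at the
    # top-level comma; return None otherwise.
    if not (s.startswith("(") and s.endswith(")")):
        return None
    depth = 0
    for idx in range(1, len(s) - 1):
        if s[idx] == "(":
            depth += 1
        elif s[idx] == ")":
            depth -= 1
        elif s[idx] == "," and depth == 0:
            return s[1:idx], s[idx + 1:-1]
    return None

def analyze(base, enc_table, key_inv):
    # Saturate under unpairing and decryption (conditions 2 and 3,
    # read right to left).  key_inv.get(k, k) identifies k with
    # k^{-1} for symmetric keys.
    S = set(base)
    changed = True
    while changed:
        changed = False
        for s in list(S):
            parts = split_pair(s)
            if parts and not set(parts) <= S:
                S |= set(parts)
                changed = True
        for (p, k), c in enc_table.items():
            if c in S and key_inv.get(k, k) in S and p not in S:
                S.add(p)
                changed = True
    return S

def can_generate(target, S, enc_table):
    # Can target be built from the analyzed set S by pairing and
    # encryption (conditions 2 and 4, read left to right)?
    if target in S:
        return True
    parts = split_pair(target)
    if parts:
        return all(can_generate(t, S, enc_table) for t in parts)
    for (p, k), c in enc_table.items():
        if c == target:
            return (can_generate(p, S, enc_table)
                    and can_generate(k, S, enc_table))
    return False
\end{verbatim}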
We now want to connect $\cancompute{i}(r,m)$ with what $i$ knows about
message terms at the point $(r,m)$.
We make some additional assumptions regarding what agents know.
Specifically, if $(r,m) \sim_i
(r',m')$, then we assume
that (a) if
$\int{\encr{\mathsf{s}}{k}}{r}$ and
$k^{-1}$ are both in $\cancompute{i}(r,m)$, then
$\int{\encr{\mathsf{s}}{k}}{r} =\int{\encr{\mathsf{s}}{k}}{r'}$,
and (b) if $\mathsf{s}, k \in
\cancompute{i}(r,m)$, then $\int{\encr{\mathsf{s}}{k}}{r} =
\int{\encr{\mathsf{s}}{k}}{r'}$.
Roughly speaking, (a) says that if $i$ ``has'' the
decryption key $k^{-1}$ and ``has'' the string
$\int{\encr{\mathsf{s}}{k}}{r}$,
which is the
encryption of $\mathsf{s}$ under $k$ in $r$, then $i$ knows that
$\int{\encr{\mathsf{s}}{k}}{r}$
is the encryption of the string $\mathsf{s}$ under $k$, while (b) says that if $i$ ``has''
$\mathsf{s}$ and $k$, then
$i$ knows that $\int{\encr{\mathsf{s}}{k}}{r}$ is the encryption
of $\mathsf{s}$ under $k$.
With these assumptions,
as the following result shows,
$\cancompute{i}(r,m)$ depends only on
$i$'s local state at $(r,m)$.
\begin{proposition}\label{l:cangenerate}
If $r_i(m) = r'_i(m')$, then
$\cancompute{i}(r,m) = \cancompute{i}(r',m').$
\end{proposition}
We say interpreted system $\mathcal{I}$ \emph{models agent $i$ as a Dolev-Yao
agent} if
for all $m\ge 0$ and all messages $\mathbf{m}$,
\begin{enumerate}
\item
$\pi(r(m))(\extract{i}{\mathbf{m}}) = {\bf true}$ if and only if
$\int{\mathbf{m}}{r} \in\cancompute{i}(r,m)$, and
\item
if $\sendE{j,\mathsf{s}}\in r_i(m+1)$ and
$\sendE{j,\mathsf{s}}\not \in r_i(m)$ for some agent $j$,
then
$\mathsf{s} \in\cancompute{i}(r,m)$.
\end{enumerate}
Note that the second clause restricts agents
to sending messages that they can generate. We
place a further restriction on what are called ``nonforging'' agents.
Although a nonforging agent $i$ can generate all the messages in
$\cancompute{i}(r,m)$, we assume that, when sending a message, $i$ does not
forge signatures; that is, $i$ will not send a message with a
``from"-field that states that the message is from some other agent.
Define $\cancomputeNF{i}(r,m)$ just as $\cancompute{i}(r,m)$, except
that condition (4) is replaced by the following variant:
\begin{enumerate}
\item[(4$'$)]
If $\mathsf{s}, k \in S$, then $\int{\encr{(\mathsf{s},i)}{k}}{r} \in S$.
\end{enumerate}
Rule (4$'$)
ensures that when the agent constructs an encrypted message, \agpr
includes a ``from''-field set to its own name.
The definition of a \emph{nonforging Dolev-Yao agent} is
just like that of a Dolev-Yao agent, except for the use of
$\cancomputeNF{i}(r,m)$ instead of $\cancompute{i}(r,m)$ in the second clause.
Finally, we say that an agent $i$ is honest if, whenever $i$ says
something, $i$ believes that it will be true when the message is
received.
(The
honesty
assumption is how we capture BAN's requirement on $\mathbf{said}$
that agents believe that a formula they are sending is true.)
Formally, agent $i$ is an \emph{honest Dolev-Yao agent} in an
interpreted system $\mathcal{I}$ if
\begin{enumerate}
\item $i$ is a nonforging Dolev-Yao agent in $\mathcal{I}$, and
\item
for all BAN formulas $F$,
\begin{ccsin}
$$
\begin{array}{l}
\mathcal{I} \models \exists \mathsf{s}, \mathsf{s}' (\intn{\mtrans{F}} = \mathsf{s} \land \neg
\send{i}{\mathsf{s}'} \land \mbox{{\small $\bigcirc$}}
\send{i}{\mathsf{s}'} \land K_i(\mathsf{s} \sqsubseteq \mathsf{s}')) \Rightarrow
K_i^0(\bigwedge_{0 \le l' \le l} \mbox{{\small $\bigcirc$}}^{l'}\ktrans{F}).
\end{array}
$$
\end{ccsin}
\end{enumerate}
The intuition for the last condition is that an
agent says only things that it
believes
will still be true some
time in the near future when its message is received.
Again, this is parameterized by a time $l$, which should be taken as
the same time parameter used to interpret freshness.
Observe that while the restriction to Dolev-Yao agents is hardwired
into the definitions of $\mathbf{said}$ and $\mathbf{sees}$ by AT, we model it using
$\mathsf{extract}$ instead. This means that our logic can be used to deal
with adversaries other than those that satisfy the Dolev-Yao
properties, without changing the underlying syntax and semantics.
Similarly, rather than hardwiring honesty into the definition of
$\mathbf{said}$, we
model it as an assumption on
the class of systems. We can therefore
model the kind of operators BAN advocate without being tied to the
particular choices made by BAN and their successors.
A further assumption that we need for the soundness of R3 concerns
our notion of $\mathit{good}$ runs. It says that,
in a good run, agents always
consider it possible that the run is good (or, more precisely, assign that
event positive probability). We say that a system $\mathcal{I}$
\emph{maintains
goodness} if, for all agents $i$ and all points $(r,m)$, we have
$$(\mathcal{I},r,m) \models \mathit{good} \Rightarrow \neg K_i^0 \neg \mathit{good}.$$
Note that in any system where all finite prefixes of runs have positive
probability, this axiom will be sound.
We would now like to show that the translation of \secref{s:dy}
preserves the validity of the BAN inference rules.
Note that an instance of a BAN inference rule has the form
``from $F_1$ [and $F_2$] infer $F_3$''.
We translate this instance into
a formula of the form $F_1^T [\land F_2^T] \Rightarrow F_3^T$.
Thus, for example, an instance of rule R3 translates to the formula
\begin{ccsin}
$$
\begin{array}{ll}
(\neg K_i^0 \neg \mathit{good} \land
K_i^0(\mathit{good}\Rightarrow\ktrans{(\mathbf{fresh}(F))})\land
K_i^0(\mathit{good}\Rightarrow\ktrans{(j~\mathbf{said}~ F)})) \\ \ \ \ \ \Rightarrow
(\neg K_i^0 \neg \mathit{good} \land
K_i^0(\mathit{good}\Rightarrow (\neg K_j^0 \neg \mathit{good} \land
K_j^0(\mathit{good}\Rightarrow\ktrans{F})))).
\end{array}$$
\end{ccsin}
Note that the translation $\ktrans{(j~\mathbf{said}~ F)}$ involves the message
translation $\mtrans{F}$.
This is why, for instance, even if $F$ and $F'$ are
equivalent, $\ktrans{(j~\mathbf{said}~ F)}$ and $\ktrans{(j~\mathbf{said}~ F')}$ may not be,
while $\ktrans{(j~\mathbf{believes}~ F)}$ and $\ktrans{(j~\mathbf{believes}~ F')}$ are.
The following theorem, whose proof is in the full paper,
assures us that the translation preserves soundness.
In the theorem statement, we use the notation $r_{ij}^T$ to emphasize
that the formulas in the translation of $r$ refer to agents $i$ and $j$.
\begin{theorem}\label{t:ban1}
The translation $r_{ij}^T$ of an instance $r_{ij}$ of the BAN
inference rule {\rm R1} is valid in systems that model
Dolev-Yao agents that have no additional prior information beyond
guesses and where
agent $i$ is a nonforging Dolev-Yao
agent.
The translation $r_{ij}^T$ of an instance $r_{ij}$ of the BAN
inference rule {\rm R$3$} is valid in systems that model Dolev-Yao
agents that have no additional prior information beyond guesses,
maintain goodness,
and
where agent $j$ is honest.
Finally, the translation $r^T$ of an instance $r$ of
{\rm R2} and
{\rm R$n$} for $n\geq 4$ is valid in systems that
model Dolev-Yao agents that have no additional prior information
beyond guesses.
\end{theorem}
Soundness tells us that our translated constructs satisfy properties
analogous to those satisfied by the original BAN constructs.
Of course, there are many translations with this property, some
less interesting than others.
For example, the translation that sends every BAN formula to the
formula $\mathit{true}$ also validates the BAN inference rules.
We hope that the reader agrees that our translation captures the spirit
of the BAN rules.
\subsection{Subtleties in the BAN translation}
\label{sec:subtleties}
There is an important subtlety
in the translation of $i ~\mathbf{said}~ \mathbf{m}$.
It is actually \emph{not} quite the case that $i$ must know at time
$m'-1$ that
$\intn{\mtrans{\mathbf{m}}}$
is a substring of $\mathsf{s}'$; what $i$ must know at
time $m'-1$ is that the string $\mathsf{s}$ that represents
$\mtrans{\mathbf{m}}$
in the
current run is a substring of $\mathsf{s}'$. There is a big difference between
$ K_i \exists x ( \intn{\mathbf{m}} = x \land x \sqsubseteq \mathsf{s}')$
and
$\exists x (\intn{\mathbf{m}} = x \land K_i (x \sqsubseteq \mathsf{s}'))$.
A few examples might help to explain the distinction.
First, suppose that,
in run $r$,
$j$ sends $i$ an unencrypted string $\mathsf{s} = \int{\mathbf{m}}{r}$. This
means that $i$ can extract $\mathbf{m}$ in run $r$. Then $j$ sends $i$ the string
$\mathsf{s}' = \int{\encr{\mathbf{m}}{k}}{r}$.
Finally, $i$ forwards $\mathsf{s}'$ to some other player $j'$ in round $m'$
of $r$.
Since $i$ does not have the key $k$ at time $m'-1$, the beginning of the
round when $i$ sends $\mathsf{s}'$
to $j'$, $i$ does not realize that $\mathsf{s}'$ represents the encryption of $\mathbf{m}$.
That is, although $\mathsf{s} \sqsubseteq_r \mathsf{s}'$, there may be a run
$r'$ such that $r'_i(m'-1) = r_i(m'-1)$ but $\int{\encr{\mathsf{s}}{k}}{r'} \ne
\mathsf{s}'$. Thus, $K_i(\mathsf{s} \sqsubseteq \mathsf{s}')$ does \emph{not} hold
at $(r,m'-1)$ (although $\mathsf{s} \sqsubseteq \mathsf{s}'$ does).
As a consequence, $i ~\mathbf{said}~ \mathbf{m}$ does not hold at $(r,m)$.
Now suppose that $\mathbf{m}_2 = \encr{\mathbf{m}_1}{k'}$,
and that in run $r$,
$ \int{\mathbf{m}_2}{r} = \mathsf{s}$,
where $k'$ is a
key that $i$ does not have, and $j$ sends $i$
the string $\mathsf{s}'=
\int{\encr{\mathbf{m}_2}{k}}{r}$,
where $k$ is a shared key between
$i$ and $j$,
and $i$ then forwards $\mathsf{s}'$ to $j'$ in round $m'$.
Under natural
assumptions,
since $i$ has key $k$,
$i$ ``understands'' that $\mathsf{s}'$ is the encryption of $\mathsf{s}$ by $k$,
so in all runs $r'$ that $i$ considers possible at $(r,m'-1)$, $\mathsf{s}
\sqsubseteq
\mathsf{s}'$. Thus,
$(\mathcal{I},r,m'-1) \models \intn{\mathbf{m}_2} = \mathsf{s} \land K_i(\mathsf{s} \sqsubseteq \mathsf{s}')$,
so $(\mathcal{I},r,m'-1) \models \exists x ( \intn{\mathbf{m}_2} = x \land K_i(x \sqsubseteq \mathsf{s}'))$.
On
the other hand,
$(\mathcal{I},r,m'-1) \models \neg K_i\exists x (\intn{\mathbf{m}_2}= x \land x
\sqsubseteq\mathsf{s}')$.
Since $i$ cannot decrypt $\mathbf{m}_2$, it may well be that
$\int{\mathbf{m}_2}{r'} \ne \mathsf{s}$ in some run $r'$ that $i$ considers possible.
As we show,
our translation of $\mathbf{said}$ makes R1 sound; the
alternative translation
$$\exists y( \NDiamond \sprev (\neg \send{i}{y}
\land \mbox{{\small $\bigcirc$}} \send{i}{y}\land K_i\exists x (\intn{\mtrans{\mathbf{m}}} = x\land x \sqsubseteq y)))$$
does not.
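The distinction can be made concrete with a small toy sketch (ours; the
runs, strings, and names below are purely illustrative and are not part of
the formal semantics). We list the runs that $i$ considers possible at
$(r,m'-1)$ in the second example above and evaluate the two quantifier
orders:
\begin{verbatim}
# Worlds = runs agent i considers possible at (r, m'-1); 'interp_m2' is
# the string that the term m2 denotes in each world. Agent i holds k but
# not k', so i can recover the inner string of s_prime but cannot rule
# out that [[m2]] differs from it. All encodings here are illustrative.
sub = lambda x, y: x in y        # substring relation on concrete strings

s_prime = "enc(s,k)"             # the string i forwards to j'
worlds = [
    {"interp_m2": "s"},          # the actual run r: [[m2]] = "s"
    {"interp_m2": "t"},          # indistinguishable run: [[m2]] = "t"
]

def K_i(pred):                   # i knows pred iff it holds in all worlds
    return all(pred(w) for w in worlds)

x = worlds[0]["interp_m2"]       # the actual interpretation of m2
de_re = K_i(lambda w: sub(x, s_prime))                  # True
de_dicto = K_i(lambda w: sub(w["interp_m2"], s_prime))  # False
\end{verbatim}
The de re reading holds because $i$ knows that the concrete string it
decrypted is inside $\mathsf{s}'$; the de dicto reading fails because $i$
cannot rule out runs in which $\mathbf{m}_2$ denotes a different string.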
It is not clear exactly which translation most closely captures what BAN
had in mind for $\mathbf{said}$. We suspect that they were not aware of these
subtleties. One piece of evidence in support of this suspicion is that,
as previously noted by Syverson and van Oorschot \cite{r:syverson94},
the AT translation of $\mathbf{said}$ does not make R1 sound (despite AT's
claims to the contrary). They run into trouble precisely on examples
like our second example, with nested encryptions. Note that we are not
claiming that our
translation is the ``right'' translation of $\mathbf{said}$, although something
like our translation seems to be needed to make R1 sound.
We may instead want to consider a different translation, and interpret
$\mathbf{said}$ differently.
What is important is that our logic helps us clarify
the relevant issues.
The meaning of ``good key''
in our gloss of the formula $i\key{k}j$ is
also not as simple as we have made out.
Many protocols studied in the literature assume the
existence of a key server in charge of distributing session keys to
principals. In such a context, a good key is not only known to the
principals exchanging messages, but also of course to the server that
initially distributed the key. In some sense, the interpretation of
``good key'' depends on details of the protocol being executed,
such as
whether it requires a key server.
This suggests to us that ``good key'' is not an appropriate primitive;
it is a complex notion that is too protocol dependent.
(Moreover, we would argue that it is better to make explicit
in the server specification the allowed server behavior with respect to the keys that it generates,
rather
than hide this in a primitive of the logic.)
In any case,
we can easily accommodate
such a definition of ``good key'' with trusted servers
by assuming that the server, as well as $i$ and $j$, can extract the key.
For
simplicity, however, we consider only the interpretation of
``good key'' given above.
Note that both BAN and AT interpret $k$ being a good key as a
statement that also talks about the future; in essence, if $k$ is
a good key, it remains so throughout a protocol interaction. We can
capture such an interpretation by prefixing the translated formula by
a $\Box$ operator.
Of course,
this interpretation precludes the analysis
of protocols that leak the key value. (See Nessett \citeyear{r:nessett90}
and Burrows, Abadi and Needham \citeyear{r:burrows90a}
for discussion.) We would like the analysis to reveal such leaks, rather
than presupposing that they do not happen.
There is yet another subtlety. It is consistent with our translation
that $k$ is a good key between $i$ and $j$, but neither $i$ nor $j$
knows this. One obvious reason is that $i$ and $j$ may consider it
possible that the key has leaked. We might then hope that $i$ and $j$
know that, if the run is $\mathit{good}$ (so that there is no leakage), then $k$
is a good key. But even this may not be the case under our definition. For
example, $i$ may not know that $j$ can extract $k$. Although we do not
need it to show that the BAN axioms are sound, we might consider
requiring that, on runs that are $\mathit{good}$, $i$ and $j$ know that
a key shared between them is good. More precisely, let $E^{\mathit{good}}_G \phi$
be an abbreviation for $\bigwedge_{i \in G} K_i(\mathit{good} \Rightarrow \phi)$; that is,
all agents in $G$ know that, if the run is good, then $\phi$ holds.
We might want to require that $\ktrans{(i\key{k}j)}$ implies
$E^{\mathit{good}}_{\{i,j\}} \phi$ (where $\phi$ is the translation we give
above).
We could go even further. Rather than just requiring $i$ and $j$ to
know that if the run is good, the key is shared, we might want this fact
to be common knowledge among $i$ and $j$. To make this precise,
let $(E^{\mathit{good}}_G)^{h+1}\phi$ be an abbreviation
for $E^{\mathit{good}}_G ((E^{\mathit{good}}_G)^h \phi)$.
Define $C^{\mathit{good}}_{G} \phi$ so that it holds exactly if
$(E^{\mathit{good}}_G)^h \phi$
holds for all $h
\ge 1$.
We could then consider taking $\ktrans{(i\key{k}j)}$ to be
$C^{\mathit{good}}_{\{i,j\}} \phi$, where $\phi$ is the translation we used
above. We did not do this, in part because it is not clear that it is
really desirable to make such strong requirements for shared keys. For
example,
suppose that, in an environment where message transmission is not
completely reliable (so messages may be lost), $i$ and $j$ already have
a shared key $k$, and $i$ decides that it should be refreshed. So $i$
tells $j$ that $k$ should be replaced by $k'$ (using a message encrypted
by $k$). While $j$ can acknowledge receipt of the message and $i$ can
acknowledge the acknowledgment, as is well known,
no amount of back and forth will make it common knowledge that $k'$ is a
shared key, even if all runs are good \cite{HM90}. Nevertheless,
this should not prevent $i$ and $j$ from starting to use key $k'$ as a
shared key.
Again, the main point we want to make here is not that our translation
is the ``right'' translation (that will typically be
application-dependent), but that our logic lets us clarify the issues.
\section{Related Work}
\label{sec:related}
The goal of understanding the foundations of
authentication
logic is not new, and
goes back to early attempts at providing a semantics for the original
BAN logic.
As we mentioned, BAN logic was originally defined through
a set of inference rules without a semantics tied to the actual
protocols that the logic was meant to analyze.
The work of Abadi and Tuttle \citeyear{AT91} and others
\cite{r:gong90,r:syverson94,r:stubblebine96,r:wedel96} sought to
provide a direct semantics for both BAN logic and its subsequent
generalizations meant to make it more widely applicable.
This is not the place to trace the history of BAN logic, but
a big point of contention has always been
the idealization that needs to be performed on the
protocols~\cite{r:mao95}.
Idealization can be understood as a way to ascribe a ``meaning'' to
the various steps of a protocol, and some work has gone towards
understanding such idealization~\cite{r:mao95,r:kindred97}.
In contrast to providing a direct semantics and an account of
idealization, we instead supply a semantics to BAN logic operators by
decomposing them into more primitive and well-understood logical
operators;
our semantics makes it possible to avoid
idealization altogether by analyzing the
multiagent systems generated by the protocol under consideration,
subject to an abstraction of cryptography that captures that agents have
uncertainty about how cryptography works.
Several logics for reasoning about security protocols based on
knowledge and not subject to idealization have been proposed, going as
far back as CKT5 \cite{r:bieber90}.
Van der Meyden
and Su have used epistemic logic to
model check the dining cryptographers protocol~\cite{r:meyden04}.
In many such logics (e.g., \cite{r:accorsi01,r:toninho10}) knowledge or
belief is not interpreted as truth at all possible worlds, but rather
as a form of algorithmic knowledge~\cite{r:halpern02e}, much as with
our $\mathsf{extract}$ primitive, but with a fixed semantics.
Surveys of the application of epistemic logic to reason about
security protocols include \cite{DechesneWang2010,Pucella2015}.
In general, work in this area does not use probabilistic operators, as we have done:
we think that ultimately, security needs to be analysed probabilistically.
Cohen and Dam~\cite{cohenthesis,r:cohen05,r:cohen07} identified some
of the same subtleties in the interpretation of the BAN logic operators
that we have discussed here, but
they address those issues differently.
They develop two different semantics that use ideas from counterpart theory
and the \emph{de dicto}/\emph{de re} distinction
in modal logic to address the logical omniscience problem.
Similar to AT, what is sent and received in the semantics are
\emph{message terms}, rather than \emph{strings}, as in our work.
Both semantics work with permutations $\rho$ on the set of message terms,
which also extend to transformations on local states.
The first semantics \cite{r:cohen05} defines knowledge at a global state $s$ as
$s \models K_i\phi$ if for all global states $s'$ and permutations $\rho$ such that $s'_i = \rho(s_i)$, we have $s' \models \rho(\phi)$.
This semantics validates the BAN style axiom
$\recv{i}{m} \Rightarrow K_i \recv{i}{m}$ for all messages $m$, including
messages $m$ such as $\encr{M}{k}$ in situations where $i$
cannot extract $k$.
The second semantics \cite{r:cohen07}, which, judging from \cite{cohenthesis},
they appear to consider to be the more satisfactory,
makes a distinction between semantic message variables $x$ and syntactic message variables $m$,
and allows quantification over both. Here the semantics adds an assignment $V$ from
variables to equivalence classes of
ground message
terms with respect to equations capturing
the behaviour of cryptography, such as $\encr{\encr{M}{k}}{k} = M$. They define
\begin{itemize}
\item $s,V \models K_i\phi$ if for all global states $s'$ and permutations $\rho$ such that $s'_i = \rho(s_i)$, we have
$s',\rho \circ V \models \phi$.
\item $s,V \models \forall x (\phi)$ if for all equivalence classes $e$ of ground message terms, we have $s,V[x\mapsto e] \models \phi$.
\item $s,V \models \forall m (\phi[m/x])$ if for all ground message terms $M$, we have $s,V \models \phi[M/x]$.
\end{itemize}
This semantics validates the formula $\forall x(\recv{i}{x} \Rightarrow K_i \recv{i}{x})$ but not the formula
$\forall m(\recv{i}{m} \Rightarrow K_i \recv{i}{m})$. The latter is equivalent to
the conjunction of $\recv{i}{M} \Rightarrow K_i \recv{i}{M}$ for all messages $M$.
In effect, this semantics treats semantic message variables $x$ somewhat similarly to
our \emph{string} interpretation of messages, but whereas our
logic treats message terms and strings as having distinct types,
they allow formulas such as $x=M$ where a semantic variable $x$ is equated
to a message term $M$. The semantics of our logic more concretely matches
an operational semantics, and maintains a type distinction between
messages and strings, so we would express the same equivalence as
$x=[M]$. It would be interesting to investigate whether there is
any formal correspondence between the two approaches. (One apparent
obstacle to this is that our semantics allows a situation where an agent considers
it possible that a string it has received is the encoding of \emph{no} term
(e.g., it is a random string injected in a guessing attack by the adversary)
whereas in the Cohen and Dam semantics, everything that is received
is
semantically an equivalence class of message terms.)
Our approach to logical omniscience is similar to models used in the
cryptographic literature in which encryption is modelled as a
randomly chosen function, not known to the agents
\cite{GoldreichGM86},
who discover its behaviour by making calls to the function on particular values.
Such models can be captured in
our
framework by appropriate construction of the interpreted system.
(It is more common in cryptography, however, to use the \emph{random oracle} model \cite{CanettiGH04},
in which the random function represents a hash function rather than an encryption
function, and to use this as the basis for the construction of ciphers.)
Van Ditmarsch et al.~\cite{Ditmarsch12} use a similar idea, but their approach is purely epistemic and
they do not have probability in their models. The connections that we draw to BAN logic are not
developed in that work.
We are careful in our interpretation to distinguish
the freshness of a nonce---that it has not been used before---from its
unpredictability---how likely it is to be guessed.
Freshness is captured by the simple statement that no message containing
the nonce was recently sent, while unpredictability is captured by a
probability distribution over the choice of nonces during a run of the
protocol.
Thus, these two aspects of nonces are kept separate.
There has been little discussion in the literature about this
distinction, and nonces are often implicitly taken to be unpredictable,
even though the framework of analysis used technically captures only
freshness (for instance, the spi calculus~\cite{r:gordon99}, or
MSR~\cite{r:cervesato99}).
When not required to be unpredictable, nonces can be taken to be
sequence numbers, or timestamps.
There has been some work
discussing
the use of timestamps for nonces
(e.g., Neuman and
Stubblebine
\citeyear{r:neuman93}).
Somewhat related are the notions of authentication tests described by
Guttman \citeyear{r:guttman02a},
where a distinction is made between
nonces that, from the perspective of a given agent, should be secret
when sent (and therefore should be unpredictable) and those that need
to be secret only when received.
The probabilistic interpretation of BAN logic tells us that the
symbolic reasoning done in BAN logic can be understood in terms of more
realistic probabilistic beliefs, accounting for the probabilities
associated with the choice of nonces
and the choices of keys.
{F}rom that perspective, our work
resembles
a general class of
work that seeks to obtain results about more realistic models of
cryptography, either by using symbolic reasoning and showing that
results of such an analysis can be interpreted quantitatively, or by
using a logic that is explicitly more quantitative.
Abadi and Rogaway~\citeyear{r:abadi02a}, building on previous work by
Bellare and Rogaway \citeyear{r:bellare93}, compare the results
obtained by a symbolic analysis with those obtained by a more
computational view of cryptography.
They show that, under various conditions, the former is sound with
respect to the latter, that is, terms that are assumed
indistinguishable in a symbolic analysis remain indistinguishable
under a concrete encryption scheme.
This work has been extended in various ways (e.g.,
\cite{r:micciancio04}).
Other formal approaches to security protocol analysis have tried to
apply techniques from symbolic analysis directly to more realistic
models of cryptography (such as computational
cryptography~\cite{r:goldreich01}) by viewing messages as strings of
bits and adversaries as randomized polynomial-time algorithms.
The resulting logics are fundamentally probabilistic, in a way similar
to our probability-based interpretation of BAN logic.
Examples of such approaches include those of Backes, Pfitzmann, and
Waidner~\citeyear{r:backes03} and Datta et al.~\citeyear{r:datta05}, as
well as automated tools such as CryptoVerif~\cite{r:blanchet08}.
In general, it is difficult in such models to express local
information, that is, the
fact
that at a given point in the
protocol, a particular agent has certain information.
These approaches are geared instead
towards more global properties, which say that
there is some probability of a particular formula holding at all points in the execution (or at the end of the interaction).
\section{Conclusion}
We have introduced in this paper a simple modal propositional logic to
reason about security protocols, based on well-understood modal
operators for knowledge, probability, and time. We have shown how
those primitive notions can be used to capture the more high-level
operators of BAN logic, helping us to understand the intuitions
underlying BAN logic
and, more importantly, to capture important aspects of reasoning about
security protocols that we claim cannot be adequately expressed in a
non-epistemic way.
A further advantage of the translation is that it allows us to apply
well developed model-checking techniques
\cite{BoureanuCL09,r:gammie04,HuangLM11,LomuscioQR17,r:meyden04}
to verifying the
correctness of security protocols whose specifications are expressed
using BAN logic. (Here it becomes
useful that the translation is linear, at least if we restrict the
use of the $\mathbf{controls}$ operator.)
Our ultimate goal is to design a logic that will be useful for
reasoning about all aspects of security, based on the logic we
introduced here. In this paper, we have focused on high-level issues
concerning the expressibility of high-level operators using
well-understood primitive concepts. Other aspects of reasoning about
security protocols that we have hinted at need to be further
investigated.
A particularly significant aspect is how to deal with the issue of what
an adversary can
compute; in this paper, we have sidestepped the problem using the
proposition $\mathsf{extract}$.
We are currently investigating more general approaches to deal with
this issue.
\section*{Acknowledgements} We thank the TARK reviewers for their
insightful comments.
Halpern was supported in part by NSF grants
IIS-0911036 and CCF-1214844, by ARO grant W911NF-14-1-0017,
and by the Multidisciplinary
University Research Initiative (MURI) program administered by the
AFOSR under grant FA9550-12-1-0040.
Van der Meyden was supported by ARC grant DP120102489.
\bibliographystyle{eptcs}
Light-pulse atom interferometers (AIs) \cite{Cronin} use the recoil momentum from photon-atom interactions to coherently split and recombine matter waves. They have been used for measuring gravity \cite{Peters,GravIso,Hu}, the gravity gradient \cite{McGuirk,Rosi,Asenbaum}, rotation \cite{Gustavson,stockton,Dutta}, fundamental constants \cite{Fixler,Rosi2014,Lan,Bouchendira,Parker}, and for testing fundamental laws of physics \cite{Zhou,Hartwig,Duan,Hamilton,Jaffe,Kovachy,Yu,Graham,Harms}. Since the laser wavelength defines the photon momentum with high precision, AIs are accurate. Thus they are ideal candidates for inertial sensing or navigation. For this purpose, AIs need to be simple, reliable, and sensitive to multiple axes of acceleration and rotation. Even transportable single-axis AIs, however, require several lasers and laser amplifiers for atom trapping, interferometry, and detection \cite{Freier,Fang,Geiger,Barrett}. So far, the only AI with six-axis sensing
utilized two parabolically launched atom clouds and a complex combination of separate interferometry setups \cite{canual}. It achieved a sensitivity of 22\,$\mu$rad/s/$\sqrt{\rm Hz}$ and 16\,$\mu$m/s$^2/\sqrt{\rm Hz}$ for rotation and acceleration, respectively.
Two axes of rotation and one axis of acceleration have also been demonstrated in an atomic fountain interferometer with atomic point sources and spatially resolved detection \cite{Dickerson}. An atomic sensor using Bose-Einstein condensates has simultaneously measured gravity and magnetic field gradients \cite{Hardman}. A dual-axis accelerometer and gyroscope atom interferometer has been built by launching and recapturing two cold ensembles toward each other \cite{Rakholia}. These examples illustrate that multiaxis AIs are more complex than single-axis ones. Additionally, other advances towards field operation, such as cold atom pyramidal gravimeters \cite{Bodart}, atom interferometers with short integration time \cite{Butts}, atom interferometers with optical lattices \cite{Andia}, atom interferometry in an optical cavity \cite{CavityAI} or a warm vapor \cite{Biedermann}, and atom-chip gravimeters \cite{Abend}, have been demonstrated as well. However, multiaxis operation and simplicity have yet to come together in AIs.
Generally, the laser system contributes the most complexity. Magneto-optical traps (MOTs) require six orthogonal beams, and matter-wave splitters need relatively high laser intensity and low phase noise. In addition, specific laser frequencies are required for the different procedures. In order to construct simple and reliable laser systems, fiber lasers \cite{fiberlaser} and integrated diode lasers \cite{diodelaser} have been developed. However, to our knowledge, AIs have never been operated based on a single diode laser without optical amplifiers. Laser systems with a single diode laser and pulsed modulators can avoid frequency-locking or phase-locking between different lasers and thus improve robustness. Without optical amplifiers, laser systems also gain simplicity and power efficiency.
Here, we demonstrate a multiaxis AI based on a single diode laser and a pyramidal MOT.
The pyramidal geometry requires only a single laser beam to trap atoms and form a vertical atom interferometer. Additional beams, orthogonal to the pyramidal faces, allow for a total of five AIs along different axes. Using the Mach-Zehnder geometry and the butterfly geometry
allows for measuring acceleration and rotation separately. A single diode laser serves the multiaxis AI to maintain simplicity. With efficient two-photon Raman transitions and zero differential AC Stark shift, high-contrast fringes have been achieved using a $\mu$K-sample without velocity selection. As a demonstration, we achieve a sensitivity of 6\,$\mu$m/s$^2/\sqrt{\rm Hz}$, 300\,$\mu$rad/s/$\sqrt{\rm Hz}$, and 4\,$\mu$rad/$\sqrt{\rm Hz}$ for acceleration, rotation, and inclination, respectively, limited by vibrational noise. This work offers a path towards building simple, precise and multiaxis AIs.
\section{Multiaxis atom interferometry}
Figure \ref{Pyramid}(a) shows the principle of the multiaxis atom interferometry in a pyramid. The pyramid consists of four orthogonal reflection faces. A MOT is created inside the pyramid by directing a single laser beam vertically toward the entire pyramid, whose reflections generate the six orthogonal trapping beams \cite{Lee,Pollock}. Utilizing the incident beam and its reflections from either the whole pyramid or individual pyramidal faces as matter-wave splitters, we can build one vertical AI as well as four angled AIs along different axes.
The matter-wave splitter of our AI is based on Doppler-sensitive two-photon Raman transitions between the $F=3$ and $F=4$ hyperfine ground states of cesium atoms \cite{KasevichChu}. An atom, initially in the state $|F=3, p=0\rangle$, is transferred to a state $|F=4, p=2\hbar k\rangle$, where $\hbar k$ is the photon momentum. To make a beam splitter, a $\pi/2$ pulse places the atom in a superposition of the two states. A mirror is formed by a $\pi$-pulse, which has a 100\% probability of changing the state.
As shown in Fig. \ref{Pyramid}(b), Mach-Zehnder interferometry is performed by a $\pi/2 -\pi-\pi/2$ pulse sequence and is used for measuring acceleration. We assume $\vec{a}\cdot\vec{\Omega} T_0 $ and $\vec{\Omega} \cdot \vec{v}_0$ are negligible compared to the acceleration $\vec a$ and the gravity $\vec g$, where $\vec{\Omega}$ is the rotation, $T_0$ is the sequence time, and $\vec{v}_0$ is the initial velocity of the atom cloud at the first laser pulse. This condition is fulfilled, e.g., in stationary operation or aboard moving vehicles. The phase shift \cite{canual} caused by $\vec a$ and $\vec g$ is expressed as
\begin{equation}
\phi_a = \vec k\cdot (\vec a + \vec g )T^2,
\end{equation}
where $\vec k$ is the effective wave vector of two counterpropagating photons and $T$ is the pulse separation time. A vertical interferometer (addressed by the vertical laser beam and its reflection from the pyramid) and at least two angled ones (formed by beams aimed directly at a pyramid face) allow us to measure the full acceleration vector $\vec a=(a_x, a_y, a_z)$.
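As an illustrative order-of-magnitude check (our numbers, using the vertical-interferometer parameters given below, $\lambda = 852$\,nm and $T = 40$\,ms): with $k = 4\pi/\lambda \approx 1.47\times 10^{7}$\,rad/m, the gravity-induced phase is
$$\phi_a = k g T^2 \approx (1.47\times 10^{7}\,{\rm rad/m})(9.80\,{\rm m/s^2})(0.040\,{\rm s})^2 \approx 2.3\times 10^{5}\,{\rm rad},$$
so the acceleration sensitivity of 6\,$\mu$m/s$^2/\sqrt{\rm Hz}$ reported below corresponds to a phase resolution of about 0.14\,rad$/\sqrt{\rm Hz}$.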
To measure rotation independent of acceleration, we use the butterfly geometry with a $\pi/2-\pi-\pi-\pi/2$ pulse sequence, as shown in Fig. \ref{Pyramid}(c). The rotation-induced phase shift \cite{canual,stockton} is
\begin{equation}
\phi_\Omega =\frac 12 \vec k \cdot [(\vec a +\vec g)\times \vec \Omega]T^3.
\end{equation}
Since $\vec g$ is along the $z$ axis, two components of rotation ($\Omega_x, \Omega_y$) can be measured using laser beams pointing at two different pyramid faces. Additionally, measurement of $\Omega_z$ can be achieved by applying appropriate acceleration in the $xy$ plane to the interferometer. This allows us to measure the full rotation vector $\vec \Omega=(\Omega_x, \Omega_y, \Omega_z)$.
\begin{figure}
\centering
\epsfig{file=Pyramid.pdf,width=\linewidth}
\caption{\label{Pyramid} (a) Multiaxis AIs in a pyramid. A cold atom cloud is trapped inside a pyramidal mirror with a top angle of 90$^\circ$. Five pairs of the retro-reflected Raman beams are formed, one along the vertical axis and the other four perpendicular to the pyramidal faces. The two angled Raman pairs in the $yz$ plane are not shown. The angled Raman beams are approximately 45$^\circ$ to the gravity axis. (b) Space-time trajectories of atoms in the Mach-Zehnder geometry and (c) in the butterfly geometry. A matter wave (blue and orange curves) is coherently split, redirected and combined by momentum transfer from laser pulses (green waves).}
\end{figure}
\section{Single-diode atom interferometry}
\subsection{Single-diode laser system}
Atom interferometers consist of three procedures: atom cloud preparation, interferometry, and population detection. Only one diode laser is used for all these functions. The laser system is shown in Fig. \ref{Laser}(a). All laser radiation originates from a 240-mW distributed Bragg reflector diode laser (Photodigm, PH852DBR240T8). A sample of its power is sent to Doppler-free polarization spectroscopy, frequency stabilizing (``locking'') the laser to the cesium $F=4 \rightarrow F'=4/5$ $D_2$ crossover transition at 852 nm. An acousto-optical modulator (AOM 1) shifts the frequency of this sample so that the light reaching the atoms can be at the MOT frequency (about 10\,MHz red of the $4-5$ transition) or the detection frequency (resonant with $4-5$). Adding an offset voltage at the servo input jumps the lock point to the $F=4\rightarrow F'=4$ transition and generates the large detuning necessary for polarization gradient cooling.
The timing sequences of our atom interferometry are shown in Fig. \ref{Laser}(b). The MOT cooling light is the undeflected beam after AOM 3; a repumping frequency is generated by sending a sample of the laser through a fiber electro-optical modulator (EOM). To avoid instability resulting from interference with the MOT light, the EOM is driven such that the carrier is nulled. A liquid crystal retarder is placed after the fiber to convert the linear polarization to circular polarization, so that counterpropagating $\sigma^+/\sigma^-$ polarization pairs are formed inside the pyramid. Before interferometry, a microwave pulse followed by a blow-away laser pulse (resonant with $4-5$) selects atoms into the magnetically insensitive state.
\begin{figure}
\epsfig{file=Laser.pdf,width=0.45\textwidth}
\caption{\label{Laser} (a) Laser system. AOM, acousto-optic modulator; EOM, fiber-based electro-optical modulator. A distributed Bragg reflector diode laser is frequency stabilized by polarization spectroscopy. The frequency detuning of the laser is controlled by AOM 1. AOM 2 works as a fast switch and AOM 3 controls the laser intensity to the EOM. The vertical beam is directed toward the entire pyramid, and the angled beams toward individual pyramidal faces. The pushing beam is for spatially separating atoms in the two ground states. (b) Timing sequences of single-laser atom interferometry. Both the cooling beam and the blow-away beam are generated by the undeflected laser of AOM 3. Both the repumping beam and the Raman beam are generated by the EOM.}
\end{figure}
The Raman frequency pairs for the interferometer are formed by the carrier and the first-order sidebands from the EOM. The Raman pulses have high intensity in the fiber EOM, but their pulse duration is too short to cause photo-refractive damage to the crystal. To minimize phase noise, the EOM is driven by a phase-locked dielectric resonator oscillator. The pyramidal geometry enables lin $\perp$ lin polarization in the vertical AI, and $\sigma^+/\sigma^+$ or $\sigma^-/\sigma^-$ polarization in the angled AIs. The vertical laser to the pyramid is blocked by a shutter when the angled AIs are operated.
For detecting the $F=3$ and $F=4$ populations at the interferometer output ports, a pushing beam, slightly red-detuned from the $F=4 \rightarrow F'=5$ transition, horizontally separates atoms in the two hyperfine ground states \cite{Biedermann2009}. Both the cooling beam and repumping beam are then used for fluorescence imaging. A camera images both populations simultaneously, which makes the interference fringe immune to fluctuations in atom number and imaging laser power.
\subsection{Zero differential AC Stark shift with small detuning}
In order to drive Raman transitions with modest laser intensity, the performance with a small single-photon detuning is investigated. Figure \ref{Levels}(a) shows the energy levels and laser frequencies involved in the Raman transitions. The transitions must satisfy several requirements. Rapid Raman transitions ($\pi$-pulse time 10-20\,$\mu$s) are needed in order to efficiently address all atoms from the thermal velocity distribution of the MOT, but this requires high laser intensity and/or small single-photon detuning $\Delta$. For high accuracy and fringe contrast, the AC Stark shifts of the $F=3$ and $F=4$ states need to be equal, so that they cancel out of the interferometer phase \cite{Peters}. We calculate the effective two-photon Rabi frequency $\Omega_{\rm eff}$, the differential AC Stark shift $\Omega_{\rm AC}$, and the single-photon scattering rate $R_{\rm sc}$. To do so, we define $A_n=\sqrt{I/I_{\rm sat}}\Gamma J_n(\beta)/\sqrt{2}$ to describe the amplitude of each EOM sideband, where $I$ is the total laser intensity, $I_{\rm sat}$ is the saturation intensity, $\Gamma$ is the linewidth, $J_n$ is the Bessel function of order $n$ and $\beta$ is the modulation index of the EOM. With this, we have
\begin{eqnarray}
\Omega_{\rm eff}&=& \sum_{F'=2}^5\sum_{n=-\infty}^\infty \frac{M_{3,0}^{F',-} A_n M_{4,0}^{F',+} A_{n+1}}{2\Delta_3},\\
\Omega_{\rm AC}&=&\ \sum_{F',n}\left(\frac{|M_{3,0}^{F',-}A_n|^2}{4\Delta_3}-\frac{|M_{4,0}^{F',+}A_n|^2}{4\Delta_4}\right),
\end{eqnarray}
where $M_{F, m_F}^{F',\pm} = \langle F, m_F|F', m_F\pm 1 \rangle$ are the cesium $D_2$ dipole matrix elements for $\sigma^\pm$ transitions, expressed as multiples of $\langle J = 1/2||er||J' = 3/2\rangle$ \cite{Steck}, $\Delta_F=n\omega_{\rm hs}+\omega_F^{F'}+\Delta$ is the effective detuning of $F=$ 3 or 4, $-\hbar \omega_4^{F'}$ is the energy of the $|F'\rangle$ excited state relative to the $|F'=5\rangle$ state, and $\omega_3^{F'}=\omega_4^{F'}-\omega_{\rm hs}$. The ground state hyperfine splitting $\omega_{\rm hs}$ is also the EOM driving frequency, and $\Delta$ is the detuning of the carrier relative to the $F=4\rightarrow F'=5$ transition. The scattering rates for atoms starting in $F=3$ or $F=4$ are
\begin{equation}
R_{\rm sc}^F=\sum_{F',n}\frac{\Gamma(M_{F,0}^{F',-}A_n)^2}{\Gamma^2+2(M_{F,0}^{F',-}A_n)^2+4(\Delta_F)^2}.
\end{equation}
From these, the scattering probability for an entire interferometer can be calculated.
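A minimal numerical sketch (ours) of how the zero of $\Omega_{\rm AC}$ can be located from Eq.~(4) is given below; the coupling factors \texttt{M3}, \texttt{M4} and the sum truncation are placeholders rather than the tabulated Cs $D_2$ matrix elements of \cite{Steck}, so the printed crossings are illustrative only:
\begin{verbatim}
import numpy as np
from scipy.special import jv            # Bessel J_n: EOM sideband weights

beta  = 1.0                             # EOM modulation index
Gamma = 2*np.pi*5.22e6                  # Cs D2 linewidth (rad/s)
w_hs  = 2*np.pi*9.1926e9                # ground hyperfine splitting (rad/s)
s0    = 4.0                             # assumed I/I_sat
A     = lambda n: np.sqrt(s0)*Gamma*jv(n, beta)/np.sqrt(2)

w4 = {5: 0.0, 4: -2*np.pi*251.0e6,      # approximate offsets w_4^{F'}
      3: -2*np.pi*452.2e6, 2: -2*np.pi*603.4e6}
M3 = {5: 0.0, 4: 0.4, 3: 0.4, 2: 0.3}   # placeholder couplings from F=3
M4 = {5: 0.6, 4: 0.4, 3: 0.3, 2: 0.0}   # placeholder couplings from F=4

def omega_ac(Delta, N=2):               # Eq. (4), truncated to |n| <= N
    tot = 0.0
    for Fp in w4:
        for n in range(-N, N+1):
            d4 = n*w_hs + w4[Fp] + Delta
            d3 = d4 - w_hs              # since w_3^{F'} = w_4^{F'} - w_hs
            tot += (M3[Fp]*A(n))**2/(4*d3) - (M4[Fp]*A(n))**2/(4*d4)
    return tot

det  = 2*np.pi*np.linspace(-400e6, -20e6, 4000)  # scan red detunings
vals = np.array([omega_ac(D) for D in det])
idx  = np.where(np.diff(np.sign(vals)))[0]       # sign changes = zeros
print(det[idx]/(2*np.pi*1e6), "MHz")             # candidate zero crossings
\end{verbatim}
With the actual matrix elements, the same scan reproduces the zero crossing near $-160$\,MHz discussed below.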
\begin{figure}
\epsfig{file=Levels.pdf,width=0.45\textwidth}
\caption{\label{Levels} (a) Energy level scheme of cesium $D_2$ line.
The Raman pairs are formed by the carrier $\nu_1$ and the first-order sidebands $\nu_2$, $\nu_3$ from the EOM. (b) Rabi frequency $\Omega_{\rm eff}$, AC Stark shift $\Omega_{\rm AC}$ and single photon scattering $R_{\rm sc}$ as a function of single photon detuning $\Delta$. The Rabi frequency and AC Stark shift are measured by driving two-photon transitions. The measurement uses atoms inside the pyramid, where they see reflections from the four pyramidal faces. This increases $\Omega_{\rm AC}$ and $R_{\rm sc}$ but not $\Omega_{\rm eff}$. The scattering rate $R_{\rm sc}^3$ is measured as the number of atoms that are transferred from $F=3$ to $F=4$ when the Raman detuning is off the two-photon resonance. Each point is a single experimental shot and the curves are the theory predictions.}
\end{figure}
A suitable detuning $\Delta/(2\pi) \simeq -160\,$MHz (slightly dependent on $\beta$) can be found for which $\Omega_{\rm AC}$ vanishes and both $\Omega_{\rm eff}$ and $R_{\rm sc}$ are acceptable. This detuning can easily be reached with an AOM. In this case, the theoretical limit on the contrast of Mach-Zehnder fringes from single-photon scattering is approximately $40\%$.
Figure \ref{Levels}(b) shows a comparison of theory to experiment for the two-photon Rabi frequency, differential AC Stark shift, and single-photon scattering as functions of detuning. If we modulate the fiber EOM with an index of about 1, the differential AC Stark shift is zeroed at a red single-photon detuning of 158\,MHz. The zero differential AC Stark shift is verified with 30-Hz accuracy by measuring $\omega_{\rm hs}$ with optical Ramsey interferometry. For the Doppler-sensitive Raman transition, the duration of a $\pi$-pulse is as short as 12\,$\mu$s. Since Doppler-sensitive Raman transitions can only transfer atoms distributed within a certain velocity bandwidth, the efficiency of the Raman transitions can be increased by using a faster Rabi flopping frequency. Without velocity selection, a $\pi$ pulse transfers as many as 60\% of all atoms.
\section{Experiment and Results}
\subsection{Experiment}
The pyramid is in a glass cube of $25.4\times 25.4\times 25.4$\, mm$^3$, dielectrically coated for equal phase shift at two orthogonal polarizations at 45$^\circ$. The vertical AI (and MOT and detection) beam has a waist of 15 mm ($1/e^2$ radius) and a power of $\sim 60\,$mW before the pyramid. The diagonal beams have waists of 12 mm ($1/e^2$ radius) and power of $\sim 40\,$mW.
We capture approximately 5 million atoms from background cesium vapor in 1\,s. With an increased laser detuning of -160\,MHz and decreased ($\sim 1/4$) laser power, polarization gradients cool the atoms to about 2\,$\mu$K in 5\,ms. The cooling beam is turned off 1\,ms before the repumping beam to ensure that all the atoms stay in $F=4$. As the atoms freely fall, a microwave $\pi$-pulse (100\,$\mu$s and +20 dBm) followed by a blow-away laser pulse selects $\sim 5\times 10^5$ atoms from $|F=4, m_F=0\rangle$ into $|F=3, m_F=0\rangle$ with a bias field of 500\,mG. During interferometry, the magnetic quantization axis is aligned with the direction of the Raman pairs in order to enhance the Raman transition between $|F=3, m_F=0\rangle$ and $|F=4, m_F=0\rangle$.
As a proof of principle of multiaxis atom interferometry, we demonstrate three AIs on a passive vibration isolation platform (minusK, 150BM-1) placed on an optical table. One AI uses the vertical Raman pair, and the other two use angled Raman pairs. The vertical AI is operated with $\pi/2 -\pi-\pi/2$ sequences for gravity measurement. The angled AIs are operated with $\pi/2 -\pi-\pi/2$ sequences and $\pi/2-\pi-\pi-\pi/2$ sequences to measure the projected gravity at an angle of approximately 45$^\circ$ and the Earth's rotation rate.
\subsection{Acceleration measurement}
Figure \ref{acceleration} shows fringes measured in Mach-Zehnder geometry.
In the vertical AI, the frequency difference between the Raman frequency pair is linearly ramped at approximately 23\,MHz/s to compensate the time-varying Doppler shift of the free-falling atoms. The ramp rate of the angled AI, at about 45$^\circ$ to the vertical, is about 16\,MHz/s, which is a factor of $\sqrt{2}$ smaller than that of the vertical one. While scanning the phase of the last interferometer pulse, the fringes are obtained by counting the atom populations in the two hyperfine ground states. In particular, the pulse sequence time of the vertical AI is constrained by the geometry of the apparatus and that of the angled AIs is limited by the Raman beam waist. The vertical AI has a fringe contrast of 18\% with a sequence time of 80\,ms. The angled AIs have a fringe contrast of 22\% with a sequence time of 40\,ms. As the sequence time is as short as 2\,ms, the fringe contrast is improved to 30\%. The decreasing contrast with longer sequence time can be explained by inhomogeneous Rabi flopping of the three Raman pulses.
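The ramp rates follow directly from the free-fall Doppler shift (a consistency check using the numbers above): $\dot{\nu} = kg/(2\pi) \approx (1.47\times 10^{7}\,{\rm rad/m})(9.80\,{\rm m/s^2})/(2\pi) \approx 23$\,MHz/s for the vertical beam, reduced by the projection factor $\cos 45^\circ = 1/\sqrt{2}$ to about 16\,MHz/s for the angled beams.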
\begin{figure}
\epsfig{file=acceleration.pdf,width=0.45\textwidth}
\caption{\label{acceleration} Acceleration-sensitive fringes of Mach-Zehnder AIs. The vertical AI has a sequence time of $2T=80\,$ms. The angled AIs have a sequence time of $2T=40\,$ms. Each point is the average by 10 experimental shots. The curves are sinusoidal fits.}
\end{figure}
The top angle of the pyramid is 90$^\circ$$\pm$1$^\prime$, which leads to a systematic error of 42 parts per billion for the vertical AI. For the angled AIs, the accuracy of the top angle produces a negligible error in the alignment. However, an error in the projection angle results in systematic errors in the accelerations along the $x$ or $y$ axis. Given an angle error of 1$^\prime$ at 45$^\circ$, the systematic error would be about 2 mm/s$^2$.
Figure \ref{tide}(a) shows the Allan deviation of the gravity measurement by the vertical AI, with the tidal variation predicted by the model subtracted. The sensitivity is 6\,$\mu$m/s$^2/\sqrt{\rm Hz}$. By measuring gravity continuously over 4 days, we observed the tidal variation, as shown in Fig. \ref{tide}(b). The systematic error of the pyramidal top angle is negligible compared to the current sensitivity. Using a commercial tilt sensor (Jewell Instruments, 756-1326) to monitor the alignment between the Raman beam and the gravity axis, we calibrate the long-term drift of the platform. The sensitivity of the vertical AI is competitive with the state-of-the-art compact AIs \cite{Geiger,Barrett,canual,Rakholia,Bodart,Butts,Andia,CavityAI,Biedermann}.
The sensitivity can be further improved with a faster cycling rate, better vibration isolation, and a longer sequence time. Our current cycling time is limited by the computer control software. Once this is overcome, the cycling rate can be further increased by shortening the MOT loading time. Additionally, the vibrational noise can be decreased by better vibration isolation, such as an active feedback system \cite{ZhouM}. Finally, since the sensitivity scales with $T^2$, it can be improved by a longer sequence time. For example, a free-fall time of $\sim$ 300 ms, corresponding to a drop length of $\sim$ 0.5 m, can improve the current sensitivity by one order of magnitude.
\begin{figure}[!t]
\epsfig{file=tide.pdf,width=0.45\textwidth}
\caption{\label{tide} (a) Allan deviation of the vertical AI corrected from the Earth's tides. The dashed line shows the 1/$\sqrt{\rm Hz}$ scaling. (b) Tide gravity variation. It was measured from August 7, 2017 to August 10, 2017 by the vertical AI, compared with a tidal model. Each data point is averaged for about 30 min.}
\end{figure}
\subsection{Inclination measurement}
The Mach-Zehnder AIs along the diagonal axes measure the projected gravity at an angle of approximately 45$^\circ$, which makes them sensitive to the angle variation between the Raman beam and the gravity axis. As the fiber ports of the Raman beams are also mounted on the vibration isolation platform, the angled AIs work as atomic inclinometers.
For long-term measurement, we observe that the fluctuation of the projected gravity is correlated with the tilt sensor, as shown in Fig. \ref{tilt}.
This fluctuation is the drift of the platform, which has a period of half an hour. It also indicates that the sensitivity of the vertical AI is limited by vibration noise. The sensitivity of the angled AI is about 25\,$\mu$m/s$^2/\sqrt{\rm Hz}$. According to the projection angle, a gravity variation of about 150\,$\mu$m/s$^2$ corresponds to a tilt of 21\,$\mu$rad. The sensitivity of our atomic inclinometer is 4\,$\mu$rad/$\sqrt{\rm Hz}$, compared to 800\,$\mu$rad/$\sqrt{\rm Hz}$ in previous work \cite{Ahlers}.
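The conversion is simply the derivative of the projection (our arithmetic): at $\theta \approx 45^\circ$, $|d(g\cos\theta)/d\theta| = g\sin\theta \approx 6.9$\,m/s$^2$ per radian, so a projected-gravity change of 150\,$\mu$m/s$^2$ corresponds to $150\times 10^{-6}/6.9 \approx 21\,\mu$rad, and 25\,$\mu$m/s$^2/\sqrt{\rm Hz}$ to about 4\,$\mu$rad/$\sqrt{\rm Hz}$.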
\begin{figure}
\epsfig{file=tilt.pdf,width=0.45\textwidth}
\caption{\label{tilt} Relative projected gravity measured by one of the angled AIs. Each data point is from one fringe, which consists of 11 shots.
The orange curve shows the tilt measured by a commercial sensor.}
\end{figure}
\subsection{Rotation measurement}
Figure \ref{rotation}(a) shows interference fringes of a symmetric butterfly interferometer (i.e., with pulse separation times $T/2$, $T$, $T/2$) for rotation measurements. With a sequence time of 40\,ms, we achieve a fringe contrast of 11\%. The symmetric configuration is necessary to fully cancel the phase contribution from constant acceleration \cite{npulse}. An asymmetric configuration could be used to suppress parasitic interferometers, further enhancing contrast \cite{stockton,Dutta}. Although the relative AC Stark shift is small, it would still result in systematics for the rotation measurement. In order to cancel even this residual relative AC Stark shift, the rotation-sensitive AI is operated with opposite effective wave vectors $+k$ and $-k$.
\begin{figure}[!t]
\epsfig{file=rotation.pdf,width=0.45\textwidth}
\caption{\label{rotation} (a) Rotation-sensitive fringes of a butterfly AI along the angled axis. The two fringes are obtained with opposite effective wave vectors. The sequence time is $2T=40\,$ms. Each point is the average by 10 experimental shots. The curves are sinusoidal fits. (b) Allan deviation of the gyroscope sensitivity. The dashed line shows the 1/$\sqrt{N}$ scaling, where $N$ is the averaging fringe number. }
\end{figure}
As shown in Fig. \ref{rotation}(b), the phase sensitivity of the butterfly AI is 40\,mrad for a single fringe measurement, which corresponds to a rotation rate sensitivity of 170\,$\mu$rad/s. By operating the AI on the phase-sensitive slopes of the fringes, we achieve a sensitivity of about 300\,$\mu$rad/s/$\sqrt{\rm Hz}$ for rotation measurement. The Earth's rotation rate is measured by the gyroscope. Using the angled Mach-Zehnder AI, we measure the absolute angle between the laser wave vector and the gravity axis. During the rotation rate measurement, the projection angle is monitored by the tilt sensor. We then obtain an Earth's rotation rate of $5(1)\times 10^{-5}$\,rad/s after 20\,min of averaging time. The expected rate at Berkeley (latitude 37.78$^\circ$) is 57.9\,$\mu$rad/s, corresponding to a phase shift of 24\,mrad. The phase sensitivity of our gyroscope is similar to that achieved by state-of-the-art atom interferometry gyroscopes \cite{canual,Dutta}. The rotation rate sensitivity is constrained by the total phase accumulated from the rotation, which can be improved by a longer sequence time.
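This phase is consistent with Eq.~(2): taking the horizontal wave-vector component $k/\sqrt{2} \approx 1.04\times 10^{7}$\,rad/m (our estimate of the geometry factor), $T = 20$\,ms, and $\Omega = 57.9\,\mu$rad/s gives
$$\phi_\Omega \approx \tfrac{1}{2}\,(k/\sqrt{2})\,g\,\Omega\,T^3 \approx 24\,{\rm mrad}.$$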
\section{Conclusion}
In conclusion, we have demonstrated multiaxis AIs with a single-diode laser system and a pyramidal MOT. Efficient Doppler-sensitive Raman transitions are achieved using a small single photon detuning and modest laser intensity. With zero differential AC Stark shift and insignificant single photon scattering, high-contrast fringes are obtained. Gravity as well as the tilt of the platform are measured in Mach-Zehnder geometry with a sensitivity of 6\,$\mu$m/s$^2/\sqrt{\rm Hz}$ and 4\,$\mu$rad/$\sqrt{\rm Hz}$, respectively. Rotation is measured using the butterfly geometry with a sensitivity of 300\,$\mu$rad/s/$\sqrt{\rm Hz}$.
Being simple, precise, and capable of multiaxis operation, our multiaxis and single-diode AI has the potential to become a versatile atomic sensor in rough environments, such as drones, submarines or satellites. Compared with classical inertial sensors, AIs are more accurate and have better long-term stability. Generally, AIs are too sensitive to operate in environments with strong vibration noise. In order to overcome this problem, either measuring the vibration with another sensor or operating multiple AIs simultaneously is feasible. One example is that AIs have been operated on an airplane by monitoring the vibration noise with a mechanical accelerometer \cite{Geiger}. A simultaneous dual-species atom accelerometer has been proposed to cancel the common vibration noise \cite{Bonnin}. Extending our multiaxis AI to simultaneous operation will make it immune to vibration and enable more applications. Furthermore, by marrying our single-diode atom interferometry with grating-based or tetrahedral MOTs \cite{tetrahedralMOT,gratingMOT,chipMOT,SubDoppler}, more deployable atomic sensors are foreseeable, such as on-chip gravimeters, gradiometers and gyroscopes.
\section*{Funding Information}
Bakar Fellows Program; David and Lucile Packard Foundation; NASA Planetary Instrument Definition and Development Program through a Contract with Jet Propulsion Laboratory.
\section*{Acknowledgments}
We thank Cheong Chan, Tatyana Gavrilchenko, Chen Lai, Weicheng Zhong, and Philipp Haslinger for their contributions to the experiment and discussions. J.D. thanks the NSF Graduate Student Fellowship for support.
| 2024-02-18T23:39:52.359Z | 2018-02-13T02:09:58.000Z | algebraic_stack_train_0000 | 718 | 4,535 |
|
proofpile-arXiv_065-3790 | \section{Introduction}
The development of the visual system of humans proceeds through a number of phases, which include tuning the synaptic connections between neurons in the different areas devoted to the processing of different visual stimuli. In newborns, for instance, many connections between the Lateral Geniculate Nucleus (LGN), which is the first part of the brain devoted to visual processing, and the area V1 of the visual cortex are not formed yet. Similarly, the connections between neurons in the area V1 and subsequent areas start developing after the first month of life.
The tuning process of the receptive fields of the neurons of the visual system and the development of their inter-connected network can be compared to the training process of Artificial Neural Networks (ANNs). Indeed, since the beginning of their development, the design of ANNs has been largely inspired by the way the brain works, i.e. processing information via a network of neurons organized in a hierarchical fashion. Despite the resemblance of Rosenblatt's perceptron to the physiological structure of a neuron, there is no actual relation between the processing of ANNs and the neural processes in the brain.
Many researchers in computer vision and image processing have found inspiration in neuro-physiological studies of the visual system of the brain to design novel computational models that can process visual data.
In 1959, Hubel and Wiesel carried out experiments on the visual cortex of cats and demonstrated the existence of the \emph{simple cells}, which are neurons with an elongated receptive field. Their primary function is to detect edges and lines. Originally, the simple cells were modeled using Gabor functions~\cite{daugman1985uncertainty,jones1987evaluation} and used in image processing and computer vision applications, especially for texture description and analysis~\cite{grigorescuTexture}.
Subsequently, Hubel and Wiesel clarified that simple cells receive inputs from certain co-linear configurations of the circular receptive fields of neurons in the LGN~\cite{hubel1962receptive}. Computational models based on Gabor functions were not able to describe all the properties of simple cells and ignored the contribution of LGN neurons to the processing of visual stimuli. In~\cite{azzopardi2012corf}, a computational model based on the combination of the responses of Difference-of-Gaussians functions, which modeled the LGN receptive fields, was proposed. It achieved better contour detection performance than models based on Gabor functions and showed more properties of the simple cells in area V1 of the visual system of the brain, such as contrast invariant orientation tuning and cross orientation suppression.
Artificial neural networks (ANNs) and, in particular, convolutional neural networks (ConvNets) received much attention and showed some similarities with the visual system of the brain, especially regarding its hierarchical organization. Although the training of neural networks is formulated as an optimization problem and does not relate to biological processes, in~\cite{alexnet} it was shown that the convolutional kernels learned in the first layer of AlexNet resembled the Gabor functions that were used to model the receptive fields of neurons in the area V1 of the visual system. Similarly, unsupervised approaches for image analysis like Independent Component Analysis also learned features for image processing that resemble the Gabor-like receptive fields of neurons in area V1~\cite{ICA}.
Neuro-scientific and neuro-physiological studies of the mechanisms and systems that our brains use to process external inputs have also influenced the development of other branches of pattern recognition and artificial intelligence, such as sound signal processing. Patterson \emph{et al.}, in 1986, modeled the response of the cochlea membrane in the inner auditory system as a bank of Gammatone filters~\cite{patterson1986auditory}. They called the result of processing an input signal by a Gammatone filter bank a Gammatonegram. Similarly to the spectrogram, the Gammatonegram is a time-frequency representation of the sound in which the energy distribution over time and specific bandwidths is described. Parts of higher energy intensity correspond to regions of the cochlea membrane that vibrate more according to the energy of the mechanical sound pressure waves that hit the outer part of the auditory system. This model was exploited in~\cite{StrisciuglioCOPE,CopePreliminary2015,COPE2019} as input to a trainable feature extractor, the design of which was inspired by the activation of the inner hair cells, placed behind the cochlea, which convert the vibration into electrical stimuli on the auditory nerve.
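As a concrete sketch (ours; the parameter values are illustrative), the impulse response of a gammatone filter is an order-$n$ gamma envelope modulating a tone at the centre frequency $f_c$, with a bandwidth commonly tied to the equivalent rectangular bandwidth (ERB) scale:
\begin{verbatim}
import numpy as np

def gammatone_ir(fc, fs=16000, n=4, duration=0.05):
    """Impulse response of a gammatone filter centred at fc (Hz)."""
    t = np.arange(int(duration * fs)) / fs
    b = 1.019 * (24.7 + 0.108 * fc)  # ERB-based bandwidth (Glasberg-Moore)
    return t**(n - 1) * np.exp(-2*np.pi*b*t) * np.cos(2*np.pi*fc*t)

# A bank of such filters at ERB-spaced centre frequencies, followed by
# per-channel envelope extraction, yields a Gammatonegram.
\end{verbatim}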
This paper focuses on the relation between neuro-scientific studies and progress in Computer Vision and Image Processing, providing an overview of methods and aspects that range from the detection and processing of low-level features in images up to more complex computations in convolutional networks.
\section{Brain-inspired processing of visual data}
One of the pioneering architectures for image processing and computer vision inspired by knowledge of the brain processes of vision was the neocognitron network~\cite{Fukushima1980}. It modeled the hierarchical arrangement of the visual system of the brain by layers of S- and C-cell components, which are computational models of the simple and complex cells discovered by Hubel and Wiesel~\cite{hubel1962receptive}. The weights of the neocognitron network were learned via an unsupervised training process, based on self-organizing maps. This training resulted in a hierarchy of S- and C-cell units that resembled the organization of the human visual system.
In the remainder of this section, some of these approaches are discussed, with particular attention to the phenomena of inhibition that increase the selectivity of neurons to specific visual stimuli, and to how they are embedded in operators for the processing of visual data.
\subsection{Edge and line detection}
Simple cells in area V1 of the visual cortex receive inputs from LGN cells in the thalamus of the brain and have the function of detecting elongated structures that contain high contrast information. The receptive fields of LGN cells are modeled by on- and off-center Difference-of-Gaussians (DoG) functions, while those of simple cells are modeled as co-linear arrangements of DoG functions.
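As a minimal sketch (ours; the kernel size and the 1:2 ratio between the inner and outer standard deviations are common choices, not values prescribed above), an on-center DoG receptive field can be written as:
\begin{verbatim}
import numpy as np

def dog_kernel(size=21, sigma=2.0, ratio=0.5):
    """On-center DoG: narrow excitatory center minus wider surround."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    r2 = xx**2 + yy**2
    g = lambda s: np.exp(-r2 / (2 * s**2)) / (2 * np.pi * s**2)
    return g(ratio * sigma) - g(sigma)

# An off-center field is the negative of this kernel; convolving an image
# with the kernel and half-wave rectifying gives the model LGN response.
\end{verbatim}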
Originally, simple cells were modeled with Gabor functions, bypassing the contribution of the LGN cells. Computational models based on Gabor filters were used for contour and line detection and included in hierarchical architectures for object detection~\cite{serre2005object} and face recognition~\cite{pinto2011scaling} tasks.
Although Gabor filters were initially used to model the simple cell receptive fields~\cite{jones1987evaluation}, they did not reproduce certain properties, such as contrast invariant orientation tuning and cross orientation suppression.
These properties were achieved by a non-linear model for contour detection, named CORF (Combination of Receptive Fields)~\cite{azzopardi2012corf}. It is based on the combination of co-linearly aligned DoG functions, modeling the way simple cells combine the responses of LGN cells. A mechanism for tolerance to the curvature of lines and contours, based on a non-linear blurring, was proposed in the CORF model to improve the results when it is deployed in image processing pipelines.
An implementation of CORF, named (B-)COSFIRE (Combination of Shifted Filter Responses), where B- stands for bar-selective, was demonstrated to be successful for the detection of thick lines in images and applied to blood vessel delineation in retinal images (see Fig.~\ref{fig:retina})~\cite{StrisciuglioVIP15,AzzopardiStrisciuglio2015}, road and river segmentation in aerial images~\cite{strisciuglio2017delineation}, and crack detection in pavement images~\cite{strisciuglio2017detection}. An example of the response map computed by a B-COSFIRE filter and its thresholded binary map are shown in Fig.~\ref{fig:retina}b and Fig.~\ref{fig:retina}c, respectively. A curved receptive field was configured in~\cite{Sivakumar2020} to detect high curvature points of the retinal vessel tree. In~\cite{Strisciuglio2016,Strisciuglio15}, the authors demonstrated that a bank of B-COSFIRE filters, configured to delineate lines of different thickness, can be used as a feature extractor and combined with a classifier to perform complex decisions.
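A minimal sketch of the response computation (ours; the support points, blur growth rate and other parameters are illustrative, and the real filter is configured automatically on a prototype line pattern) is:
\begin{verbatim}
import numpy as np
from scipy.ndimage import gaussian_filter, shift

def b_cosfire_response(image, rhos=(-4, -2, 0, 2, 4),
                       sigma=2.0, alpha=0.5):
    """Blur-shift-multiply combination of DoG responses along a line."""
    dog = gaussian_filter(image, 0.5*sigma) - gaussian_filter(image, sigma)
    dog = np.maximum(dog, 0)                  # half-wave rectification
    resp = np.ones_like(dog)
    for rho in rhos:                          # support points on the line
        blur = gaussian_filter(dog, 0.5 + alpha * abs(rho))
        resp *= np.maximum(shift(blur, (-rho, 0)), 1e-12)
    return resp ** (1.0 / len(rhos))          # geometric mean

# Rotating the set of support points and taking the pixel-wise maximum
# over orientations yields a rotation-tolerant line-strength map.
\end{verbatim}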
\begin{figure}[!t]
\centering
\setlength{\unitlength}{30mm}
\subfloat[]{\label{fig:retina1}
\includegraphics[height=\unitlength]{retina}
}
\subfloat[]{\label{fig:retina3}
\includegraphics[height=\unitlength]{retina_out}
}
\subfloat[]{\label{fig:retina4}
\includegraphics[height=\unitlength]{retina_seg}
}
\caption{(a) Example retinal image, the (b) response of the B-COSFIRE filter and (c) the corresponding binary map.}
\label{fig:retina}
\end{figure}
\subsection{Object(-part) detection}
The responses of neurons in area V1 are forwarded for further processing to neurons in areas V2 and V4 of the visual cortex, which are tuned to respond to sets of curved segments or vertices of some preferred orientation and bandwidth~\cite{Pasupathy12}. These properties can be interpreted as functions for the detection of parts of objects.
Based on the principle of combining the responses of line and edge detectors at different orientations and with a certain spatial arrangement, an implementation of the COSFIRE model that takes as input a bank of Gabor filters of different orientations was released~\cite{COSFIRE}. In this case, the receptive fields of neurons in area V1 that give input to those in area V4 were modeled by means of Gabor functions. However, a hierarchical structure of COSFIRE models can be realized for more complex tasks like object recognition or scene understanding~\cite{AzzopardiShape}.
The COSFIRE model of neurons in area V4 can be trained to detect parts of objects and used in object recognition applications. In Fig.~\ref{fig:v4}, we show some examples of the parts of objects on which V4-COSFIRE models are trained. The light-blue ellipses indicate the location and the orientation at which the V1-like neuron responses are considered; their combination models a part of the object of interest. The configured models can be used to recognize parts of objects in other images, or together in a filter bank to extract feature vectors to be used in combination with a classifier.
\begin{figure}[!t]
\centering
\setlength{\unitlength}{30mm}
\includegraphics[height=\unitlength]{digits2}
\caption{The configured COSFIRE filters are represented by the sets of light blue ellipses in the top row, whose orientation indicates the preferred orientation of the corresponding Gabor filter. The bottom row shows the part of the object that the corresponding COSFIRE filter detects (figure from~\cite{COSFIRE}).}
\label{fig:v4}
\end{figure}
\subsection{Inhibition for image processing}
One important aspect of the visual processes that happen in the visual system is the mechanism of inhibition. The receptive field of a simple cell, known as the `classical receptive field'~\cite{hubel1959receptive}, is composed of an excitatory and an inhibitory region. Many simple cells are known to receive push-pull (or antiphase) inhibition~\cite{hubel1965receptive}. This form of inhibition occurs when visual stimuli of a given orientation and opposite polarity evoke responses of opposite sign~\cite{palmer1981receptive,borg1998visual,ferster1988spatially}. Furthermore, it is known to be the most widespread form of inhibition in the visual cortex~\cite{anderson2000orientation}. In practice, for a stimulus of a given polarity, the response of the inhibitory receptive field suppresses the response of the excitatory receptive field.
This phenomenon was implemented in the CORF operator and was demonstrated to be beneficial for improving contour detection in the presence of texture~\cite{azzopardi2014push}. More recently, push-pull inhibition was shown to increase the robustness of line detection to various types of noise and textured backgrounds: the RUSTICO (Robust Inhibition-augmented Curvilinear Operator) operator, proposed in~\cite{StrisciuglioTIP2019,StrisciuglioECCV18}, extends the B-COSFIRE filter for line detection with an inhibitory component. In Fig.~\ref{fig:inhib1} and Fig.~\ref{fig:inhib2}, an aerial image of a river and the corresponding ground truth are shown. The binary response map produced by RUSTICO (Fig.~\ref{fig:inhib4}) reconstructs the line pattern of interest, i.e. the river, more completely than the binary map produced by B-COSFIRE (Fig.~\ref{fig:inhib3}).
\begin{figure}[!t]
\centering
\setlength{\unitlength}{25mm}
\subfloat[]{\label{fig:inhib1}
\includegraphics[height=\unitlength]{river}
}
\subfloat[]{\label{fig:inhib2}
\includegraphics[height=\unitlength]{river_gt}
}
\subfloat[]{\label{fig:inhib3}
\includegraphics[height=\unitlength]{river_bcosfire}
}
\subfloat[]{\label{fig:inhib4}
\includegraphics[height=\unitlength]{river_rustico}
}
\caption{(a) Aerial image of a river and (b) the ground truth of the river area. The binary response map obtained by the B-COSFIRE filter (c) is noisier and captures less of the river pattern than the binary response map of RUSTICO (d).}
\label{fig:inhibexample}
\end{figure}
Another inhibition phenomenon found in the visual cortex is surround suppression: the response of a neuron is suppressed by the activity of neighboring neurons in the surround of its receptive field~\cite{bishop1973receptive,wiesel1966spatial}. The cells that exhibit this type of inhibition have a non-classical receptive field (NCRF). In practice, this means that the response to a certain stimulus can be influenced by the presence of similar stimuli in the surround of the receptive field. This mechanism of surround suppression was included in image processing operators to extend the Canny edge detector~\cite{grigorescu2003contour}, a Gabor-filter-based contour detector~\cite{grigorescu2004contour} and an operator with a butterfly-shaped receptive field~\cite{zeng2011contour}.
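In image processing terms, surround suppression can be modeled (as done, for instance, in~\cite{grigorescu2003contour}) by subtracting from the response $r(x,y)$ of the classical receptive field a fraction of the average response $t(x,y)$ computed in an annular surround:
\[
\tilde{r}(x,y) = H\big( r(x,y) - \alpha\, t(x,y) \big), \qquad H(z) = \max(z, 0),
\]
where the suppression strength $\alpha$ determines how much texture in the surround attenuates the response to the central stimulus.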
More recently, push-pull inhibition and surround suppression were combined in a single operator for contour detection, which outperformed counterpart operators with a single inhibition mechanism or none~\cite{Melotti2020}.
\section{Convolutional networks for visual data processing}
Convolutional Neural Networks (ConvNets) became the \emph{de facto} standard for image processing and computer vision because of their effectiveness in dealing with various visual recognition tasks. Successful applications of ConvNets include image and object recognition~\cite{resnet}, semantic segmentation~\cite{segnet}, place recognition~\cite{netvlad,leyva2019}, and image generation and image-to-image translation~\cite{pix2pix2017}, among others.
ConvNets are based on convolution operations and exploit the locality of the patterns of interest. This means that the value at a certain pixel location of a response map is determined by a linear combination of the values in a small neighborhood of the corresponding pixel in the input image.
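Formally, for a single-channel input image $I$ and a kernel $k$ of size $(2h+1)\times(2h+1)$, each value of the response map depends only on a $(2h+1)\times(2h+1)$ window of the input:
\[
(I * k)(x,y) = \sum_{u=-h}^{h}\,\sum_{v=-h}^{h} k(u,v)\, I(x-u,\, y-v).
\]
(In practice, most deep learning frameworks implement cross-correlation, which differs from convolution only by a reflection of the kernel.)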
From this perspective, ConvNets can be considered a regularized version of multi-layer perceptron (MLP) networks. In an MLP, full connectedness means that each neuron at a certain layer receives input from all the neurons in the previous layer. In a ConvNet, instead, each neuron (i.e. a convolution kernel) has a very limited number of inputs, and it slides over the input signal to compute its response. Although a single convolution captures local properties of the input signal in small neighborhoods, the hierarchical organization of ConvNets makes it possible to assemble more and more complex patterns in subsequent steps.
The hierarchical organization of ConvNets, which arranges a stack of convolutional layers, non-linear activation functions and sub-sampling operations, resembles the hierarchy of the visual system of the brain. Speculations of this type were reinforced by the results obtained by the AlexNet network~\cite{alexnet}. Besides improving classification accuracy by a large margin with respect to previous approaches, AlexNet learned filters in its first layer that resemble Gabor-like receptive fields (see Fig.~\ref{fig:alexnet}), which are accepted computational models of neurons in area V1 of the visual system of the brain~\cite{Marcelja80}.
Hence, the first layer of AlexNet detects edges and elongated structures of different bandwidths. The interpretation is that, in subsequent layers, the detected edge and line patterns are combined into corner-like structures, similarly to what happens in areas V2 and V4 of the visual cortex, and then into parts of objects (anterior and posterior TEO).
\begin{figure}[!t]
\centering
\setlength{\unitlength}{110mm}
\includegraphics[width=\unitlength]{alexnet}
\caption{Visualization of the convolutional kernels learned in the first layer of AlexNet.}
\label{fig:alexnet}
\end{figure}
The convolutions used in ConvNet architectures are linear operators and are not able to fully model some non-linear properties of the neurons in the visual cortex, e.g. response saturation or cross-orientation suppression.
In~\cite{volterracnn}, quadratic convolutions, in the form of Volterra kernels, were investigated and deployed as substitutes for the convolution operations in existing architectures. This type of convolution is better suited to approximate the profile of the receptive fields of some neurons in the visual system. The approach was extended in~\cite{Jiang2019}, in which quadratic convolutional kernels contributed to reducing the depth, i.e. the total number of convolutional layers, of existing architectures while maintaining the detection and classification performance of the corresponding deeper original networks.
On the one hand, the use of quadratic convolutions is justified by the closer connection with the function of the receptive fields of the complex cells in the visual system, and contributed a relatively small increase in performance. On the other hand, they require a much larger number of parameters to be learned, slowing down the training and increasing the complexity of the functions to be learned. Indeed, in~\cite{volterracnn}, due to computational limits, only the first layer of convolutions was replaced by Volterra kernels.
Another type of non-linear unit was proposed in~\cite{LopezAccess2019}, which incorporates the framework of the COSFIRE model of the neurons in area V4 of the visual system into a new type of layer for ConvNets.
The response of this layer is computed by combining the response maps of simpler local features according to a spatial structure that is determined in an automatic configuration step. During the training of the network, the CNN-COSFIRE layer can be configured to detect a certain arrangement of local features, thus allowing for a larger receptive field that captures non-local characteristics of the patterns of interest, such as parts of objects or entire objects. It was successfully demonstrated in object detection and place recognition applications where few training samples are available.
\subsection{Inhibition in convolutional networks}
ConvNets learn representations, disentangling complex features of the training data. Inhibition is believed to be a mechanism for regularization and stability of the processes that happen in the visual system~\cite{Lauritzen10201}, and forms of inhibition are learned in ConvNets as well~\cite{Tjostheim2019}.
AlexNet deployed a layer called Local Response Normalization (LRN), which implements a surround suppression mechanism called lateral inhibition. This type of inhibition creates a form of competition among neurons in a local neighborhood. The LRN builds on the idea of enhancing peak responses and penalizing flat ones on the feature map, making relevant features stand out more clearly. Thus, in the implementation, high local responses of one convolutional kernel inhibit weaker responses of other convolutional kernels in the same local neighborhood. This serves as a form of regularization of the network and improves recognition performance.
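In the formulation used by AlexNet~\cite{alexnet}, the normalized activity $b^{i}_{x,y}$ of kernel $i$ at position $(x,y)$ is obtained by dividing the raw activity $a^{i}_{x,y}$ by a term that grows with the activity of $n$ adjacent kernels at the same position:
\[
b^{i}_{x,y} = a^{i}_{x,y} \Big/ \left( k + \alpha \sum_{j=\max(0,\, i-n/2)}^{\min(N-1,\, i+n/2)} \big(a^{j}_{x,y}\big)^{2} \right)^{\!\beta},
\]
where $N$ is the total number of kernels in the layer and $k$, $n$, $\alpha$ and $\beta$ are hyper-parameters ($k=2$, $n=5$, $\alpha=10^{-4}$ and $\beta=0.75$ in~\cite{alexnet}).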
In~\cite{strisciuglio2020pushpull}, a new type of layer that implements the push-pull inhibition mechanism was proposed, which can be used as a substitute for a convolutional layer. The push-pull layer can be trained with back-propagation of the gradient of the error and is interchangeable with any convolutional layer in the network. However, as it is inspired by neuroscientific evidence of inhibition mechanisms that occur in the early stages of the visual cortex, it was deployed as a substitute of the first convolutional layer only~\cite{strisciuglio2020pushpull}.
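In simplified form, the response of the push-pull layer to an input $x$ can be written as the rectified response of a learned (push) kernel $k$, from which the weighted, rectified response of a negated and upsampled (pull) copy $\hat{k}$ of the same kernel is subtracted:
\[
\Phi(x) = \Theta(k * x) - \alpha\, \Theta(-\hat{k} * x),
\]
where $\Theta$ denotes a rectifier function and $\alpha$ controls the strength of the inhibition.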
Using the push-pull layer in ConvNet architectures achieves better performance on image classification tasks when dealing with images that have been corrupted with noise or other types of artefacts (e.g. JPEG compression, blur, contrast changes, and so on). Furthermore, when deploying the push-pull layer in ConvNets instead of a convolutional layer, the number of parameters to learn does not increase, since the pull kernel is derived from the push kernel.
\section{Conclusions}
\label{sec:conclusions}
The research fields of image processing and computer vision were influenced by discoveries and progress in the understanding of the functions of neurons in the visual system. Computational models of different types of neurons formalized by neuro-physiological studies of their responses to visual stimuli have been deployed for image processing, especially related to low-level tasks such as line and contour detection.
In this paper, we reviewed the developments of edge and contour detection algorithms influenced by progress in the understanding of the visual processes that occur in the visual cortex. We paid particular attention to the importance that inhibitory mechanisms, namely push-pull inhibition and surround suppression, have for the robustness of the processing of visual stimuli in noisy and textured scenes.
Furthermore, we covered the connections that neuro-physiological findings have with the development of Convolutional Networks, and how inhibitory phenomena were explicitly implemented in the architecture of these networks with the aim of improving their stability to varying input stimuli.
\section*{Acknowledgments}
I would like to thank Maria Rosaria Strisciuglio for the interesting discussions about the phases of learning and development of the visual system of the brain.
\bibliographystyle{splncs04}
Mutual exclusion (ME) is a commonly used technique to handle conflicts in concurrent systems. The problem of mutual exclusion was first defined by Dijkstra \cite{Dij:1965:CACM} more than half a century ago. Mutual exclusion algorithms, commonly known as locks, are used by processes to execute a part of code, called critical section (CS) in isolation without any interference from other processes. The CS typically consists of code that involves access to shared resources, which when accessed concurrently could potentially cause undesirable race conditions. The mutual exclusion problem involves designing algorithms to ensure processes enter the CS one at a time.
Generally, algorithms for mutual exclusion are designed with the assumption that failures do not occur, especially while a process is accessing a lock or a shared resource. However, such failures can occur in the real world. A power outage or network failure might create an unrecoverable situation causing processes to be unable to continue. If such failures occur, traditional mutual exclusion algorithms, which are not designed to operate properly under failures, may deadlock or otherwise fail to guarantee important safety and liveness properties. In many cases, such failures may have disastrous consequences. This gave rise to the problem of \emph{recoverable mutual exclusion (RME)}. The RME problem involves designing an algorithm that ensures mutual exclusion under the assumption that process failures may occur at \emph{any} point during their execution, but the system is able to recover from such failures and proceed without any adverse consequences.
Traditionally, concurrent algorithms use checkpointing and logging to tolerate failures by regularly saving a relevant portion of the application state to persistent storage such as a hard disk drive (HDD). Accessing a disk is orders of magnitude slower than accessing main memory. As a result, checkpointing and logging algorithms are often designed to minimize disk accesses.
\emph{Non-volatile random-access memory (NVRAM)} is a new class of memory technologies that combines the low latency and high bandwidth of traditional random access memory with the density, non-volatility, and economic characteristics of traditional storage media (\emph{e.g.}, HDD). Existing checkpointing and logging algorithms can be modified to use NVRAMs instead of disks to yield better performance, but, in doing so, we would not be leveraging the true power of NVRAMs \cite{NarHod:2010:ASPLOS, GolRam:2019:DC}. NVRAMs can be used to directly store implementation-specific variables and, as such, have the potential to provide near-instantaneous recovery from failures.
By directly storing implementation variables on NVRAMs, most of the application data can be easily recovered after failures. However, recovery of implementation variables alone is not enough. Processor state information such as contents of program counter, CPU registers and execution stack cannot be recovered completely and need to be handled separately. Due to this reason, there is a renewed interest in developing fast and dependable algorithms for solving many important computing problems in software systems vulnerable to process failures using NVRAMs. Using innovative methods, with NVRAMs in mind, we aim to design efficient and robust fault-tolerant algorithms for solving mutual exclusion and other important concurrent problems.
The RME problem in its current form was formally defined a few years ago by Golab and Ramaraju in \cite{GolRam:2016:PODC}. Several algorithms have been proposed to solve this problem \cite{GolRam:2019:DC,GolHen:2017:PODC,JayJos:2017:DISC,JayJay+:2019:PODC,DhoMit:2020:PODC}. However, for the RME problem to be of practical interest, and not just theoretical, memory reclamation must be addressed, and it poses a major obstacle in several RME algorithms. Often, RME algorithms allocate memory dynamically, which increases the memory footprint of the algorithm over time. These algorithms are typically not equipped with suitable garbage collection to avoid errors that may arise from concurrency and potential failures.
Memory reclamation, in single-process systems without failures, follows a straightforward pattern. The process allocates ``nodes'' dynamically, consumes them, and frees them once it has no more need of them. Freed nodes may later be reused (as part of a different allocation) or returned to the operating system. However, if, due to a programmer error, a node that has been freed is later accessed by the process in the context of the previous allocation, it may cause serious damage to the program and the operating system as well. In multi-process systems, when a process frees a node, we may face the same issue without any programmer error. Even if the process that frees the node can guarantee that it will not access that node again, there may exist another process that is just about to access or dereference the node in the context of the old allocation.
To avoid the aforementioned error, freeing a node is broken down into two tasks. First, a process retires the node, after which any process that does not already have access to the node can no longer gain access to it. Second, the node is reclaimed once it is deemed ``safe'', \emph{i.e.}, no process can obtain any further access to the node in the context of the previous allocation. A memory reclamation service is responsible for providing safe reclamation of a node once it is retired. On the other hand, the responsibility of retiring the node typically lies with the programmer that consumes the memory reclamation service.
Prior works on memory reclamation \cite{Mic:2004:TPDS, Fra:2004:PhD, mckenney1998read, arcangeli2003using, Bro:2015:PODC, wen:2018:ppopp} provide safe memory reclamation in the absence of failures, but are not trivially suited to account for failures and subsequent recovery using persistent memory. Moreover, most works focus on providing memory reclamation in the context of lock-free data structures.
In this work, we present the first ``general'' recoverable algorithm (that we know of) for memory reclamation in the context of recoverable mutual exclusion. Our algorithm is general enough that it can be plugged into any RME algorithm very easily, while preserving all correctness properties and most desirable properties of the algorithm. On the other hand, it is specific enough to take advantage of assumptions made by RME algorithms. In particular, our algorithm may be blocking, but blocking is acceptable in the context of RME due to the inherently blocking nature of the problem.
Our approach derives from prior work on EBR \cite{Fra:2004:PhD} (epoch based reclamation) and QSBR \cite{mckenney1998read} (quiescent state based reclamation). However, unlike EBR and QSBR, where memory consumption may grow unboundedly due to a slow process, our algorithm guarantees bounded memory consumption. The space overhead of our algorithm is $\bigO{n^2 * sizeof(node)}$, where $n$ is the total number of processes in the system, and a ``node'' is a collection of all the resources used in one passage of the CS.
One of the most important measures of performance of an RME algorithm is the maximum number of \emph{remote memory references (RMRs)} made by a process per critical section request in order to acquire and release the lock as well as recover the lock after a failure. Whether or not a memory reference incurs an RMR depends on the underlying memory model. The two most common memory models used to analyze the performance of an RME algorithm are \emph{cache-coherent (CC)} and \emph{distributed shared memory (DSM)} models. In terms of remote memory references (RMRs), our algorithm is RMR-optimal, \emph{i.e}, it has a constant RMR overhead per passage for both \textit{CC} and \textit{DSM} memory models. Moreover, this algorithm uses only read, write and comparison based primitives,
The main idea behind our approach is to
\begin{enumerate*}
\item maintain two pools of ``nodes'', clean (reclaimed) and dirty (retired),
\item wait for dirty nodes to become clean, while consuming the clean pool, and
\item switch the dirty and clean pools.
\end{enumerate*}
Our algorithm operates in tandem with any RME algorithm via two methods/APIs that can be invoked by the programmer to allocate new nodes and retire old nodes.
\paragraph{Roadmap:}
The rest of the text is organized as follows. We describe our system model and formally define the RME and the memory reclamation problem in \autoref{sec:model|problem}.
We define a new object, called the \broadcast{} object, and describe its properties in \autoref{sec:broadcast}, where we also present an RMR-optimal implementation of the \broadcast{} object for both the CC and DSM models.
In \autoref{sec:mem_rec}, we present an algorithm that provides memory reclamation for RME algorithms. This algorithm is RMR-optimal, but not lock-free. In \autoref{sec:application}, we describe how our memory reclamation algorithm can be equipped to existing RME algorithms. A detailed description of the related work is given in \autoref{sec:related}.
Finally, in \autoref{sec:concl_future}, we present our conclusions and outline directions for future research.
\section{System Model and Problem Formulation}
\label{sec:model|problem}
We assume that RME algorithms follow the same model and formulation as used by Golab and Ramaraju \cite{GolRam:2019:DC}.
\subsection{System model}
We consider an asynchronous system of $n$ processes ($p_1, p_2, \dots, p_n$). Processes can only communicate by performing read, write and read-modify-write (RMW) instructions on shared variables. Besides shared memory, each process also has its private local memory that stores variables only accessible to that process (\emph{e.g.}, program counter, CPU registers, execution stack, \emph{etc.}). Processes are not assumed to be reliable and may fail.
A system execution is modeled as a sequence of process steps. In each step, some process either performs some local computation affecting only its private variables or executes one of the available instructions (read, write or RMW) on a shared variable or fails. Processes may run at arbitrary speeds and their steps may interleave arbitrarily. In any execution, between two successive steps of a process, other processes can perform an unbounded but finite number of steps.
To access the critical section, processes synchronize using a recoverable \emph{lock} that provides mutual exclusion (ME) despite failures.
\subsection{Failure model}
We assume the \emph{crash-recover} failure model. A process may fail at any time during its execution by crashing. A crashed process recovers eventually and restarts its execution from the beginning. A crashed process does not perform any steps until it has restarted. A process may fail multiple times, and multiple processes may fail concurrently.
On crashing, a process loses the contents of all volatile private variables, including but not limited to the contents of its program counter, CPU registers and execution stack. However, the contents of the shared variables and non-volatile private variables remain unaffected and are assumed to persist despite any number of failures. When a crashed process restarts, all its volatile private variables are reset to their initial values.
Processes that have crashed are difficult to distinguish from processes that are running arbitrarily slow. However, we assume that every process is live in the sense that a process that has not crashed eventually executes its next step and a process that has crashed eventually recovers. In this work, we consider a failure to be associated with a single process.
\subsection{Process execution model}
The process execution for RME algorithms is modeled using two types of computations, namely \emph{non-critical section} and \emph{critical section}. A critical section refers to the part of the application program in which a process needs to access shared resources in isolation. A non-critical section refers to the remainder of the application program.
\begin{algorithm}[t]
\begin{\algoFontSize}
\DontPrintSemicolon
\While {true}
{
Non-Critical Section (NCS)\;
Recover\;
Enter\;
Critical Section (CS)\;
Exit\;
}
\caption{Process execution model}
\label{algo:PEM}
\end{\algoFontSize}
\end{algorithm}
The execution model of a process with respect to a lock is depicted in Algorithm~\ref{algo:PEM}. As shown, a process repeatedly executes the following five \segment{s} in order: \NCS{}, \Recover{}, \Enter{}, \CS{} and \Exit{}.
The first \segment{}, referred to as \NCS, models the steps executed by a process in which it only accesses variables outside the lock.
The second \segment{}, referred to as \Recover, models the steps executed by a process to perform any cleanup required due to past failures and restore the internal structure of the lock to a consistent state.
The third \segment{}, referred to as \Enter, models the steps executed by a process to acquire the lock so that it can execute its critical section in isolation.
The fourth \segment{}, referred to as \CS, models the steps executed by a process in the critical section where it accesses shared resources in isolation.
Finally, the fifth \segment{}, referred to as \Exit, models the steps executed by a process to release the lock it acquired earlier in \Enter{} \segment.
It is assumed that in the \NCS{} \segment{}, a process does not access any part of the lock or execute any computation that could potentially cause a race condition. Moreover, in \Recover{}, \Enter{} and \Exit{} \segment{s}, processes access shared variables pertaining to the lock (and the lock only). A process may crash at any point during its execution, including while executing \NCS, \Recover, \Enter, \CS{} or \Exit{} \segment{}.
\begin{definition}[passage]
A \emph{passage} of a process is defined as the sequence of steps executed by the process from when it begins executing \Recover{} \segment{} to either when it finishes executing the corresponding \Exit{} \segment{} or experiences a failure, whichever occurs first.
\end{definition}
\begin{definition}[super-passage]
A \emph{super-passage} of a process is a maximal non-empty sequence of consecutive passages
executed by the process, where only the last passage of the process in the sequence is failure-free.
\end{definition}
\subsection{RME problem definition}
\label{sec:problem}
A \emph{history} is a collection of steps taken by processes.
A process $p$ is said to be \emph{active} in a history $H$ if $H$ contains at least one step by $p$.
We assume that every critical section is finite.
A history $H$ is said to be \emph{fair} if
\begin{enumerate*}[label=(\alph*)]
\item it is finite, or
\item if it is infinite and every active process in $H$ either executes infinitely many steps or stops taking steps after a failure-free passage.
\end{enumerate*}
Designing a recoverable mutual exclusion (RME) algorithm involves designing \Recover, \Enter{} and \Exit{} \segment{s} such that the following correctness properties are satisfied.
\begin{description}
\item[Mutual Exclusion (ME)] For any finite history $H$, at most one process is in its \CS{} at the end of $H$.
\item[Starvation Freedom (SF)] Let $H$ be an infinite fair history in which every process crashes only a finite number of times in each super passage. Then, if a process $p$ leaves the \NCS{} \segment{} in some step of $H$, then $p$ eventually enters its \CS{} \segment{}.
\item[Bounded Critical Section Reentry (BCSR)] For any history $H$, if a process $p$ crashes inside its \CS{} \segment{}, then, until $p$ has reentered its \CS{} \segment{} at least once, any subsequent execution of \Enter{} \segment{} by $p$ either completes within a bounded number of $p$'s own steps or ends with $p$ crashing.
\end{description}
Note that mutual exclusion is a safety property, and starvation freedom is a liveness property. The bounded critical section reentry is a safety as well as a liveness property. If a process fails inside its \CS{}, then a shared object or resource (\emph{e.g.}, a shared data structure) may be left in an inconsistent state. The bounded critical section reentry property allows such a process to ``fix'' the shared resource before any other process can enter its \CS{} (\emph{e.g.}, \cite{GolRam:2019:DC,GolHen:2017:PODC,JayJay+:2019:PODC}). This property assumes that the \CS{} is idempotent; i.e., the \CS{} is designed so that, in a super passage, multiple executions of the \CS{} are equivalent to one execution of the \CS{}.
Our correctness properties are the same as those used in \cite{GolRam:2019:DC,GolHen:2017:PODC,JayJay+:2019:PODC}. We have stated them here for the sake of completeness.
In addition to the correctness properties, it is also desirable for an RME algorithm to satisfy the following additional properties.
\begin{description}
\item[Bounded Exit (BE)] For any infinite history $H$, any execution of the \Exit{} \segment{} by any process $p$ either completes in a bounded number of $p$'s own steps or ends with $p$ crashing.
\item[Bounded Recovery (BR)] For any infinite history $H$, any execution of \Recover{} \segment{} by process $p$ either completes in a bounded number of $p$'s own steps or ends with $p$ crashing.
\end{description}
\subsection{Memory Reclamation problem definition}
We only consider those RME algorithms that need to allocate nodes dynamically on the heap. We assume that the underlying RME algorithm needs to use a new node per request. A node is a collection of resources required by the underlying RME algorithm per request.
The general memory reclamation problem involves designing two methods,
\begin{enumerate*}
\item \textit{new\_node(~)}, and
\item \textit{retire(node)}
\end{enumerate*}.
These methods are used to allocate and deallocate nodes dynamically. The \textit{retire(node)} method assumes a node is retired only when there are no more references to it in shared memory, and no new shared references will be created. The responsibility of a memory reclamation service is to provide safe reclamation (defined later) of a node once it is retired. On the other hand, the responsibility of retiring the node is typically on the programmer that needs to consume the memory reclamation service.
In our work, we assume that nodes are reused (instead of freed), once they are reclaimed. As a result, the lifecycle of a node follows four (logical) stages:
\begin{enumerate*}
\item \Freed{}
\item \Allocated{}
\item \Retired{}
\item \Reclaimed{}
\end{enumerate*}.
The lifecycle of a node follows a pattern as shown in \autoref{fig:node_lifecycle}. Initially, a node is assumed to be in the \Freed{} stage. Once it is assigned by the \textit{new\_node(~)} method, it is in the \Allocated{} stage. After getting retired, it is in the \Retired{} stage, and finally, it is moved to the \Reclaimed{} stage by the memory reclamation algorithm. Once a node is reclaimed, it can be reused and will move to the \Allocated{} stage, and so on.
\begin{figure}[t]
\scalebox{1.0}{
\centering
\begin{tikzpicture}[decoration={markings, mark=at position 1 with {\arrow[scale=2,black]{>}}}]]
\node (F) [rectangle, draw]{\textsc{Free}};
\node (A) [rectangle, draw, below=1cm of F]{\textsc{Allocated}};
\node (R) [rectangle, draw, below=1cm of A] {\textsc{Retired}};
\node (M) [rectangle, draw, right=1cm of R] {\textsc{Reclaimed}};
\draw[postaction={decorate}] (F) -- (A);
\draw[postaction={decorate}] (A) -- (R);
\draw[postaction={decorate}] (R) -- (M);
\draw[postaction={decorate}] (M) -- (A);
\end{tikzpicture}
}
\captionof{figure}{The lifecycle of a node}
\label{fig:node_lifecycle}
\end{figure}
Designing a memory reclamation scheme for recoverable mutual exclusion (RME) algorithms involves designing the \textit{new\_node(~)} and \textit{retire(node)} methods such that the following correctness properties are satisfied.
\begin{description}
\item[Safe reclamation] For any history $H$, if process $p_i$ accesses a node $x$, then either $x$ is local to $p_i$, or $x$ is in the \Allocated{} or \Retired{} stage.
\end{description}
Note that any RME algorithm only requires a single node at any given point in time. Thus, we would want multiple executions of the new\_node(~) method to return the same node until the node is retired. Similarly, we want to allow the same node to be retired multiple times until a new node is requested.
\begin{description}
\item [Idempotent allocation]
Given any history $H$, process $p_i$ and a pair of operations, $op_1$ and $op_2$, of the \textit{new\_node(~)} method invoked by $p_i$, if there does not exist an invocation of \textit{retire(node)} by $p_i$ between $op_1$ and $op_2$, then either both these operations returned the same node in $H$, or at least one of these operations ended with a crash.
\item [Idempotent retirement]
Given any history $H$, process $p_i$ and a pair of operations, $op_1$ and $op_2$, of the \textit{retire(node)} method invoked by $p_i$, if there does not exist an invocation of \textit{new\_node(~)} by $p_i$ between $op_1$ and $op_2$, then either history $H' = H - \{op_1\}$ or $H'' = H - \{op_2\}$ or both are equivalent to $H$.
\end{description}
In case of failures, it is the responsibility of the underlying algorithm to detect if the failure occurred while executing any method of the memory reclamation code and if so, re-execute the same method.
\subsection{Performance measures}
We measure the performance of RME algorithms in terms of the number of \emph{remote memory references (RMRs)} incurred by the algorithm during a \emph{single} passage. Similarly, the performance of a memory reclamation algorithm for RME is measured in terms of RMR overhead per passage. The definition of a remote memory reference depends on the memory model implemented by the underlying hardware architecture. In particular, we consider the two most popular shared memory models:
\begin{description}
\item[Cache Coherent (CC)]
The CC model assumes a centralized main memory. Each process has access to the central shared memory in addition to its local cache memory. The shared variables, when needed, are cached in the local memory. These variables may be invalidated if updated by another process. Reading from an invalidated variable causes a cache miss and requires the variable value to be fetched from the main memory. Similarly, write on shared variables is performed on the main memory. Under this model, a remote memory reference occurs each time there is a fetch operation from or a write operation to the main memory.
\item[Distributed Shared Memory (DSM)]
The DSM model has no centralized memory. Shared variables reside on individual process nodes. These variables may be accessed by processes either via the interconnect or a local memory read, depending on where the variable resides. Under this model, a remote memory reference occurs when a process needs to perform \textit{any} operation on a variable that does not reside in its own node's memory.
\end{description}
\subsection{Synchronization primitives}
We assume that, in addition to read and write instructions, the system also supports \emph{compare-and-swap (\textit{CAS})} read-modify-write (RMW) instruction.
A compare-and-swap instruction takes three arguments: $address$, $old$ and $new$; it compares the contents of a memory location ($address$) to a given value ($old$) and, only if they are the same, modifies the contents of that location to a given new value ($new$). It returns \textit{true} if the contents of the location were modified and \textit{false} otherwise.
This instruction is commonly available in many modern processors such as Intel~64~\cite{Intel64Manual} and AMD64~\cite{AMD64Manual}.
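As an illustration, the semantics of this instruction can be expressed with C++11 atomics as in the following sketch (the pseudocode in this paper invokes CAS directly as CAS$(address, old, new)$):
\begin{verbatim}
#include <atomic>

// The semantics of compare-and-swap, expressed with C++11 atomics.
// compare_exchange_strong compares the contents of `address` with
// `old_val` and, only if they match, writes `new_val`; it returns
// true if and only if the write took place.
bool cas(std::atomic<int>& address, int old_val, int new_val) {
    // On failure, compare_exchange_strong stores the observed value
    // into its first argument; `old_val` is a local copy here, so
    // that side effect is discarded, matching the CAS used here.
    return address.compare_exchange_strong(old_val, new_val);
}
\end{verbatim}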
\section{The \broadcast{} object}
\label{sec:broadcast}
Our memory reclamation technique utilizes a recoverable \broadcast{} object whose primary function is to allow a designated process to signal (and free) multiple waiting processes. The \broadcast{} object is inspired by the SIGNAL object used by \JJJ{} \cite{JayJay+:2019:PODC} to solve the RME problem. Unlike the SIGNAL object, which can signal only one waiting process and perform signalling only once, the \broadcast{} object allows a designated process to signal multiple waiting processes and can be reused, even in the presence of failures.
In essence, the \broadcast{} object is a recoverable MRSW (Multi Reader Single Writer) \textbf{counter} object that supports three operations \bSet{}, \bWait{} and \bRead{}.
\begin{enumerate}[topsep=0ex]
\item \bSet{($x$)} is invoked by a process to set the counter value to $x$
\item \bWait{($x$)} is invoked by a process that intends to wait till the counter value is greater than or equal to $x$
\item \bRead{(~)} is invoked to read the current value of the counter.
\end{enumerate}
This object assumes the following usage:
\begin{enumerate}[topsep=0ex]
\item \bSet{} operation will only be invoked by a designated process $p_w$
\item \bWait{} operation can be invoked by all processes except $p_w$
\item \bSet{} operation must only be invoked in an incremental fashion. In other words, if the last successful \bSet{} operation was \bSet{$(y)$}, then the next invocation may only be \bSet{$(y+1)$} or, to maintain idempotence, \bSet{$(y)$}
\item \bWait{} operation must not be invoked with a parameter whose value is more than one unit greater than the current counter value. Formally, if the last successful \bSet{} operation was \bSet{$(y)$} and \bWait{$(z)$} is invoked, then $z \leq y+1$.
\end{enumerate}
An implementation of the \broadcast{} object is trivial in the CC model. Using a shared MRSW atomic integer, processes can achieve $\bigO{1}$ RMR-complexity for \bSet{}, \bWait{} and \bRead{}. However, this approach does not work for the DSM model. In the DSM model, each shared variable resides on a single processor node. Thus, if processes wait by spinning on the same shared variable, some processes (from remote nodes) will incur an unbounded number of RMRs. Thus, each process needs to spin on a variable stored in its local processor node. In this case, process $p_w$ needs to broadcast its \bSet{$(x)$} operation to ensure that all processes that are spinning due to an invocation of the \bWait{($x$)} operation are subsequently signalled. This action could potentially incur $\bigO{n}$ RMRs for the \bSet{$(x)$} operation. Thus, a constant-RMR implementation of the \broadcast{} object for the DSM model is non-trivial.
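Before presenting the DSM construction, we sketch the trivial CC-model implementation mentioned above; this is an illustration only, assuming a single designated writer $p_w$ and the usage rules stated earlier. Under cache coherence, each waiter spins on its locally cached copy of the counter, which is invalidated only when the writer updates it, so all three operations incur $\bigO{1}$ RMRs.
\begin{verbatim}
#include <atomic>

// A sketch of the broadcast object for the CC model, assuming a
// single designated writer p_w and monotonically increasing set
// values, as stated in the usage rules above.
struct Broadcast {
    std::atomic<int> count{0};

    void set(int x) { count.store(x); }    // invoked by p_w only
    void wait(int x) {
        // After one cache miss, re-reads hit the local cached copy
        // until p_w's store invalidates it: O(1) RMRs per wait.
        while (count.load() < x) { /* spin */ }
    }
    int read() { return count.load(); }
};
\end{verbatim}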
We present an efficient implementation of the \broadcast{} object for the DSM model in \autoref{algo:Broadcast}. This implementation incurs $\bigO{1}$ RMRs for \bSet{}, \bWait{} and \bRead{} and utilizes $\bigO{n}$ space per \broadcast{} object. The main idea in our implementation of the \broadcast{} object is a wakeup chain, created by the designated process $p_w$, such that each process in the wakeup chain wakes up the next process in the chain. To trigger the wakeup in the wakeup chain, process $p_w$ only needs to wake up the first process in the wakeup chain.
\begin{algorithm}[t]
\begin{\algoFontSize}
\begin{multicols}{2}
\SetKw{Shared}{shared non-volatile variables}
\SetKw{Local}{local variables}
\SetKw{Struct}{struct}
\SetKw{Integer}{int}
\SetKw{Boolean}{bool}
\SetKw{Array}{array}
\SetKw{Await}{await}
\SetKw{Writer}{Designated process}
\Shared \\
\Indp
\tcc{Keep track of counter value}
$count$ : atomic integer\;
\tcc{Internal counter to synchronize \bSet{} and \bWait{}}
$interim\_count$ : atomic integer\;
\tcc{value to spin on; the $i$-{th} entry is local to process $p_i$}
$target[1{\dots}n]$: \Array $[1{\dots}n]$ of integer\;
\tcc{announcement of target value to $p_w$; all entries are local to process $p_w$}
$announce[1{\dots}n]$: \Array $[1{\dots}n]$ of integer\;
\tcc{id of next process in wakeup chain; all entries are local to process $p_w$}
$wakeup[1{\dots}n]$: \Array $[1{\dots}n]$ of integer\;
\Indm
\BlankLine
\vspace{1.2\baselineskip}
\SetKwProg{fbWait}{Function}{}{end}
\fbWait{\bWait{}(x)}
{
\tcc{Wait till counter value reaches $x$}
\tcc{Process $p_w$ should never invoke \bWait{(x)}}
$target[i] \leftarrow x$\;
$announce[i] \leftarrow x$\;
\tcc{No need to wait if $p_w$ intends to set counter to $x$}
\If{$interim\_count \geq x$}
{
$target[i] \leftarrow 0$\;
}
\tcc{Spin till some process resets the target value}
\textbf{\Await} $target[i] = 0$\;
$announce[i] \leftarrow 0$\;
$k \leftarrow wakeup[i]$\;
\If{$k > 0$}
{
\tcc{Wake up next process in wakeup chain}
CAS$(target[k], x, 0)$
}
}
\BlankLine
\SetKwProg{fbRead}{Function}{}{end}
\fbRead{\bRead{}(~)}
{
\Return $count$
}
\BlankLine
\columnbreak
\Writer \\
\Indp
\tcc{Host process for \broadcast{} object}
$p_w$: writer process\;
\Indm
\BlankLine
\vspace{0.3\baselineskip}
\SetKwBlock{DummyBlock}{}{}
\SetKw{Initialization}{Initialization}
\Initialization
\SetAlgoNoLine\DummyBlock
{
\SetAlgoLined
\tcc{counts are initially zero}
$count \leftarrow 0$\;
$interim\_count \leftarrow 0$\;
\ForEach{$j \in \{ 1, 2, \dots, n\}$}
{
\tcc{processes are not waiting initially}
$target[j] \leftarrow 0$\;
$announce[j] \leftarrow 0$\;
$wakeup[j] \leftarrow 0$\;
}
}\SetAlgoLined
\BlankLine
\vspace{0.5\baselineskip}
\SetKwProg{fbSet}{Function}{}{end}
\fbSet{\bSet{}(x)}
{
\tcc{Sets the counter value to $x$}
\tcc{\bSet{(x)} may only be invoked by process $p_w$}
$last \leftarrow 0$\;
$j \leftarrow 1$\;
\tcc{Inform all process about an incoming set operation}
$interim\_count \leftarrow x$\;
\While{$j \leq n$}
{
\If{$announce[j] = x$}
{
\tcc{Assign $j$ to wakeup the last waiting process}
$wakeup[j] \leftarrow last$\;
\If{$announce[j] = x$}
{
\tcc{$j$ is the last process if it is still waiting}
$last \leftarrow j$\;
}
}
$j \leftarrow j + 1$\;
}
\If{$last > 0$}
{
\tcc{Release the last process and all waiting processes will be automatically released using the wakeup chain}
CAS$(target[last], x, 0)$\;
}
$count \leftarrow x$
}
\BlankLine
\end{multicols}
\caption{Pseudocode for \broadcast{} object for process $p_i$ for the DSM model}
\label{algo:Broadcast}
\end{\algoFontSize}
\end{algorithm}
\subsection{Variables used}
The variable $count$ is used to store the counter value. The variable $interim\_count$ is used by \bSet{($x$)} to temporarily store the new counter value until the operation terminates. Variables $target$, $announce$ and $wakeup$ are arrays of integers with one entry for each process.
The $i$-{th} entry of $target$ is used by process $p_i$ to spin on until the counter value of the \broadcast{} object reaches $target[i]$.
The $i$-{th} entry of $announce$ is used by process $p_i$ to indicate process $p_w$ about its intention to wait for the counter value of the \broadcast{} object to be set to $announce[i]$.
The $wakeup$ array is used to create a wakeup chain. The $i$-{th} entry of $wakeup$ is used by process $p_i$ to determine the next process in the wakeup chain. The $wakeup$ and $announce$ arrays are local to process $p_w$.
\subsection{Algorithm description}
In \bWait{($x$)}, a process $p_i$ first sets $target[i]$ and then $announce[i]$ to $x$, to announce its intention to wait for the counter value to reach $x$. It then checks if \bSet{} has been invoked for this particular value of $x$, by checking the variable $interim\_count$. If yes, it resets $target[i]$ to 0. It then spins, if required, till $target[i]$ is set to 0 by some process in the wakeup chain. Process $p_i$ then clears its announcement and determines the next process $p_k$ in the wakeup chain, where $wakeup[i] = k \neq 0$. It wakes up $p_k$ by updating the value of $target[k]$ from $x$ to $0$ using a CAS instruction. Note that there could be multiple wakeup chains in the algorithm for different $target$ values. However, the algorithm maintains the invariant that all processes in a particular wakeup chain have the same $target$ value.
In \bSet{($x$)}, process $p_w$ first sets the $interim\_count$ to $x$ such that any process that invokes \bWait{} from this point does not get blocked. Then, it creates the wakeup chain of processes in a reverse order by keeping track of the last process in the wakeup chain and double checking the announce array to ensure the process is indeed waiting. Lastly, $p_w$ wakes up the first process in the wakeup chain (indicated by variable $last$) and subsequently all waiting processes will be woken up.
The only use of the \bRead{(~)} operation is to keep track of the last successful \bSet{} operation. The \bSet{($x$)}, \bWait{($x$)} and \bRead{(~)} operations are all idempotent and execute correctly even if run multiple times, as long as they are run to completion once.
\section{The memory reclamation algorithm}
\label{sec:mem_rec}
Our idea relies on the notion of a \textit{grace} period and quiescent states \cite{arcangeli2003using, mckenney1998read, hart2007performance}.
\begin{definition}[Grace period]
A grace period is a time interval $[a, b]$ such that all nodes retired before time $a$ are safe to be reclaimed after time $b$.
\end{definition}
\begin{definition}[Quiescent state]
A process is said to be in a quiescent state at a certain point in time if it cannot access any node from another process using only its local variables.
\end{definition}
Note that quiescent states are defined within the context of an algorithm. Different algorithms may encompass different quiescent states. In the context of quiescent states, a \textit{grace} period is a time interval that overlaps with at least one quiescent state of each process. In order to reuse a node, a process, say $p_i$, must first retire its node and then wait for at least one complete \textit{grace} period to safely reclaim the node. After one complete grace period has elapsed, it is safe to assume that no process would be able to acquire any access to that node.
\emph{Main idea:} In the case of RME algorithms, we assume that when a process is in the \NCS{}, it is in a quiescent state. It suffices to say that after $p_i$ retires its node, if some process $p_j (j \neq i)$ is in the \NCS{} \segment{}, then $p_j$ would be unable to access that node thereafter. In order to safely reuse (reclaim) a node, process $p_i$ determines its grace period in two phases, the snapshot phase and the waiting phase. In the snapshot phase, $p_i$ takes a snapshot of the status of all processes and, in the waiting phase, $p_i$ waits till each process has been in the \NCS{} \segment{} at least once during or after its respective snapshot. In order to avoid the RMR overhead caused by scanning through all processes at once, $p_i$ executes each phase incrementally, one step at a time.
Our memory reclamation algorithm provides two methods:
\begin{enumerate*}[label=\arabic*)]
\item \textit{\newnode{}(~)}, and
\item \textit{\retire{}(~)}
\end{enumerate*}.
A pseudocode of the memory reclamation algorithm is presented in \autoref{algo:mem_rec}. Any RME algorithm that needs to dynamically allocate memory can utilize our memory reclamation algorithm by invoking these two methods.
The \textit{\newnode{}(~)} method returns a ``node'' that is required by a process to enter the \CS{} of the RME algorithm. Similarly, while leaving the \CS{}, the \textit{\retire{}(~)} method retires the node used to enter the \CS{}.
Our algorithm assumes (and relies on) the fact that each process \textbf{will} request a new node each time before entering the \CS{} and retire its node prior to entering the \NCS{} \segment{}. The RMR overhead of our algorithm is $\bigO{1}$, while the space overhead is $\bigO{n^2 * sizeof(node)}$.
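To make the intended usage pattern concrete, the following C++ sketch shows one passage of a process. The lock interface (\texttt{RMELock} with \texttt{recover}, \texttt{enter} and \texttt{exit}) is a hypothetical placeholder for the underlying RME algorithm; only \texttt{new\_node()} and \texttt{retire()} belong to the memory reclamation algorithm presented in this section.
\begin{verbatim}
struct Node { /* per-passage resources used by the RME lock */ };

// Hypothetical interfaces; bodies are given by the reclamation
// algorithm below and by the underlying RME lock, respectively.
struct MemoryReclamation {
    Node* new_node();    // provided by the reclamation algorithm
    void  retire();      // provided by the reclamation algorithm
};
struct RMELock {
    void recover(Node*); // Recover segment of the RME lock
    void enter(Node*);   // Enter segment
    void exit(Node*);    // Exit segment
};
void critical_section(); // application-specific, assumed idempotent

// One passage of a process: a fresh node is requested before
// entering the CS and retired before returning to the NCS.
void passage(MemoryReclamation& mr, RMELock& lock) {
    Node* node = mr.new_node(); // idempotent until the next retire()
    lock.recover(node);
    lock.enter(node);
    critical_section();
    lock.exit(node);
    mr.retire();                // node moves to the Retired stage
}
\end{verbatim}
On a crash, the process simply re-executes its passage from the beginning; the idempotence of \textit{\newnode{}(~)} and \textit{\retire{}(~)} guarantees that repeated invocations around the failure are harmless.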
\begin{algorithm}[t]
\begin{\algoFontSize}
\begin{multicols}{2}
\DontPrintSemicolon
\SetKw{Shared}{shared non-volatile variables}
\SetKw{Local}{local non-volatile variables}
\SetKw{Struct}{struct}
\SetKw{Integer}{int}
\SetKw{Boolean}{bool}
\SetKw{Array}{array}
\SetKw{Await}{await}
\Shared \\
\Indp
\tcc{Counter of CS attempts; $i$-{th} entry is local to process $p_i$}
$start[1 \dots n]$: \Array of integer\;
\tcc{Broadcast object to wait for CS completion; $p_i$ is the writer for the $i$-{th} entry}
$finish[1 \dots n]$: \Array of \broadcast{} object\;
\Indm
\BlankLine
\Local\\
\Indp
\tcc{Array to store last observed value of start of other process}
$snapshot[1 \dots n]:$ \Array of integers\;
\tcc{Pool of nodes for memory management}
$pool[0,1][1 \dots 2n+2]$: two pools of $2n+2$ nodes\;
\tcc{Index of current pool}
$currentpool$: integer\;
\tcc{Index of backup pool}
$backuppool$: integer\;
\tcc{Counter to track steps taken since last pool switch}
$index$: integer\;
\Indm
\BlankLine
\SetKwBlock{DummyBlock}{}{}
\SetKw{Initialization}{initialization}
\Initialization
\SetAlgoNoLine\DummyBlock
{
\SetAlgoLined
\ForEach{$j \in \{ 1, 2, \dots, n\}$}
{
$start[j] \leftarrow 0$\;
}
\ForEach{$p \in \{p_1, p_2, \dots, p_n\}$}
{
$currentpool \leftarrow 0$\;
$backuppool \leftarrow 1$\;
$index \leftarrow 1$\;
}
}\SetAlgoLined
\BlankLine
\SetKwProg{func}{Function}{}{end}
\func{\newnode{}(~)}
{
\label{line:mem_rec:newnode:begin}
\If{$start[i] = finish[i].\bRead{(~)}$}
{
\label{line:mem_rec:newnode:if}
Step(~)\;
\label{line:mem_rec:newnode:step}
$start[i]++$\;
\label{line:mem_rec:newnode:incstart}
}
\tcc{Return the node in $currentpool$ pointed by $index$}
\Return $pool[currentpool][index]$\;
\label{line:mem_rec:newnode:end}
}
\BlankLine
\columnbreak
\func{\retire{}(~)}
{
\label{line:mem_rec:retire:begin}
\If{$start[i] \neq finish[i].\bRead{(~)}$}
{
\label{line_mem_rec:retire:if}
$finish[i].\bSet{(start[i])}$\;
\label{line:mem_rec:retire:incfinish}
}
}
\BlankLine
\func{Step(~)}
{
\tcc{$index$ will progress in each execution}
\tcc{Blocking}
\uIf{$index \leq n$}
{
\tcc{Take snapshot}
$snapshot[index] = start[i]$\;
\label{line:mem_rec:step:snapshot}
$index++$\;
}
\uElseIf{$index > n$ \textbf{and} $index \leq 2n$}
{
\tcc{Wait for others to finish doorway}
\If{$index - n \neq i$}
{
\tcc{No need to wait for self}
$finish[index - n].\bWait{(snapshot[index - n])}$\;
\label{line:mem_rec:step:waiting}
}
$index++$\;
}
\uElseIf{$index = 2n + 1$}
{
\tcc{Backup pool is now reliable}
$currentpool \leftarrow backuppool$\;
\label{line:mem_rec:step:pool_set}
$index++$\;
}
\Else
{
\tcc{Reset backuppool}
$backuppool \leftarrow 1 - currentpool$\;
\label{line:mem_rec:step:pool_reset}
\tcc{Reset index}
$index \leftarrow 1$\;
\label{line:mem_rec:step:index_reset}
}
}
\BlankLine
\end{multicols}
\caption{Pseudocode for Memory reclamation for process $p_i$}
\label{algo:mem_rec}
\end{\algoFontSize}
\end{algorithm}
\subsection{Variables used}
There are two types of non-volatile variables used in this algorithm, shared and local. Variables $start$ and $finish$ are shared non-volatile arrays of integers and \broadcast{} objects respectively, with one entry per process.
The $i$-{th} entry of $start$ is used by process $p_i$ to indicate the number of new nodes requested.
The $i$-{th} entry of $finish$ is used by process $p_i$ to indicate the number of nodes retired. The entries of $finish$ are \broadcast{} objects for other processes to spin on until $finish[i]$ exceeds a particular value.
In addition, we use five local non-volatile variables. The variable $snapshot$ is an array of integers used to take a snapshot of the $start$ array. Variable $pool$ is a collection of $2 * (2n + 2)$ nodes that are employed by the underlying mutual exclusion algorithm to enter the \CS{}. Variable $currentpool$ is used to keep track of the active nodes in $pool$, and $backuppool$ is used to account for failures while switching the value of $currentpool$. Variable $index$ is used to keep track of the number of times the method $Step(~)$ has been invoked since the last time $currentpool$ was switched.
\subsection{Algorithm Description}
Each process maintains two pools locally, reserve and active, each of $2n + 2$ nodes ($pool[0,1][1,\dots,2n+2]$). The reserve pool contains nodes that have previously been retired and are in the process of reclamation. The active pool contains a mix of reclaimed nodes that are ready for reuse, and retired nodes that were consumed from the active pool while trying to reclaim the nodes from the reserve pool. The retired and safe nodes in the active pool are separated by the local variable $index$.
The $start$ and $finish$ counters function in sync and differ by at most one. If $start[i] - finish[i] = 1$ for some $i$, it implies that process $p_i$ has left the \NCS{}. On the other hand, if $start[i] - finish[i] = 0$, it implies that process $p_i$ is in the \NCS{} and in a quiescent state. In order to enter the \CS{}, a process $p_i$ first requests a new node by invoking the \textit{\newnode{}(~)} method (\autoref{line:mem_rec:newnode:begin}). This indicates that the process has left the \NCS{} and thus increments the $start[i]$ counter (\autoref{line:mem_rec:newnode:incstart}). Similarly, once a process needs to retire a node, it invokes the \textit{\retire{}} method (\autoref{line:mem_rec:retire:begin}), wherein it updates the $finish$ counter (\autoref{line:mem_rec:retire:incfinish}). The $start$ and $finish$ counters are guarded by if-blocks (\autoref{line:mem_rec:newnode:if}, \autoref{line_mem_rec:retire:if}) to warrant idempotence in case of multiple failures.
A process can consume nodes from the active pool only after taking steps towards reclaiming nodes from the reserve pool (\autoref{line:mem_rec:newnode:step}). The memory reclamation steps are implemented in the $Step(~)$ method. The role of the $Step(~)$ method is two-fold. First, it advances the local variable $index$ during each successful execution in order to guarantee a fresh node on every invocation of the \textit{\newnode{}(~)} method. Second, the $Step(~)$ method performs memory reclamation in three phases.
\begin{enumerate}
\item Snapshot (\autoref{line:mem_rec:step:snapshot}): $p_i$ takes a snapshot of $start[j]$ for all $j \in \{1,\dots,n\}$
\item Waiting (\autoref{line:mem_rec:step:waiting}): $p_i$ waits for $finish[j]$ to ``catch up'' to $start[j]$ using a \broadcast{} object as described in \autoref{sec:broadcast}. Simply put, $p_i$ waits for \textit{very old} unsatisfied requests of other processes to be satisfied. In this context, a request is very old if $p_i$ overtook it $n$ times. This ensures that each process has been in a quiescent state before $p_i$ goes to the pool swapping phase
\item Pool swap (\autoref{line:mem_rec:step:pool_set} and \autoref{line:mem_rec:step:pool_reset}): If process $p_i$ reaches this phase, it implies that at least one \textit{grace} period has elapsed since the nodes in the reserve pool were retired. At this point it is safe to reuse nodes from the reserve pool and $p_i$ simply swaps the active and reserve pool. In order to account for failures, this swap occurs over two invocations of the $Step(~)$ method and the $index$ variable is then reset (\autoref{line:mem_rec:step:index_reset}).
\end{enumerate}
Note that the algorithm is designed in such a way that multiple executions of the \textit{\newnode{}(~)} method will return the same node until the \textit{\retire{}} method is called, and vice versa. This design introduces idempotence and accommodates the failure scenario where a process crashes before being able to capture the node returned by the \textit{\newnode{}(~)} method.
\section{Applications}
\label{sec:application}
Golab and Ramaraju's algorithms \cite{GolRam:2016:PODC} themselves have bounded space complexity, but they use the MCS algorithm as their base lock, and the space complexity of the MCS algorithm may grow unboundedly. Using our memory reclamation algorithm, we can bound the space complexity of their algorithms.
Two known sub-logarithmic RME algorithms, by Golab and Hendler \cite{GolHen:2017:PODC} and by \JJJ{} \cite{JayJay+:2019:PODC}, both use MCS queue-based structures. Memory reclamation in these algorithms is not trivial and requires careful analysis and proofs. Our memory reclamation algorithm fits perfectly with these algorithms. The main idea is to employ one instance of the memory reclamation algorithm at each level of the sub-logarithmic arbitration tree. As a result, the overall space complexity of these algorithms can be bounded by $\bigO{n^3}$.
Dhoked and Mittal's algorithm \cite{DhoMit:2020:PODC} also uses an MCS-queue-based structure whose space complexity may grow unboundedly. Using a separate instance of our memory reclamation algorithm for each level of their adaptive algorithm, we can bound the space complexity of their algorithm to $\bigO{n^2 \cdot \nicefrac{\log n}{\log\log n}}$.
\section{Related Work}
\label{sec:related}
\subsection{Memory reclamation}
In \cite{Mic:2004:TPDS}, Michael used \textit{hazard pointers}, a wait-free technique for memory reclamation that only requires a bounded amount of space. Hazard pointers are special shared pointers that protect nodes from getting reclaimed; such nodes can be safely accessed. Any retired node that is not protected by a hazard pointer is assumed to be safe to reclaim. Being shared pointers, hazard pointers are expensive to read and update.
In \cite{Fra:2004:PhD}, Fraser devised a technique called epoch-based reclamation (EBR). As the name suggests, the algorithm maintains an epoch counter $e$ and three limbo lists corresponding to epochs $e-1$, $e$ and $e+1$. The main idea is that nodes retired in epoch $e-1$ are safe to be reclaimed in epoch $e+1$. This approach is not lock-free, and a slow process may cause the size of the limbo lists to increase unboundedly.
In \cite{mckenney1998read}, McKenney and Slingwine present the RCU framework, where they demonstrate the use of quiescent-state-based reclamation (QSBR). QSBR relies on detecting quiescent states and a grace period during which each thread passes through at least one quiescent state. Nodes retired before the grace period are safe to be reclaimed after the grace period. In \cite{arcangeli2003using}, Arcangeli et al. make use of the RCU framework and QSBR reclamation for the System V IPC in the Linux kernel.
In \cite{Bro:2015:PODC}, Brown presents the DEBRA and DEBRA+ reclamation schemes. DEBRA is a distributed extension of EBR where each process maintains its individual limbo lists instead of shared limbo lists, and epoch computation is performed incrementally. DEBRA+ relies on assistance from the operating system, which provides signalling in order to prevent slow or stalled processes from accessing reclaimed memory.
\subsection{Recoverable Mutual Exclusion}
Golab and Ramaraju formally defined the RME problem in \cite{GolRam:2016:PODC}. They also presented four different RME algorithms---a 2-process RME algorithm and three $n$-process RME algorithms. The first algorithm is based on Yang and Anderson's lock \cite{YanAnd:1995:DC}, and is used as a building block to design an $n$-process RME algorithm. Both these RME algorithms use only read, write and comparison-based primitives. The worst-case RMR complexity of the 2-process algorithm is $\bigO{1}$ whereas that of the resultant $n$-process algorithm is $\bigO{\log n}$. Both RME algorithms have optimal RMR complexity because, as
shown in~\cite{AttHen+:2008:STOC, AndKim:2002:DC, YanAnd:1995:DC}, any mutual exclusion algorithm that uses only read, write and comparison-based primitives has worst-case RMR complexity of $\bigOmega{\log n}$. The remaining two algorithms are used as transformations which can be applied to the MCS algorithm. The third algorithm transforms the MCS algorithm to yield a constant RMR complexity in the absence of failures, but unbounded worst case RMR complexity. The fourth algorithm transforms the MCS algorithm to achieve bounded RMR complexity in the worst case.
Later, Golab and Hendler \cite{GolHen:2017:PODC} proposed an RME algorithm with sub-logarithmic RMR complexity of $\bigO{\nicefrac{\log n}{\log \log n}}$ under the CC model, using the MCS queue-based lock~\cite{MelSco:1991:TrCS} as a building block. This algorithm was later shown to be vulnerable to starvation~\cite{JayJay+:2019:PODC}. Ramaraju showed in~\cite{Ram:2015:Thesis} that it is possible to design an RME algorithm with $\bigO{1}$ RMR complexity provided the hardware provides a special RMW instruction to swap the contents of two arbitrary locations in memory atomically. Unfortunately, to our knowledge, no hardware currently supports such an instruction.
In~\cite{JayJos:2017:DISC}, Jayanti and Joshi presented a fair RME algorithm with $\bigO{\log{n}}$ RMR complexity. Their algorithm satisfies the bounded (wait-free) exit and FCFS (first-come-first-served) properties, and requires only a bounded amount of space. In~\cite{JayJay+:2019:PODC}, \JJJ{} proposed an RME algorithm
that uses MCS queue-based structure to achieve sub-logarithmic RMR complexity of $\bigO{\nicefrac{\log n}{\log \log n}}$. To our knowledge, this is the best known RME algorithm as far as the worst-case RMR complexity is concerned that also satisfies bounded recovery and bounded exit properties.
In \cite{DhoMit:2020:PODC}, Dhoked and Mittal use the MCS queue-based lock to present an adaptive transformation of any RME algorithm, yielding an algorithm whose RMR complexity is constant in the absence of failures and gradually adapts to the number of failures. The RMR complexity of their algorithm is $\bigO{\min\{\sqrt{F}, \nicefrac{\log n}{\log\log n}\}}$, where $F$ denotes the number of failures. Using a weaker version of starvation freedom, Chan and Woelfel \cite{ChaWoe:2020:PODC} present a novel solution to the RME problem that incurs a constant number of RMRs in the amortized case, but its worst-case RMR complexity may be unbounded.
In~\cite{GolHen:2018:PODC}, Golab and Hendler proposed an RME algorithm under the assumption of system-wide failure (all processes fail and restart) with $\bigO{1}$ RMR complexity.
\section{Conclusion and Future work}
\label{sec:concl_future}
In this work, we formalized the problem of memory reclamation for recoverable mutual exclusion algorithms and presented a plug-and-play solution that can be used by existing and new RME algorithms. Our algorithm is RMR-optimal for both the CC and DSM models. A next step would be to design a recoverable memory reclamation algorithm for RME that satisfies some notion of fairness. Another direction of work involves formulating the problem of memory reclamation for recoverable lock-free data structures and designing algorithms for the same.
\bibliographystyle{ACM-Reference-Format}
\subsection{Case Study 2: CIFAR-10}
\vspace*{-1mm}
In order to demonstrate that our approach is applicable to a range of problems, we applied our method to a second well-known classification problem, CIFAR-10. The data set consists of 60,000 32$\times$32 colour images in 10 classes, with 5000 training images and 1000 test images per class. Table~\ref{tab:cifar_names} shows the names of the classes in this benchmark. The complexity and diversity of the images in this set make it a more challenging classification task than the traffic sign problem. We again constructed models of increasing complexity, with two models at each level. The accuracy of these models for the unperturbed test set is given in Table~\ref{tab:cifar-accuracy}.
\begin{table}[t]
\vspace*{-4mm}
\centering
\sffamily
\caption{CIFAR-10 class descriptions}
\label{tab:cifar_names}
\vspace*{1mm}
\setlength\tabcolsep{5pt}
\begin{scriptsize}
\begin{tabular}{ccccccccccc}
\toprule
class &0 &1 &2 &3 &4 &5 &6 &7 &8 &9 \\
name &airplane & automobile & bird & cat & deer &dog &frog &horse & ship&truck \\
\bottomrule
\end{tabular}
\end{scriptsize}
\vspace{-5mm}
\end{table}
\begin{table}[t]
\setlength\tabcolsep{5pt}
\centering
\sffamily
\caption{CIFAR-10 model accuracy \label{tab:cifar-accuracy}}
\vspace*{1mm}
\begin{scriptsize}
\begin{tabular}{clcclcclc}
\toprule
\multicolumn{2}{c}{Model} & Accuracy& \multicolumn{2}{c}{Model} & Accuracy& \multicolumn{2}{c}{Model} & Accuracy\\ \midrule
4A & \multirow{2}{*}{Small Relu} & 49.11&
5A & \multirow{2}{*}{Large Relu}& 53.20&
6A & \multirow{2}{*}{CNN} & 84.07\\
4B & & 47.45 &
5B && 53.04&
6B & & 85.17\\
\bottomrule
\end{tabular}
\end{scriptsize}
\end{table}
\vspace*{1.5mm} \noindent \textbf{DeepCert\ with test-based verification.} Model accuracy in the presence of the three forms of contextual perturbation is shown in Figure~\ref{fig:cifar-accuracy-perturbation}. We once more note that the accuracy degrades as $\epsilon$ is increased for all perturbation types. For haze, we observe a point at which the best model changes. This indicates that a system which is able to switch between models as the level of haze increases may demonstrate improved robustness. We also note that the CNN models outperform the simpler models by a significant margin under most conditions. For blur, however, when $\epsilon > 0.7$ the CNN models underperform the simpler models.
\begin{figure}[t]
\centering
\begin{subfigure}[b]{0.328\textwidth}
\includegraphics[width=\textwidth]{images/Cifar/cifar-haze_accuracy.pdf}
\caption{Haze}
\label{fig:CifarHazeAccuracy}
\end{subfigure}
\begin{subfigure}[b]{0.328\textwidth}
\includegraphics[width=\textwidth]{images/Cifar/cifar-contrast_accuracy.pdf}
\caption{Contrast}
\label{fig:CifarContrastAccuracy}
\end{subfigure}
\begin{subfigure}[b]{0.328\textwidth}
\includegraphics[width=\textwidth]{images/Cifar/cifar-blur_accuracy.pdf}
\caption{Blur}
\label{fig:CifarBlueAccuracy}
\end{subfigure}
\caption{CIFAR-10 model robustness\label{fig:cifar-accuracy-perturbation}}
\vspace{-5mm}
\end{figure}
Figure~\ref{fig:cifar-class-performance} shows the class accuracy for the CNN models subjected to the blur perturbation. We observe that per-class performance varies between the models, as was also seen in the traffic sign study.
The accuracy of class~3 in model~6A, for example, is lower than that seen in model~6B until $\epsilon >0.7$.
\begin{figure}[t]
\centering
\begin{subfigure}[b]{0.49\textwidth}
\centering
\includegraphics[width=\textwidth]{images/Cifar/model6a-blur_class-accuracy.pdf}
\caption{Model 6A}
\label{fig:Model6ABlurClass}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.49\textwidth}
\centering
\includegraphics[width=\textwidth]{images/Cifar/model6b-blur_class-accuracy.pdf}
\caption{Model 6B}
\label{fig:Model6BBlurClass}
\end{subfigure}
\begin{subfigure}[b]{0.4\textwidth}
\centering
\includegraphics[width=\textwidth]{images/Cifar/model6a-blur_epsilons.pdf}
\caption{Model 6A}
\label{fig:Model6ABlurClassBox}
\end{subfigure}
\hspace*{4mm}
\begin{subfigure}[b]{0.4\textwidth}
\centering
\includegraphics[width=\textwidth]{images/Cifar/model6b-blur_epsilons.pdf}
\caption{Model 6B}
\label{fig:Model6BBlurClassBox}
\end{subfigure}
\caption{CIFAR-10 class robustness with respect to blur\label{fig:cifar-class-performance}}
\vspace*{-5mm}
\end{figure}
\vspace*{1.5mm}\noindent \textbf{DeepCert\ with formal verification.} Formal verification was applied to models 4A and 4B by again choosing 30 samples which we perturbed with haze. The results were in line with those found for the traffic sign model, but in addition we found a sample (\#14) for model 4A which returned a lower robustness bound than when using test-based verification. Table~\ref{tab:nonLinearEpsilon} shows the predicted class $\hat{y}$ for this sample as $\epsilon$ is increased. We note that the sample is misclassified at $\epsilon=0.0723$, a value found using Marabou; the model then returns to classifying the sample correctly before misclassifying again at $\epsilon=0.365$, the value found through testing. This confirms that, whilst testing may correctly identify the robustness bound in the majority of cases, formal verification is required for guarantees of robustness.
\begin{table}[t]
\centering
\setlength\tabcolsep{10pt}
\sffamily
\caption{Formal versus test-based verification, correct label $y=9$}
\label{tab:nonLinearEpsilon}
\vspace*{1mm}
\begin{scriptsize}
\begin{tabular}{cccc} \toprule
$\epsilon$ & $\hat{y}$ & $\epsilon$ & $\hat{y}$ \\ \midrule
0.002 & 9 & 0.15 & 1\\
0.035 & 9 & 0.18 & 9\\
0.050 & 9 & 0.2 & 9\\
0.0723 & 1 & 0.3 & 9\\
0.1 & 1 & 0.365 & 2 \\ \bottomrule
\end{tabular}
\end{scriptsize}
\vspace{-5mm}
\end{table}
\section{Conclusions and Future Work\label{sec:conclusions}}
In this paper we have introduced DeepCert, a tool-supported method for the systematic verification of contextually relevant robustness for neural network classifiers. We have shown that the accuracy of a DNN image classifier is a function of the perturbation type to which sample images are exposed, and that through a systematic verification of the robustness with respect to these perturbations a more informed decision may be made to select a DNN model.
In future work we plan to investigate the use of alternative formal verification techniques with DeepCert, and the use of more complex models of natural phenomena, parameterised for use within the framework. We also intend to investigate methods that allow for the systematic assessment of robustness within regions of the input space, e.g., rain drops on a lens affecting part of an image.
\section{Experimental Results~\label{sec:experimental}}
\vspace*{-2mm}
\subsection{Case Study 1: Road Traffic Speed Sign Classification}
\vspace*{-1mm}
Our first case study uses a subset of the German Traffic Sign benchmark~\cite{stallkamp2011german} where each sample is a 32$\times$32 RGB image.
From this set we selected the seven classes which represented speed signs, the number of samples in each class are shown in Table~\ref{tab:cs1Data}.
We then built classification models at three levels of complexity with two models per level. The accuracy for all six models is reported in Table~\ref{tab:cs1Models} which shows accuracy increasing with model complexity.
\begin{table}[t]
\centering
\caption{German Speed Sign Classification: Data and Models}
\vspace*{-2mm}
\begin{scriptsize}
\begin{subtable}{.5\linewidth}
\centering
\caption{Data Sets\label{tab:cs1Data}}
\sffamily
\begin{tabular}{cccc}
\toprule
Class & Description & \# Train &\# Test \\ \midrule
0 & 30 km/h & 1980 & 720\\
1 & 50 km/h & 2010 & 750\\
2 & 60 km/h & 1260 & 450\\
3 & 70 km/h & 1770 & 660\\
4 & 80 km/h & 1650 & 630\\
5 & 100 km/h & 1290 & 450\\
6 & 120 km/h & 1260 & 450\\ \bottomrule
\end{tabular}
\end{subtable}%
\begin{subtable}{.5\linewidth}
\centering
\caption{Models\label{tab:cs1Models}}
\begin{tabular}{clc}
\toprule
Model & Description & Accuracy\\ \midrule
1A & \multirow{2}{*}{Small ReLu only model} & 0.816\\
1B & & 0.847\\ \midrule
2A & \multirow{2}{*}{Large ReLu only model}& 0.868\\
2B && 0.866\\ \midrule
3A & \multirow{2}{*}{CNN Model} & 0.988\\
3B & & 0.984\\
\bottomrule
\end{tabular}
\end{subtable}
\end{scriptsize}
\vspace*{-4mm}
\end{table}
\vspace*{1.5mm}\noindent
\textbf{DeepCert\ with test-based verification.} For each model we applied our method using test-based verification, an initial value of $\epsilon =0.5$ and a binary search heuristic with a maximum permissible interval of 0.002.
Figure~\ref{fig:ModelHaze} shows the impact of haze on model accuracy as $\epsilon$ is increased.
While Table~\ref{tab:cs1Models} shows model~3A to be the most accurate (0.988) without perturbation, we note that for $\epsilon \gtrapprox 0.7$, model~3B achieves superior accuracy.
This behaviour is more clearly seen if we consider the ReLu-only models. Here model~2A has the best initial performance, but this rapidly deteriorates as $\epsilon$ increases such that other models are superior for even small amounts of haze.
These results demonstrate the dangers of selecting a model on the basis of the accuracy reported for unperturbed samples, and show how DeepCert\ enables a more meaningful model selection for the operational context. Indeed, were the system to be equipped with additional sensing, to assess the level of haze present, the engineer may choose to switch between models as the level of haze increased.
\begin{figure}[b]
\vspace*{-4mm}
\centering
\includegraphics[width=0.5\linewidth]{images/Accuracy/gtsb-haze_accuracy.pdf}
\vspace{-2mm}
\caption{Model robustness to haze.}\label{fig:ModelHaze}
\end{figure}
Our method also allows for the identification of those classes particularly susceptible to contextual perturbations. Figure~\ref{fig:hazeDistribution} shows the performance of the convolutional neural network (CNN) models at different levels of perturbation. We note that class 1 is largely insensitive to haze; this is because an image perturbed with $\epsilon=1$ results in a solid colour image, which is classified as class 1 by both models. For all other classes the accuracy reduces as haze increases. The amount of degradation is seen to depend on the sample class and the model used. For example, class 0 is more robust to haze in model 3B than in 3A, whereas class~3 is more robust in model 3A.
Figures~\ref{fig:Model5HazeClassBox} and~\ref{fig:Model6HazeClassBox} show the distribution of $\epsilon$ values required to cause misclassification.
For class 3 we see that a number of samples are misclassified for small perturbations using model 3B but not 3A. An engineer wishing to deploy model~3B may examine these outliers to determine any correlation in image features. This may then allow for mitigation strategies at run-time, or for retraining with additional data samples.
\begin{figure}[t]
\centering
\begin{subfigure}[b]{0.49\textwidth}
\centering
\includegraphics[width=\textwidth]{images/Haze/model3a-haze_class-accuracy.pdf}
\vspace*{-1.5mm}
\caption{Model~3A}
\label{fig:Model5HazeClass}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.49\textwidth}
\centering
\includegraphics[width=\textwidth]{images/Haze/model3b-haze_class-accuracy.pdf}
\vspace*{-1.5mm}
\caption{Model~3B}
\label{fig:Model6HazeClass}
\end{subfigure}
\begin{subfigure}[b]{0.49\textwidth}
\centering
\includegraphics[width=\textwidth]{images/Haze/model3a-haze_epsilons.pdf}
\vspace*{-1.5mm}
\caption{Model~3A}
\label{fig:Model5HazeClassBox}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.49\textwidth}
\centering
\includegraphics[width=\textwidth]{images/Haze/model3b-haze_epsilons.pdf}
\vspace*{-1.5mm}
\caption{Model~3B}
\label{fig:Model6HazeClassBox}
\end{subfigure}
\vspace*{-1mm}
\caption{Model Robustness with respect to haze\label{fig:hazeDistribution}}
\vspace{-5mm}
\end{figure}
Our method also allows for the generation of meaningful counterexamples for image-based classifiers. Figure~\ref{fig:CE-Haze-Model5} shows counterexamples for model~3A and illustrates the average level of haze which each class can withstand before misclassification occurs.
This visual representation of perturbation levels allows domain experts to consider the robustness of the model with respect to normal operating conditions.
\begin{figure}[t]
\centering
\includegraphics[width=0.95\textwidth]{images/Haze/CounterExampleHaze1.png}
\vspace*{-2.5mm}
\caption{Counterexamples for model~3A. Upper row is the original image, lower row has perturbation applied at the average level required for misclassification.}
\label{fig:CE-Haze-Model5}
\end{figure}
Having demonstrated our approach using the haze perturbation, we now show results for the contrast and blur effects. Model accuracy in the presence of these perturbations is shown in Figure~\ref{fig:ModelAccuracy2}. We see that whilst the accuracy of the models degrades as the amount of perturbation increases, the shapes of the curves and the effects on individual models differ.
\begin{figure}[t]
\centering
\begin{subfigure}[b]{0.45\textwidth}
\centering
\includegraphics[width=\linewidth]{images/Accuracy/gtsb-contrast_accuracy.pdf}
\caption{Contrast}
\label{fig:ModelAccuracyContrast}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.45\textwidth}
\centering
\includegraphics[width=\linewidth]{images/Accuracy/gtsb-blur_accuracy.pdf}
\caption{Blur}
\label{fig:ModelAccuracyBlur}
\end{subfigure}
\vspace*{-1mm}
\caption{Model accuracy with respect to increased contrast and blur effects\label{fig:ModelAccuracy2}}
\vspace*{-4mm}
\end{figure}
Model~3A was the most accurate model for much of the perturbation range under the effects of haze, whereas model~3B is superior with respect to contrast effects. We also see that while model~2B was relatively robust to haze, its robustness to contrast is poor. This shows that selecting a single model for all environmental conditions is unlikely to provide optimal performance. Our method allows for a greater understanding of model weaknesses in the presence of natural phenomena and may allow for more intelligent choices to be made.
\noindent
\textbf{DeepCert\ with formal verification.}
For models 1A and 1B, we ran our method on the first 30 images correctly classified as class 3 in the test sets to compute the minimum $\epsilon$ values for haze contextual perturbations and for traditional $l_{\infty}$-norm perturbations. For all 30 samples the value of $\epsilon$ found through formal verification was the same as that found by test-based verification, although we cannot guarantee this to be true for all samples in the test set.
Table~\ref{tab:verification} shows selected results from the formal verification compared with the test-based verification.
Sample $\#4$ has an $l_\infty$ norm for model~1A that is lower than that of model~1B. This would indicate that model~1B is more robust. Examining contextual robustness, however, we see that model~1A is able to withstand more haze before misclassification occurs. A similar result is shown for sample $\#52$. This time however model~1A would be judged more robust by the $l_\infty$ measure whilst model~1B is more robust according to the contextual measure.
Other samples report identical $l_\infty$ measures between models (samples 114, 47, 3 and 15) yet their response to haze is different e.g. sample $\#114$ using model~1A is able to withstand almost twice as much haze as model~1B.
These results demonstrate that our methods are able to use formal verification techniques, where the model form allows for such analysis. We also note that non-contextual point robustness is insufficient to assess the robustness of models in the presence of contextual perturbations.
\begin{table}[t]
\caption{Minimum $\epsilon$ values for $l_{\infty}$ and haze perturbations on test images. \label{tab:verification}}
\vspace*{1mm}
\setlength\tabcolsep{10pt}
\centering
\sffamily
\begin{scriptsize}
\begin{tabular}{@{}ccccccc@{}}
\toprule
& \multicolumn{3}{c}{Model 1A} & \multicolumn{3}{c}{Model 1B} \\
\cmidrule(lr){2-4} \cmidrule(lr){5-7}
& \multicolumn{2}{c}{Verification} & \multicolumn{1}{c}{Test} & \multicolumn{2}{c}{Verification} & \multicolumn{1}{c}{Test} \\
\cmidrule(lr){2-4} \cmidrule(lr){5-7}
sample & $l_{\infty}$ & Haze & Haze & $l_{\infty}$ & Haze & Haze \\
\cmidrule{1-7}
4 & 0.002 & 0.623 & 0.623 & 0.006 & 0.525 & 0.525 \\
114 & 0.002 & 0.451 & 0.451 & 0.002 & 0.225 & 0.225 \\
47 & 0.006 & 0.592 & 0.592 & 0.006 & 0.752 & 0.752 \\
52 & 0.006 & 0.830 & 0.830 & 0.010 & 0.654 & 0.654 \\
3 & 0.010 & 0.764 & 0.764 & 0.010 & 0.713 & 0.713 \\
15 & 0.010 & 0.760 & 0.760 & 0.010 & 0.810 & 0.810 \\
\bottomrule
\end{tabular}
\end{scriptsize}
\end{table}
\section{Implementation\label{sec:implemetnation}}
\vspace*{-2mm}
We implemented our method using a Python framework which we have made available on our tool website \href{https://contextualrobustness.github.io}{https://deepcert.github.io}. The repository includes all models used in the paper, the code for the DeepCert\ tool with the encoded perturbations presented in the paper, the supporting scripts required to generate the performance visualisations and instructions on how to use the framework. In addition, a version of Marabou is provided with a Python interface in which the haze perturbation from the previous section is encoded.
\section{Introduction}
Deep neural network (DNN) image classifiers are increasingly being proposed for use in safety critical applications~\cite{gauerhof2020assuring,mitani2020detection,picardi2020assurance,tabernik2019deep}, where their accuracy is quoted as close to, or exceeding, that of human operators~\cite{de2018clinically}. It has been shown, however, that when the inputs to the classifier are subjected to small perturbations, even highly accurate DNNs can produce erroneous results~\cite{GoodfellowSS14,grosse2017statistical,yuan2019adversarial}. This has led to intense research into verification techniques that check whether a DNN is robust to perturbations within a small distance from a given input, where this distance is measured using an $L_p$ norm (e.g., the Euclidean norm for $p=2$)~\cite{DuttaJST18,KaBaDiJuKo17Reluplex,katz2019marabou,PuTa10}.
These techniques are particularly useful for identifying potential adversarial attacks on DNNs~\cite{GoodfellowSS14,kurakin2016adversarial,Moosavi-Dezfooli16,PapernotMJFCS16}. They are also useful when small changes in the DNN inputs correspond to meaningful changes in the real world, e.g., to changes in the speed and course of an aircraft for the ACAS Xu DNN verified in~\cite{KaBaDiJuKo17Reluplex}.
For DNN image classifiers, small $L_p$-norm image changes are not always meaningful. Changes that may be more meaningful for such DNNs (e.g., image blurring, hazing, variations in lighting conditions, and other natural phenomena) can also cause misclassifications, but are difficult to map to small pixel variations
\cite{hamdi2020towards,mohapatra2020verifying}, and thus cannot be examined using traditional DNN verification techniques. What is needed for the comparison and selection of DNN image classifiers used in safety-critical systems is a \emph{contextually relevant robustness verification} method capable of assessing the robustness of DNNs to these real-world phenomena~\cite{ashmore2019assuring,Carlini017,TianPJR18,zhang2018deeproad}. Moreover, this verification needs to be performed at DNN level (i.e., across large datasets with image samples from all relevant classes) rather than for a single sample image.
The tool-supported DeepCert\footnote{\underline{Deep} neural network \underline{C}ont\underline{e}xtual \underline{r}obus\underline{t}ness} method introduced in our paper addresses these needs by enabling:
\vspace*{-1.75mm}
\begin{enumerate}
\item The formal encoding of contextually relevant image perturbations at quantified perturbation levels $\epsilon\in[0,1]$.
\item The verification of contextually relevant DNN robustness, to establish how the accuracy of a DNN degrades as the perturbation level $\epsilon$ increases. DeepCert\ can perform this verification using either test-based (fast but approximate) or formal verification (slow but providing formal guarantees).
\item The generation of contextually relevant counterexamples. These counterexamples provide engineers with visually meaningful information about the level of blur, haze, etc. at which DNN classifiers stop working correctly.
\item The selection of DNNs appropriate for the operational context (i)~envisaged when a safety-critical system is designed, or (ii)~observed by the deployed system during operation.
\end{enumerate}
\vspace*{-1.75mm}
We organised the rest of the paper as follows. Section~\ref{sec:method}
describes our DeepCert\ verification method, explaining its encoding of contextual perturbations, and detailing how it can be instantiated to use test-based and formal verification. Section~\ref{sec:implemetnation} presents the DeepCert\ implementation, and Section~\ref{sec:experimental} describes the experiments we performed to evaluate it. Finally, Section~\ref{sec:related} discusses related work, and Section~\ref{sec:conclusions} provides a summary and outlines future research directions.
\section{DeepCert\ verification method\label{sec:method}}
\vspace*{-2.4mm}
\subsection{Overview}
\vspace*{-1.5mm}
Figure~\ref{fig:Process} shows our DeepCert\ method for the systematic verification of contextually relevant DNN robustness. DeepCert\ accepts as input a set of $m\geq 1$ DNN models, $\bar{\mathcal{M}}$, and a dataset of $n\geq 1$ labelled image samples, $\Omega$. Each element $u \in \Omega$ is a tuple $u = (X, y)$ where $X\in \mathcal{X}$ is the input sample, $\mathcal{X}$ is the DNN input space, and $y$ is a label indicating the class into which the models should place the sample.
During model evaluation, each model $\mathcal{M}_i \in \bar{\mathcal{M}}$ is evaluated against each labelled data sample $(X_j, y_j)\in \Omega$, to find a robustness measure for that sample. The results are then presented to the engineer as visualisations that enable model-level contextual robustness evaluation and comparison.
\begin{figure}[tb]
\centering
\includegraphics[width=0.96\linewidth]{images/ContextRobustnessProcess4.pdf}
\caption{DeepCert\ process for verifying contextually meaningful DNN robustness.}
\label{fig:Process}
\vspace*{-4mm}
\end{figure}
The sample evaluation (top of Figure~\ref{fig:Process}) is a three-stage iterative process.
The first stage (A) encodes the contextual perturbation using a function $g:\mathcal{X}\times [0,1]\rightarrow 2^\mathcal{X}$ that maps the data sample $X_j\in\mathcal{X}$ and a \emph{perturbation level} $\epsilon\in [0,1]$ to a set of DNN inputs $\mathcal{Z} = g(X_j, \epsilon)\in 2^\mathcal{X}$ corresponding to images obtained by applying the contextual perturbation being verified (e.g., haze or blur) to the original image sample $X_j$. As we explain later in this section, $g$ applies the perturbation at level $\epsilon$ when DeepCert\ employs test-based verification, and at \emph{all} levels in the range $[0,\epsilon]$ when DeepCert\ employs formal verification.
The second stage (B) verifies whether the model $\mathcal{M}_i$ is robust to the contextual perturbation $(\mathcal{Z},y_j)$, i.e., whether it classifies all images from $\mathcal{Z}$ as belonging to class $y_j$. The output of this stage is a Boolean value, $\mathsf{true}$ ($\mathsf{T}$) or $\mathsf{false}$ ($\mathsf{F}$).
The final stage (C) is a search heuristic that supplies the $\epsilon$ value used for the contextual perturbation encoding from stage~A, and employs binary search to identify perturbation level bounds $\underline{\epsilon},\bar{\epsilon}\in[0,1]$ such that:
\vspace*{-2mm}
\begin{itemize}
\item either $\underline{\epsilon}<\bar{\epsilon}$, the correct class $y_j$ is predicted for $\epsilon = \underline{\epsilon}$, and a misclassification occurs for $\epsilon =\bar{\epsilon}$;
\item or $\underline{\epsilon}=\bar{\epsilon}=0$, and the DNN misclassifies $X_j$ (with no perturbation applied).
\end{itemize}
\vspace*{-2mm}\noindent
After checking whether $X_j$ is classified correctly by model $\mathcal{M}_i$, the search heuristic starts with $\underline{\epsilon}=0$ and $\bar{\epsilon}=1$, halves the width of the interval $[\underline{\epsilon},\bar{\epsilon}]$ in each iteration, and terminates when the width $\bar{\epsilon}-\underline{\epsilon}$ of this interval drops below a predefined value $\omega$. The final interval $r_{i,j}=[\underline{\epsilon},\bar{\epsilon}]$ is then returned.
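As a sketch, the search heuristic can be written in a few lines of Python. Here \texttt{is\_robust} stands in for either the test-based check (classifying a single image perturbed at level $\epsilon$) or the formal query (a single verification call over the whole range $[0,\epsilon]$); the function name and signature are ours.
\begin{verbatim}
def find_bounds(is_robust, omega=0.002):
    # Returns (lo, hi) with a correct prediction at lo and a
    # misclassification at hi; degenerate cases: (0, 0) if the sample
    # itself is misclassified, (1, 1) if no misclassification is found.
    if not is_robust(0.0):
        return 0.0, 0.0
    if is_robust(1.0):
        return 1.0, 1.0
    lo, hi = 0.0, 1.0
    while hi - lo > omega:
        mid = 0.5 * (lo + hi)
        if is_robust(mid):
            lo = mid            # correct class still predicted at mid
        else:
            hi = mid            # misclassification found at mid
    return lo, hi
\end{verbatim}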
Applying sample evaluation to each model $\mathcal{M}_i\!\in\!\bar{\mathcal{M}}$ and every sample $X_j\!\in\!\Omega$ provides a result set $R = \{r_{1,1}, r_{1,2}, \cdots r_{m,n}\}$, where $r_{i,j}$ is the interval for the $i$-th model and $j$-th image sample. For each result, a counterexample $X_j'$ can be generated, if one exists (i.e., if $\underline{\epsilon}<1$), by perturbing the sample $X_j$ at level $\epsilon=\bar{\epsilon}$. Evaluating $X'_j$ using model $\mathcal{M}_i$ produces a misclassification label $\hat{y}_j$.
Visualisations of model and class robustness are then produced in which the accuracy of the models is presented as a function of the perturbation parameter $\epsilon$. By examining the accuracy of models across the range of expected perturbations, we can identify the conditions under which a model switch should occur, e.g., one model may perform well at low levels of haze whilst a second may be superior as the level of haze increases. Where the visualisations indicate that the accuracy of a particular class is highly sensitive to changes in $\epsilon$, this may indicate the need to choose a less sensitive model, or to gather additional training data.
\vspace*{-1.5mm}
\subsection{DeepCert\ instantiation for test-based verification\label{sec:MethodTest}}
\vspace*{-1.5mm}
For test-based verification, the contextual perturbation encoding function $g$ maps an image $X$ to a set $\mathcal{Z}$ comprising a single modified image $X'$ obtained by applying a perturbation function:
\begin{equation}
x'_{i,j} = \mathit{perturbation}(X_{i,j},\epsilon),
\end{equation}
where $x'_{i,j}$ is the pixel at position $(i,j)$ in the modified image $X'$ and $X_{i,j}$ is a subset of pixels from the original image $X$.
For colour images, a sample $X$ is encoded as an array of pixels each of which is a 3-tuple of values representing the red, green and blue components of the colour in that pixel. We detail below the encoding of three typical contextual perturbations (Figure~\ref{fig:epsilonImage}).
\vspace*{1.5mm}\noindent
\textbf{Haze encoding.}
Haze represents a phenomenon where particles in the atmosphere scatter the light reaching the observer. The effect is to drain colour from the image and create a veil of white, or coloured, mist over the image. While realistic approaches to the modelling of haze require complex models~\cite{zhang2017towards}, simplifying assumptions can be made. Assuming the haze is uniform, a haze colour may be defined as $C^{f} = (r,g,b)$ and applied to the image as:
\begin{equation}
x'_{i,j} = (1-\epsilon) x_{i,j} + \epsilon~ C^{f}
\end{equation}
where $\epsilon \in [0,1]$ is a proxy for the density of the haze. When $\epsilon = 0$ the image is unaltered and when $\epsilon = 1$ the image is a single solid colour $C^f$. Multiplication and addition are applied to the pixel in an element-wise manner.
\vspace*{1.5mm}\noindent
\textbf{Contrast variation encoding.}
When fixed aperture lenses are employed, or when the dynamic range of the scene is extreme, the contrast in the image may become compressed. This effect may be modelled as:
\begin{equation}
x'_{i,j} = \textsf{Max}\left(0,\textsf{Min}\left(1,\frac{x_{i,j} - (0.5*\epsilon)}{1- \epsilon}\right) \right)
\end{equation}
The effect of applying this function is to make bright parts of the image lighter and dark parts of the image darker.
\vspace*{1.5mm}\noindent
\textbf{Blur encoding.}
Blurring in an image occurs when parts of the image are out of focus due to the limited capabilities of the optics employed in the system or when grease or water droplets are present on the lens.
Blur can be synthesised using a convolutional kernel of size $2k_d + 1$ where the value of a pixel in the output image is calculated as a weighted sum of neighbouring pixels:
\vspace*{-2.5mm}
\begin{equation}
x'_{i,j} = \sum_{k=-k_d}^{k_d} \sum_{l=-k_d}^{k_d} \alpha_{k,l}\cdot x_{i+k,j+l}
\end{equation}
\vspace*{-1mm}\noindent
The weights $\alpha_{k,l}\in(0,1)$ are calculated by discretising a two-dimensional Gaussian curve, where the sum of weights is equal to one, $\sum_{k=-k_d}^{k_d} \sum_{l=-k_d}^{k_d} \alpha_{k,l} = 1$.
In our work, we define $\epsilon$ to be proportional to the standard deviation of the Gaussian distribution across the kernel and calculate the weights accordingly.
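For reference, the NumPy sketch below implements the three encodings above for an image $X$ with pixel values in $[0,1]$. The exact mapping from $\epsilon$ to the kernel standard deviation (here $\sigma = \epsilon\, k_d$) is our assumption, since only proportionality is fixed above, and the contrast encoding assumes $\epsilon < 1$.
\begin{verbatim}
import numpy as np

def haze(X, eps, fog=np.array([1.0, 1.0, 1.0])):
    # element-wise blend of the image with the haze colour C^f
    return (1.0 - eps) * X + eps * fog

def contrast(X, eps):
    # eps < 1 assumed; brightens bright pixels, darkens dark ones
    return np.clip((X - 0.5 * eps) / (1.0 - eps), 0.0, 1.0)

def blur(X, eps, k_d=2):
    ax = np.arange(-k_d, k_d + 1, dtype=float)
    sigma = max(eps * k_d, 1e-6)        # assumed eps -> sigma mapping
    g = np.exp(-ax ** 2 / (2.0 * sigma ** 2))
    kernel = np.outer(g, g)
    kernel /= kernel.sum()              # weights alpha_{k,l} sum to one
    H, W = X.shape[0], X.shape[1]
    Xp = np.pad(X, ((k_d, k_d), (k_d, k_d), (0, 0)), mode="edge")
    out = np.zeros_like(X)
    for dk in range(2 * k_d + 1):       # weighted sum over the kernel
        for dl in range(2 * k_d + 1):
            out += kernel[dk, dl] * Xp[dk:dk + H, dl:dl + W]
    return out
\end{verbatim}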
\begin{figure}[t]
\centering
\includegraphics[width=0.62\textwidth]{images/EpsilonImages.pdf}
\vspace*{-2mm}
\caption{Context perturbations applied to image sample\label{fig:epsilonImage}}
\vspace{-5mm}
\end{figure}
\vspace*{-2.5mm}
\subsection{DeepCert\ instantiation for formal verification}
\vspace*{-1mm}
While test-based verification is computationally efficient, this efficiency is obtained by sacrificing completeness, i.e. if the perturbed image corresponding to an $\epsilon$ value of $p$ is not an adversarial example, we cannot guarantee that the network is robust against all perturbations with $\epsilon$ smaller than $p$. Formal verification tools, by contrast, can provide such guarantees, but typically impose constraints on the types of models and perturbations which can be analysed.
To demonstrate the use of formal verification within DeepCert, we integrated it with Marabou \cite{katz2019marabou}, a complete verification toolbox for analyzing DNNs.
Marabou handles common piecewise linear activation functions (e.g., ReLU, Max-Pool, Sign), integrates multiple state-of-the-art bound tightening techniques \cite{eran,tjeng2017evaluating,WangPWYJ18F}, and supports parallel processing \cite{wu2020parallelization}. Given a neural network and a verification query, Marabou constructs a set of linear and piecewise linear constraints. The satisfiability of the conjunction of those constraints is evaluated using either an MILP-solver or the Reluplex procedure \cite{KaBaDiJuKo17Reluplex}. Given sufficient time, Marabou will either conclude that the query is unsatisfiable or return a satisfying assignment to the query. For this work we extended Marabou to allow for the encoding of contextual perturbations using an input perturbation function, as detailed below for haze.
\vspace*{1.5mm}
\noindent
\textbf{Haze encoding.}
Given a DNN model $\mathcal{M}$, an image $X$, a fog colour $C^f$, and a maximum perturbation bound $p$, we introduce variables $\vect{X}, \vect{Y}$ and $\epsilon$, denoting the DNN inputs, the DNN outputs and the perturbation bound, respectively. $\vect{X}$ has the same shape as $X$. We then construct the following set of constraints:
\vspace*{-6mm}
\begin{subequations}
\begin{align}
\vect{Y} &= \mathcal{M}(\vect{X}) \label{eqn:haze1} \\ \displaybreak[3]
0 &\leq \epsilon \leq p \label{eqn:haze2}\\ \displaybreak[3]
\bigwedge_{i\leq |\vect{X}|} \big( \vect{x_i} &= (1-\epsilon) x_i + \epsilon~C^f \big) \label{eqn:haze3} \\ \displaybreak[3]
\bigvee_{\substack{i \leq |\vect{Y}| \\ \vect{y_i} \neq \vect{y_{real}}}} \vect{y_i} & \geq \vect{y_{real}} \label{eqn:haze4}
\end{align}
\label{eqn:haze}
\end{subequations}
\vspace*{-3mm}\noindent
Checking the satisfiability of the constraints allows us to state if the network is robust against the haze perturbation for $\epsilon \leq p$.
Constraint \eqref{eqn:haze1} denotes the relationship between $\vect{X}$ and $\vect{Y}$. It is a piecewise linear constraint if $\mathcal{M}$ only contains piecewise linear activation functions. Constraint \eqref{eqn:haze2} represents the perturbation bounds. Constraint \eqref{eqn:haze3} defines the input variables as the result of the hazing perturbation. Finally, letting $\vect{y_{real}}$ denote the correct label, constraint \eqref{eqn:haze4} states that the output variable corresponding to the correct label is not greater than that of some other label. The network is locally adversarially robust against the haze perturbation with $\epsilon \leq p$ if, and only if, the conjunction of the constraints above is unsatisfiable. If the constraints are satisfiable, there exists a perturbation within $\epsilon$ such that some output other than $\vect{y_{real}}$ is maximal.
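Note that, because $x_i$ and $C^f$ are constants for a given query, constraint \eqref{eqn:haze3} can be rearranged into an equality that is linear in the query variables $(\vect{x_i}, \epsilon)$:
\begin{equation*}
\vect{x_i} - (C^f - x_i)\,\epsilon = x_i,
\end{equation*}
so the complete haze encoding remains within the piecewise-linear constraint fragment that Marabou accepts.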
\section{Related Work\label{sec:related}}
It is well known~\cite{SzegedyZSBEGF13} that neural networks, including highly trained and smooth networks, are vulnerable to adversarial perturbations; these are small changes to an input (which are imperceptible to the human eye) that lead to mis-classifications. The vast majority of the work in this area focuses on formulating adversarial examples with respect to perturbations defined with $L_p$ norms. The problem is typically formulated as follows: for a given network $F$ and an input $x$, find an input $x'$ for which $F(x')\neq F(x)$ while minimising $\|x-x'\|$.
The metric used to compute the distance between points is typically the Euclidean distance ($L_2$ norm), the Manhattan distance ($L_1$ norm), or the Chebyshev distance ($L_{\infty}$ norm). Methods for finding adversarial examples and for checking robustness of neural networks to adversarial perturbations range from heuristic and optimisation-based techniques~\cite{GoodfellowSS14,kurakin2016adversarial,PapernotMJFCS16,Carlini017,Moosavi-Dezfooli16} to formal analysis techniques which are based on constraint solving, interval analysis or abstract interpretation~\cite{HuangKWW17,KaBaDiJuKo17Reluplex,DBLP:conf/sp/GehrMDTCV18,WangPWYJ18F,WangPWYJ18E,DuttaJST18,abs-1712-06174}.
In contrast to these works, which focus on local robustness, we take a more global view, as we aim to evaluate models on many input points and use the results to assess and compare models and inform developers' choices. Furthermore, we aim to study more natural (contextual) perturbations, as we do not limit ourselves to $L_p$ norms.
Other researchers have started to look into robustness verification beyond the $L_p$-norm threat model. For instance, Semantify-NN~\cite{mohapatra2020verifying} addresses robustness verification against {\em semantic} adversarial attacks,
such as colour shifting and lighting adjustment. It works by inserting semantic perturbation layers to the input layer of a given model, and leverages existing $L_p$-norm based verification tools to verify the model robustness against semantic perturbations. In our work, we also leverage an off-the-shelf verification tool (namely Marabou) to enable verification with respect to semantically meaningful perturbations. We do not modify the models, but instead encode the checks as Marabou queries.
\section{Experimental Results \& Analysis}
In this section, we report our quantitative and qualitative analyses for validating the benefit of our proposed method.
Our experiments are designed to compare different baselines and ablations for detecting and tagging relationships given a video as well as recognizing relationships in a fine-grained manner. \\
\input{cr_exp_dataset.tex}
\begin{table*}[t!]
\begin{center}
\footnotesize
\fontsize{7pt}{10pt}
\selectfont
\vspace{-4mm}
\begin{tabular}{|cc||ccc||ccc||cccc|}
\hline
\multirow{3}{*}{Method} & Corresponding & \multicolumn{3}{c||}{Relationship Detection} & \multicolumn{3}{c||}{Relationship Tagging} & \multicolumn{4}{c|}{Relationship Recognition} \\ &
Image-Relationship or & \multicolumn{3}{c||}{relationship} & \multicolumn{3}{c||}{relationship} & object & verb & scene & relationship \\
& Video-Activity Method & R@50 & R@100 & mAP & P@1 & P@5 & P@10 & Acc@1 & Acc@1 & Acc@1 & Acc@1
\\ \hline \hline
VidVRD~\cite{shang2017video} & Visual Phrases~\cite{sadeghi2011recognition} & 13.62 & 18.36 & 3.12 & 3.97 & 4.62 & 4.26 & 28.70 & 63.64 & 34.91 & 7.83 \\
UEG & VRD$_V$~\cite{lu2016visual} & 22.53 & 29.70 & 7.93 & 16.05 & 11.47 & 8.72 & 41.74 & 64.70 & 34.62 & 11.94 \\
UEG$^\dagger$ & VRD~\cite{lu2016visual} & 22.35 & 29.65 & 7.90 & 16.10 & 11.38 & 8.67 & 41.70 & 64.73 & 35.17 & 11.85 \\
SEG & DRN~\cite{dai2017detecting} & 23.68 & 31.56 & 8.77 & 18.04 & 12.50 & 9.37 & 42.84 & 64.36 & 35.28 & 12.60 \\
STEG & AsyncTF~\cite{sigurdsson2017asynchronous} & 23.79 & 31.65 & 8.84 & 18.46 & 12.57 & 9.37 & 42.87 & 64.53 & 35.71 & 12.76 \\
\textbf{GSTEG (Ours)} & - & \textbf{24.95} & \textbf{33.37} & \textbf{9.86} & \textbf{19.16} & \textbf{12.93} & \textbf{9.55} & \textbf{43.53} & \textbf{64.82} & \textbf{40.11} & \textbf{14.73} \\ \hline
\end{tabular}
\end{center}
\vspace{-2mm}
\caption{Evaluation for different methods on Charades dataset. Our method outperforms all competing baselines across the three tasks. }
\label{tbl:charades}
\vspace{-4mm}
\end{table*}
\input{cr_exp_task.tex}
\input{cr_exp_pre.tex}
\begin{figure*}[t!]
\vspace{-4mm}
\includegraphics[width=\textwidth]{fig/qualitative.pdf}
\vspace{-5mm}
\caption{\small Examples from ImageNet Video dataset of Relationship Detection (Left) \& Tagging (Right) using baselines, ablations, and our full model. The bar plots illustrate the R@100 (left) and P@5 (right) difference comparing our model to VidVRD~\cite{shang2017video}. To show the results on all the methods, green boxes refer to a video where our model performs better and orange boxes refer to a video where VidVRD performs better. For tagging (right), we use green to highlight the correctly tagged relation and yellow for incorrectly tagged relation. The numbers in bracket represent the order of detection or tagging. Best viewed in color.}
\label{fig:qualitative}
\vspace{-4mm}
\end{figure*}
\input{cr_exp_rea.tex}
\input{cr_result.tex}
\section{Introduction}
Relationship reasoning is a challenging task that involves not only detecting low-level entities (subjects, objects, etc.) but also recognizing the high-level interactions between them (actions, sizes, parts, etc.). Successfully reasoning about relationships not only enables us to build richer question-answering models (e.g., {\em Which objects are larger than a car?}), but also helps in improving image retrieval~\cite{lu2016visual} (e.g., images with {\em elephants drawing a cart}), scene graph parsing~\cite{zellers2018neural} (e.g., {\em woman has helmet}), captioning~\cite{zhang2017visual}, and many other visual reasoning tasks.
\begin{figure}[t!]
\begin{center}
\includegraphics[width=0.43\textwidth]{fig/illus1.pdf}
\end{center}
\vspace{-4mm}
\caption{\small Visual relationship reasoning in images (top) vs. videos (bottom): Given a single image, it is ambiguous whether the {\em monkey} is creeping up or down the {\em car}. Using a video not only helps to unambiguously recognize a richer set of relations, but also model temporal correlations across them (e.g., {\em creep down} and {\em jump left}).}
\label{fig:illus1}
\vspace{-3mm}
\end{figure}
\begin{figure*}[t!]
\vspace{-3mm}
\includegraphics[width=\textwidth]{fig/illus2.pdf}
\vspace{-4mm}
\caption{An overview of our Proposed Gated Spatio-Temporal Energy Graph. Given an input instance (a video clip), we predict the output relationships (e.g., \{{\em monkey, creep down, car\}}, etc.,) by reasoning over a fully-connected spatio-temporal graph with nodes $\mathbf{S}$ (Subject), $\mathbf{P}$ (Predicate) and $\mathbf{O}$ (Object). Unlike previous works that assumed a non-gated (i.e., predefined or globally-learned) pairwise energy function, we explore the use of gated energy functions (i.e., conditioned on the specific visual observation) . Best viewed zoomed in and in color.}
\label{fig:illus2}
\vspace{-5mm}
\end{figure*}
Most contemporary research in visual relationship reasoning has been focused on the domain of static images. While this has resulted in several exciting and attractive reasoning modules~\cite{sadeghi2011recognition,lu2016visual,zhang2017visual,liang2017deep,yin2018zoom,zhu2018deep,cui2018context,liang2018visual}, it lacks the ability to reason about complex relations that are inherently temporal and/or correlated in nature. For example, in Fig.~\ref{fig:illus1} it is ambiguous to infer from a static image whether the monkey is creeping down or up the car. Also, it is difficult to model relations that are often correlated through time, such as {\em man enters room} and {\em man opens door}.
In this paper, we present a novel approach for reasoning about visual relationships in videos. Our proposed approach jointly models the spatial and temporal structure of relationships in videos by constructing a fully-connected spatio-temporal graph (see Fig.~\ref{fig:illus2}). We refer to our model as a Gated Spatio-Temporal Energy Graph. In our graph, each node represents an entity and the edges between them denote the statistical relations. Unlike much of the previous work~\cite{krahenbuhl2011efficient,zheng2015conditional,schwing2015fully,dai2017detecting,sigurdsson2017asynchronous} that assumed a predefined or globally-learned pairwise energy function, we introduce an observation-gated version that allows us to make the statistical dependency between entities adaptive (conditioned on the observation).
Our adaptive parameterization of energy function helps us model the natural diversification of relationships in videos. For instance, the dependency between {\em man} and {\em cooking} should be different conditioned on the observation (i.e., whether the location is {\em kitchen} or {\em gym}). However, given the large state space of observations (in videos), directly maintaining observation-dependent statistical dependencies may be computationally intractable~\cite{mnih2013playing,tsai2017discovering}. Towards this end, we develop an amortized parameterization of our new gated pairwise energy function, which combines ideas from clique template~\cite{taskar2002discriminative,taylor2009factored,mccallum2009factorie}, neural networks~\cite{goodfellow2016deep,tsai2017discovering}, and tensor factorization~\cite{koren2009matrix} for achieving efficient inference and learning.
We evaluate our model on two benchmark datasets, ImageNet Video~\cite{ILSVRC15} and Charades~\cite{sigurdsson2016hollywood}. Our method achieves state-of-the-art performance across three standard relationship reasoning tasks: detection, tagging, and recognition. We also study the utility of our model in the zero-shot setting and learning from semantic priors.
\section{Conclusion}
In this paper, we have presented a Gated Spatio-Temporal Energy Graph (GSTEG)
model for the task of visual relationship reasoning in videos. In the graph, we consider a spatially and temporally fully-connected structure with an amortized observation-gated parameterization for the pairwise energy functions. The gated design enables the model to detect adaptive relations between entities conditioned on the current observation (i.e., current video). On two benchmark video datasets (ImageNet Video and Charades), our method achieves state-of-the-art performance across three relationship reasoning tasks (Detection, Tagging, and Recognition).
\section*{Acknowledgement}
Work done while YHHT was at the Allen Institute for AI. YHHT and RS were supported in part by the DARPA grants D17AP00001 and FA875018C0150, NSF IIS1763562, and Office of Naval Research N000141812861. LPM was supported by the National Science Foundation (Award \# 1722822). SD and AF were supported by NSF IIS-165205, NSF IIS-1637479, NSF IIS-1703166, Sloan Fellowship, NVIDIA Artificial Intelligence Lab, and the Allen Institute for Artificial Intelligence. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation, and no official endorsement should be inferred. We would also like to acknowledge NVIDIA's GPU support.
{
\small
\bibliographystyle{ieee_fullname}
\section{Proposed Approach}
\label{sec:prop}
The task of video relationship reasoning not only requires modeling the entity predictions spatially and temporally, but also maintaining a changeable correlation structure between entities across videos with various contents. To this end, we propose a Gated Spatio-Temporal Fully-Connected Energy Graph that takes the inherently rich structure of videos into account.
We start by defining our notations using Fig.~\ref{fig:illus2} as a running example. The input instance $X$ is a video segment and consists of $K$ synchronous input streams $X = \{X^k\}_{k=1}^K$. In this example, the input streams are \{object trajectories, predicate trajectories, subject trajectories\}, and thus $K=3$, where trajectories refer to the consecutive frames or bounding boxes in the video segment. Each input stream contains observations for $T$ time steps (i.e., $X^k = \{X^k_t\}_{t=1}^{T}$), where for example object trajectories represent object bounding boxes through time. For each input stream, our goal is to predict a sequence of entities (labels) $Y^k = \{Y^k_t\}_{t=1}^T$. In Fig.~\ref{fig:illus2}, the output sequence of predicate trajectories represents predicate labels through time. Hence we formulate the data-entities tuple as $(X, Y)$ with $Y = \{Y^1_t, Y^2_t \cdots, Y^K_t\}_{t=1}^{T}$ representing a set of sequences of entities.
The entity $Y^k_t$ should spatially relate to entities $\{\{Y^1_t, Y^2_t \cdots, Y^K_t\} \setminus \{Y^k_t\}\}$ and temporally relate to entities $\{\{Y^k_1, Y^k_2 \cdots, Y^k_T\} \setminus \{Y^k_t\}\}$. For example, suppose that the visual relationships observed in a grocery store are \{\{mother, pay, money\}, \{infant, get, milk\}, \{infant, drink, milk\}\}; spatial correlation must exist between mother/pay/money and temporal correlation must exist between pay/get/drink. We also note that implicit correlation may also exist between $Y^k_t$ and $Y^{k'}_{t'}$ for $t \neq t', k \neq k'$.
Based on the structural dependencies between entities, we propose to construct a Spatio-Temporal Fully-Connected Energy Graph (see Sec.~\ref{subsec:struc_pred}), where each node represents an entity and each edge denotes the statistical dependencies between the connected nodes. To further take account that the statistical dependency between ``get'' and ``drink'' may be different depending on different observations (i.e., location in grocery store v.s. home), we introduce an observation-gated parameterization for pairwise energy functions. In the new parameterization, we amortize the potentially large computational cost by using clique templates~\cite{taskar2002discriminative,taylor2009factored,mccallum2009factorie}, neural network approximation~\cite{mnih2013playing,tsai2017discovering}, and tensor factorization~\cite{koren2009matrix} (see Sec.~\ref{subsec:amadmessage}).
\subsection{Spatio-Temporal Fully-Connected Graph}
\label{subsec:struc_pred}
By treating the predictions of entities as random variables, the construction of the graph can be realized by forming a Markov Random Field (MRF) conditioned on a global observation, which is the input instance (i.e., $X$). Then, the tuple ($X, Y$) can be modeled as a Conditional Random Field (CRF) parametrized by a Gibbs distribution of the form: $
P\Big(Y = y|X\Big) = \frac{1}{Z(X)}\mathrm{exp}\Big(-\mathbf{E}(y|X)\Big),
$
where $Z(X)$ is the partition function and $\mathbf{E}(y|X)$ is the energy of assigning labels $Y =y = \{y^1_t, y^2_t, \cdots, y^K_t\}_{t=1}^T$ conditioned on $X$. Assuming only pairwise cliques in the graph \big(i.e., $P(y|X):= P_{\psi, \varphi}(y|X), \mathbf{E}(y|X) := \mathbf{E}_{\psi,\varphi}(y|X)$\big), the energy can be expressed as:
\vspace{-1mm}
\begin{equation}
\small
\mathbf{E}_{\psi,\varphi}(y|X)
= \sum_{t,k} \psi_{t,k}(y_t^k|X) + \sum_{\{t,k\}\neq \{t',k'\}} \varphi_{t,k,t',k'}(y_t^k, y_{t'}^{k'}|X),
\label{eq:energy}
\end{equation}
where $\psi_{t,k}$ and $\varphi_{t,k,t',k'}$ are the unary and pairwise energies, respectively. In Eq.~\eqref{eq:energy}, the unary energy, which is defined on each node in the graph, captures the inverse likelihood of assigning $Y_t^k = y_t^k$ conditioned on the observation $X$. Typically, this term can be derived from an arbitrary classifier or regressor, such as a deep neural network~\cite{lecun2015deep}. On the other hand, the pairwise energy models interactions of label assignments across nodes $Y_t^k = y_t^k, Y_{t'}^{k'} = y_{t'}^{k'}$ conditioned on the observation $X$. Therefore, the pairwise term determines the statistical dependencies between entities spatially and temporally. However, the parameterization in most previous works on fully-connected CRFs~\cite{zheng2015conditional,schwing2015fully,sigurdsson2017asynchronous,dai2017detecting} assumes that the pairwise energy function is non-adaptive to the current observation, which may not be ideal for modeling changeable dependencies between entities across videos. In the following Sec.~\ref{subsec:amadmessage}, we propose an observation-gated parametrization for the pairwise energy function to address this issue.
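For concreteness, the short Python sketch below evaluates the Gibbs energy of Eq.~\eqref{eq:energy} for one joint assignment. The dictionary-based containers are our illustrative stand-ins for the learned unary and pairwise terms, and the symmetric pairwise contributions are folded into a single sum over unordered node pairs.
\begin{verbatim}
import itertools

def gibbs_energy(unary, pairwise, y):
    # y maps a node (t, k) to its label; unary[(t, k)][label] and
    # pairwise[((t, k), (t', k'))][label][label'] hold the energies.
    nodes = sorted(y)
    e = sum(unary[v][y[v]] for v in nodes)           # unary terms
    for u, v in itertools.combinations(nodes, 2):    # unordered pairs
        e += pairwise[(u, v)][y[u]][y[v]]            # pairwise terms
    return e
\end{verbatim}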
\subsection{Gated Pairwise Energy Function}
\label{subsec:amadmessage}
Much of the existing work uses a simplified parameterization of the pairwise energy function and typically considers only the {\em smoothness} of the joint label assignment. For instance, in the Asynchronous Temporal Field~\cite{sigurdsson2017asynchronous}, $\varphi_\cdot(y_t^k, y_{t'}^{k'}|X)$ is defined as $\mu(y_t^k, y_{t'}^{k'})K(t, t')$, where $\mu$ represents the label compatibility matrix and $K(t,t')$ is an affinity kernel measurement which represents the discounting factor between $t$ and $t'$. Similarly, in the image segmentation domain~\cite{zheng2015conditional,schwing2015fully}, $\varphi_\cdot(s_i, s_j|I)$ is defined as $\mu(s_i, s_j)K(I_i, I_j)$, where $s_{\{i,j\}}$ is the segment label and $I_{\{i,j\}}$ is the input feature for location $\{i,j\}$ in image $I$. In these models, the pairwise energy comprises an observation-independent label compatibility matrix followed by a spatial or temporal discounting factor. We argue that the parametrization of the pairwise energy function should be more expressive.
To this end, we define the pairwise energy as:
\begin{equation}
\begin{split}
\varphi_{t,k,t',k'}(y_t^k, y_{t'}^{k'}|X) & := \left \langle f^\varphi \right \rangle_{X, t, t',k,k',y_t^k, y_{t'}^{k'}},
\end{split}
\end{equation}
where $f^\varphi$ can be seen as a discrete lookup table that takes the input $X$ of size $|X|$ and outputs a large transition matrix of size $(T^2K^2-1)\times |Y_t^k|\times |Y_{t'}^{k'}|$, and where $\left \langle \cdot \right \rangle_z$ represents its $z_{th}$ item. Directly maintaining this lookup table is computationally intractable due to the large state space of $X$. Considering a simple case that $X$ is a pairwise-valued $32\times 32$ image, we have $|X| = 2^{32\times 32}$ possible states. The state space complexity aggravates when $X$ becomes an RGB-valued video. Thanks to the recent advances in graphical models~\cite{taskar2002discriminative,taylor2009factored,mccallum2009factorie}, deep neural networks~\cite{mnih2013playing,tsai2017discovering}, and tensor factorization~\cite{koren2009matrix}, our workaround is to parametrize and approximate $f^\varphi$ as $f^\varphi_\theta$ with learnable parameters $\theta$ as follows:
\begin{equation}
\begin{split}
&\left \langle f^\varphi \right \rangle_{X, t, t',k,k',y_t^k, y_{t'}^{k'}} \approx f^\varphi_\theta (X_t^k,t,t',k,k',y_t^k, y_{t'}^{k'}) \\
= & \left\{\begin{matrix}
\left \langle g^{kk'}_\theta(X_t^k) \otimes h^{kk'}_\theta(X_t^k) \right \rangle_{y_t^k, y_{t'}^{k'}} & t = t' \\ K_\sigma \Big(t,t'\Big)
\left \langle r^{kk'}_\theta(X_t^k) \otimes s^{kk'}_\theta(X_t^k)\right \rangle_{y_t^k,y_{t'}^{k'}} & t\neq t'
\end{matrix}\right. ,
\end{split}
\label{eq:pair}
\end{equation}
where $g^{kk'}_\theta(\cdot), r^{kk'}_\theta(\cdot) \in \mathbb{R}^{|Y_t^k|\times r}$ and $h^{kk'}_\theta(\cdot),s^{kk'}_\theta(\cdot) \in \mathbb{R}^{|Y_{t'}^{k'}|\times r}$ are rank-$r$ projections of $X_t^k$, each modeled by a deep neural network. $A \otimes B = AB^\top$ denotes the product of matrix $A$ with the transpose of matrix $B$, which yields a transition matrix of size $|Y_t^k|\times |Y_{t'}^{k'}|$, and $K_\sigma(t,t')$ is a Gaussian kernel with bandwidth $\sigma$ that acts as a discounting factor between different time steps.
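To make the parameterization concrete, the following is a minimal PyTorch-style sketch of the $t=t'$ branch of Eq.~\eqref{eq:pair}; the class and variable names are illustrative and are not taken from our actual implementation.
\begin{verbatim}
import torch
import torch.nn as nn

class GatedPairwiseEnergy(nn.Module):
    # Observation-gated low-rank pairwise energy for one (k, k') pair.
    def __init__(self, feat_dim, n_y_k, n_y_kp, rank=5):
        super().__init__()
        # g and h map the observation to the two rank-r factors
        # (the t != t' branch uses analogous networks r and s).
        self.g = nn.Linear(feat_dim, n_y_k * rank)
        self.h = nn.Linear(feat_dim, n_y_kp * rank)
        self.shapes = (n_y_k, n_y_kp, rank)

    def forward(self, x):
        # x: (batch, feat_dim) observation feature X_t^k
        n_y_k, n_y_kp, rank = self.shapes
        G = self.g(x).view(-1, n_y_k, rank)    # |Y_t^k| x r factor
        H = self.h(x).view(-1, n_y_kp, rank)   # |Y_t'^k'| x r factor
        # A (x) B = A B^T yields a |Y_t^k| x |Y_t'^k'| transition matrix
        return torch.bmm(G, H.transpose(1, 2))
\end{verbatim}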
The intuition behind our parameterization is as follows. First, clique templates~\cite{taskar2002discriminative,taylor2009factored,mccallum2009factorie} are adopted spatially and temporally, which leads to scalable learning and inference. Second, using neural networks to approximate the lookup table, which maintains the state transitions $\mathcal{X} \rightarrow \mathcal{Y}^k \times \mathcal{Y}^{k'}$ (calligraphic font denotes the corresponding state space), ensures both parameter efficiency and generalization~\cite{goodfellow2016deep,tsai2017discovering}. Finally, we choose $r \ll \min_{k}|Y_t^k|$ so that a low-rank decomposition is performed on the transition matrix from $Y_t^k$ to $Y_{t'}^{k'}$.
The low-rank decomposition substantially reduces the number of learnable parameters: for instance, a full transition matrix between the $132$ predicate and $35$ object categories of ImageNet Video has $132 \times 35 = 4{,}620$ entries, whereas the rank-$5$ factors used in our experiments require only $(132 + 35) \times 5 = 835$ outputs.
To summarize, our design for $f^\varphi_\theta$ amortizes the large space complexity of $f^\varphi$ and is gated by the observation.
\subsection{Inference, Message Passing, and Learning}
\label{subsec:infer}
Minimizing the CRF energy in Eq.~\eqref{eq:energy} yields the most probable label assignment $Y=\{y^1_t, y^2_t, \cdots, y^K_t\}_{t=1}^T$ given the observation $X$. However, exact inference in a fully connected CRF is often computationally intractable, even with variable enumeration or elimination~\cite{koller2009probabilistic}.
In this work, we adopt the commonly used mean-field algorithm~\cite{koller2009probabilistic} for approximate inference, which finds the approximate posterior distribution $Q(Y)$ that is closest to $P_{\psi, \varphi}(Y|X)$ in terms of $\mathcal{KL}(Q\,\|\,P_{\psi, \varphi})$ within the class of distributions representable as a product of independent marginals $Q(Y) = \prod_{t,k} Q(Y_t^k)$.
Following~\cite{koller2009probabilistic}, inference can be realized via naive mean-field updates with coordinate descent optimization, expressed in terms of the fixed-point message passing equations:
\vspace{-1mm}
\begin{equation}
Q(y_t^k) \propto \Psi_{t,k}\Big({y_t^k|X}\Big) \prod_{\{t', k'\} \neq \{t, k\}}m_{t',k',t,k}(y_{t}^{k}|X)
\label{eq:variational}
\end{equation}
\vspace{-1mm}
\\with $\Psi_{t,k} = \mathrm{exp}\Big(-\psi_{t,k}\Big)$ denoting the unary potential and $m_\cdot(\cdot)$ the message, which has the form\footnote{In the Supplementary, we draw a connection between our gated amortized parameterization of the pairwise energy function, written in message form, and Self-Attention~\cite{vaswani2017attention} in machine translation as well as Non-Local Means~\cite{buades2005non} in image denoising.} of
\begin{equation}
m_{\cdot}(\cdot) =\mathrm{exp}\Big( - \sum_{y_{t'}^{k'}}\varphi_{t,k,t',k'}(y_t^k, y_{t'}^{k'}|X)Q(y_{t'}^{k'}) \Big).
\label{eq:message}
\end{equation}
To parametrize the unary energy function, we use a similar formulation:
\vspace{-2mm}
\begin{equation}
\begin{split}
\psi_{t,k} (y_t^k|X) := & \left \langle f^\psi \right \rangle_{X , t, k, y_t^k} \\
\approx & f^\psi_\theta(X_t^k, t, k, y_t^k) = \left \langle w^k_\theta (X_t^k) \right \rangle_{y_t^k},
\end{split}
\label{eq:unary}
\end{equation}
\vspace{-3mm}
\\where $w_\theta^k \in \mathbb{R}^{|Y_t^k|}$ represents the projection from $X_t^k$ to logits of size $|Y_t^k|$, modeled by a deep neural network.
Lastly, we cast learning as minimizing the conditional cross-entropy between the proposed distribution and the true one, where $\theta$ denotes the parameters of our model: $\theta^* = \mathrm{arg\,min}_\theta \,\,\mathbb{E}_{X, Y}[-\mathrm{log}\,Q (Y)]$.
\section{Related Work}
\noindent {\bf Video Activity Recognition.} The notion of activity in a video represents the interaction between objects~\cite{goyal2017something,kay2017kinetics} or between an object and a scene~\cite{sigurdsson2016hollywood}. While related to our task of relation reasoning, activity recognition does not require explicit prediction of all entities, such as subject, object, scene, and their relationships. The term {\em relation} also has different connotations in activity recognition and relationship reasoning. In the visual relationship reasoning literature, it refers to the correlation between different entities, such as object, verb, and scene, while in activity recognition it refers either to correlations between activity predictions (i.e., a single entity) or to correlations between video segments. For example, \cite{zhou2017temporal} proposed the Temporal Relation Network to reason about temporal `relations' across frames at multiple time scales. \cite{girdhar2017actionvlad} introduced a spatio-temporal aggregation of local convolutional features to learn better video representations. \cite{wang2018non} proposed Non-Local Neural Networks to model pairwise relations for every pixel in the feature space, from lower layers to higher layers. This work was extended in~\cite{wang2018videos} to construct a Graph Convolutional Layer that further models relations between object-level features.
\noindent {\bf Visual Relationship Reasoning.} Most recent works in relation reasoning have focused their analysis on static images~\cite{yin2018zoom,zhu2018deep,cui2018context,liang2018visual}. For example, \cite{sadeghi2011recognition} introduced the idea of visual phrases for compositing the visual concepts of subject, predicate, and object. \cite{lu2016visual} decomposed the direct visual phrase detection task into individual detections of subject, predicate, and object, leading to improved performance. \cite{dai2017detecting} further applied conditional random fields on top of the individual predictions to leverage their statistical correlations. \cite{liang2017deep} proposed a deep variation-structured reinforcement learning framework that forms a directed semantic action graph; the global interdependency in this graph facilitates predictions in local regions of the image. One of the key challenges of learning relationships in videos has been the lack of relevant annotated datasets. In this context, the recent work of~\cite{shang2017video} is inspiring, as it contributes manually annotated relations for the ImageNet video dataset. Our work improves upon~\cite{shang2017video} on multiple fronts: (1) instead of assuming no temporal contingency between relationships, we introduce a gated fully-connected spatio-temporal energy graph to model the inherently rich structure of videos; (2) we extend the study of relation triplets from subject/predicate/object to a more general setting, such as object/verb/scene~\cite{sigurdsson2016hollywood}; (3) we consider a new task, `relation recognition' (apart from relation detection and tagging), which requires the model to make predictions in a fine-grained manner; and (4) our model demonstrates improved performance across various metrics and tasks.
\noindent {\bf Deep Conditional Random Fields.} Conditional Random Fields (CRFs) have been widely used to model the statistical dependencies among predictions in images~\cite{he2004multiscale,zheng2015conditional,schwing2015fully,sadeghi2015viske,dai2017detecting} and videos~\cite{quattoni2007hidden,sigurdsson2017asynchronous}. Several extensions have recently been introduced for fully-connected CRF graphs. For example,~\cite{zheng2015conditional,schwing2015fully,sigurdsson2017asynchronous} expressed fully-connected CRFs as recurrent neural networks and made the whole network end-to-end trainable, which has led to interesting applications in image segmentation~\cite{zheng2015conditional,schwing2015fully} and video activity recognition~\cite{sigurdsson2017asynchronous}. In the CRF formulation, the unary energy function represents the inverse likelihood of assigning a label, while the pairwise energy function measures the cost of assigning multiple labels jointly. However, most existing parameterizations of pairwise energy functions~\cite{krahenbuhl2011efficient,zheng2015conditional,schwing2015fully,dai2017detecting,sigurdsson2017asynchronous} have limited or no connection to the observed variables. Such parameterizations may not be optimal for video relationship reasoning, because the statistical dependencies between entities change with the observation. To address this issue, we instead propose an observation-gated pairwise energy function with an efficient, amortized parameterization.
\section{Results and Discussion}
\subsection{Quantitative Analysis}
\label{sec:result}
\noindent {\em ImageNet Video.} Table~\ref{tbl:imagenet} shows our results and comparisons to the baselines.
We first observe that, for every metric across the three tasks (detection, tagging, and recognition), our proposed method (GSTEG) outperforms all the competing methods.
Comparing UEG and UEG$^\dagger$, we find that language priors help promote visual relation reasoning. We also observe a performance improvement from UEG to SEG, which can be explained by the fact that SEG explicitly models the spatial statistical dependency within \{subject, predicate, object\}, leading to better relation learning between entities. However, comparing SEG to STEG, the performance drops on some metrics, indicating that modeling the temporal statistical dependency with a fixed pairwise energy parameterization may not be ideal. For example, although STEG gives much better relationship recognition results than SEG, it becomes worse in R@50 for detection and P@5 for tagging.
In contrast, our full model improves consistently over STEG, which indicates that the observation-gated parameterization of the pairwise energy is able to capture different structures for different videos. When compared against our energy graph models, VidVRD outperforms all the ablation baselines (except the full version) in relation detection and tagging. However, it suffers in relation recognition, which requires a fine-grained understanding of the visual relation within the given subject and object tracklets.
Apart from the standard evaluation, we also consider a `zero-shot' setting, in which evaluation is restricted to the relation triplets that appear in the evaluation set but not in the training set. More specifically, in the ImageNet Video dataset the number of possible relation triplets is $35\times132\times35=161{,}700$. The training set contains $2{,}961$ relation triplets (i.e., $1.83\%$ of $161{,}700$), while the evaluation set contains $1{,}011$ relation triplets (i.e., $0.63\%$ of $161{,}700$). The number of zero-shot relation triplets is $258$, i.e., $25.5\%$ of the evaluation set. Zero-shot evaluation is very challenging because the model must infer relationships never seen during training. We observe that, in most cases, our proposed method achieves the best performance compared to the various baselines. The exception is mAP, where VidVRD attains the best performance using a structural objective. Nevertheless, the overall trend of the zero-shot evaluation mirrors that of the standard evaluation. \\
\begin{figure*}[t!]
\vspace{-5mm}
\includegraphics[width=\textwidth]{fig/pairwise.pdf}
\vspace{-5mm}
\caption{\small Analysis of non-gated and gated pairwise energies. Given an input video (top left) from Charades (with \{{\em object, verb, scene}\} relationships), the matrices (top right) visualize the non-gated and gated pairwise energies between verbs and objects (rows: 33 verbs, columns: 38 objects). Notice that for the verb {\em sit} (highlighted in red), the gated energy with the objects {\em chair} and {\em table} is lower than the corresponding non-gated pairwise energy, thereby helping improve relationship reasoning. A similar behavior is observed for the verb-to-scene pairwise function (bottom left) as well as the verb-to-verb pairwise function (bottom middle), which models temporal correlations, e.g., sit/sit or sit/stand. Best viewed in color; each matrix or vector is normalized on its own scale.}
\label{fig:interpretation}
\vspace{-4mm}
\end{figure*}
\noindent {\em Charades.} Our results and comparisons are shown in Table~\ref{tbl:charades}. Our method outperforms all relevant baselines. We also note some interesting differences between the trends on Charades and on ImageNet Video. First, comparing UEG to UEG$^\dagger$, we observe that language priors do not really help visual relationship reasoning in Charades. We argue that this may be due to the larger inter-class distinction in Charades' category set: for example, dog/cat, horse/zebra, or sit front/front/jump front are semantically similar categories in ImageNet Video, while the Charades categories are less semantically similar. Second, STEG consistently outperforms SEG, which indicates that modeling a fixed temporal statistical dependency between entities can aid visual relationship reasoning in Charades. We hypothesize that, compared to the ImageNet Video dataset, which contains a diverse set of in-the-wild videos of animals and inorganic objects, Charades contains videos of indoor human activities, where the relations between entities are much easier to model with a fixed dependency. Finally, we observe that VidVRD performs substantially worse than all the other models, suggesting that the structural loss introduced by VidVRD may not generalize well to other datasets. For Charades, we do not perform a zero-shot evaluation because the number of zero-shot relation triplets is low. (The number of possible relation triplets is $33\times38\times16=20{,}064$. The training set contains $2{,}285$ relation triplets (i.e., $11.39\%$ of $20{,}064$) and the evaluation set contains $1{,}968$ relation triplets (i.e., $9.81\%$ of $20{,}064$). The number of zero-shot relation triplets is $46$, i.e., $2.34\%$ of the evaluation set.)
In the Supplementary, we also provide results obtained when leveraging language priors in our model, as well as comparisons with Structural-RNN~\cite{jain2016structural} and the Graph Convolutional Network~\cite{wang2018videos}.
\vspace{-1mm}
\subsection{Qualitative Analysis}
\label{subsec:qualitative}
We next illustrate qualitative results on the ImageNet Video dataset in Fig.~\ref{fig:qualitative}. For relationship detection, in a scene with a person interacting with a horse, our model successfully detects $5$ out of $6$ relationships, failing only to detect horse-stand\_right-person within the top $100$ detected relationships. In another scene with a car interacting with a person, our model detects only $1$ of the $7$ ground-truth relationships; the likely reasons are occlusion by sand and the small size of the person. For relationship tagging, in a scene with a person riding a bike over another person, our model successfully tags all four relationships within the top $5$ tagged results. Interestingly, the third tagged result, person-sit\_above-bicycle, also looks visually plausible in this video. In another scene with a person playing with a dog on a sofa, our model fails to tag any correct relationships in the top $5$ results: it incorrectly identifies the dog as a cat, which is the main reason for the failure.
Since the pairwise energy in a graphical model represents the negative statistical dependency between entities, in Fig.~\ref{fig:interpretation} we illustrate, for a video from the Charades dataset, the pairwise energies under our gated and non-gated parameterizations. Observe that the pairwise energies between related entities are lower for the gated parameterization than for the non-gated one, suggesting that the gating mechanism aids video relationship reasoning by strengthening the statistical dependency between spatially or temporally correlated entities.
\section{Towards Leveraging Language Priors}
\label{sec:transfer}
The work of~\cite{lu2016visual} has emphasized the role of language priors in alleviating the challenge of learning relationship models from limited training data. Motivated by their work, we also study the role of incorporating language priors in our framework.
In Table~1 of the main text, comparing UEG to UEG$^\dagger$, we saw that language priors improve relationship reasoning performance. Recalling our example from Sec.~3.1 of the main text, when the training instance is \{mother, pay, money\}, we may also want to infer that \{father, pay, money\} is a more likely relationship than \{cat, pay, money\} (since {\em mother} and {\em father} are semantically more similar than {\em mother} and {\em cat}). Likewise, we can infer \{mother, pay, check\} from the semantic similarity between {\em money} and {\em check}.
\cite{lu2016visual} adopted a triplet loss for pairing the word embeddings of object, predicate, and subject. However, their method required sampling all possible relationships and was restricted in the number of spatial entities (e.g., $K=3$). Here, we present another way to make the parameterized pairwise energy also gated by prior knowledge in the semantic space. We encode the semantic prior as word embeddings $S = \{S^k\}_{k=1}^K$, in which $S^k \in \mathbb{R}^{|Y_t^k| \times d}$ denotes the label priors of dimension $d$. We extend Eq.~(3) of the main text as
\vspace{-1mm}
\begin{equation}
\footnotesize
\begin{split}
&f^\varphi_\theta (S, X_t^k,t,t',k,k',y_t^k, y_{t'}^{k'}) \\
= & f^\varphi_\theta (X_t^k,t,t',k,k',y_t^k, y_{t'}^{k'}) + u_\theta( \left \langle S^k \right \rangle_{y_t^k}) \cdot v_\theta(\left \langle S^{k'} \right \rangle_{y_{t'}^{k'}} ),
\end{split}
\label{eq:pair_prior}
\end{equation}
where $u_\theta(\cdot) \in \mathbb{R}$ and $v_\theta(\cdot) \in \mathbb{R}$ map a label prior embedding to a scalar score. Eq.~\eqref{eq:pair_prior} states that the label transition from $Y_t^k$ to $Y_{t'}^{k'}$ can also attend to the affinity inferred from prior knowledge.
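A minimal sketch of this prior term is given below; the networks $u_\theta$ and $v_\theta$ are small multi-layer perceptrons (as detailed in the parametrization section below), and all names are illustrative.
\begin{verbatim}
import torch

def prior_affinity(S_k, S_kp, u_net, v_net):
    # S_k:  (|Y_t^k|, d)   word embeddings of the labels of entity k
    # S_kp: (|Y_t'^k'|, d) word embeddings of the labels of entity k'
    u = u_net(S_k)    # (|Y_t^k|, 1) scalar score per label
    v = v_net(S_kp)   # (|Y_t'^k'|, 1)
    # additive affinity matrix appended to the observation-gated energy
    return u @ v.transpose(0, 1)    # (|Y_t^k|, |Y_t'^k'|)
\end{verbatim}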
We performed a preliminary evaluation on the relation recognition task in the ImageNet Video dataset using $300$-dimensional GloVe features~\cite{pennington2014glove} as word embeddings. For subject, predicate, object, and the full relation triplet, the Acc@1 metric improves from $90.60$, $28.78$, $89.79$, and $25.01$ to $90.97$, $29.54$, $90.57$, and $26.48$, respectively.
\begin{figure*}[t!]
\vspace{-3mm}
\begin{minipage}{\linewidth}
\centering
\begin{minipage}{0.2\linewidth}
\vspace{-2mm}
\includegraphics[width=\linewidth]{fig/comparisons_1.pdf}
\end{minipage}
\hfill \hspace{-50mm}
\begin{minipage}{0.78\linewidth}
\begin{center}
\fontsize{6pt}{8pt}
\selectfont
\begin{tabular}{|c||ccc||ccc||cccc|}
\hline
\multirow{3}{*}{Method} & \multicolumn{3}{c||}{Relationship Detection} & \multicolumn{3}{c||}{Relationship Tagging} & \multicolumn{4}{c|}{Relationship Recognition} \\ & \multicolumn{3}{c||}{relationship} & \multicolumn{3}{c||}{relationship} & {\em A} & {\em B} & {\em C} & relationship \\
& R@50 & R@100 & mAP & P@1 & P@5 & P@10 & Acc@1 & Acc@1 & Acc@1 & Acc@1
\\ \hline \hline
\multicolumn{11}{|c|}{Standard Evaluation for ImageNet Video dataset: {\em A} = subject, {\em B} = predicate, {\em C} = object} \\ \hline \hline
Structural RNN~\cite{jain2016structural} & 6.89 & 8.62 & 6.89 & 46.50 & 33.30 & 26.94 & 88.73 & 27.47 & 88.52 & 23.80 \\ Graph Convolution~\cite{wang2018videos} & 6.02 & 7.53 & 8.21 & 38.50 & 30.20 & 22.70 & 86.29 & 24.22 & 85.77 & 19.18 \\ \textbf{GSTEG (Ours)} & \textbf{7.05} & \textbf{8.67} & \textbf{9.52} & \textbf{51.50} & \textbf{39.50} & \textbf{28.23} & \textbf{90.60} & \textbf{28.78} & \textbf{89.79} & \textbf{25.01} \\ \hline \hline
\multicolumn{11}{|c|}{Zero-Shot Evaluation for ImageNet Video dataset: {\em A} = subject, {\em B} = predicate, {\em C} = object} \\ \hline \hline Structural RNN~\cite{jain2016structural} & 0.12 & 0.19 & 0.10 & 1.36 & \textbf{1.92} & 1.85 & 70.60 & 6.71 & 67.59 & 2.78 \\ Graph Convolution~\cite{wang2018videos} & \textbf{0.16} & 0.16 & \textbf{0.20} & 1.37 & \textbf{1.92} & 1.51 & 75.00 & 5.32 & 72.45 & 3.94 \\
\textbf{GSTEG (Ours)} & \textbf{1.16} & \textbf{2.08} & 0.15 & \textbf{2.74} & \textbf{1.92} & \textbf{1.92} & \textbf{82.18} & \textbf{7.87} & \textbf{79.40} & \textbf{6.02} \\ \hline \hline
\multicolumn{11}{|c|}{Standard Evaluation for Charades dataset: {\em A} = object, {\em B} = verb, {\em C} = scene} \\ \hline \hline Structural RNN~\cite{jain2016structural} & 23.63 & 31.15 & 8.73 & 17.18 & 12.24 & 9.18 & 42.73 & 64.32 & 34.40 & 12.40 \\ Graph Convolution~\cite{wang2018videos} & 23.53 & 31.10 & 8.56 & 16.96 & 12.23 & 9.43 & 42.19 & \textbf{64.82} & 36.11 & 12.75 \\
\textbf{GSTEG (Ours)} & \textbf{24.95} & \textbf{33.37} & \textbf{9.86} & \textbf{19.16} & \textbf{12.93} & \textbf{9.55} & \textbf{43.53} & \textbf{64.82} & \textbf{40.11} & \textbf{14.73} \\ \hline
\end{tabular}
\end{center}
\end{minipage}
\end{minipage}
\includegraphics[width=0.984\textwidth]{fig/comparisons_2.pdf}
\vspace{0mm}
\caption{\small [Bottom] Table summarizing the novelty of our proposed approach vs. competing methods; [Top-left] comparison of the graphical structures; [Top-right] empirical comparison between our approach and Structural RNN~\cite{jain2016structural} and Graph Convolution~\cite{wang2018videos}. Our model performs well across all three tasks.}
\label{fig:comp}
\end{figure*}
\section{Connection to Self Attention and Non-Local Means}
\label{sec:connect}
In the main text, the message form (Eq.~(5)) under our observation-gated parameterization (Eq.~(3)) can be expressed as follows:
\begin{equation*}
\footnotesize
\begin{split}
&-\mathrm{log}\,m_{t', k', t, k}(y_t^k|X) =\sum_{y_{t'}^{k'}}\varphi_{t,k,t',k'}(y_t^k, y_{t'}^{k'}|X)Q(y_{t'}^{k'})\\
\approx & \left\{\begin{matrix}
\sum_{y_{t'}^{k'}} \left \langle g^{kk'}_\theta(X_t^k) \otimes h^{kk'}_\theta(X_t^k) \right \rangle_{y_t^k, y_{t'}^{k'}} Q(y_{t'}^{k'}) & t = t' \\ \sum_{y_{t'}^{k'}} K_\sigma \Big(t,t'\Big)
\left \langle r^{kk'}_\theta(X_t^k) \otimes s^{kk'}_\theta(X_t^k)\right \rangle_{y_t^k,y_{t'}^{k'}} Q(y_{t'}^{k'}) & t\neq t'
\end{matrix}\right. .
\end{split}
\end{equation*}
The equation can be reformulated in matrix form:
\begin{equation*}
-\mathrm{log}\,{\bf m}_{t', k', t, k}
\approx \mathrm{Query} \cdot \mathrm{Key}^\top \cdot \mathrm{Value},
\end{equation*}
where
\begin{equation*}
\begin{split}
& \left \langle {\bf m}_{t', k', t, k} \right \rangle_{y_t^k} = m_{t', k', t, k}(y_t^k|X) \\
& \left \langle \mathrm{Value} \right \rangle_{y_{t'}^{k'}} = \left\{\begin{matrix}
Q(y_{t'}^{k'}) & t = t' \\ K_\sigma \Big(t,t'\Big) \cdot Q(y_{t'}^{k'}) & t\neq t'
\end{matrix}\right. \\
& \mathrm{Query} = \left\{\begin{matrix}
g^{kk'}_\theta(X_t^k) & t = t' \\ r^{kk'}_\theta(X_t^k) & t\neq t'
\end{matrix}\right. \\
& \mathrm{Key} = \left\{\begin{matrix}
h^{kk'}_\theta(X_t^k) & t = t' \\ s^{kk'}_\theta(X_t^k) & t\neq t'
\end{matrix}\right. .
\end{split}
\end{equation*}
We now link this message form with Self-Attention in machine translation~\cite{vaswani2017attention} and Non-Local Means in computer vision~\cite{buades2005non}. Self-Attention takes the form
\begin{equation*}
\mathrm{softmax}\Big(\mathrm{Query} \cdot \mathrm{Key}^\top\Big) \cdot \mathrm{Value}
\end{equation*}
with $\mathrm{Query}$, $\mathrm{Key}$, and $\mathrm{Value}$ depending on the input (termed the observation in our case).
In both Self-Attention and our message form, the attention weights applied to $\mathrm{Value}$ depend on the observation. The difference is that we do not apply a row-wise softmax activation to make the attention weights sum to $1$. The derivation is also similar to that of Non-Local Means~\cite{buades2005non}. Note that Self-Attention in machine translation~\cite{vaswani2017attention} focuses on updating features across temporal positions and Non-Local Means~\cite{buades2005non} focuses on updating features across spatial regions, whereas ours focuses on updating the entity predictions (i.e., message passing).
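As an illustration, the two forms differ only in the row-wise normalization; the toy sketch below is schematic (tensor shapes and the $1/\sqrt{d}$ scaling of~\cite{vaswani2017attention} are omitted for brevity).
\begin{verbatim}
import torch
import torch.nn.functional as F

def message_weights(query, key, value):
    # our message form: no row-wise softmax, so the attention
    # weights applied to `value` need not sum to one
    return query @ key.T @ value

def self_attention(query, key, value):
    # standard (unscaled) self-attention of Vaswani et al. (2017)
    return F.softmax(query @ key.T, dim=-1) @ value
\end{verbatim}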
\section{Comparisons with SRNN~\cite{jain2016structural} \& GCN~\cite{wang2018videos}}
Here, we compare with Structural-RNN (SRNN)~\cite{jain2016structural} and the Graph Convolutional Network (GCN)~\cite{wang2018videos}. We note that these approaches were designed for video activity recognition and cannot be directly applied to video visual relationship detection. In Fig.~\ref{fig:comp} (top-left), we show how we minimally modify SRNN and GCN to evaluate them on video relationship detection. The main differences are: 1) our model constructs a fully-connected graph over the entire video, while SRNN uses a non-fully-connected graph and GCN builds a graph over only part of the video ($\sim$32 frames); and 2) the messages passed across nodes represent prediction dependencies in our model, whereas they represent temporally evolving edge features in SRNN and similarity-reweighted features in GCN.
\section{Activity Recognition in Charades}
Sigurdsson {\em et al.}~\cite{sigurdsson2017asynchronous} proposed Asynchronous Temporal Fields (AsyncTF) for recognizing $157$ video activities. As discussed in the Related Work (Sec.~2), video activity recognition is a downstream task of visual relationship learning: in Charades, each of the $157$ activities is a combination of one {\em object} category and one {\em verb} category. We now describe how our model can be adapted to video activity recognition. First, we change the output sequence to $Y = \{Y_t\}_{t=1}^T$, where $Y_t$ is the video activity prediction. Then, we apply our Gated Spatio-Temporal Energy Graph on top of the sequence of activity predictions. With this design, we achieve an mAP of 33.3\%, whereas AsyncTF reported an mAP of 18.3\% when using only the RGB values of a video.
\section{Feature Representation in Pre-Reasoning Modules}
\noindent {\bf ImageNet Video.} We now provide details for the representation of $X_t^p$, the predicate feature in the $t$th chunk of the input instance. We use the relation feature from prior work~\cite{shang2017video} (the feature can be downloaded from~\cite{VidVRDURL}) as our predicate feature. The feature comprises three components: (1) improved dense trajectory (iDT) features from the subject trajectory, (2) iDT features from the object trajectory, and (3) relative features describing the relative position, size, and motion between the subject and object trajectories. The iDT features capture the movement as well as the low-level visual characteristics of an object moving in a short clip, while the relative features represent the relative spatio-temporal differences between the subject and object trajectories. The features are post-processed into bag-of-words features by applying dictionary learning to the original features, and the three sub-features are then concatenated to form our predicate feature.
\noindent {\bf Charades.} We use the output feature layer of the I3D network~\cite{carreira2017quo} to represent our object ($X_t^o$), verb ($X_t^v$), and scene ($X_t^s$) features. The I3D network is pre-trained on the Kinetics dataset~\cite{carreira2017quo} (the model can be downloaded from~\cite{I3DURL}), and the output feature layer is the layer before the output logits.
\section{Intractable Inference during Evaluation}
During evaluation on the ImageNet Video dataset, for relation detection and tagging we must enumerate all possible associations of subject and object tracklets. The number of possible associations grows exponentially with the number of chunks in a video, which quickly becomes computationally intractable. Note that this problem exists only during evaluation, since the ground-truth associations (of subject and object tracklets) are given during training. To overcome the issue, we apply the greedy association algorithm described in~\cite{shang2017video} for efficiently associating subject and object tracklets. The idea is as follows. First, we perform inference within each chunk independently; since messages do not pass across chunks at this step, we need not consider associations of subject or object tracklets across chunks. Within a chunk, each pair of subject and object tracklets yields a predicted relation triplet. Then, across two overlapping chunks, we associate only those pairs of subject and object tracklets that share the same predicted relation triplet and have a high tracklet vIoU (i.e., $>0.5$). Compared to exhaustive inference, this algorithm reduces the computational complexity exponentially. In Charades, by contrast, we do not need to associate object tracklets, so the intractability issue does not arise and the greedy association algorithm is not required.
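A sketch of this greedy procedure is given below; the function \texttt{viou}, which computes the volume IoU of two tracklets on their shared frames, is assumed to be provided, and all names are illustrative.
\begin{verbatim}
def greedy_associate(chunks, viou, thresh=0.5):
    # chunks: list over time; each entry is a list of per-chunk
    # detections of the form (triplet, subj_tracklet, obj_tracklet)
    tracks = [[det] for det in chunks[0]]      # one track per detection
    for dets in chunks[1:]:
        unmatched = list(dets)
        for track in tracks:
            triplet, s_prev, o_prev = track[-1]
            for det in unmatched:
                t, s, o = det
                # merge only on an exact triplet match with high overlap
                if (t == triplet and viou(s_prev, s) > thresh
                        and viou(o_prev, o) > thresh):
                    track.append(det)
                    unmatched.remove(det)
                    break
        tracks += [[det] for det in unmatched]  # open new tracks
    return tracks
\end{verbatim}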
\section{Training and Parametrization Details}
We specify the training and parametrization details as follows.
\noindent {\bf ImageNet Video.} Throughout all the experiments, we choose Adam~\cite{kingma2014adam} with a learning rate of $0.001$ as our optimizer, a batch size of $32$, $30$ training epochs, and $3$ message passing iterations. We initialize the marginals to the marginals estimated from the unary energy.
\begin{itemize}
\item Rank number $r$: 5
\item $g_\theta^{kk'}(X_t^k)$: $|X_t^k|\times (|Y_t^k| \times r)$ fully-connected layer, resize to $|Y_t^k| \times r$
\item $h_\theta^{kk'}(X_t^k)$: $|X_t^k| \times (|Y_{t'}^{k'}| \times r)$ fully-connected layer, resize to $|Y_{t'}^{k'}| \times r$
\item $r_\theta^{kk'}(X_t^k)$: $|X_t^k|\times 1024$ fully-connected layer, ReLU Activation, Dropout with rate $0.3$, $1024 \times 1024$ fully-connected layer, ReLU Activation, Dropout with rate $0.3$, $1024 \times (|Y_t^k| \times r)$ fully-connected layer, resize to $|Y_t^k| \times r$
\item $s_\theta^{kk'}(X_t^k)$: $|X_t^k|\times 1024$ fully-connected layer, ReLU Activation, Dropout with rate $0.3$, $1024 \times 1024$ fully-connected layer, ReLU Activation, Dropout with rate $0.3$, $1024 \times (|Y_{t'}^{k'}| \times r)$ fully-connected layer, resize to $|Y_{t'}^{k'}| \times r$
\item $\sigma$: $10$
\item $w_\theta^{k}(X_t^k)$: $|X_t^k|\times|Y_t^k|$ fully-connected layer
\end{itemize}
\noindent {\bf Charades:} Throughout all the experiments, we choose SGD with a learning rate of $0.005$ as our optimizer, a batch size of $40$, $5$ training epochs, and $5$ message passing iterations. We initialize the marginals to the marginals estimated from the unary energy.
\begin{itemize}
\item Rank number $r$: 5
\item $g_\theta^{kk'}(X_t^k)$: $|X_t^k|\times (|Y_t^k| \times r)$ fully-connected layer, resize to $|Y_t^k| \times r$
\item $h_\theta^{kk'}(X_t^k)$: $|X_t^k|\times (|Y_{t'}^{k'}| \times r)$ fully-connected layer, resize to $|Y_{t'}^{k'}| \times r$
\item $r_\theta^{kk'}(X_t^k)$: $|X_t^k|\times (|Y_t^k| \times r)$ fully-connected layer, resize to $|Y_t^k| \times r$
\item $s_\theta^{kk'}(X_t^k)$: $|X_t^k| \times (|Y_{t'}^{k'}| \times r)$ fully-connected layer, resize to $|Y_{t'}^{k'}| \times r$
\item $\sigma$: $300$
\item $w_\theta^{k}(X_t^k)$: $|X_t^k|\times|Y_t^k|$ fully-connected layer
\end{itemize}
\section{Parametrization in Leveraging Language Priors}
Additional networks in the experiments towards leveraging language priors are parametrized as follows:
\begin{itemize}
\item $d$: 300 (since we use 300-dimensional GloVe~\cite{pennington2014glove} features)
\item $u_\theta (\cdot)$: $d\times 1024$ fully-connected layer, ReLU Activation, Dropout with rate $0.3$, $1024 \times 1024$ fully-connected layer, ReLU Activation, Dropout with rate $0.3$, $1024 \times 1$ fully-connected layer
\item $v_\theta (\cdot)$: $d\times 1024$ fully-connected layer, ReLU Activation, Dropout with rate $0.3$, $1024 \times 1024$ fully-connected layer, ReLU Activation, Dropout with rate $0.3$, $1024 \times 1$ fully-connected layer
\end{itemize}
\section{Category Set in Dataset}
\label{sec:categ}
For clarity, we list the category choices for each entity in the datasets using bullet points.
\begin{itemize}
\item {\em subject} / {\em object} in ImageNet Video (total $35$ categories)
\begin{itemize}
\item airplane, antelope, ball, bear, bicycle, bird, bus, car, cat, cattle, dog, elephant, fox, frisbee, giant panda, hamster, horse, lion, lizard, monkey, motorcycle, person, rabbit, red panda, sheep, skateboard, snake, sofa, squirrel, tiger, train, turtle, watercraft, whale, zebra
\end{itemize}
\item {\em predicate} in ImageNet Video (total $132$ categories)
\begin{itemize}
\item taller, swim behind, walk away, fly behind, creep behind, lie with, move left, stand next to, touch, follow, move away, lie next to, walk with, move next to, creep above, stand above, fall off, run with, swim front, walk next to, kick, stand left, creep right, sit above, watch, swim with, fly away, creep beneath, front, run past, jump right, fly toward, stop beneath, stand inside, creep left, run next to, beneath, stop left, right, jump front, jump beneath, past, jump toward, sit front, sit inside, walk beneath, run away, stop right, run above, walk right, away, move right, fly right, behind, sit right, above, run front, run toward, jump past, stand with, sit left, jump above, move with, swim beneath, stand behind, larger, walk past, stop front, run right, creep away, move toward, feed, run left, lie beneath, fly front, walk behind, stand beneath, fly above, bite, fly next to, stop next to, fight, walk above, jump behind, fly with, sit beneath, sit next to, jump next to, run behind, move behind, swim right, swim next to, hold, move past, pull, stand front, walk left, lie above, ride, next to, move beneath, lie behind, toward, jump left, stop above, creep toward, lie left, fly left, stop with, walk toward, stand right, chase, creep next to, fly past, move front, run beneath, creep front, creep past, play, lie inside, stop behind, move above, sit behind, faster, lie right, walk front, drive, swim left, jump away, jump with, lie front, left
\end{itemize}
\item {\em verb} in Charades (total $33$ categories)
\begin{itemize}
\item awaken, close, cook, dress, drink, eat, fix, grasp, hold, laugh, lie, make, open, photograph, play, pour, put, run, sit, smile, sneeze, snuggle, stand, take, talk, throw, tidy, turn, undress, walk, wash, watch, work
\end{itemize}
\item {\em object} in Charades (total $38$ categories)
\begin{itemize}
\item None, bag, bed, blanket, book, box, broom, chair, closet/cabinet, clothes, cup/glass/bottle, dish, door, doorknob, doorway, floor, food, groceries, hair, hands, laptop, light, medicine, mirror, paper/notebook, phone/camera, picture, pillow, refrigerator, sandwich, shelf, shoe, sofa/couch, table, television, towel, vacuum, window
\end{itemize}
\item {\em scene} in Charades (total $16$ categories)
\begin{itemize}
\item Basement, Bathroom, Bedroom, Closet / Walk-in closet / Spear closet, Dining room, Entryway, Garage, Hallway, Home Office / Study, Kitchen, Laundry room, Living room, Other, Pantry, Recreation room / Man cave, Stairs
\end{itemize}
\end{itemize}
{
\small
\bibliographystyle{ieee_fullname}
\setcounter{footnote}{0}
The Andromeda Galaxy, also known as M31, is very similar to the Milky Way (MW). It has a spiral structure and comprises multiple components including a central super-massive black hole, bulge, galactic disk (the disk of stars, gas, and dust), stellar halo, and circumgalactic medium, all of which have been studied extensively~\citep{roberts1893selection,slipher1913radial,pease1918rotation,hubble1929spiral,babcock1939rotation,mayall1951comparison,arp1964spiral,Rubin:1970zza,roberts1975rotation,henderson1979model,beck1982distribution,brinks1984high,blitz1999high,ibata2001giant,deHeij:2002ne,Ferguson:2002yi,Braun:2003ey,galleti20042mass,zucker2004new,ibata2005accretion,barmby2006dusty,GildePaz:2006bw,Ibata:2007xz,Li:2007ud,faria2007probing,huxor2008globular,richardson2008nature,braun2009wide,McConnachie:2009up,Garcia:2009hu,Saglia:2009tp,2010A&A...511A..89C,peacock2010m31,Hammer:2010ug,Mackey:2010ix,Li:2010kf,2012ApJ...745..121L,McConnachie:2012vd,Lewis:2012dj,Bate:2013jha,veljanoski2014outer,huxor2014outer,Ade:2014bjw,bernard2015nature,lehner2015evidence,mcmonigal2015major,Conn:2016nnu,kerp2016survey}. Furthermore, the Andromeda Galaxy, like all galaxies, is thought to reside within a massive dark matter (DM) halo \citep{Rubin:1970zza,roberts1975rotation,faber1979masses,Bullock:1999he,carignan2006extended,seigar2008revised,Banerjee:2008kt,tamm2012stellar,Velliscig:2015ffa}. The DM halo of M31 is predicted to extend to roughly 300 kpc from its center and have a mass on the order of $10^{12} M_\odot$, which amounts to approximately $90\%$ of the galaxy's total mass~\citep{Klypin:2001xu,seigar2008revised,2010A&A...511A..89C,tamm2012stellar,Fardal:2013asa,Shull:2014uia,lehner2015evidence}. For cold DM, the halo is also predicted to contain a large amount of substructure~\citep{Braun:1998ik,blitz1999high,deHeij:2002ne,Braun:2003ey,Diemand:2006ik,Kuhlen:2007ku,Springel:2008cc,Zemp:2008gw,Moline:2016pbm}, a subset of which hosts M31's population of satellite dwarf galaxies \citep{McConnachie:2012vd,martin2013pandas,collins2013kinematic,Ibata:2013rh,pawlowski2013dwarf,Conn:2013iu}. The M31 system and a similar system in the MW are the primary components of the Local Group. The distance from the MW to M31 is approximately 785 kpc~\citep{Stanek:1998cu,McConnachie:2004dv,conn2012bayesian}, making it relatively nearby. Consequently, M31 appears extended on the sky. Because of this accessibility, M31 offers a prime target for studying galaxies; indeed, a wealth of information has been gained from observations at all wavelengths of the electromagnetic spectrum (see, e.g., the references provided at the beginning of this introduction).
The \textit{Fermi} Large Area Telescope ({\it Fermi}--LAT) is the first instrument to significantly detect M31 in $\gamma$-rays\ \citep{Fermi-LAT:2010kib,ogelman2010discovery}. Prior to {\it Fermi}--LAT, other pioneering experiments set limits on a tentative signal~\citep{fichtel1974high,pollock1981search,sreekumar1994study,Hartman:1999fc}, with the first space-based $\gamma$-ray\ observatories dating back to 1962~\citep{kraushaar1962search,kraushaar1972high}. Note that M31 has not been significantly detected by any ground-based $\gamma$-ray\ telescopes, which are typically sensitive to energies above $\sim$100 GeV~\citep{Abeysekara:2014ffg,Funk:2015ena,Bird:2015npa,tinivella2016review}.
The initial M31 analysis performed by the {\it Fermi}--LAT\ Collaboration modeled M31 both as a point source and an extended source, finding a marginal preference for extension at the confidence level of 1.8$\sigma$~\citep{Fermi-LAT:2010kib}. To search for extension, a uniform-intensity elliptical template was employed, where the parameters of the ellipse were estimated from the IRIS 100 $\mu$m observation of M31~\citep{MivilleDeschenes:2004ci}. This emission traces a convolution of the interstellar gas and recent massive star formation activity~\citep{Yun:2001jx,Reddy:2003xn,Fermi-LAT:2010kib} and can be used as a template for modeling the $\gamma$-ray\ emission.
Since the initial detection, further studies have been conducted~\citep{Dugger:2010ys,Li:2013qya,Pshirkov:2015hda,Pshirkov:2016qhu,Ackermann:2017nya}. A significant detection of extended $\gamma$-ray\ emission with a total extension of 0.9$^\circ$ was reported by~\citet{Pshirkov:2016qhu}, where the morphology of the detected signal consists of two bubbles symmetrically located perpendicular to the M31 disk, akin to the MW Fermi bubbles. Most recently, the \emph{Fermi}-LAT Collaboration has published their updated analysis of M31~\citep{Ackermann:2017nya}. This study detects M31 with a significance of nearly $10\sigma$, and evidence for extension is found at the confidence level of $4\sigma$. Of the models tested, the best-fit morphology consists of a uniform-brightness circular disk with a radius of 0.4$^\circ$ centered at M31. The $\gamma$-ray\ signal is not found to be correlated with regions rich in gas or star formation activity, as was first pointed out by~\citet{Pshirkov:2016qhu}.
In this work we make a detailed study of the $\gamma$-ray emission observed towards the outer halo of M31, including the construction of specialized interstellar emission models to characterize the foreground emission from the MW, and an in-depth evaluation of the systematic uncertainties related to the observations. Our ultimate goal is to test for a $\gamma$-ray signal exhibiting spherical symmetry with respect to the center of M31, since there are numerous physical motivations for such a signal.
In general, disk galaxies like M31 may be surrounded by extended cosmic-ray (CR) halos~\citep{Feldmann:2012rx,Pshirkov:2015hda}. Depending on the strength of the magnetic fields in the outer galaxy, the CR halo may extend as far as a few hundred kpc from the galactic disk; however, the actual extent remains highly uncertain. The density of CRs in the outer halo is predicted to be up to 10\% of that found in the disk~\citep{Feldmann:2012rx}. Disk galaxies like M31 are also surrounded by a circumgalactic medium, loosely defined as a halo of gas (primarily ionized hydrogen) in different phases, which may extend as far as the galaxy's virial radius~\citep{Gupta:2012rh,Feldmann:2012rx,lehner2015evidence,Pshirkov:2015hda,howk2017project}. In addition, the stellar halo of M31 is observed to have an extension $\gtrsim$50 kpc~\citep{Ibata:2007xz,McConnachie:2009up,Mackey:2010ix}. CR interactions with the radiation field of the stellar halo and/or the circumgalactic gas could generate $\gamma$-ray emission.
\begin{figure*}[tbh!]
\centering
\includegraphics[width=0.49\textwidth]{FM31_full.pdf}
\includegraphics[width=0.49\textwidth]{FM31_full_saturated.pdf}
\caption{Observed counts (left) and saturated counts (right) for a $60^\circ$ radius centered at M31, and an energy range of 1--100 GeV. The green dashed circle ($21^\circ$ in radius) corresponds to a 300 kpc projected radius centered at M31, for an M31--MW distance of 785 kpc, i.e.\ the canonical virial radius of M31. Also shown is M31's population of dwarf galaxies. M31 and M33 are shown with cyan triangles, and the other dwarfs are shown with $1^\circ$ green circles, each centered at the optical center of the respective galaxy. The sizes of the circles are somewhat arbitrary, although they roughly correspond to the point spread function (PSF, 68\% containment angle) of \textit{Fermi}-LAT, which at 1 GeV is $\sim$$1^\circ$. Most of the MW dwarfs are not detected by \textit{Fermi}-LAT, and so we do not necessarily expect the individual M31 dwarfs to be detected. The primary purpose of the overlay is to provide a qualitative representation of the extent of M31's outer halo, and to show its relationship to the MW disk. Note that $\sim$3 dwarfs (which are thought to be gravitationally bound to M31) reach as far as $\sim$300 kpc, with one dwarf (And XXVIII) reaching as far as $\sim$360 kpc, as seen in the figure.}
\label{fig:observed_counts}
\end{figure*}
Some hints of the extent and distribution of the M31 halo may be gained from observations of the distributions of well-studied objects, clearly tied to the M31 system. In Section~\ref{sec:gas_related_emission} we compare the distribution of the observed $\gamma$-ray emission in the M31 field to such features as M31's population of globular clusters~\citep{galleti20042mass,huxor2008globular,peacock2010m31,Mackey:2010ix,veljanoski2014outer,huxor2014outer} and M31's population of satellite dwarf galaxies~\citep{McConnachie:2012vd,martin2013pandas,collins2013kinematic}. We note that {\it Fermi}--LAT\ does not detect most of the MW dwarfs~\citep{Ackermann:2015zua}, and likewise we do not necessarily expect to detect most of the individual M31 dwarfs. The dwarfs are included here primarily as a qualitative gauge of the extent of M31's DM halo, and more generally, in support of formulating the most comprehensive picture possible of the M31 region. We also compare the observed $\gamma$-ray emission to the M31 cloud~\citep{blitz1999high,kerp2016survey}, which is a highly extended lopsided gas cloud centered in projection on M31. It remains uncertain whether the M31 cloud resides in M31 or the MW, although most recently~\citet{kerp2016survey} have argued that M31's disk is physically connected to the M31 cloud.
Lastly, we note that due to its mass and proximity, the detection sensitivity of M31 to DM searches with $\gamma$-rays\ is competitive with the MW dwarf spheroidal galaxies, particularly if the signal is sufficiently boosted by substructures~\citep{Falvard:2002ny,Fornengo:2004kj,Mack:2008wu,Dugger:2010ys,Conrad:2015bsa,Gaskins:2016cha}. Moreover, M31 is predicted to be the brightest extragalactic source of DM annihilation~\citep{lisanti2018search,lisanti2018mapping}. At a distance of $\sim$785 kpc from the MW \citep{Stanek:1998cu,McConnachie:2004dv,conn2012bayesian} and with a virial radius of a few hundred kpc~\citep{Klypin:2001xu,seigar2008revised,2010A&A...511A..89C,tamm2012stellar,Fardal:2013asa,Shull:2014uia,lehner2015evidence}, the diameter of M31's DM halo covers $\gtrsim$42$^{\circ}$ across the sky. However, there is a high level of uncertainty regarding the exact nature of the halo geometry, extent, and substructure content~\citep{Kamionkowski:1997xg,Braun:1998ik,blitz1999high,deHeij:2002ne,Braun:2003ey,Helmi:2003pp,Bailin:2004wu,Allgood:2005eu,Bett:2006zy,Hayashi:2006es,Kuhlen:2007ku,Banerjee:2008kt,Zemp:2008gw,Saha:2009dt,Law:2009yq,Banerjee:2011rr,Velliscig:2015ffa,Bernal:2016guq,garrison2017not}.
Our analysis proceeds as follows. In Section~\ref{sec:Data and Models} we describe our data selection and modeling of the interstellar emission. In Section~\ref{sec:FM31 Baseline} we present the baseline analysis of the M31 field and perform a template fit, including the addition of M31-related components to the model. In Section~\ref{sec:smooth_residual_emission} we compare the radial intensity profile and emission spectrum of the M31-related components to corresponding predictions for DM annihilation towards the outer halo of M31, including contributions from both the M31 halo and the MW halo in the line of sight. In Section~\ref{sec:gas_related_emission} we compare the structured $\gamma$-ray emission in the M31 field to a number of complementary M31-related observations. Section \ref{sec:fianl} provides an extended summary of the analysis and results. Supplemental information is provided in Appendices. In Appendix~\ref{sec:IEM_Summary} we briefly describe the models for diffuse Galactic foreground emission. In Appendix~\ref{sec:different_IEMs} we consider some additional systematics pertaining to the observations. Appendix~\ref{sec:DM} provides the details of calculations of the DM profiles discussed in the paper.
\section{Data and Models} \label{sec:Data and Models}
\subsection{Data} \label{sec:Data Selection}
The \textit{Fermi Gamma-ray Space Telescope} was launched on June 11, 2008. The main instrument on board \textit{Fermi} is the Large Area Telescope. It consists of an array of 16 tracker modules, 16 calorimeter modules, and a segmented anti-coincidence detector. {\it Fermi}--LAT\ is sensitive to $\gamma$-rays in the energy range from approximately 20 MeV to above 300 GeV. A full description of the telescope, including performance specifications, can be found in~\citet{Atwood:2009ez}, \citet{Abdo:2009gy}, and \citet{Ackermann:2012kna}.
Our region of interest (ROI) is a circular region with a radius of $60^\circ$ centered at the position of M31, $(l,b) = (121.17^{\circ}, -21.57^{\circ})$. We employ front- and back-converting events corresponding to the P8R2\_CLEAN\_V6 selection. The events have energies in the range 1--100 GeV and were collected from 2008-08-04 to 2016-03-16 (7.6 years). The data are divided into 20 bins equally spaced in logarithmic energy, with a $0.2^\circ\times0.2^\circ$ pixel size.
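For reference, the binning described above is straightforward to reproduce; the short sketch below (with illustrative variable names) encodes this selection.
\begin{verbatim}
import numpy as np

# 20 energy bins equally spaced in log between 1 and 100 GeV
energy_edges = np.logspace(np.log10(1.0), np.log10(100.0), 21)

pixel_size = 0.2    # degrees
roi_radius = 60.0   # degrees, centered on (l, b) = (121.17, -21.57)
n_pixels_per_side = int(2 * roi_radius / pixel_size)   # 600
\end{verbatim}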
The analysis is carried out with the {\it Fermi}--LAT\ ScienceTools (version v10r0p5)\footnote{Available at \url{http://fermi.gsfc.nasa.gov/ssc/data/analysis}}. In particular, the binned maximum likelihood fits are performed with the {\it gtlike} package.
Figure~\ref{fig:observed_counts} shows the total observed counts between 1--100 GeV for the full ROI. Two different count ranges are displayed. The map on the left shows the full range. The bright emission along 0$^\circ$ latitude corresponds to the plane of the MW. The map on the right shows the saturated counts map, emphasizing the lower counts at higher latitudes. Overlaid is a green dashed circle ($21^\circ$ in radius) corresponding to a 300 kpc projected radius centered at M31, for an M31-MW distance of 785 kpc, i.e.~the canonical virial radius of M31. Also shown is M31's population of dwarf galaxies. The primary purpose of the overlay is to provide a qualitative representation of the extent of M31's outer halo, and to show its relationship to the MW disk. Note that we divide the full ROI into subregions, and our primary field of interest is a $28^\circ \times 28^\circ$ square region centered at M31, which we refer to as field M31 (FM31), as further discussed below.
\subsection{Foreground Model and Isotropic Emission} \label{sec:iem}
The foreground emission from the MW and the isotropic component (the latter includes unresolved extragalactic diffuse $\gamma$-ray emission, residual instrumental background, and possibly contributions from other Galactic components which have a roughly isotropic distribution) are the dominant contributions in $\gamma$-rays\ towards the M31 region. We use the CR propagation code GALPROP\footnote{Available at \url{https://galprop.stanford.edu}}(v56) to construct specialized interstellar emission models (IEMs) to characterize the MW foreground emission, including a self-consistent determination of the isotropic component. These foreground models are physically motivated and \emph{are not} subject to the same caveats\footnote{The list of caveats on the {\it Fermi}--LAT\ diffuse model is available at \url{https://fermi.gsfc.nasa.gov/ssc/data/analysis/LAT_caveats.html} \label{caveats}} for extended source analysis as the default IEM provided by the \textit{Fermi}-LAT Collaboration for point source analysis (hereafter FSSC IEM)~\citep{Acero:2016qlg}. Here we provide a brief description of the GALPROP model \citep{Moskalenko:1997gh,Moskalenko:1998gw,Strong:1998pw,Strong:1998fr,2006ApJ...642..902P,Strong:2007nh,Vladimirov:2010aq,Johannesson:2016rlh,porter2017high,Johannesson:2018bit,PhysRevC.98.034611}, and more details are given in Appendix~\ref{sec:IEM_Summary}.
The GALPROP model calculates self-consistently spectra and abundances of Galactic CR species and associated diffuse emissions (radio, X-rays, $\gamma$-ray{s}) in 2D and 3D. The CR injection and propagation parameters are derived from local CR measurements. The Galactic propagation includes all stable and long-lived particles and isotopes ($e^\pm$, $\bar{p}$, H-Ni) and all relevant processes in the interstellar medium. The radial distribution of the CR source density is parametrized as
\begin{equation} \label{eq:1}
\rho(r) = \left(\frac{r + r_1}{r_\odot + r_1}\right)^{a}\times \exp \left(-b \times \frac{r - r_\odot}{r_\odot + r_1}\right),
\end{equation}
where $r$ is the Galactocentric radius, $r_\odot= 8.5$ kpc, and the parameter $r_1$ regulates the CR density at $r=0$. The injection spectra of the CR species are described by the rigidity ($R$)-dependent function
\begin{equation} \label{eq:2}
q(R) \propto (R/R_0)^{-\gamma_0}\prod_{i=0}^2\bigg[1 + (R/R_i)^\frac{\gamma_i - \gamma_{i+1}}{s_i}\bigg]^{s_i},
\end{equation}
where $\gamma_i (i =0, 1, 2, 3)$ are the spectral indices, $R_i (i = 0, 1, 2)$ are the break rigidities, $s_i$ are the smoothing parameters ($s_i=\mp0.15$ for $|\gamma_i |\lessgtr |\gamma_{i+1} |$), and the numerical values of all parameters are given in Table~\ref{tab:GALPROP_parameters}. Some parameters are not in use, so for $p$ and He, we have only $\gamma_{i=0, 1, 2}$ and $R_{i=0, 1}$.
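For reference, Eqs.~(\ref{eq:1}) and (\ref{eq:2}) translate directly into code; the sketch below is purely illustrative (it is not the GALPROP implementation) and uses the M31 IEM values of Table~\ref{tab:GALPROP_parameters} as defaults.
\begin{verbatim}
import numpy as np

def source_density(r, a=1.5, b=3.5, r1=0.0, r_sun=8.5):
    # Eq. (1): radial CR source density, equal to 1 at r = r_sun [kpc]
    return ((r + r1) / (r_sun + r1))**a \
        * np.exp(-b * (r - r_sun) / (r_sun + r1))

def injection_spectrum(R, gammas, breaks, s=0.15):
    # Eq. (2): smoothly broken power law in rigidity R [GV];
    # gammas = (g_0, ..., g_{n+1}), breaks = (R_0, ..., R_n)
    q = (R / breaks[0])**(-gammas[0])
    for R_i, g_i, g_next in zip(breaks, gammas[:-1], gammas[1:]):
        s_i = -s if abs(g_i) < abs(g_next) else s   # s_i = -/+0.15
        q *= (1.0 + (R / R_i)**((g_i - g_next) / s_i))**s_i
    return q

# e.g., the M31 IEM proton injection spectrum of Table 1:
# q_p = injection_spectrum(R, (1.69, 2.44, 2.295), (7.0, 360.0))
\end{verbatim}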
\begin{deluxetable}{lcc}[tbh!]
\tablecolumns{3}
\tablewidth{0mm}
\tablecaption{GALPROP Model Parameters\label{tab:GALPROP_parameters}}
\tablehead{
\colhead{Parameter} &
\colhead{M31 IEM} &
\colhead{IG IEM}}
\startdata
\tablenotemark{a} $z$ [kpc] &4 &6 \\
\tablenotemark{a} $r$ [kpc] & 20 &30 \\
\tablenotemark{b} $a$ &1.5 &1.64 \\
\tablenotemark{b} $b$ & 3.5 &4.01 \\
\tablenotemark{b} $r_1$ & 0.0 &0.55 \\
\tablenotemark{c} $D_0$ [10$^{28}$ cm$^2$ s$^{-1}$] & 4.3&7.87\\
\tablenotemark{c} $\delta$ &0.395 & 0.33\\
\tablenotemark{c} $\eta$ & 0.91 &1.0\\
\tablenotemark{c} Alfv\'en speed, $v_{\rm A}$ [km s$^{-1}$] &28.6 &34.8\\
\tablenotemark{d} $v_{\rm conv,0}$ [km s$^{-1}$] &12.4 &\nodata \\
\tablenotemark{d} $dv_{\rm conv}/dz$ [km s$^{-1}$ kpc$^{-1}$ ] &10.2 &\nodata\\
\tablenotemark{e} $R_{p,0}$ [GV] &7 &11.6 \\
\tablenotemark{e} $R_{p,1}$ [GV] &360 &\nodata\\
\tablenotemark{e} $\gamma_{p,0}$ & 1.69 &1.90\\
\tablenotemark{e} $\gamma_{p,1}$ & 2.44 &2.39\\
\tablenotemark{e} $\gamma_{p,2}$ & 2.295 &\nodata\\
\tablenotemark{e} $R_{\rm He,0}$ [GV] &7 &\nodata\\
\tablenotemark{e} $R_{\rm He,1}$ [GV] &330 &\nodata\\
\tablenotemark{e} $\gamma_{\rm He,0}$ &1.71&\nodata\\
\tablenotemark{e} $\gamma_{\rm He,1}$ &2.38&\nodata\\
\tablenotemark{e} $\gamma_{\rm He,2}$ &2.21&\nodata\\
\tablenotemark{e} $R_{e,0}$ [GV] &0.19 &\nodata \\
\tablenotemark{e} $R_{e,1}$ [GV] &6 &2.18 \\
\tablenotemark{e} $R_{e,2}$ [GV] &95 & 2171.7 \\
\tablenotemark{e} $\gamma_{e,0}$ &2.57&\nodata\\
\tablenotemark{e} $\gamma_{e,1}$ &1.40&1.6\\
\tablenotemark{e} $\gamma_{e,2}$ &2.80 &2.43\\
\tablenotemark{e} $\gamma_{e,3}$ &2.40& 4.0 \\
\tablenotemark{f} $J_{p}$ [$\mathrm{10^{-9} \ cm^{-2} \ s^{-1} \ sr^{-1} \ MeV^{-1}}$] &4.63&4.0\\
\tablenotemark{f} $J_{e}$ [$\mathrm{10^{-11} \ cm^{-2} \ s^{-1} \ sr^{-1} \ MeV^{-1}}$] &1.44&0.011 \\
\tablenotemark{g} A5 [kpc] &8--10&8--10\\
\tablenotemark{g} A6 [kpc] &10--11.5&10--50\\
\tablenotemark{g} A7 [kpc] &11.5--16.5&\nodata\\
\tablenotemark{g} A8 [kpc] &16.5--50&\nodata\\
\tablenotemark{h} IC Formalism&Anisotropic&Isotropic
\enddata
\tablecomments{For reference, we also give corresponding values for the (``Yusifov'') IEMs used in~\citet{TheFermi-LAT:2015kwa} for the analysis of the inner Galaxy (IG).}
\tablenotetext{a}{Halo geometry: $z$ is the height above the Galactic plane, and $r$ is the radius.}
\tablenotetext{b}{CR source density. The parameters correspond to Eq.~(\ref{eq:1}).}
\tablenotetext{c}{Diffusion: $D(R)$ $\propto \beta^\eta R^\delta$. $D(R)$ is normalized to $D_0$ at 4.5 GV.}
\tablenotetext{d}{Convection: $v_{\rm conv}(z)=v_{\rm conv,0}+(dv_{\rm conv}/dz)z$.}
\tablenotetext{e}{Injection spectra: The spectral shape of the injection spectrum is the same for all CR nuclei except for protons. The parameters correspond to Eq.~(\ref{eq:2}).}
\tablenotetext{f}{The proton and electron flux are normalized at the Solar location at a kinetic energy of 100 GeV. Note that for the IG IEM the electron normalization is at a kinetic energy of 25 GeV.}
\tablenotetext{g}{Boundaries for the annuli which define the IEM. Only A5 (local annulus) and beyond contribute to the foreground emission for FM31.}
\tablenotetext{h}{Formalism for the inverse Compton (IC) component.}
\end{deluxetable}
\begin{figure}[tbh!]
\centering
\includegraphics[width=0.48\textwidth]{CR_proton_flux_KE_2_7.pdf}
\includegraphics[width=0.48\textwidth]{CR_helium_flux_KE_2_7.pdf}
\includegraphics[width=0.48\textwidth]{CR_electron_and_positron_flux_KE_semilogx.pdf}
\caption{The local interstellar spectra (LIS) for CR protons (top), He (middle), and all electrons ($e^- +e^+$) (bottom). The latest AMS-02 measurements from~\citet{PhysRevLett.113.221102,PhysRevLett.114.171103,PhysRevLett.115.211101} are shown with red squares. The green dashed line shows the results from~\citet{Boschini:2017fxq,Boschini:2018zdv}, which employ GALPROP and HelMod together in an iterative manner to derive the LIS. We adopt their derived GALPROP CR parameters, and the LIS for our IEM (M31 IEM; solid black line) are roughly the same. The thin dotted black line shows the LIS modulated with HelMod \citep{Boschini:2017fxq,Boschini:2018zdv}. Yellow triangles show the Voyager 1 $p$ and He data in the local interstellar medium \citep{2016ApJ...831...18C}. Voyager 1 electron data are below 100 MeV and, therefore, are not shown. In addition, we show the LIS for the (``Yusifov'') IEM in~\citet{TheFermi-LAT:2015kwa}, which we use as a reference model in our study of the systematics for the M31 field (see Appendix~\ref{sec:IG_IEMs}).}
\label{fig:CR_LIS}
\end{figure}
Heliospheric propagation is calculated using the dedicated code HelMod\footnote{Available at \url{http://www.helmod.org/}}. HelMod is a 2D Monte Carlo code for the heliospheric propagation of CRs, which describes the solar modulation in a physically motivated way. It has been demonstrated that the calculated CR spectra are in good agreement with measurements, including measurements made outside of the ecliptic plane, at different levels of solar activity, and for both polarities of the magnetic field. The result of the combined iterative application of the GALPROP and HelMod codes is a series of local interstellar spectra (LIS) for CR $e^-$, $e^+$, $p$, He, C, and O nuclei \citep{Boschini:2017fxq,Boschini:2018zdv,2018ApJ...858...61B} that effectively disentangles the two formidable tasks of Galactic and heliospheric propagation.
For our analysis we use a GALPROP-based combined diffusion-convection-reacceleration model with a uniform spatial diffusion coefficient and a single power-law index over the entire rigidity range, as described in detail in \citet{Boschini:2017fxq}. Since the distribution of supernova remnants (SNRs), the conventional CR sources, is not well determined, owing to observational bias and the limited lifetime of their shells, other tracers are often employed. In our calculations we use the distribution of pulsars \citep{yusifov2004revisiting}, which are the final state of the evolution of massive stars and can be observed for millions of years. The same distribution was used in the analysis of the $\gamma$-ray{} emission from the Inner Galaxy (IG) \citep{TheFermi-LAT:2015kwa}.
We adopt the best-fit GALPROP parameters from~\citet{Boschini:2017fxq,Boschini:2018zdv}, which are summarized in Table~\ref{tab:GALPROP_parameters}. The spectral shape of the injection spectrum is the same for all CR nuclei except for protons. The corresponding CR spectra are plotted in Figure~\ref{fig:CR_LIS}. Also plotted in Figure~\ref{fig:CR_LIS} are the latest AMS-02 measurements from~\citet{PhysRevLett.113.221102,PhysRevLett.114.171103,PhysRevLett.115.211101} and Voyager 1 $p$ and He data in the local interstellar medium \citep{2016ApJ...831...18C}. The modulated LIS are taken from \citet{Boschini:2017fxq,Boschini:2018zdv} and correspond to the time frame of the published AMS-02 data. In addition, we plot the LIS for the (``Yusifov'') IEMs used in~\citet{TheFermi-LAT:2015kwa} for the analysis of the inner Galaxy (IG), which we use as a reference model in our study of the systematics for the M31 field (see Appendix~\ref{sec:IG_IEMs}). Overall, the LIS for the M31 model are in good agreement with the AMS-02 data.
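For reference, the diffusion and convection parameterizations entering Table~\ref{tab:GALPROP_parameters} can be written out explicitly (following the conventions in the table notes, with $\beta = v/c$ and the diffusion coefficient normalized at 4.5 GV):
\begin{equation}
D(R) = D_0\, \beta^{\eta} \left(\frac{R}{4.5\ \mathrm{GV}}\right)^{\delta}, \qquad v_{\rm conv}(z) = v_{\rm conv,0} + \frac{dv_{\rm conv}}{dz}\, z.
\end{equation}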
We note that there is a small discrepancy in the modulated all-electron ($e^- + e^+$) spectrum between $\sim$4--10 GeV that, however, does not affect our results. Electrons in this energy range do not contribute much to the observed diffuse emission. The upscattered photon energy is $\epsilon_1\sim\epsilon_0\gamma^2$, where $\epsilon_0$ and $\gamma$ are the energy of the background photon and the Lorentz factor of the CR electron, respectively. For our energy range of interest, $\epsilon_1\sim5$ GeV, we need CR electrons of $\sim$35 GeV for $\epsilon_0\sim1$ eV optical photons, and even higher energies for IR and CMB photons, while the number density of optical photons in the ISM is very small. Additionally, we perform several systematic tests throughout this work, including fits with three different IEMs (M31, IG, and FSSC IEMs), as well as a fit in a tuning region surrounding FM31 on the south.
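As a quick numerical check, for $\epsilon_1 \sim 5$ GeV and $\epsilon_0 \sim 1$ eV the required Lorentz factor and electron energy are
\begin{equation}
\gamma \sim \sqrt{\epsilon_1/\epsilon_0} \approx 7\times10^{4}, \qquad E_e \approx \gamma\, m_e c^2 \approx 36\ \mathrm{GeV},
\end{equation}
consistent with the $\sim$35 GeV quoted above.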
\begin{figure*}[tbh!]
\centering
\includegraphics[width=1\textwidth]{Total_IEM.pdf}
\caption{The total interstellar emission model (IEM) for the MW integrated in the energy range 1--100 GeV. The color corresponds to the intensity, and is shown in logarithmic scale. The intensity level is for the initial GALPROP output, before tuning to the $\gamma$-ray data. The map is shown in a Plate Carr\'{e}e projection, and the pixel size is 0.25 deg/pix. The model has contributions from $\pi^0$-decay, (anisotropic) IC emission, and Bremsstrahlung. Overlaid is the region of interest (ROI) used in this analysis. From the observed counts (Figure~\ref{fig:observed_counts}) we cut an $84^\circ\times84^\circ$ ROI, which is centered at M31. The green dashed circle is the 300 kpc boundary corresponding to M31's canonical virial radius (of $\sim$$21^\circ$), as also shown in Figure~\ref{fig:observed_counts}. We label the field within the virial radius as field M31 (FM31), and the region outside (and south of latitudes of $-21.57^\circ$) we label as the tuning region (TR). Longitude cuts are made on the ROI at $l=168^\circ \ \mathrm{and} \ l=72^\circ$, as discussed in the text. For reference we also show the Galactic center region (GC), which corresponds to a $15^\circ\times15^\circ$ square centered at the GC.}
\label{fig:galactic_diffuse_schematic}
\end{figure*}
\begin{figure}[tbh!]
\centering
\includegraphics[width=0.5\textwidth]{measurement_schematic.pdf}
\caption{Schematic of the eight concentric circles which define the annuli (A1--A8) in the IEM, as described in the text. The ranges in Galactocentric radii are reported in the legend. Note that the full extension of A8 is not shown. Only A5--A8 contribute to the Galactic foreground emission for the field used in this analysis.}
\label{fig:measurement_schematic}
\end{figure}
\begin{figure*}[tbh!]
\centering
\includegraphics[width=0.49\textwidth]{HI_A5.pdf}
\includegraphics[width=0.49\textwidth]{HI_A6.pdf}
\includegraphics[width=0.49\textwidth]{HI_A7.pdf}
\includegraphics[width=0.49\textwidth]{HI_A8.pdf}
\includegraphics[width=0.49\textwidth]{H2_A5.pdf}
\includegraphics[width=0.49\textwidth]{H2_A6-A8.pdf}
\includegraphics[width=0.49\textwidth]{HII_A1-A8.pdf}
\includegraphics[width=0.49\textwidth]{Bremsstrahlung.pdf}
\caption{Gas-related components of the IEM ($\pi^0$-decay related to H~{\sc i}, H~{\sc ii}, and H$_2$, and Bremsstrahlung emission) integrated in the energy range 1--100 GeV. The components correspond to different annuli, as indicated above each plot. The color corresponds to the intensity, and is shown in logarithmic scale. The intensity level is for the initial GALPROP outputs, before tuning to the $\gamma$-ray data. The maps are shown in a Plate Carr\'{e}e projection, and the pixel size is 0.25 deg/pix. Overlaid is the ROI used in this analysis, as well as the GC region (see Figure~\ref{fig:galactic_diffuse_schematic}).}
\label{fig:maps_1}
\end{figure*}
\begin{figure}[tbh!]
\centering
\includegraphics[width=0.49\textwidth]{AIC_A5.pdf}
\includegraphics[width=0.49\textwidth]{AIC_A6-A7.pdf}
\includegraphics[width=0.49\textwidth]{AIC_A8.pdf}
\caption{Anisotropic Inverse Compton (AIC) components of the interstellar emission model for the MW in the energy range 1--100 GeV. The color corresponds to the intensity, and is shown in logarithmic scale. The intensity level is for the initial GALPROP outputs, before tuning to the $\gamma$-ray data. The maps are shown in a Plate Carr\'{e}e projection, and the pixel size is 0.25 deg/pix. The IC A6 and A7 components are highly degenerate, and so we combine them into a single map A6$+$A7. Overlaid is the ROI used in this analysis, as well as the GC region (see Figure~\ref{fig:galactic_diffuse_schematic}). Note that we use the anisotropic IC maps as our default component. Unless otherwise stated, all reference to the IC component implies the anisotropic formalism.}
\label{fig:maps_2}
\end{figure}
\begin{figure}[tbh!]
\centering
\includegraphics[width=0.49\textwidth]{AIC_Space_Ratio.pdf}
\includegraphics[width=0.40\textwidth]{AIC_Energy_Dependence.pdf}
\caption{The IEM employs the anisotropic IC sky maps, as discussed in the text. For comparison we show the differential flux ratio (AIC/IC) between the anisotropic (AIC) and isotropic (IC) inverse Compton components (all-sky). The top figure shows the spatial variation of the ratio at 1 GeV. The bottom figure shows the energy dependence of the ratio for four different spatial points, including M31. The ratio is close to unity towards the GC, increases with Galactic longitude and latitude, and reaches its maximum at mid-latitudes towards the outer Galaxy. Note that we use the anisotropic IC maps as our default component. Unless otherwise stated, all reference to the IC component implies the anisotropic formalism.}
\label{fig:AIC_ratio}
\end{figure}
Figure~\ref{fig:galactic_diffuse_schematic} shows the total IEM in the energy range 1--100 GeV. The model includes $\pi^0$-decay, inverse Compton (IC), and Bremsstrahlung components. Overlaid is the ROI used in this analysis. From the observed counts (Figure~\ref{fig:observed_counts}) we cut an $84^\circ\times84^\circ$ ROI, which is centered at M31. The green dashed circle is the 300 kpc boundary corresponding to M31's canonical virial radius (of $\sim$$21^\circ$), as also shown in Figure~\ref{fig:observed_counts}.
We label the field within the virial radius as FM31, and the region outside (and below latitudes of $-21.57^\circ$) we label as the tuning region (TR). Longitude cuts are made on the ROI at $l=168^\circ$ and $l=72^\circ$. The former cut is made to stay away from the outer Galaxy, where the gas distribution becomes more uncertain, owing to the method used for placing the gas at Galactocentric radii, i.e., Doppler-shifted 21-cm emission. The latter cut is made to prevent the observations from including an additional model component (i.e.~A4, as described below), which would further complicate the analysis.
The $\gamma$-ray\ maps generated by GALPROP correspond to ranges in Galactocentric radii, and their boundaries are shown in Figure~\ref{fig:measurement_schematic} (A1--A8), which also depicts an overhead view of the annuli. The line of sight for the ROI, as seen from the location of the Solar system, is indicated with dash-dot red lines. Maps for the individual processes are shown in Figures~\ref{fig:maps_1} and~\ref{fig:maps_2}.
\begin{figure}[tbh!]
\centering
\includegraphics[width=0.49\textwidth]{Isotropic_Systematics_AIC.pdf}
\caption{The spectrum of the isotropic component has a dependence on the IEM and the ROI used for the calculation, as well as the data set. For the M31 IEM (which uses the AIC sky maps) we calculate the \textbf{all-sky} (solid black line) isotropic component in the following region: $|b| \geq 30^\circ, \ 45^\circ \leq l \leq 315^\circ$. We also calculate the isotropic component in the different sky regions: \textbf{north}: $b\geq 30^\circ, \ 45^\circ \leq l \leq 315^\circ$ (orange dashed line); \textbf{south}: $b\leq -30^\circ,\ 45^\circ \leq l \leq 315^\circ$ (green dashed line); \textbf{east}: $|b|\geq 30^\circ, \ 180^\circ \leq l \leq 315^\circ$ (blue dashed line); and \textbf{west}: $|b|\geq 30^\circ, \ 45^\circ \leq l \leq 180^\circ$ (purple dashed line). See Table~\ref{tab:norm_isotropic} for the corresponding best-fit normalizations. Magenta triangles show the all-sky isotropic component for the M31 IEM derived using the isotropic IC formalism. The brown squares show the official FSSC isotropic spectrum (iso\_P8R2\_CLEAN\_V6\_v06). The gray band is our calculated isotropic systematic uncertainty for the IG IEM, which uses the isotropic IC formalism (see Appendix~\ref{sec:IG_IEMs}).}
\label{fig:Isotropic_Sytematics}
\end{figure}
The H~{\sc i}\ maps GALPROP employs are based on LAB\footnote{The Leiden/Argentine/Bonn Milky Way H~{\sc i}\ survey} + GASS\footnote{The Parkes Galactic All-Sky Survey} data, which for our ROI corresponds to LAB data only~\citep{kalberla2005leiden}. We note that there is a newer EBHIS\footnote{The Effelsberg-Bonn H~{\sc i}\ Survey} survey that covers the whole northern sky, but for our purposes the LAB survey suffices; moreover, the development of new H~{\sc i}\ maps for GALPROP based on the EBHIS survey would require a dedicated study. The H~{\sc i}-related $\gamma$-ray emission depends on the H~{\sc i}\ column density, which in turn depends on the spin temperature of the gas. We assume a uniform spin temperature of 150 K. The gas is placed at Galactocentric radii based on the Doppler-shifted velocity and Galactic rotation models. FM31 has significant emission associated with H~{\sc i}\ gas. The emission is dominated by A5, with further contribution from A6--A7.
On the other hand, there is very little contribution from H$_2$, which is concentrated primarily along the Galactic disk. The emission in FM31 only comes from A5. The 2.6 mm line of the $^{12}\rm{CO}$ molecular $J = 1 \rightarrow 0$ transition is used as a tracer of H$_2$, assuming a proportionality between the integrated line intensity of CO, $W(\rm{CO})$, and the column density of H$_2$, $N(\rm{H_2})$, given by the factor $X_{\rm{CO}}$. We use the $X_{\rm{CO}}$ values from~\citet{TheFermi-LAT:2015kwa}, which are tabulated at different Galactocentric radii with power-law interpolation. In particular, the values relevant for this analysis are $1.4\times10^{20}$, $7.2\times10^{19}$, and $7.0\times10^{20} \ \mathrm{cm^{-2}\,(K\,km\,s^{-1})^{-1}}$, for radii 7.5, 8.7, and 11.0 kpc, respectively.
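Explicitly, the assumed proportionality reads
\begin{equation}
N(\rm{H_2}) = \it{X}_{\rm CO}\, W(\rm{CO}),
\end{equation}
so the adopted $X_{\rm{CO}}$ values directly set the normalization of the H$_2$-related $\gamma$-ray emission in each annulus.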
The foreground emission from H~{\sc ii}\ is subdominant. Modeling of this component is based on pulsar dispersion measurements. We use the model from~\citet{gaensler2008vertical}.
The distribution of He in the interstellar gas is assumed to follow that of hydrogen, with a He/H ratio of 0.11 by number. Heavier elements in the gas are neglected.
Our model also accounts for the dark neutral medium (DNM), or dark gas, which is a component of the interstellar medium that is not well traced by 21-cm emission or CO emission, as described in~\citet{grenier2005unveiling}, \citet{Ackermann:2012pya}, and~\citet{Acero:2016qlg}. For any particular region the DNM comprises unknown fractions of cold dense H~{\sc i}\ and CO-free or CO-quiet H$_2$. Details for the determination of the DNM component are described in~\citet{Ackermann:2012pya}.
In summary, a template for the DNM is constructed by creating a map of ``excess'' dust column density $\rm{E(B-V)_{res}}$. A gas-to-dust ratio is obtained for both H~{\sc i}\ and CO using a linear fit of the N(H~{\sc i}) map and W(CO) map to the $\rm{E(B-V)}$ reddening map of~\citet{schlegel1998maps}. In general, the method is all-sky, and a constant gas-to-dust ratio is assumed throughout the Galaxy. Subtracting the correlated parts from the total dust results in the residual dust emission, $\rm{E(B-V)_{res}}$, which is then associated with the DNM. In the current study the DNM is incorporated into the H~{\sc i}\ templates; see \citet{Ackermann:2012pya} for details.
The IC component arises from low-energy photons of the Galactic interstellar radiation field (ISRF) that are up-scattered by CR electrons and positrons. The ISRF (optical, infrared, and cosmic microwave background) results from the emission by stars, together with the scattering, absorption, and re-emission of absorbed starlight by dust in the interstellar medium. The ISRF is highly anisotropic since it is dominated by the radiation from the Galactic plane. An observer in the Galactic plane thus sees mostly head-on scatterings even if the distribution of the CR electrons is isotropic. This is especially evident when considering inverse Compton scattering by electrons in the halo, i.e.\ the diffuse emission at high Galactic latitudes.
We employ the anisotropic formalism of the IC component \citep{Moskalenko:1998gw}. From the GALPROP code we use the standard ISRF model file (standard.dat) and standard scaling factors of 1.0 for the optical, infrared, and microwave components. In Figure~\ref{fig:AIC_ratio} we show the differential flux ratio (AIC/IC) between the anisotropic (AIC) and isotropic (IC) inverse Compton components (all-sky). The top figure shows the spatial variation of the ratio at 1 GeV. The ratio is close to unity towards the GC, increases with Galactic longitude and latitude, and reaches its maximum at mid-latitudes towards the outer Galaxy. The bottom figure shows the energy dependence of the ratio for four different spatial points, including M31. Note that unless otherwise stated, \emph{all reference to the IC component implies the anisotropic formalism.} Also, the $\gamma$-ray sky maps for IC A6 and A7 are highly degenerate, and so we combine them into a single map, A6$+$A7.
The IC component anti-correlates with the isotropic component. The isotropic component includes unresolved extragalactic diffuse emission, residual instrumental background, and possibly contributions from other Galactic components which have a roughly isotropic distribution. The spectrum of the isotropic component depends on the IEM and the ROI used for the calculation. The spectrum also depends on the data set, since the residual instrumental background differs between data sets. We calculate the isotropic component self-consistently with the M31 IEM, and the spectrum is shown in Figure~\ref{fig:Isotropic_Sytematics}. Table~\ref{tab:norm_isotropic} gives the corresponding best-fit normalizations for the diffuse components.
The main calculation is performed over the full sky excluding regions around the Galactic plane and the Inner Galaxy: $|b| \geq 30^\circ, \ 45^\circ \leq l \leq 315^\circ$. We note that even though it is not actually an all-sky fit, we refer to it as ``all-sky'' for simplicity hereafter. In the fit, the 3FGL sources and the Sun and Moon templates are held fixed, the \citet{wolleben2007new} component (a two-component spatial template for Loop I) is included, the all-sky $\pi^0$-decay and (anisotropic) IC normalizations are scaled, and the all-sky Bremsstrahlung is held fixed. In addition, we calculate the isotropic component in the different sky regions: north, south, east, and west, as detailed in Figure~\ref{fig:Isotropic_Sytematics}. Also shown are the isotropic components resulting from the M31 IEM using the isotropic IC formalism, the FSSC IEM, and the IG IEM (which uses the isotropic IC formalism). At lower energies the intensities of the spectra calculated in the south and west (both regions associated with the M31 system) are lower than those of the spectra calculated in the north and east. Correspondingly, the IC normalizations are higher for the south and west. Interestingly, independently of the IEM used in the fit, the isotropic spectrum features a bump at $\sim$10 GeV.
\begin{deluxetable}{lcc}[tbh!]
\tablecolumns{3}
\tablewidth{0mm}
\tablecaption{Normalizations for Calculations of the Isotropic Component\label{tab:norm_isotropic}}
\tablehead{
\colhead{Region} &
\colhead{$\pi^0$} &
\colhead{AIC}}
\startdata
All-sky &1.319 $\pm$ 0.005 &1.55 $\pm$ 0.04 \\
North&1.430 $\pm$ 0.010 &1.14 $\pm$ 0.05 \\
South &1.284 $\pm$ 0.006 &1.86 $\pm$ 0.05 \\
East&1.397 $\pm$ 0.009 &1.07 $\pm$ 0.05\\
West&1.287 $\pm$ 0.006 &1.88 $\pm$ 0.05
\enddata
\tablecomments{See Figure~\ref{fig:Isotropic_Sytematics} for definition of the regions.}
\end{deluxetable}
\begin{figure}[tbh!]
\centering
\includegraphics[width=0.45\textwidth]{ROI_Model.pdf}
\includegraphics[width=0.45\textwidth]{Mask_300_North_Model.pdf}
\caption{Total model counts for the full ROI. For the tuning region (TR) we mask within the 300 kpc circle and latitudes above $-21.57^\circ$, as discussed in the text.}
\label{fig:TR}
\end{figure}
In general, the model contains inherent systematic uncertainties due to a number of different factors, including the correlations between the different model components, uncertainties related to the determination of the DNM, and the presence of any un-modeled spatial variation in the spin temperature, CR density, and/or ISRF density. These issues will be addressed throughout this analysis.
\subsection{Tuning the IEM} \label{sec:tuning}
Figure~\ref{fig:TR} shows the total model counts for the full ROI. The bottom panel shows the TR, for which we mask the 300 kpc circle around M31 and latitudes north of $-21.57^\circ$. The primary purpose of the TR is to fit the normalization of the isotropic component. The isotropic component by definition is an all-sky average, but it may have some local spatial variations, since the instrumental background may also vary over the sky. The TR is also used to set the initial normalizations of the IC components, since they are anti-correlated with the isotropic component.
The fit is performed by uniformly scaling each diffuse component as well as all 3FGL sources in the region. Note that the model includes all sources within $70^\circ$ of the ROI center, but only the sources in the TR are scaled in the fit. As a test, we also perform the fit keeping the 3FGL sources in the TR fixed, and we find that the best-fit normalizations of the diffuse components are not very sensitive to the scaling of the point sources. Likewise, it is not necessary to scale the point sources outside of the TR, which are included in order to account for the spillover of the instrumental PSF. The fit uses the spectral shape of the isotropic spectrum derived from the all-sky analysis. The H~{\sc ii}\ component is fixed to its GALPROP prediction, since it is subdominant compared to the other components. The Bremsstrahlung component possesses a normalization of 1.0 $\pm$ 0.6, consistent with the GALPROP prediction. In our further fits in the FM31 region these components remain fixed to their all-sky GALPROP predictions.
Figure~\ref{fig:flux_and_residuals_TR} shows the best-fit spectra and fractional count residuals resulting from the fit in the TR. The corresponding best-fit normalizations and integrated flux are reported in Table~\ref{tab:norm_TR}. The isotropic component possesses a normalization of 1.06 $\pm$ 0.04, consistent with the all-sky average. The H~{\sc i}\ $\pi^0$ A6 component shows a fairly high normalization with respect to the model prediction, which is likely related to the fact that it only contributes near the edge of the region.
The fractional residuals are fairly flat over the entire energy range, but somewhat worsen at higher energies, although they remain consistent with statistical fluctuations. We note that there does appear to be a subtle systematic bias in the fractional residuals, where the data are being over-modeled between $\sim$6--20 GeV and $\sim$50--100 GeV, with excess emission between $\sim$20--50 GeV. This may be due to the spectral shape of the 3FGL sources in the region not being properly accounted for. For the sources we use their spectral parameterizations rather than the binned data points, which may or may not be a good representation of the true spectra at high energies, where the statistical fluctuations are significant.
Figure~\ref{fig:TR_correlation} shows the correlation matrix\footnote{The correlation ($C$) of two parameters $A$ and $B$ is defined in terms of the covariance ($\mathrm{cov}$) and the standard deviations ($\sigma$): $C = \mathrm{cov}_{AB}\,(\sigma_A \sigma_B)^{-1}$.} for the fit. The isotropic component is anti-correlated with the IC components. The IC components are also anti-correlated with the H~{\sc i}\ A5 component. The H$_2${} component shows very little correlation with the other components, but its contribution is very minimal in the TR.
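For concreteness, the conversion from the fit covariance matrix to the plotted correlation matrix can be sketched as follows (a minimal illustration of the footnote definition; the function name is ours, not part of the analysis pipeline):
\begin{verbatim}
import numpy as np

def correlation_matrix(cov):
    # C_ij = cov_ij / (sigma_i * sigma_j), with sigma_i = sqrt(cov_ii),
    # following the definition given in the footnote.
    sigma = np.sqrt(np.diag(cov))
    return cov / np.outer(sigma, sigma)
\end{verbatim}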
\begin{figure}[tb!]
\centering
\includegraphics[width=0.49\textwidth]{flux_and_residuals_Mask_300_North_Tuning_Region.pdf}
\caption{Flux (upper panel) and fractional count residuals (lower panel) for the fit in the TR. The H~{\sc ii}\ component is fixed to its GALPROP prediction. The normalizations of all other diffuse components are freely scaled, as well as all 3FGL sources in the region. The residuals show fairly good agreement over the entire energy range.}
\label{fig:flux_and_residuals_TR}
\end{figure}
Figure~\ref{fig:spatial_residuals_TR} shows the spatial count residuals for three different energy bins, as indicated above each plot. The bins are chosen to coincide with positive residual emission which is observed in FM31, as discussed in Section~\ref{sec:FM31 Baseline}. Residuals are shown using a colormap from the colorcet package~\citep{kovesi2015good}.
Two notable features can be observed in the residuals. Near $(l,b)$$\approx$$(156^\circ, -35^\circ)$ a deep hole can be seen in the first energy bin. Comparing to the H~{\sc i}\ column density maps (see Figure~\ref{fig:maps_1}), this over-modeling is likely related to a feature in the gas. Note that the hole also contains a BL Lac object (3FGL J0258.0+2030). The second notable feature is located near $(l,b)$$\approx$$(84^\circ, -40^\circ)$. This is a flat-spectrum radio quasar (3FGL J2254.0+1608). As a test, these trouble regions were masked, and we find that they do not significantly impact the normalizations of the diffuse components. Otherwise the residual maps in all three energy bins are fairly smooth, exhibiting no obvious features.
\section{Analysis of the M31 Field} \label{sec:FM31 Baseline}
\subsection{Baseline Fit and Point Source Finding Procedure} \label{sec:psalgorithm}
The data set employed in this work is approximately two times larger than the one used to derive the 3FGL. Therefore, in conjunction with the baseline fit, we search for additional point sources in FM31 to account for any un-modeled point-like structure that may otherwise contribute to the residual emission. The procedure we employ is similar to the one developed in~\citet{TheFermi-LAT:2015kwa}. The point sources are initially modeled with their 3FGL parameterizations. A maximum likelihood fit is performed by freeing the normalizations of the 3FGL sources, as well as the H~{\sc i}- and H$_2$-related components. The top of FM31 also has a contribution from IC A8, and its normalization is freed in the fit. The normalizations of the isotropic and IC components (A5 and A6 -- A7) remain fixed to their best-fit values obtained in the TR. The H~{\sc ii}\ and Bremsstrahlung components are fixed to their GALPROP predictions. Note that the Bremsstrahlung component possesses a normalization of 1.0 $\pm$ 0.6 in the TR, consistent with the GALPROP prediction.
\begin{deluxetable}{lccc}
\tablecolumns{4}
\tablewidth{0mm}
\tablecaption{Baseline Values for the IEM Components in the TR \label{tab:norm_TR}}
\tablehead{
\colhead{Component} &
\colhead{Normalization} &
\colhead{Flux ($\times 10^{-9})$}&
\colhead{Intensity ($\times 10^{-8})$}\\
&
&
\colhead{(ph cm$^{-2}$ s$^{-1}$)} &
\colhead{(ph cm$^{-2}$ s$^{-1}$ sr$^{-1}$)}
}
\startdata
H~{\sc i}\ $\pi^0$, A5 &1.10 $\pm$ 0.03&439.4 $\pm$ 11.0 &153.1 $\pm$ 3.8 \\
H~{\sc i}\ $\pi^0$, A6 &5.0 $\pm$ 1.3 &10.6 $\pm$ 2.8 &3.7 $\pm$ 1.0 \\
H$_2$\ $\pi^0$, A5 &2.1 $\pm$ 0.1&12.6 $\pm$ 0.7 &4.4 $\pm$ 0.3 \\
Bremsstrahlung & 1.0 $\pm$ 0.6&100.4 $\pm$ 58.3&35.0 $\pm$ 20.3\\
IC, A5 &2.3 $\pm$ 0.1 &274.7 $\pm$ 14.0&95.7 $\pm$ 4.9 \\
IC, A6 -- A7&3.5 $\pm$ 0.4 &45.7 $\pm$ 4.8 &15.9 $\pm$ 1.7\\
Isotropic &1.06 $\pm$ 0.04 &248.1 $\pm$ 10.4 &86.4 $\pm$ 3.6
\enddata
\tablecomments{The normalizations of the diffuse components are freely scaled, as well as all 3FGL sources in the region. The fit uses the all-sky isotropic spectrum. Intensities are calculated by using the total area of the TR, which is 0.287 sr. Note that the reported errors are 1$\rm{\sigma}$ statistical only (and likewise for all tables).}
\end{deluxetable}
\begin{figure}
\centering
\includegraphics[width=0.4\textwidth]{TR_Correlation.pdf}
\caption{Correlation matrix for the fit in the TR. For brevity IC A6 -- A7 is labeled as ICA67, and the isotropic component is labeled as Iso.}
\label{fig:TR_correlation}
\end{figure}
\begin{figure*}
\centering
\includegraphics[width=0.33\textwidth]{Residuals_0-5_GeV_TR_coolwarm.pdf}
\includegraphics[width=0.33\textwidth]{Residuals_5-13_GeV_TR_coolwarm.pdf}
\includegraphics[width=0.33\textwidth]{Residuals_13-20_GeV_TR_coolwarm.pdf}
\caption{Spatial count residuals (data $-$ model) resulting from the fit in the TR for three different energy bands, as indicated above each plot. The energy bins are chosen to coincide with an excess which is later observed in the fractional energy residuals for the fit in FM31, as discussed in the text. The color scale corresponds to counts/pixel, and the pixel size is $0.2^\circ \times 0.2^\circ$. The images are smoothed using a $1^\circ$ Gaussian kernel. This value corresponds to the PSF (68\% containment angle) of \textit{Fermi}-LAT, which at 1 GeV is $\sim$$1^\circ$.}
\label{fig:spatial_residuals_TR}
\end{figure*}
\begin{deluxetable}{lccccc}[tbhp!]
\tablecolumns{6}
\tablewidth{0mm}
\tablecaption{New point sources for FM31 \label{tab:PS_FM31}}
\tablehead{
\colhead{Name} &
\colhead{TS} &
\colhead{$l$} &
\colhead{$b$} &
\colhead{Index} &
\colhead{Flux\,($\times 10^{-10})$ }\\
&
&
\colhead{(deg)} &
\colhead{(deg)} &
\colhead{$\alpha$}&
\colhead{(ph cm$^{-2}$ s$^{-1}$)}
}
\startdata
FM31\_1 & 34 & 124.58 & $-32.60$ & $2.61$ $\pm$ 0.34 & 2.9 $\pm$ 0.7\\
FM31\_2 & 31 & 122.66 & $-29.25$ & $2.78$ $\pm$ 0.33 & 2.8 $\pm$ 0.7 \\
FM31\_3 & 31 & 117.71 & $-26.83$ & $2.33$ $\pm$ 0.27 & 2.5 $\pm$ 0.6 \\
FM31\_4 & 29 & 131.86 & $-27.70$ & $2.14$ $\pm$ 0.24 & 1.9 $\pm$ 0.5 \\
FM31\_5 & 24 & 127.49 & $-9.62$ & $3.81$ $\pm$ 0.67 & 3.9 $\pm$ 0.9 \\
FM31\_6 & 23 & 129.91 & $-10.13$ & $3.09$ $\pm$ 0.39 & 3.4 $\pm$ 0.9 \\
FM31\_7 & 18 & 128.32 & $-10.58$ & $2.25$ $\pm$ 0.31 & 2.3 $\pm$ 0.8 \\
FM31\_8 & 18 & 111.53 & $-22.79$ & $3.32$ $\pm$ 0.55 & 2.7 $\pm$ 0.8 \\
FM31\_9 & 17 & 118.05 & $-31.02$ & $2.41$ $\pm$ 0.34 & 1.7 $\pm$ 0.6 \\
FM31\_10 & 17 & 119.73 & $-25.66$ & $4.26$ $\pm$ 1.26 & 2.1 $\pm$ 0.6 \\
FM31\_11 & 16 & 110.44 & $-25.71$ & $2.90$ $\pm$ 0.47 & 2.1 $\pm$ 0.7 \\
FM31\_12 & 15 & 108.73 & $-29.55$ & $2.17$ $\pm$ 0.36 & 1.5 $\pm$ 0.6\\
FM31\_13 & 14 & 126.34 & $-11.63$ & $3.12$ $\pm$ 0.57 & 2.4 $\pm$ 0.8 \\
FM31\_14 & 14 & 118.27 & $-9.50$ & $3.97$ $\pm$ 0.96 & 2.7 $\pm$ 0.9 \\
FM31\_15 & 13 & 110.61 & $-33.64$ & $3.90$ $\pm$ 0.95 & 1.8 $\pm$ 0.6 \\
FM31\_16 & 13 & 120.13 & $-30.65$ & $2.81$ $\pm$ 0.55 & 1.7 $\pm$ 0.6 \\
FM31\_17 & 12 & 133.80 & $-8.37$ & $2.29$ $\pm$ 0.44 & 1.7 $\pm$ 0.8\\
FM31\_18 & 11 & 126.84 & $-20.78$ & $2.23$ $\pm$ 0.37 & 1.3 $\pm$ 0.5 \\
FM31\_19 & 11 & 106.53 & $-28.95$ & $4.85$ $\pm$ 1.60 & 1.7 $\pm$ 0.6 \\
FM31\_20 & 11 & 116.65 & $-25.21$ & $5.39$ $\pm$ 1.48 & 1.6 $\pm$ 0.6 \\
FM31\_21 & 10 & 127.83 & $-27.92$ & $2.48$ $\pm$ 0.45 & 1.3 $\pm$ 0.5
\enddata
\tablecomments{The sources are fit with a power law spectral model $dN/dE \propto E^{-\alpha}$. The table gives the best-fit index, as well as the total flux, integrated between 1 GeV--100 GeV.}
\end{deluxetable}
\begin{figure}[tbh!]
\centering
\includegraphics[width=0.49\textwidth]{TS_Map_Initial.pdf}
\caption{The TS map is calculated after the baseline fit in FM31 (tuned). Overlaid are the additional point sources that we found using our point source finding procedure. Red crosses represent new sources with TS$\geq$25 and red slanted crosses represent new sources with 9$\leq$TS$<$25.}
\label{fig:TS_map}
\end{figure}
A wavelet transform is applied to the residual map to find additional point source candidates. We employ PGWave~\citep{1997ApJ...483..350D}, included in the {\it Fermi}--LAT\ ScienceTools, which finds the positions of the point source candidates according to a user-specified signal-to-noise criterion (we use 3$\sigma$) based on the assumption of a locally flat background. Since PGWave does not provide spectral information, we model the spectrum of each point source candidate with a power law function and determine the initial values of the parameters via a maximum likelihood fit in the field, while all other components are held constant.
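To illustrate the idea behind this step, the following is a schematic stand-in for the wavelet-based candidate search (not the actual PGWave code): it uses a difference-of-Gaussians filter, a standard approximation to the Mexican-hat wavelet, together with a Poisson noise estimate for a locally flat background; the function name, kernel widths, and window size are placeholders.
\begin{verbatim}
import numpy as np
from scipy.ndimage import gaussian_filter, maximum_filter

def find_source_candidates(counts, sigma_pix=5.0, snr_thresh=3.0):
    # Schematic wavelet-style source finder (not PGWave itself).
    counts = np.asarray(counts, dtype=float)
    # Difference of Gaussians approximates a Mexican-hat wavelet response.
    w = (gaussian_filter(counts, sigma_pix)
         - gaussian_filter(counts, 1.6 * sigma_pix))
    # Broadly smoothed map as the locally flat background estimate.
    bkg = gaussian_filter(counts, 4.0 * sigma_pix)
    # Poisson fluctuations of the background set the significance scale.
    snr = w / np.sqrt(np.maximum(bkg, 1e-9))
    # Keep local maxima above the signal-to-noise threshold (3 sigma).
    local_max = snr == maximum_filter(snr, size=int(4 * sigma_pix) + 1)
    return np.argwhere(local_max & (snr > snr_thresh))
\end{verbatim}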
The determination of the spectrum is further refined by performing additional maximum likelihood fits concurrently with the other components in the region, i.e.\ 3FGL point sources, H~{\sc i}\ A5--A7, and H$_2${} A5. All point sources within a 30$^\circ$ radius of the field center are included in the model; however, only sources within a 20$^\circ$ radius are fit. The extra padding is included to account for the instrumental PSF. Owing to the large number of point sources involved, the fit is performed iteratively starting with the point sources (and point source candidates) with largest significance of detection. All point source candidates with a test statistic (TS)\footnote{For a more complete explanation of the TS resulting from a likelihood fit see~\citet{1996ApJ...461..396M} and \url{https://fermi.gsfc.nasa.gov/ssc/data/analysis/documentation/Cicerone/Cicerone\_Likelihood/}} TS$\geq$9 are added to the model. Parameters for the additional point sources are summarized in Table~\ref{tab:PS_FM31}.
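For reference, the TS follows the standard likelihood-ratio definition of \citet{1996ApJ...461..396M},
\begin{equation}
\mathrm{TS} = 2\left(\log L - \log L_0\right),
\end{equation}
where $L$ is the maximum likelihood of the model including the candidate source and $L_0$ is that of the model without it.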
Figure~\ref{fig:TS_map} shows the TS map calculated after the initial fit in FM31, before finding additional point sources. To reduce computational time, all components are held fixed to their best-fit values obtained in the initial fit. The TS map is calculated using the \textit{gttsmap} function included in the ScienceTools. Note that we do not include an M31 template for the calculation. Overlaid on the map are the additional point sources that we found using our point source finding procedure. In total we found 4 sources with TS$\geq$25 (besides the M31 source), and 17 sources with 9$\leq$TS$<$25. A point source is found corresponding to the M31 disk, but this source is removed for the baseline fit, and no M31 component is included (likewise the M31 source is not listed in Table~\ref{tab:PS_FM31}). Many of the new sources are correlated with large-scale structures which are also visible in the residual maps, and they are likely spurious sources which are actually features in the diffuse emission.
Figure~\ref{fig:flux_and_residuals_FM31_Tuned} shows the final results for the flux and count residuals for the baseline fit in FM31, including additional point sources, with the normalizations of the isotropic and IC components fixed to their best-fit values obtained in the TR. The corresponding best-fit normalizations and integrated flux are reported in Table~\ref{tab:norm_FM31_Tuned}. Note that the reported errors are 1$\sigma$ statistical error only.
Below $\sim$5 GeV the emission is dominated by H~{\sc i}\ A5, IC A5, and the isotropic component, in order of highest to lowest. A cross-over then occurs, and above $\sim$5 GeV the order is reversed. The 3FGL sources also become more dominant at higher energies. The cumulative spectrum of the additional point sources is consistent with that of the 3FGL sources, although the flux is roughly an order of magnitude less.
\begin{figure}[tb!]
\centering
\includegraphics[width=0.49\textwidth]{flux_and_residuals_FM31_Tuned.pdf}
\caption{Flux (upper panel) and fractional count residuals (lower panel) for the fit in FM31 (tuned). The H~{\sc ii}\ and Bremsstrahlung components are fixed to their GALPROP predictions. The normalizations of the IC (A5 and A6 -- A7) and isotropic components are held fixed to the values obtained in the tuning region. The normalizations of the H~{\sc i}- and H$_2$-related components are fit to the $\gamma$-ray data in FM31, as well as 3FGL sources within $20^\circ$ of M31, and additional point sources which we find using our point source finding procedure. Note that the top of FM31 has contribution from IC A8, and its normalization is also freed in the fit. The fractional residuals show an excess between $\sim$3--20 GeV reaching a level of $\sim$4\% (error bars show 1$\sigma$ statistical error). Above and below this range the data are being over-modeled as the fit tries to balance the excess with the negative residuals. This is in contrast to the fit in the TR, which shows fairly good agreement over the entire energy range. For reference, the residuals (data $-$ model) are also plotted in the upper panel (faint gray band).}
\label{fig:flux_and_residuals_FM31_Tuned}
\end{figure}
The fractional residuals show an excess between $\sim$3--20 GeV at the level of $\sim$4\%, and the data are somewhat over-modeled above and below this range. The over-modeling is expected, as the fit tries to balance the excess with the negative residuals. This is in contrast to the TR, which shows fairly good agreement over the entire energy range. The normalizations of H~{\sc i}\ A5 and A6 are low with respect to the GALPROP predictions, and likewise with respect to the values obtained in the TR and the all-sky fit. The normalization of H~{\sc i}\ A7 is high with respect to the GALPROP prediction. The normalization of H$_2${} is also high, but its contribution is minimal in FM31.
The spatial count residuals (data $-$ model) resulting from the baseline fit are shown in Figures~\ref{fig:spatial_residuals_FM31_tuned} and~\ref{fig:spatial_residuals_gray_FM31_tuned}. The residuals are integrated in three different energy bins, as indicated above each plot. The energy bins are chosen to coincide with the positive residual emission observed in the fractional energy residuals. The residuals show structured excesses and deficits. In the first energy bin a large arc structure is observed. The upper-left corner shows bright excess emission, which extends around the field towards the projected position of M33. This structure is similar to what is seen in the TS map (Figure~\ref{fig:TS_map}). Positive residual emission is also observed at the position of the M31 disk. In addition, the first energy bin shows deep over-modeling towards the top of the map and around the M31 disk. The second energy bin shows positive residual emission which is roughly uniform throughout the field, although the arc structure is also visible. In the third energy bin some holes can be seen corresponding to poorly modeled 3FGL sources, but otherwise no obvious structures can be identified.
Figure~\ref{fig:spatial_residuals_gray_FM31_tuned} shows the same spatial residuals in gray scale, intentionally saturated in order to bring out weaker features. Overlaid are the point sources in the region, both 3FGL (green markers) and additional sources found in this analysis (red markers). Most of the additional sources are correlated with the arc structure. A majority of the 3FGL sources are AGN and are modeled with power-law (PL) spectra. We attempted to optimize the 3FGL spectra by fitting with a LogParabola spectral model, but this did not significantly change the positive residual emission, as discussed further in Appendix~\ref{sec:different_IEMs}.
\begin{deluxetable}{lccc}[tbh!]
\tablecolumns{4}
\tablewidth{0mm}
\tablecaption{Baseline Values for the IEM Components in FM31 (Tuned) \label{tab:norm_FM31_Tuned}}
\tablehead{
\colhead{Component} &
\colhead{Normalization} &
\colhead{Flux ($\times 10^{-9})$}&
\colhead{Intensity ($\times 10^{-8})$}\\
&
&
\colhead{(ph cm$^{-2}$ s$^{-1}$)} &
\colhead{(ph cm$^{-2}$ s$^{-1}$ sr$^{-1}$)}
}
\startdata
H~{\sc i}\ $\pi^0$, A5 &0.82 $\pm$ 0.01&149.7 $\pm$ 2.5 &63.6 $\pm$ 1.1 \\
H~{\sc i}\ $\pi^0$, A6 &0.1 $\pm$ 0.2&1.1 $\pm$ 2.4 & 0.5 $\pm$ 1.0 \\
H~{\sc i}\ $\pi^0$, A7 &3.2 $\pm$ 0.4&17.1 $\pm$ 2.0 & 7.3 $\pm$ 0.9 \\
H$_2$\ $\pi^0$, A5 &2.9 $\pm$ 0.3& 3.9 $\pm$ 0.4&1.7 $\pm$ 0.2 \\
IC, A8 & 61.3 $\pm$ 13.0 &11.3 $\pm$ 2.4& 4.8 $\pm$ 1.0
\enddata
\tablecomments{The normalizations of the isotropic and IC components (A5 and A6 -- A7) are held fixed to their best-fit values obtained in the TR. The normalizations of the $\pi^0$-related (H~{\sc i}\ and H$_2$) components are fit to the $\gamma$-ray data in FM31. Note that the top of FM31 has contribution from IC A8, and its normalization is also freely scaled. We also fit all 3FGL sources within $20^\circ$ of M31, as well as additional point sources which we find using our point source finding procedure. Intensities are calculated by using the total area of FM31, which is 0.2352 sr. Note that the reported errors are 1$\rm{\sigma}$ statistical only (and likewise for all tables).}
\end{deluxetable}
\begin{figure*}[tbh!]
\centering
\includegraphics[width=0.33\textwidth]{Residuals_0_5_FM31_coolwarm.pdf}
\includegraphics[width=0.33\textwidth]{Residuals_5_13_FM31_coolwarm.pdf}
\includegraphics[width=0.33\textwidth]{Residuals_13_20_FM31_coolwarm.pdf}
\caption{Spatial count residuals (data $-$ model) resulting from the fit in FM31 (tuned) for three different energy bands, as indicated above each plot. The energy bins are chosen to coincide with the excess observed in the fractional residuals. The color scale corresponds to counts/pixel, and the pixel size is $0.2^\circ \times 0.2^\circ$. The images are smoothed using a $1^\circ$ Gaussian kernel. This value corresponds to the PSF (68\% containment angle) of \textit{Fermi}-LAT, which at 1 GeV is $\sim$$1^\circ$. For reference, the position of M33, $(l,b) = (133.61^\circ, -31.33^\circ)$, is shown with a yellow triangle.}
\label{fig:spatial_residuals_FM31_tuned}
\end{figure*}
\begin{figure*}[tbh!]
\centering
\includegraphics[width=0.33\textwidth]{Residuals_0_5_gray_FM31.pdf}
\includegraphics[width=0.33\textwidth]{Residuals_5_13_gray_FM31.pdf}
\includegraphics[width=0.33\textwidth]{Residuals_13-20_gray_FM31.pdf}
\caption{Same residual maps as shown in Figure~\ref{fig:spatial_residuals_FM31_tuned}. Here we show the maps in gray scale, and intentionally saturate the images to bring out weaker features. Overlaid are the point sources in the region. Crosses show sources with TS$\geq$25 and slanted crosses show sources with 9$\leq$TS$<$25. Fermi 3FGL sources are shown in green, and new sources found in this analysis are shown in red.}
\label{fig:spatial_residuals_gray_FM31_tuned}
\end{figure*}
\begin{figure*}[tbh!]
\centering
\includegraphics[width=0.33\textwidth]{Residuals_all_FM31_A5_gas_filled.pdf}
\includegraphics[width=0.33\textwidth]{Residuals_all_FM31_A6_gas_filled.pdf}
\includegraphics[width=0.33\textwidth]{Residuals_all_FM31_A7_gas_filled.pdf}
\includegraphics[width=0.33\textwidth]{Residuals_all_FM31_A5_gas_open.pdf}
\includegraphics[width=0.33\textwidth]{Residuals_all_FM31_A6_gas_open.pdf}
\includegraphics[width=0.33\textwidth]{Residuals_all_FM31_A7_gas_open.pdf}
\includegraphics[width=0.33\textwidth]{Residuals_all_FM31_A5_gas_Disk.pdf}
\includegraphics[width=0.33\textwidth]{Residuals_all_FM31_A6_gas_Disk.pdf}
\includegraphics[width=0.33\textwidth]{Residuals_all_FM31_A7_gas_Disk.pdf}
\caption{\textbf{Top Row:} H~{\sc i}\ column density contours for A5, A6, and A7, as indicated above each plot. For reference, a yellow circle ($0.4^\circ$) centered at M31 is overlaid, and a yellow triangle is overlaid at the position of M33. The units are $10^{20} \ \mathrm{cm^{-2}}$, and the levels are indicated on the maps. \textbf{Middle Row:} The same H~{\sc i}\ column density contours are overlaid on the residual maps for FM31. The maps are integrated over the entire energy range 1--100 GeV. The residual emission is observed to be correlated with the column densities. In addition, the column densities of A6 and A7 are observed to be correlated with the major axis of M31 (the position angle of M31 is 38$^\circ$). \textbf{Bottom Row:} The same maps as for the middle row but for a $5^\circ$ radius centered at M31. Contours for the IRIS 100 $\mu$m map of M31 are overlaid. The levels shown range from 6 to 22 MJy sr$^{-1}$. Also overlaid are the regions corresponding to the two main cuts (space and velocity) which are made on the underlying gas maps when constructing the MW IEM, as detailed in the text. Lastly, we overlay the 3FGL sources (magenta crosses) in the region with TS$\geq$25. In particular, we consider the two point sources located closest to the M31 disk, since we are interested in the true morphology of the M31 emission. The source located to the right of the disk (3FGL J0040.3+4049) is a blazar candidate and has an association. The source located to the left of the disk (3FGL J0049.0+4224) is unassociated.}
\label{fig:gas_column_densities}
\end{figure*}
\subsection{Analysis of the Galactic H~{\sc i}-related Emission in FM31} \label{sec:Galactic_gas_analysis}
\begin{figure}[tbh!]
\centering
\includegraphics[width=0.49\textwidth]{flux_and_residuals_FM31_True.pdf}
\caption{Additional freedom is given to the baseline fit. The IC components are fit simultaneously with the other contributing diffuse components and point sources. The isotropic component remains fixed to its value obtained in the TR (1.06).}
\label{fig:flux_and_residuals_true}
\end{figure}
The structured excesses and deficits are an indication that the foreground emission may not be accurately modeled. In particular, the large arc structure observed in the first energy bin points to poorly modeled H~{\sc i}\ gas in the line of sight. The H~{\sc i}-related $\gamma$-ray emission depends on the column density of the gas, which in turn depends on the spin temperature. For this analysis the spin temperature is assumed to have a uniform value of 150 K; in reality, however, it may vary over the region.
To further investigate the systematic uncertainty relating to the characterization of H~{\sc i}\ in the line of sight, we first compare the residual maps to the column densities for A5--A7, as shown in Figure~\ref{fig:gas_column_densities}. For visual clarity, the top row shows the column density filled contour maps. The units are $10^{20} \ \mathrm{cm}^{-2}$, and the levels are indicated on the maps. The second row shows the H~{\sc i}\ contours overlaid on the residual map integrated between 1--100 GeV. The residual emission is observed to be correlated with the column densities. In addition, the column densities of A6 and A7 are observed to be correlated with the major axis of M31 (the position angle of M31 is 38$^\circ$).
The last row shows the same maps as the middle row, but for a $5^\circ$ radius centered at M31. The IRIS 100 $\mu$m map of M31 is overlaid. Also overlaid are the regions corresponding to the two main spatial cuts which are made on the underlying H~{\sc i}\ maps when constructing the MW IEM. The spatial cuts correspond to cuts in velocity space, where the velocity is defined relative to the local standard of rest (LSR). Here we summarize all of the pertinent cuts made to the underlying H~{\sc i}\ gas maps (a schematic implementation of these selections is sketched after the list):
\begin{itemize}
\item[$\circ$] M31 cut (solid red box in Figure~\ref{fig:gas_column_densities}):\\
$119^\circ \leq l \leq 123^\circ$, $-23.5^\circ \leq b \leq -19.5^\circ$,\\
$V_{\rm LSR} < -120 \ \mathrm{km \ s^{-1}}$;
\item[$\circ$] M31 cut (dashed green box in Figure~\ref{fig:gas_column_densities}):\\
$121^\circ \leq l \leq 123^\circ$, $-22^\circ \leq b \leq -19.5^\circ$,\\
$-120 \ \mathrm{km \ s^{-1}} < V_{\rm LSR} < -50 \ \mathrm{km \ s^{-1}}$;
\item[$\circ$] M33 cut:\\ $132.5^\circ < l < 134.5^\circ$, $-33^\circ < b < -30^\circ$,\\
$-460 \ \mathrm{km \ s^{-1}} \leq V_{\rm LSR} \leq -60 \ \mathrm{km \ s^{-1}}$;
\item[$\circ$] Anything above a given height $z$ is assumed to be local gas (A5). The height is 1 kpc for $R < 8$ kpc, but then increases linearly with $R$ with a slope of 0.5 kpc/kpc. The cut is applied after determining the radial distance with the rotation curve and obtaining an estimate of $z$;
\item[$\circ$] Everything with $|V_{\rm LSR}| > 170 \ \mathrm{km \ s^{-1} \ and} \ |b|>5^\circ$ is considered to be extragalactic;
\item[$\circ$] Everything with $V_{\rm LSR} < -100 \ \mathrm{km \ s^{-1} \ and} \ |b|>30^\circ$ is considered to be extragalactic.
\end{itemize}
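As referenced above, the box and velocity selections can be summarized schematically as follows (a minimal sketch; the helper name is ours, and the local-gas height cut is omitted since it requires the full rotation curve):
\begin{verbatim}
import numpy as np

def is_extragalactic(v_lsr, l, b):
    # v_lsr in km/s; l, b in degrees. Implements the M31/M33 boxes and
    # the high-velocity criteria from the list above.
    m31_a = ((119 <= l) & (l <= 123) & (-23.5 <= b) & (b <= -19.5)
             & (v_lsr < -120))
    m31_b = ((121 <= l) & (l <= 123) & (-22 <= b) & (b <= -19.5)
             & (-120 < v_lsr) & (v_lsr < -50))
    m33 = ((132.5 < l) & (l < 134.5) & (-33 < b) & (b < -30)
           & (-460 <= v_lsr) & (v_lsr <= -60))
    high_v = (np.abs(v_lsr) > 170) & (np.abs(b) > 5)
    neg_v = (v_lsr < -100) & (np.abs(b) > 30)
    return m31_a | m31_b | m33 | high_v | neg_v
\end{verbatim}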
Note that these are the same cuts that are made for the official FSSC IEM. It was pointed out in~\citet{Ackermann:2017nya} that for $-50$ km s$^{-1}$$ < V_{\rm LSR} < -30$ km s$^{-1}$, foreground emission from the MW blends with the remaining signal from M31 at the north-eastern\footnote{For all directions relating to M31, north is up, and east is to the left.} tip of M31, and it is estimated that on some lines of sight in this direction up to $\sim$40\% of the M31 signal might have been incorporated in the MW IEM. Moreover, there may be additional H~{\sc i}\ gas in M31's outer regions which is wrongly assigned to the MW, as discussed further in Section~\ref{sec:gas_related_emission}. Overall, the cuts (velocity and space) made to the underlying H~{\sc i}\ maps may be introducing systematics in the morphology of the extended M31 emission.
Also shown in Figure~\ref{fig:gas_column_densities} are the 3FGL sources in the region with TS$\geq$25. In particular, we consider the two point sources located closest to the M31 disk, since we are ultimately interested in ascertaining the true morphology of the M31 emission. The source located to the right of the disk (3FGL J0040.3+4049) is a blazar candidate and has an association. The source located to the left of the disk (3FGL J0049.0+4224) is unassociated. We identify this source as potentially spurious, in that it may actually be part of a larger diffuse structure.
Because of the poor data--model agreement and the poor description of the H~{\sc i}-related components, we allow for additional freedom in the fit by also scaling the IC components (A5 and A6--A7) in FM31. The fit is otherwise performed just as for the baseline fit. Figure~\ref{fig:flux_and_residuals_true} shows the resulting flux and residuals, and the corresponding best-fit normalizations are reported in Table~\ref{tab:norm_FM31_True}. Overall, a better fit is obtained: the likelihood value is $-\log L = 143268$, compared to $-\log L = 143302$ for the tuned fit.
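Assuming the quoted values share the same logarithm convention, the improvement corresponds to
\begin{equation}
2\,\Delta\log L = 2\,(143302 - 143268) = 68
\end{equation}
for nominally two additional free parameters (the IC A5 and IC A6--A7 normalizations).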
The H~{\sc i}\ A5 component obtains a normalization of 1.04, which is comparable to the value obtained in the TR, and close to the GALPROP model prediction. The normalization of H~{\sc i}\ A6 is still low, at $\sim$40\% of the model prediction. We note that the H~{\sc i}\ A6 flux is less than that of H~{\sc i}\ A7, which is due to the fact that the radial extension of A6 is 1.5 kpc, compared to 5 kpc for A7. The normalization of IC A5 is consistent with the value obtained in the TR. On the other hand, the normalization of IC A6--A7 has a value of 0.9 $\pm$ 0.3, compared to the TR value of 3.5 $\pm$ 0.4. The normalization of IC A8 is very high; however, because IC A8 contributes only at the very top of the field, it is poorly constrained, which allows its normalization to become large, while its overall effect on the residuals remains subdominant. Despite the additional freedom, the model is unable to flatten the positive residual emission between $\sim$3--20 GeV, which actually becomes slightly more pronounced. The spatial residuals for this fit are qualitatively consistent with the residuals in Figure~\ref{fig:spatial_residuals_FM31_tuned}. The correlation matrix for the fit is given in Figure~\ref{fig:baseline_true_correlation}.
As already discussed, the H~{\sc i}\ column density depends on the value of the spin temperature, which is used to convert the observed 21-cm brightness temperature to column densities. In general the spin temperature may have some spatial variation. The CR density may also vary over the field, and likewise for the ISRF density. To account for these possibilities we divide FM31 into three equal subregions: top, middle, and bottom. Each subregion is then further divided equally into right and left. In each subregion we rescale the diffuse components. The point sources remain fixed to the best-fit values obtained in the baseline fit (with IC scaled).
The fractional energy residuals that result from this rescaling are shown in Figure~\ref{fig:top_middle_bottom}. The black data points show the residuals resulting from the baseline fit (over the entire field) calculated in the given subregion. The top row shows the residuals for the fit performed in the top, middle, and bottom regions, respectively. The second and third rows show the results for rescaling the normalizations in the regions which are further divided into right and left.
Even with these smaller subregions the model is unable to flatten the positive residual emission between $\sim$3--20 GeV. Note that for many of these subregions the best-fit normalizations of the diffuse components resulting from the rescaling are not very physical, as some of the components go to zero: they are not well constrained, and the fit simply tries to maximize the likelihood. Nevertheless, the model is still unable to fully flatten the residuals.
\begin{deluxetable}{lccc}[tbh!]
\tablecolumns{4}
\tablewidth{0mm}
\tablecaption{Baseline Values for the IEM Components in FM31 (IC scaled) \label{tab:norm_FM31_True}}
\tablehead{
\colhead{Component} &
\colhead{Normalization} &
\colhead{Flux ($\times 10^{-9})$}&
\colhead{Intensity ($\times 10^{-8})$}\\
&
&
\colhead{(ph cm$^{-2}$ s$^{-1}$)} &
\colhead{(ph cm$^{-2}$ s$^{-1}$ sr$^{-1}$)}
}
\startdata
H~{\sc i}\ $\pi^0$, A5 &1.04 $\pm$ 0.04&189.3 $\pm$ 6.9 &80.5 $\pm$ 2.9 \\
H~{\sc i}\ $\pi^0$, A6 &0.4 $\pm$ 0.2 &4.4 $\pm$ 2.5 &1.9 $\pm$ 1.0 \\
H~{\sc i}\ $\pi^0$, A7 &2.9 $\pm$ 0.4 &15.8 $\pm$ 2.1 &6.7 $\pm$ 0.9 \\
H$_2$\ $\pi^0$, A5 &2.7 $\pm$ 0.3&3.7 $\pm$ 0.4 &1.6 $\pm$ 0.2 \\
IC, A5 &2.4 $\pm$ 0.1 &125.0 $\pm$ 7.0 &53.1 $\pm$ 3.0 \\
IC, A6 -- A7&0.9 $\pm$ 0.3 &17.3 $\pm$ 6.4 &7.3 $\pm$ 2.7\\
IC, A8 & 80.5 $\pm$ 16.4 &14.8 $\pm$ 3.0&6.3 $\pm$ 1.3
\enddata
\tablecomments{The isotropic component is held fixed to the best-fit value obtained in the TR (1.06). All other diffuse sources and point sources are freely scaled in FM31, including the IC components. This is in contrast to the FM31 tuned fit, where the IC components are held fixed to the best-fit values obtained in the TR. Intensities are calculated by using the total area of FM31, which is 0.2352 sr.}
\end{deluxetable}
\begin{figure}[tbh!]
\centering
\includegraphics[width=0.45\textwidth]{FM31_Baseline_True_Correlation.pdf}
\caption{Correlation matrix for the FM31 baseline fit with the IC components scaled.}
\label{fig:baseline_true_correlation}
\end{figure}
\begin{figure*}[tbh!]
\centering
\includegraphics[width=0.33\textwidth]{top.pdf}
\includegraphics[width=0.33\textwidth]{middle.pdf}
\includegraphics[width=0.33\textwidth]{Bottom.pdf}
\includegraphics[width=0.33\textwidth]{top_left.pdf}
\includegraphics[width=0.33\textwidth]{middle_left.pdf}
\includegraphics[width=0.33\textwidth]{bottom_left.pdf}
\includegraphics[width=0.33\textwidth]{top_right.pdf}
\includegraphics[width=0.33\textwidth]{middle_right.pdf}
\includegraphics[width=0.33\textwidth]{bottom_right.pdf}
\caption{Fractional residuals calculated in different spatial regions. The field is evenly divided into top, middle, and bottom. Each slice is then further divided into right and left. The regions are indicated above each plot. Black data points show the residuals resulting from the baseline fit (which is over the entire field, with IC scaled in addition to the other contributing components). We then rescale the diffuse components in the different subregions, masking the rest of the region, and keeping the point sources fixed to their baseline values (green data points). This is done to allow for a spatially varying spin temperature and/or CR and ISRF densities, which would in turn change the normalizations of the $\gamma$-ray components. Even in these smaller regions the diffuse components are unable to flatten the residuals, with the exception of the bottom right, which is fairly flat.}
\label{fig:top_middle_bottom}
\end{figure*}
\begin{figure*}[t]
\centering
\includegraphics[width=0.33\textwidth]{M31_Cloud_Positive_Residuals.pdf}
\includegraphics[width=0.33\textwidth]{M31_Cloud_Negative_Residuals.pdf}
\includegraphics[width=0.33\textwidth]{Full_Arc_Template.pdf}
\caption{The first two panels show the spatial count residuals integrated between 1--100 GeV, resulting from the baseline fit (see Figure~\ref{fig:flux_and_residuals_true}). In order to construct a template for the large arc extending from the top left corner to the projected position of M33 (arc template), we divide the total residual map into positive residuals (left) and negative residuals (middle). The maps show the geometry used to help facilitate the template construction (the green axes, circle, and ellipse), as detailed in the text. The corresponding geometrical parameters are given in Table~\ref{tab:template_parameters}. The resulting arc template is shown in the far right panel. In addition to fitting the full arc template, we also perform a variation of the fit in which the arc template is divided into a north component (arc north: $b > -16.5^\circ$) and a south component (arc south: $b \leq -16.5^\circ$), where the spectral parameters of each component are allowed to vary independently. The cut is made right below the bright emission in the upper-left corner, and it allows the north component to be at a different distance along the line of sight than the south component, as discussed in the text. The cyan triangle shows the projected position of M33.}
\label{fig:template_geometry}
\end{figure*}
Meanwhile, the residuals do become somewhat more uniformly distributed. For example, when the fit is performed over the entire field, the residuals in the top left are much more pronounced than in the top right. After rescaling in the different subregions, the top-left residuals are decreased (between $\sim$3--20 GeV), whereas the top-right residuals become somewhat more pronounced. The same general trend can be seen in most of the subregions. The residuals are fairly flat in the bottom right; however, the bottom left (which contains M33) shows positive residual emission.
\subsection{Arc Template} \label{sec:arc_template}
Thus far the model has been unable to flatten the positive residual emission observed between $\sim$3--20 GeV. Furthermore, the spatial residuals show structured excesses and deficits. These may be due to foreground MW gas that is not well traced by the 21-cm emission. Alternatively, or in addition, the positive residual emission may be related to the M31 system, for which no model components are currently included. We note that the residuals behave qualitatively the same even when masking the inner region of the M31 disk (0.4$^\circ$).
Our ultimate goal is to test for a $\gamma$-ray signal exhibiting spherical symmetry with respect to the center of M31, since there are numerous physical motivations for such a signal. However, before adding these components to the model, we employ a template approach to account for the arc-like feature observed in the spatial residuals, which may be related to foreground MW emission, and is not obviously related to the M31 system.
The first two panels in Figure~\ref{fig:template_geometry} show the spatial residuals integrated between 1--100 GeV, resulting from the baseline fit (see Figure~\ref{fig:flux_and_residuals_true}). In order to construct a template for the large arc extending from the top left corner to the projected position of M33 (arc template), we divide the total residual map into positive residuals (left) and negative residuals (middle). Overlaid is the geometry used to help facilitate the template construction. All geometry is plotted based on the general equation of an ellipse, which can be written as
\begin{equation}
\begin{split}
&a^{-2}\left\{(x-h)\cos\phi + (y-k)\sin\phi\right\}^2 \\
&+ \ b^{-2}\left\{(x-h)\sin\phi - (y-k)\cos\phi\right\}^2 = 1,
\end{split}
\end{equation}
where the center is given by $(h,k)$, $a$ and $b$ are the semi-major and semi-minor axes, respectively, and $\phi$ is the orientation angle of the ellipse. All geometrical parameters are given in Table~\ref{tab:template_parameters}. Note that the geometry corresponds to the $\gamma$-ray emission as observed in the stereographic projection, with the pole of the projection centered at M31. The plotted coordinate system (solid axes) is centered at M31 and oriented with respect to the position angle of the M31 disk ($38^\circ$). The large dashed green circle has a radius of $8.5^\circ$ ($R_{\rm tan}=117 \ \mathrm{kpc}$); its border facilitates the cut on the north-east side, and the radius is determined by the bright emission in the upper-left corner. The inner dashed ellipse facilitates the cut on the south-west side, following the natural curvature of the arc. Any emission not connected to the large arc is removed.
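For concreteness, these cuts can be expressed compactly in code. The following minimal Python sketch (the function names are ours, and it is simplified relative to the actual template construction, which additionally keeps only the positive residuals connected to the arc) evaluates the ellipse equation above with the parameters of Table~\ref{tab:template_parameters}:
\begin{verbatim}
import numpy as np

H, K = 121.17, -21.57  # center of the M31 geometry (deg)

def inside_ellipse(x, y, a, b, phi_deg, h=H, k=K):
    # General ellipse with semi-axes a, b and orientation phi
    phi = np.deg2rad(phi_deg)
    u = (x - h) * np.cos(phi) + (y - k) * np.sin(phi)
    v = (x - h) * np.sin(phi) - (y - k) * np.cos(phi)
    return (u / a) ** 2 + (v / b) ** 2 <= 1.0

def in_arc_region(x, y):
    # Inside the dashed circle (radius 8.5 deg; north-east cut)
    # but outside the inner dashed ellipse (south-west cut)
    in_circle = inside_ellipse(x, y, 8.5, 8.5, 38.0)
    in_inner = inside_ellipse(x, y, 8.5, 3.5, 38.0)
    return in_circle and not in_inner
\end{verbatim}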
\begin{deluxetable}{lccc}[tbh!]
\tablecolumns{4}
\tablewidth{0mm}
\tablecaption{Geometrical Parameters for the Arc Template\label{tab:template_parameters}}
\tablehead{
\colhead{Component} &
\colhead{2$a$ [deg]}&
\colhead{2$b$ [deg]}&
\colhead{$\phi$ [deg]}
}
\startdata
M31 position angle axis &25&0 &38\\
M31 perpendicular axis&25&0&128 \\
Dashed circle &17&17 &38\\
Dashed ellipse&17&7&38
\enddata
\tablecomments{M31 geometry is centered at $(h,k) = (121.17^\circ, -21.57^\circ)$. Angles are defined with respect to the positive $x$-axis (Cartesian plane), and they correspond to the major axis of the ellipse. Note that the geometry corresponds to the $\gamma$-ray emission as observed in the stereographic projection, with the pole of the projection centered at M31.}
\end{deluxetable}
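For reference, a minimal sketch of the stereographic projection with the pole at M31 is given below (Galactic coordinates in degrees; the overall scaling convention may differ from that used to produce the maps):
\begin{verbatim}
import numpy as np

def stereographic(l, b, l0=121.17, b0=-21.57):
    # Project (l, b) onto the tangent plane with the pole at M31;
    # returns (x, y) in degrees (approximate near the center).
    l, b, l0, b0 = map(np.deg2rad, (l, b, l0, b0))
    c = np.sin(b0)*np.sin(b) + np.cos(b0)*np.cos(b)*np.cos(l - l0)
    k = 2.0 / (1.0 + c)
    x = k * np.cos(b) * np.sin(l - l0)
    y = k * (np.cos(b0)*np.sin(b)
             - np.sin(b0)*np.cos(b)*np.cos(l - l0))
    return np.rad2deg(x), np.rad2deg(y)
\end{verbatim}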
\begin{figure*}[tbh!]
\centering
\includegraphics[width=0.4\textwidth]{intensity_and_residuals_MW_M33_Full_PL.pdf}
\includegraphics[width=0.4\textwidth]{intensity_and_residuals_MW_M33_North_South.pdf}
\caption{Spectra and fractional energy residuals resulting from the arc fit. \textbf{Left:} The full arc component is given a PL spectrum, and the normalization and index are fit simultaneously with the other components in the region, just as for the baseline fit. Black dashed lines show the H~{\sc i}\ A5 (top), A6 (bottom), and A7 (middle) components from the baseline fit (not the arc fit). Note that A7 has a greater radial extension than A6, and likewise a greater overall flux. Correspondingly, the gray markers (squares, circles, and triangles) show the H~{\sc i}\ A5--A7 spectra resulting from the arc fit. The blue solid line is the best-fit spectrum for the arc template. The bottom panel shows the remaining fractional residuals. For reference, the residuals (data -- model) are also plotted in the upper panel (faint gray band). \textbf{Right:} The arc template is given additional freedom by dividing it into north and south components. The arc components are given PLEXP spectral models, and the spectral parameters (normalization, index, and cutoff) are freely scaled with the other components. Downward pointing blue and green triangles give upper limits. Bands give the 1$\sigma$ error. The arc template is unable to flatten the excess between $\sim$3--20 GeV.}
\label{fig:M31_Arc_flux_and_Residuals}
\end{figure*}
\begin{deluxetable*}{lcccc}[tbh!]
\tablecolumns{5}
\tablewidth{0mm}
\tablecaption{Normalizations of the Diffuse Components, Integrated Flux, and Likelihoods for the Arc Fits \label{tab:arc_fit_normalizations}}
\tablehead{
\colhead{Component} &
\colhead{Arc Full (PL)} &
\colhead{Arc North and South (PLEXP)}&
\colhead{Flux ($\times 10^{-9})$}&
\colhead{Intensity ($\times 10^{-8})$}\\
&
&
&
\colhead{(ph cm$^{-2}$ s$^{-1}$)} &
\colhead{(ph cm$^{-2}$ s$^{-1}$ sr$^{-1}$)}
}
\startdata
H~{\sc i}\ $\pi^0$, A5 &0.74 $\pm$ 0.04 &0.75 $\pm$ 0.04& 137.3 $\pm$ 8.0 & 58.4 $\pm$ 3.4\\
H~{\sc i}\ $\pi^0$, A6 &1.1 $\pm$ 0.2 &1.2 $\pm$ 0.2 &11.7 $\pm$ 2.5&5.0 $\pm$ 1.1\\
H~{\sc i}\ $\pi^0$, A7 &3.0 $\pm$ 0.4&3.0 $\pm$ 0.4&16.2 $\pm$ 2.1 &6.9 $\pm$ 0.9\\
H$_2$\ $\pi^0$, A5 &2.6 $\pm$ 0.3 &2.7 $\pm$ 0.3 &3.7 $\pm$ 0.4 &1.6 $\pm$ 0.2 \\
IC, A5 &2.5 $\pm$ 0.1 &2.6 $\pm$ 0.1&134.2 $\pm$ 7.4&57.1 $\pm$ 3.1 \\
IC, A6 -- A7&1.6 $\pm$ 0.3 &1.5 $\pm$ 0.3&28.5 $\pm$ 6.4& 12.1 $\pm$ 2.7\\
IC, A8 &92.0 $\pm$ 17.0 &62.0 $\pm$ 18.2 &11.4 $\pm$ 3.3& 4.8 $\pm$ 1.4\\
$-\log L$&142972&142954&\nodata&\nodata
\enddata
\tablecomments{Columns 2--3 give the best-fit normalizations for the diffuse components. The last two columns report the total integrated flux and intensity between 1--100 GeV for the arc north and south fit, which is the fit with the best likelihood. Note that the normalizations for the diffuse components are comparable for both variations of the fit. The bottom row gives the resulting likelihood for each respective fit. Intensities are calculated by using the total area of FM31, which is 0.2352 sr.}
\end{deluxetable*}
\begin{deluxetable*}{lccccccc}[tbh!]
\tablecolumns{8}
\tablewidth{0mm}
\tablecaption{Results for the Arc Templates\label{tab:arc_fit_params}}
\tablehead{
\colhead{Template} &
\colhead{area} &
\colhead{TS} &
\colhead{Flux ($\times 10^{-9})$}&
\colhead{Intensity ($\times 10^{-8})$}&
\colhead{Counts}&
\colhead{Index}&
\colhead{Cutoff, $E_c$}\\
&
\colhead{(sr)}&
&
\colhead{(ph cm$^{-2}$ s$^{-1}$)} &
\colhead{(ph cm$^{-2}$ s$^{-1}$ sr$^{-1}$)} &
&
$\alpha$&
\colhead{(GeV)}
}
\startdata
Arc Full (PL) &0.080232 & 651&26.0 $\pm$ 1.4&32.4 $\pm$ 1.7 &6872& 2.38 $\pm$ 0.05&\nodata \\
Arc North (PLEXP) &0.033864& 457 &15.7 $\pm$ 1.4 & 46.4 $\pm$ 4.1 &4071& 2.0 $\pm$ 0.2&18.3 $\pm$ 14.8 \\
Arc South (PLEXP) &0.046368 &416 &12.0 $\pm$ 1.0 & 25.9 $\pm$ 2.2 &3210& 2.3 $\pm$ 0.1& 24.6 $\pm$ 19.7
\enddata
\tablecomments{The TS is defined as $-2\Delta\log L$, and it is the value reported by {\it pylikelihood} (a fitting routine from the {\it Fermi}--LAT{} ScienceTools package), without refitting. Fits are made with a power-law spectral model $dN/dE\propto E^{-\alpha}$ and with a model with an exponential cutoff, $dN/dE\propto E^{-\alpha} \exp{(-E/E_c)}$.}
\end{deluxetable*}
The resulting normalized template is shown in the far right panel of Figure~\ref{fig:template_geometry}. By adding the arc template to the model we obtain a cleaner view towards M31's outer halo, and we are able to make inferences regarding the origin of the arc structure. We test two variations of the fit. In one variation we add a single template for the full arc. The arc is given a PL spectral model and the spectral parameters (normalization and index) are fit simultaneously with the other components in the region, just as for the baseline fit. In the second variation of the fit, the arc template is divided into a north component (arc north: $b > -16.5^\circ$) and a south component (arc south: $b \leq -16.5^\circ$). The cut is made right below the bright emission in the upper-left corner. Both components are given PLEXP spectral models (power law with exponential cutoff), and the spectral parameters (normalization, index, and cutoff) of each component are allowed to vary independently. This allows the north component to be at a different distance along the line of sight than the south component, since different distances may correspond to different spectral parameters. Note that we also tried a number of other variations of the arc fit, all of which gave results similar to the two shown here.
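For reference, the two spectral forms used for the arc components are simply the following (a minimal sketch; the parameter names are ours):
\begin{verbatim}
import numpy as np

def pl(E, N0, alpha):
    # PL: dN/dE = N0 * E**(-alpha)
    return N0 * E ** (-alpha)

def plexp(E, N0, alpha, E_c):
    # PLEXP: dN/dE = N0 * E**(-alpha) * exp(-E / E_c)
    return N0 * E ** (-alpha) * np.exp(-E / E_c)
\end{verbatim}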
Results for the fits are given in Figure~\ref{fig:M31_Arc_flux_and_Residuals}. The top panels show best-fit spectra, and bottom panels show the remaining fractional residuals. For comparison, black dashed lines show the best-fit H~{\sc i}\ spectra that result from the baseline fit, as shown in Figure~\ref{fig:flux_and_residuals_true}. For visual clarity, we show just the arc template and gas-related components. Spectra for the other components are qualitatively consistent with the results shown in Figure~\ref{fig:flux_and_residuals_true}. The arc template is unable to flatten the positive residual emission between $\sim$3--20 GeV, but the split arc fit with PLEXP spectral models does provide flatter residuals above $\sim$20 GeV. The correlation matrix for the arc north and south fit is shown in Figure~\ref{fig:Arc_correlation}.
Table~\ref{tab:arc_fit_normalizations} gives the best-fit normalizations for the diffuse components for both fits, as well as the overall likelihoods. Note that the normalizations are comparable for both fit variations. The last two columns report the total integrated flux and intensity for the arc north and south fit, which has the best likelihood. The corresponding best-fit parameters for the arc template components are reported in Table~\ref{tab:arc_fit_params}. For the baseline fit (Figure~\ref{fig:flux_and_residuals_true}) the total integrated flux for H~{\sc i}\ A5 is (189.3 $\pm$ 6.9) $\times 10^{-9}$ ph cm$^{-2}$ s$^{-1}$. For the arc north and south fit the total integrated flux for H~{\sc i}\ A5 plus the arc flux is (165.0 $\pm$ 10.4) $\times 10^{-9}$ ph cm$^{-2}$ s$^{-1}$. Thus with the arc template the total H~{\sc i}\ A5 flux is decreased by $\sim$13\%. The flux is later increased when adding the M31-related components to the model, in addition to the arc template, as discussed in Section~\ref{sec:M31_components}. With the arc template the H~{\sc i}\ A6 normalization has a value close to the GALPROP prediction. The normalization for IC A8 remains high, but this is a weak component with contribution only towards the top of the field.
\begin{figure}[tbh!]
\centering
\includegraphics[width=0.45\textwidth]{Arc_north_and_south_Correlation.pdf}
\caption{The correlation matrix for the arc north (AN) and south (AS) fit.}
\label{fig:Arc_correlation}
\end{figure}
\begin{figure*}[tbh!]
\centering
\includegraphics[width=0.3\textwidth]{Residuals_bin_1_PLEXP_North_and_South_Arc_coolwarm_Fractional.pdf}
\includegraphics[width=0.3\textwidth]{Residuals_bin_2_PLEXP_North_and_South_Arc_coolwarm_Fractional.pdf}
\includegraphics[width=0.3\textwidth]{Residuals_bin_3_PLEXP_North_and_South_Arc_coolwarm_Fractional.pdf}
\caption{Spatial count residuals resulting from the arc fit. To give a sense of the deviations, here we show the fractional residuals, where we divide by the model counts for each pixel. The residuals are integrated in three energy bins, just as for the residuals in Figure~\ref{fig:spatial_residuals_FM31_tuned}. We show residuals from the arc north and south fit, with PLEXP spectral model. Residuals for the full arc fit with PL spectral model are very similar. The arc structure no longer dominates the residuals, as expected. The position of M33 is indicated with a yellow triangle, and the center of M31 is indicated with a $0.4^\circ$ open circle.}
\label{fig:M31_Arc_spatial_residuals}
\end{figure*}
\begin{figure}[tbh!]
\centering
\includegraphics[width=0.45\textwidth]{emissivity.pdf}
\caption{The average local (A5) emissivity per H atom. The solid gray curve comes from the baseline fit with IC scaled, and it gives the proper estimate of the emissivity in FM31. The dashed gray curve comes from the arc fit with PL spectral model, and it only includes the contribution from the H~{\sc i}\ A5 component, but not the emission associated with the arc. The blue data points (squares) are from \citet{Casandjian:2015hja}, and the corresponding error bars are systematic$+$statistical. The fit includes absolute latitudes between $10^\circ$--$70^\circ$. The data points for the different regions (red circles, green upward-pointing triangles, and yellow rightward-pointing triangles) are from~\citet{ackermann2012fermi}, and the corresponding error bars are statistical only (1$\sigma$). The teal band shows the total uncertainty (statistical$+$systematic) from the same analysis (from the erratum). The different regions are among the nearest molecular cloud complexes, within $\sim$300 pc from the solar system. We also plot the measurements from~\citet{abdo2009fermi} (black leftward-pointing triangles), as determined from a mid-latitude region in the third Galactic quadrant.}
\label{fig:emissivity}
\end{figure}
\begin{figure}[tbh!]
\centering
\includegraphics[width=0.37\textwidth]{FM31_dust_temperature.pdf}
\includegraphics[width=0.37\textwidth]{FM31_Reddening.pdf}
\caption{Top panel shows the dust temperature map for FM31, and the bottom panel shows the dust reddening map, from~\citet{schlegel1998maps}, as discussed in the text. Overlaid are contours for the arc template. Contours for the IRIS 100 $\mu$m map of M31 are also overlaid in the top panel. The cyan triangle shows the (projected) position of M33.}
\label{fig:FM31_dust}
\end{figure}
Spatial residuals resulting from the arc north and south fit are shown in Figure~\ref{fig:M31_Arc_spatial_residuals}. Results for the full arc fit are very similar. To give a sense of the deviations, we show the fractional residuals, where we divide by the model counts for each pixel. The residuals are divided into three energy bins, just as for the residuals in Figure~\ref{fig:spatial_residuals_FM31_tuned}. The arc structure no longer dominates the residuals, as expected. In the first energy bin bright emission can be seen at the center of the map, corresponding to the inner galaxy of M31. In addition, the residuals in the first bin still show structured excesses and deficits, possibly associated with emission from M31's outer disk and halo. The second energy bin coincides with the positive residual emission observed in the fractional energy residuals. The spatial distribution of the emission is roughly uniform throughout the field, although small-scale structures can be observed. The third energy bin is roughly uniform with no obvious features. The distribution of the residual emission in FM31 is further quantified in Section~\ref{sec:symmetry}, where we consider the symmetry of the excess.
In Figure~\ref{fig:emissivity} we plot the measured local average emissivity per H atom, resulting from all fits in FM31. The solid gray curve comes from the baseline fit with IC scaled, and gives the proper estimate of the emissivity in FM31. The dashed gray curve comes from the arc fit with PL spectral model, and it only includes the contribution from the H~{\sc i}\ A5 component, but not the emission associated with the arc. The best-fit normalizations are listed in the legend. Also plotted is the corresponding measurement made in~\citet{Casandjian:2015hja}, which is determined from a fit including absolute latitudes between $10^\circ$--$70^\circ$. Additionally, we plot the results from~\citet{ackermann2012fermi}, for which the emissivity is determined from different nearby molecular cloud complexes, within $\sim$300 pc from the solar system. Lastly, we plot the measurements from~\citet{abdo2009fermi}, as determined from a mid-latitude region in the third Galactic quadrant, i.e.\ $200^\circ < l < 260^\circ$ and $22^\circ < |b| < 60^\circ$. The local emissivity as determined from FM31 is slightly lower (referring to the baseline normalization of 1.04), but it is consistent within 1$\sigma$ with these other measurements. This is not surprising, since the analysis by \citet{ackermann2012fermi} is based on observations of well-defined gas clouds residing within $\sim$300 pc from the solar system, whereas our ``local ring'' is 2 kpc thick (Table~\ref{tab:GALPROP_parameters}), and FM31 is projected toward the outer Galaxy, where the CR density is expected to be lower.
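In the fit itself, the measured emissivity is the GALPROP-predicted emissivity of the local ring rescaled by the best-fit normalization. For optically thin emission, the underlying relation between intensity and emissivity per H atom is as sketched below (a schematic only; some works quote $4\pi q$, i.e.\ the emissivity integrated over solid angle):
\begin{verbatim}
def emissivity_per_atom(intensity, N_H):
    # q(E) = I(E) / N_H, with I in ph cm^-2 s^-1 sr^-1 MeV^-1,
    # N_H in cm^-2, and q in ph s^-1 sr^-1 MeV^-1 per H atom
    return intensity / N_H
\end{verbatim}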
As we have seen, the inclusion of the arc template in the fit significantly improves its quality. Meanwhile, the origin of the arc itself remains unknown. As we show below, the arc is most likely associated with the interstellar gas, whose column density is under-predicted, and/or with particles whose spectrum is distinctly flatter than that of the rest of the CRs.
\begin{figure*}[tbh!]
\centering
\includegraphics[width=0.4\textwidth]{Loop_3color.pdf}
\includegraphics[width=0.4\textwidth]{Loops_all_with_half_L3s.pdf}
\caption{{\bf Left:} FM31 residuals from the baseline fit (with IC scaled), with the Loop III shell overlaid. The two lines correspond to two somewhat different positions and radii obtained from continuum and polarization observations \citep{2015MNRAS.452..656V}. The shell radius is approximate, and the shell itself can be several degrees thick. The shaded area indicates the uncertainty in the shell parameters. {\bf Right:} M31's virial radius (300 kpc) is shown with a cyan dashed circle, and cyan triangles show the positions of M31 and M33. The gray circles show Loop III at the top and Loop II at the bottom. Loop IIIs (which is only visible in polarization) is shown with a dash-dot magenta circle.}
\label{fig:LoopIII}
\end{figure*}
In Figure~\ref{fig:FM31_dust} we show the dust temperature map and the $\rm{E(B-V)}$ reddening map for FM31 from~\citet{schlegel1998maps}. Overlaid are contours for the arc template. The levels correspond to the normalized flux, and they range from 1 to 20 in increments of 5. The dust temperature serves as a possible proxy for the gas temperature. In this analysis we have assumed a uniform spin temperature of 150 K, but as can be seen in the top panel of Figure~\ref{fig:FM31_dust}, much of the arc template correlates with cold regions in the dust, indicating that at least part of the corresponding residuals may be caused by an underprediction of the H~{\sc i}\ column density.
As can be seen in Figure~\ref{fig:FM31_dust}, much of the arc template closely correlates with the foreground dust, and likewise with the local H~{\sc i}\ column density, as seen in Figure~\ref{fig:gas_column_densities}, indicating that the corresponding emission is most likely due to inaccuracies in the foreground model. Although our model already corrects for the DNM, the correction is derived over the full sky and may use an incorrect gas-to-dust ratio for this particular region. In addition, the method assumes a linear conversion between gas and dust, which may not actually hold. We also note that while the spatial correlation between the arc template and the properties of the dust is clearly visible towards the Galactic plane and the extended arm at the far right of the map, the region closest (in projection) to M33, and its general vicinity, is not as obviously correlated.
The analysis described in this section clearly shows that the arc is associated with the gas, but its components have spectral indices of $\sim$2.0--2.4, noticeably flatter than the index of $\sim$2.75 for the rest of the H~{\sc i}\ gas in the ROI (Figure~\ref{fig:M31_Arc_flux_and_Residuals}). This may imply that the spectrum of CR particles interacting with gas in this direction is flatter than the spectrum of the old CR component, which has been altered by its long propagation history. Indeed, radio observations, and sometimes X-rays and $\gamma$-ray{s}, reveal structures that cover a considerable area of the sky and are often referred to as ``radio loops''. The most well known is Loop I, which has a prominent part of its shell aligned with the North Polar Spur, but other circular structures and filaments also become visible in polarization skymaps. There are at least 17 known structures \citep[for details see][and references therein]{2015MNRAS.452..656V} with radii of tens of degrees, as large as $\sim$$80^\circ$ for Loop XI. The spectral indices of these structures indicate a non-thermal (synchrotron) origin for the radio emission, but the origin of the loops themselves is not completely clear. One of the major limitations is the lack of precise measurements of their distances. Current explanations include old and nearby supernova remnants, bubbles/shells powered by OB associations, and others.
It turns out that a part of the shell of Loop III appears to be associated with the north part of the arc (Figure~\ref{fig:LoopIII}), while Loops II and IIIs cover the entire ROI. The presence of accelerated electrons associated with the Loop III shell hints that protons with a flat spectrum may also be present there. This may explain the distinctly different spectral index of the arc template and an exponential cutoff significantly below 50 GeV (Figure~\ref{fig:M31_Arc_flux_and_Residuals}, right), which corresponds to ambient particle energies below $\sim$1 TeV. Here we do not speculate further on whether the whole arc or only a part of it is associated with the Loop III shell or with other loops, leaving a detailed analysis for a follow-up paper.
\subsection{M31 Components} \label{sec:M31_components}
The baseline model appears unable to account for the total emission in FM31. We now proceed to add M31-related components to the model, for which we make the simplifying assumption of spherical symmetry with respect to the center of M31. For the inner galaxy we add a uniform disk with a radius of 0.4$^\circ$, consistent with the best-fit morphology in~\citet{Ackermann:2017nya}. We add a second uniform template centered at M31 with a radial extension of $0.4^\circ < r \leq 8.5^\circ$. This is the geometry as determined in Figure~\ref{fig:template_geometry}, which was used to help facilitate the construction of the arc template. We note that although the outer radius was set by the bright residual emission in the upper-left corner, it also happens to encompass a large H~{\sc i}\ cloud centered in projection on M31, possibly associated with the M31 system (i.e.\ the M31 cloud), as well as a majority of M31's globular cluster population and stellar halo, which will be further discussed in Section~\ref{sec:gas_related_emission}. The radial extension corresponds to a projected radius of 117 kpc. We label this component the FM31 spherical halo.
Lastly, we add a third uniform template with a radial extension of $r > 8.5^\circ$, covering the remaining extent of the field. This corresponds to M31's far outer halo, and likewise it begins to approach the MW plane towards the top of the field. This is the template that suffers most from Galactic confusion. We label this component as FM31 far outer halo.
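Schematically, the radial partitioning of the M31-related templates is as follows (a sketch; $r$ is the angular distance from the center of M31 in degrees, and the function name is ours):
\begin{verbatim}
def m31_component(r):
    if r <= 0.4:
        return "inner galaxy"
    if r <= 8.5:
        return "spherical halo"  # projected radius up to 117 kpc
    return "far outer halo"      # remaining extent of the field
\end{verbatim}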
All of the M31-related components are given PLEXP spectral models, and the spectral parameters (normalization, index, and cutoff) are fit simultaneously with the arc template and the other baseline components. We note that the spectra of the M31 components have also been fit with a power law in every other energy band, as well as with a standard power law, and the results are consistent with the PLEXP model (see Section~\ref{sec:FSSC_IEM}).
The fit is performed in the standard way just as for the baseline fit. We perform two main variations of the fit, amounting to different variations of the arc template. For one variation we use the full arc template with PL spectral model. For the second variation we use the north and south arc templates with PLEXP spectral models.
\begin{figure*}[tbh!]
\centering
\includegraphics[width=0.49\textwidth]{flux_and_residuals_MW_M33_Full_PL_Three_ring.pdf}
\includegraphics[width=0.49\textwidth]{flux_and_residuals_MW_M33_North_South_Three_ring.pdf}
\caption{M31-related components are added to the model, in addition to the arc template and standard baseline components. The left panel is for the full arc template with PL spectral model, and the right panel is for the north and south arc templates with PLEXP spectral model, just as in Figure~\ref{fig:M31_Arc_flux_and_Residuals}. Black dashed lines show the best-fit spectra for the H~{\sc i}\ A5 (top), A6 (bottom), and A7 (middle) components. The black dashed-dot line shows the isotropic component, which remains fixed to its best-fit value obtained in the tuning region, just as for all other fits. The best-fit spectra of the remaining components are similar to those shown in Figure~\ref{fig:flux_and_residuals_true}, and are left out here for visual clarity. Downward pointing triangles give upper limits. Bands give the 1$\sigma$ error. The bottom panel shows the remaining fractional residuals, which are fairly flat over the entire energy range, and likewise show a normal distribution with a mean of zero.}
\label{fig:M31_components}
\end{figure*}
\begin{deluxetable*}{lcccc}[tbh!]
\tablecolumns{5}
\tablewidth{0mm}
\tablecaption{Normalizations of the Diffuse Components, Integrated Flux, and Likelihoods for the Arc Fits with M31 Components\label{tab:arc_and_M31_norms}}
\tablehead{
\colhead{Component} &
\colhead{Arc Full (PL)} &
\colhead{Arc North and South}&
\colhead{Flux ($\times 10^{-9})$}&
\colhead{Intensity ($\times 10^{-8})$}\\
&
&
&
\colhead{(ph cm$^{-2}$ s$^{-1}$)} &
\colhead{(ph cm$^{-2}$ s$^{-1}$ sr$^{-1}$)}
}
\startdata
H~{\sc i}\ $\pi^0$, A5 &0.85 $\pm$ 0.05&0.88 $\pm$ 0.05&159.8 $\pm$ 9.1 & 67.9 $\pm$ 3.9 \\
H~{\sc i}\ $\pi^0$, A6 &0.9 $\pm$ 0.2&1.0 $\pm$ 0.2 & 10.3 $\pm$ 2.5 & 4.4 $\pm$ 1.1 \\
H~{\sc i}\ $\pi^0$, A7 &2.8 $\pm$ 0.4&2.9 $\pm$ 0.4& 15.3 $\pm$ 2.1 & 6.5 $\pm$ 0.9 \\
H$_2$\ $\pi^0$, A5 &2.7 $\pm$ 0.3&2.7 $\pm$ 0.3&3.7 $\pm$ 0.4& 1.6 $\pm$ 0.2 \\
IC, A5 &2.2 $\pm$ 0.2&2.2 $\pm$ 0.2& 115.2 $\pm$ 8.6 & 49.0 $\pm$ 3.7 \\
IC, A6 -- A7&1.2 $\pm$ 0.4&1.0 $\pm$ 0.4&20.1 $\pm$ 7.0 & 8.6 $\pm$ 3.0 \\
IC, A8 &88.5 $\pm$ 19.0&59.7 $\pm$ 20.2&11.0 $\pm$ 3.6 & 4.7 $\pm$ 1.5 \\
$-\log L$&142933&142919&\nodata&\nodata
\enddata
\tablecomments{Columns 2 and 3 give the best-fit normalizations for the diffuse components. The last two columns report the total integrated flux and intensity between 1--100 GeV for the arc north and south fit. The bottom row gives the resulting likelihood for each respective fit. Intensities are calculated by using the total area of FM31, which is 0.2352 sr.}
\end{deluxetable*}
The intensities and residuals resulting from the fits with the arc template and M31 components are shown in Figure~\ref{fig:M31_components}. The left panel is for the full arc template with PL spectral model. The right panel is for the north and south arc templates with PLEXP spectral model. Black dashed lines show the best-fit spectra for the H~{\sc i}\ A5 (top), A6 (bottom), and A7 (middle) components. The black dashed-dot line shows the isotropic component, which remains fixed to its best-fit value obtained in the tuning region, just as for all other fits. The best-fit spectra of the remaining components are similar to those shown in Figure~\ref{fig:flux_and_residuals_true}, and are left out here for visual clarity. The bottom panel shows the remaining fractional residuals, which are fairly flat over the entire energy range, and likewise show a normal distribution with a mean of zero. The best-fit normalizations and fluxes for the diffuse components, along with the fit likelihoods, are reported in Table~\ref{tab:arc_and_M31_norms}. Best-fit parameters for the arc template and M31-related components are reported in Tables~\ref{tab:arc_M31_fit_params_PL} and~\ref{tab:arc_M31_fit_params_North_and_South}.
We note that for the M31-related components the TS is defined as $-2\Delta\log L$, and it is the value reported by {\it pylikelihood} (a fitting routine from the {\it Fermi}--LAT{} ScienceTools package), without refitting. In order to obtain a more conservative estimate of the statistical significance of the M31-related components, and in particular, the components corresponding to the outer halo, we make the following calculation. We define the null model as consisting of the standard components (point sources and diffuse), arc template (north and south), and M31 inner galaxy component. Then for the alternative model we also include the spherical halo and far outer halo components. We find that the alternative model is preferred at the confidence level of roughly 8$\sigma$ ($-2\Delta\log L$=63).
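As a rough illustration of the quoted significance, Wilks' theorem with one effective degree of freedom gives $\sqrt{63}\approx7.9$; a minimal sketch of the conversion is shown below (indicative only, since the nested comparison actually adds six free parameters):
\begin{verbatim}
from scipy.stats import chi2, norm

TS = 63.0                  # -2 * Delta(log L)
p = chi2.sf(TS, df=1)      # chi^2 survival function, 1 dof
sigma = norm.isf(p / 2.0)  # two-sided Gaussian equivalent
print(sigma)               # ~7.9, i.e. roughly 8 sigma
\end{verbatim}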
The total integrated flux for the H~{\sc i}\ A5 component plus the arc north and south components is (185.6 $\pm$ 12.9) $\times 10^{-9}$ ph cm$^{-2}$ s$^{-1}$, consistent with that of the baseline fit (with IC scaled). The normalization of the H~{\sc i}\ A6 component is consistent with the GALPROP prediction. The normalization of the H~{\sc i}\ A7 component is still a bit high (2.8 $\pm$ 0.4). The normalizations of the IC A5 and A6-A7 components are consistent with the all-sky average obtained in the isotropic calculation (Table~\ref{tab:norm_isotropic}). The intensity of the arc south component at $\sim$10 GeV is at the same level as that of the M31-related components, and its spectrum is softer than the spectrum of the north component.
\begin{deluxetable*}{lcccccccc}[tbh!]
\tablecolumns{9}
\tablewidth{0mm}
\tablecaption{Results for the Arc Template (Full, PL) and M31 Components\label{tab:arc_M31_fit_params_PL}}
\tablehead{
\colhead{Template} &
\colhead{area} &
\colhead{TS} &
\colhead{Flux ($\times 10^{-9})$}&
\colhead{Energy Flux ($\times 10^{-12})$}&
\colhead{Intensity ($\times 10^{-8})$}&
\colhead{Counts}&
\colhead{Index}&
\colhead{Cutoff, $E_c$}\\
&
\colhead{(sr)}&
&
\colhead{(ph cm$^{-2}$ s$^{-1}$)} &
\colhead{(erg cm$^{-2}$ s$^{-1}$)} &
\colhead{(ph cm$^{-2}$ s$^{-1}$ sr$^{-1}$)} &
&
$\alpha$&
\colhead{(GeV)}
}
\startdata
Arc Full (PL) &0.080232&616&25.5 $\pm$ 1.4&118.5 $\pm$ 7.0&31.8 $\pm$ 1.7 &6739&2.42 $\pm$ 0.05&\nodata \\
FM31 Inner Galaxy &0.000144&55&0.5 $\pm$ 0.1&1.7 $\pm$ 0.4&347.2 $\pm$ 69.4 &141&2.8 $\pm$ 0.3&96.4 $\pm$ 151.6\\
FM31 Spherical Halo &0.0684&34&4.2 $\pm$ 1.6&19.4 $\pm$ 6.2&6.1 $\pm$ 2.3 &1158&0.7 $\pm$ 1.1&2.9 $\pm$ 2.9 \\
FM31 Far Outer Halo &0.166656&32&4.3 $\pm$ 1.9 &33.8 $\pm$ 9.0 &2.6 $\pm$ 1.1&1142&--1.4 $\pm$ 1.2&2.0 $\pm$ 0.7
\enddata
\tablecomments{The TS is defined as $-2\Delta\log L$, and it is the value reported by pylikelihood, without refitting. Fits are made with a power-law spectral model $dN/dE\propto E^{-\alpha}$ and with a model with an exponential cutoff, $dN/dE\propto E^{-\alpha} \exp{(-E/E_c)}$.}
\end{deluxetable*}
\begin{deluxetable*}{lcccccccc}[tbh!]
\tablecolumns{9}
\tablewidth{0mm}
\tablecaption{Results for the Arc Template (North and South, PLEXP) and M31 Components\label{tab:arc_M31_fit_params_North_and_South}}
\tablehead{
\colhead{Template} &
\colhead{area} &
\colhead{TS} &
\colhead{Flux ($\times 10^{-9})$}&
\colhead{Energy Flux ($\times 10^{-12})$}&
\colhead{Intensity ($\times 10^{-8})$}&
\colhead{Counts}&
\colhead{Index}&
\colhead{Cutoff, $E_c$}\\
&
\colhead{(sr)}&
&
\colhead{(ph cm$^{-2}$ s$^{-1}$)} &
\colhead{(erg cm$^{-2}$ s$^{-1}$)} &
\colhead{(ph cm$^{-2}$ s$^{-1}$ sr$^{-1}$)} &
&
$\alpha$&
\colhead{(GeV)}
}
\startdata
Arc North &0.033864 & 438 &15.5 $\pm$ 1.3 &78.9 $\pm$ 6.4 &45.8 $\pm$ 3.8& 4027 &2.2 $\pm$ 0.1 &84.5 $\pm$ 100.4 \\
Arc South &0.046368& 395 &11.8 $\pm$ 0.7 &47.8 $\pm$ 4.1 &25.4 $\pm$ 1.5 & 3155& 2.5 $\pm$ 0.1 &100.0 $\pm$ 6.6 \\
FM31 Inner Galaxy &0.000144 & 53 &0.5 $\pm$ 0.08 &1.7 $\pm$ 0.4&347.2 $\pm$ 55.6 & 139 & 2.8 $\pm$ 0.3&100.0 $\pm$ 10.6 \\
FM31 Spherical Halo &0.0684 & 39 &4.5 $\pm$ 1.2 & 22.0 $\pm$ 6.4 &6.6 $\pm$ 1.8 & 1223 &0.9 $\pm$ 0.8 &4.0 $\pm$ 3.6 \\
FM31 Far Outer Halo &0.166656 & 30 &3.8 $\pm$ 1.3 & 31.6 $\pm$ 8.7 &2.3 $\pm$ 0.8 & 1020 &--1.8 $\pm$ 1.3 & 1.8 $\pm$ 0.6
\enddata
\tablecomments{The TS is defined as $-2\Delta\log L$, and it is the value reported by pylikelihood, without refitting. Fits are made with a model with an exponential cutoff, $dN/dE\propto E^{-\alpha} \exp{(-E/E_c)}$.}
\end{deluxetable*}
\begin{deluxetable*}{lcccccccc}[tbh!]
\tablecolumns{9}
\tablewidth{0mm}
\tablecaption{Results for the Symmetry Test\label{tab:symmetry_test}}
\tablehead{
\colhead{Template} &
\colhead{area} &
\colhead{TS} &
\colhead{Flux ($\times 10^{-9})$}&
\colhead{Energy Flux ($\times 10^{-12})$}&
\colhead{Intensity ($\times 10^{-8})$}&
\colhead{Counts}&
\colhead{Index}&
\colhead{Cutoff, $E_c$}\\
&
\colhead{(sr)}&
&
\colhead{(ph cm$^{-2}$ s$^{-1}$)} &
\colhead{(erg cm$^{-2}$ s$^{-1}$)} &
\colhead{(ph cm$^{-2}$ s$^{-1}$ sr$^{-1}$)} &
&
$\alpha$&
\colhead{(GeV)}
}
\startdata
Spherical Halo North &0.0342 &89 &5.1 $\pm$ 1.3&22.4 $\pm$ 5.2&14.9 $\pm$ 3.8 &1388&1.2 $\pm$ 0.6 & 4.2 $\pm$ 3.3 \\
Spherical Halo South &0.0342 &28 &2.7 $\pm$ 1.2&11.9 $\pm$ 5.1 &7.9 $\pm$ 3.5&743&1.9 $\pm$ 0.5 &11.6 $\pm$ 15.0 \\
Far Outer Halo North &0.0833 &89 &6.8 $\pm$ 2.1 &47.6 $\pm$ 9.6 &8.2 $\pm$ 2.5&1805&--0.6 $\pm$ 0.8&2.4 $\pm$ 0.8 \\
Far Outer Halo South &0.0833 &31 &4.7 $\pm$ 2.4&16.9 $\pm$ 11.6 &5.6 $\pm$ 2.9&1233 &2.7 $\pm$ 0.4&97.5 $\pm$ 21.9
\enddata
\tablecomments{The TS is defined as $-2\Delta\log L$, and it is the value reported by pylikelihood, without refitting. Fits are made with a model with an exponential cutoff, $dN/dE\propto E^{-\alpha} \exp{(-E/E_c)}$.}
\end{deluxetable*}
In Appendix~\ref{sec:different_IEMs} we perform additional systematic checks. Using the M31 IEM we allow for extra freedom in the fit. We also repeat the analysis with two alternative IEMs, namely, the IG IEM and FSSC IEM. Each alternative IEM has its own self-consistently derived isotropic spectrum and additional point sources. Full details of these tests are given in Appendix~\ref{sec:different_IEMs}. Here we summarize the main findings.
Using the M31 IEM we allow for extra freedom in the fit by varying the index of the IC components with a PL scaling. In this case the IC components show a spectral hardening towards the outer Galaxy, for both the TR and FM31. However, this is unable to flatten the excess in FM31, and the properties of the excess remain qualitatively consistent with the results presented above.
Using the M31 IEM we also vary the index of the H~{\sc i}-related components using a PL scaling. In the TR the local annulus shows no change in the index. However, in FM31 there is a hardening of the index for the local annulus, with increasingly significant hardening towards the outer Galaxy. This result is in direct contrast to the gradual softening reported by other studies~\citep{Acero:2016qlg,yang2016radial}. FM31 clearly shows an anomaly with respect to these other measurements, as well as with respect to the results in the TR and the GALPROP predictions (see Section \ref{sec:M31_IEM_extra}). The anomaly is most clearly evident for the outer Galaxy rings, A6 and A7, and it is also these rings that are found to be partially correlated with the M31 system, as is clearly seen in Figure~\ref{fig:gas_column_densities}. In particular, the H~{\sc i}\ A7 component obtains a best-fit index $\Delta\alpha$ of --0.39 $\pm$ 0.11, which corresponds to an effective index of 2.37, compared to its GALPROP prediction of 2.76. This result further supports the conclusion that there is a significant anomaly in FM31. This particular fit also does a better job of flattening the excess in the fractional energy residuals; however, some excess emission still remains. To quantify the remaining excess we fit the M31-related components. In this case the spherical halo is still detected at $\sim$3--4$\sigma$, and the spectral properties are qualitatively consistent with the main results.
For the IG IEM the spectrum of the isotropic component is determined at high latitudes ($|b|>50^\circ$), and the normalization is held fixed to its nominal value (1.0). This is in contrast to the M31 IEM, for which we use the all-sky isotropic spectrum, with the normalization determined in a tuning region directly below FM31. The fit is otherwise performed in the standard way. The residuals are qualitatively consistent with what we find for the M31 IEM.
\begin{figure*}[tbh!]
\centering
\includegraphics[width=0.7\textwidth]{Systematic_Residuals_all.pdf}
\caption{A systematic excess can be observed between $\sim$3--20 GeV at the level of $\sim$3--5\%. Systematic over-modeling is also present above and below this range. We note that there is one model for which the signal can be flattened (shown with green circles). This results from using the FSSC IEM (intended for point source analysis) and fitting both the isotropic and Galactic diffuse (including the index) in the signal region. The FSSC IEM is not intended for extended source analysis, and this result illustrates how the application of an improper IEM for analysis of largely extended emission can alter the physical results. The M31 IEM is our benchmark model. The different models are as follows: \textbf{black squares:} FSSC IEM, fitting the isotropic and Galactic diffuse (with index fixed) in the signal region, using Clean data, corresponding to the fit in Figure~\ref{fig:flux_and_residuals_FSSC}; \textbf{blue upward-pointing triangles:} same as for the black squares but using UltraCleanVeto (UCV) data, see Section \ref{sec:FSSC_IEM} for details; \textbf{green circles:} same as for the black squares but also freeing the index of the Galactic diffuse; \textbf{orange diamonds:} M31 IEM baseline fit, varying the index of the IC components A5-A8 using a power law scaling, corresponding to the fit in Figure~\ref{fig:IC_index_scaled}; \textbf{purple rightward-pointing triangles:} M31 IEM baseline fit, varying the index of the H~{\sc i}-related components A5--A8 using a power law scaling, corresponding to the fit in Figure~\ref{fig:HI_index_scaled}. Note that in this case FM31 shows a significant anomaly in the index of the gas-related emission towards the outer Galaxy, as is clearly shown in Figure~\ref{fig:HI_index_scaled}. \textbf{blue band:} M31 IEM baseline fit, corresponding to the fit in Figure~\ref{fig:flux_and_residuals_true}; \textbf{green band:} M31 IEM tuned fit, corresponding to the fit in Figure~\ref{fig:flux_and_residuals_FM31_Tuned}; \textbf{pink band:} M31 IEM arc fit, corresponding to the fit in Figure~\ref{fig:M31_Arc_flux_and_Residuals} (this is our primary model); \textbf{black band:} inner Galaxy (IG) IEM, corresponding to the fit in Figure~\ref{fig:flux_and_residuals_IG}.}
\label{fig:residuals_all}
\end{figure*}
We also repeat the fit using the FSSC IEM. We fit both the isotropic component and the Galactic diffuse component in the signal region, as well as the point sources. We perform the fit with and without freeing the index of the Galactic diffuse component. Without the index freed, the excess remains qualitatively consistent with what we find for the M31 IEM (in both the fractional count residuals and the spatial residuals). With the index freed, however, the IEM is able to flatten the excess in the fractional count residuals (the spatial residuals remain qualitatively the same). This illustrates how the application of an improper IEM for the analysis of largely extended emission can alter the physical results.
We note that as a test we have also performed the fit with the M31 IEM by freely scaling the isotropic component in FM31, along with the other diffuse components and point sources. In this case the isotropic component obtains a normalization of 1.46 $\pm$ 0.06, and the excess in the fractional count residuals remains qualitatively the same. We do not consider this to be a proper procedure for our analysis, but nevertheless this test shows that even with an increase in the normalization of the isotropic emission upwards of 46\% the residual is still observed.
A summary of the excess in the fractional count residuals for all fit variations tested in this analysis is shown in Figure~\ref{fig:residuals_all}. We conclude that a systematic excess is present between $\sim$3--20 GeV at the level of $\sim$3--5\%. The signal is only flattened with the FSSC IEM (intended for point source analysis), when fitting all components in the signal region (including the index of the Galactic diffuse component), whereas all other fits result in an excess. Our benchmark model is the M31 IEM.
\subsection{Symmetry of the Residual Emission in FM31} \label{sec:symmetry}
In this section we further test the symmetry of the residual emission in FM31. We divide the spherical halo and far outer halo templates into north and south components. The cut is made at the midpoint of FM31 along the horizontal direction (parallel to the Galactic plane), corresponding to a latitude of $\sim$$-21.5^\circ$. This allows for deviation from spherical symmetry, as well as a gradient with respect to the Galactic plane.
We first calculate the fractional count residuals in the different regions without fitting any of the M31-related templates. These results are shown in Figure~\ref{fig:M31_fractional_residuals}, and they correspond to the spatial residuals shown in Figure~\ref{fig:M31_Arc_spatial_residuals}, resulting from the baseline fit with the arc north and south templates. The excess can be seen for both the spherical halo and far outer halo regions. For the spherical halo region, the excess appears to be more prominent in the north than in the south, although it is present in both. For the far outer halo region, the excess is prominent in the north, whereas the residuals in the south are fairly flat.
We quantify the symmetry of the residual emission by fitting templates for the different regions simultaneously with the other components of the IEM. The M31-related components include the inner galaxy and the northern and southern regions of the spherical halo and far outer halo (5 components in total). Each component is given a PLEXP spectral model, and the spectral parameters are allowed to vary independently (although the components are fit simultaneously). The fit also includes the arc north and south components. Lastly, we scale the diffuse components and point sources in the standard way.
\begin{figure*}
\centering
\includegraphics[width=0.33\textwidth]{SH_fractional_residuals.pdf}
\includegraphics[width=0.33\textwidth]{SHN_fractional_residuals.pdf}
\includegraphics[width=0.33\textwidth]{SHS_fractional_residuals.pdf}
\includegraphics[width=0.33\textwidth]{FOH_fractional_residuals.pdf}
\includegraphics[width=0.33\textwidth]{FOHN_fractional_residuals.pdf}
\includegraphics[width=0.33\textwidth]{FOHS_fractional_residuals.pdf}
\caption{The fractional count residuals calculated over the different spatial regions corresponding to the spherical halo and far outer halo components, as indicated above each plot. Note that these are the residuals before adding the M31-related components, and they correspond to the spatial residuals shown in Figure~\ref{fig:M31_Arc_spatial_residuals}, resulting from the baseline fit with the arc north and south templates. The goal here is to further examine the symmetry of the residual emission associated with the M31-related components. We consider the northern and southern regions of the templates, where the cut is made at the midpoint of FM31 along the horizontal direction (parallel to the Galactic plane), corresponding to a latitude of $-21.5^\circ$. The first column shows the residuals calculated over the entire region, for the spherical halo and far outer halo, respectively. The second column shows the residuals in the north, and the third column shows the residuals in the south.}
\label{fig:M31_fractional_residuals}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=0.33\textwidth]{spherical_halo_symmetry.pdf}
\includegraphics[width=0.33\textwidth]{far_outer_halo_symmetry.pdf}
\includegraphics[width=0.33\textwidth]{all_symmetry.pdf}
\caption{The best-fit spectra resulting from the symmetry test fit, where the spherical halo and far outer halo templates are divided into north and south components, and the spectral parameters for each component are allowed to vary independently. The cut is made at the midpoint of FM31 along the horizontal direction (parallel to the Galactic plane), corresponding to a latitude of $-21.5^\circ$. The northern components are shown with square markers, and the southern components are shown with circle markers. Downward pointing triangles give upper limits. Also overlaid are the spectra for the full component fit (with arc north and south), as shown in Figure~\ref{fig:M31_components}.}
\label{fig:M31_symmetry}
\end{figure*}
\begin{figure}
\centering
\includegraphics[width=0.45\textwidth]{ANSPLEXP_OGS_Correlation.pdf}
\caption{Correlation matrix for the symmetry test fit. In addition to the standard components, the fit includes components for the arc north and south (AN and AS), inner galaxy (not shown here), spherical halo north and south (SHN and SHS), and the far outer halo north and south (FHN and FHS).}
\label{fig:symmetry_correlation}
\end{figure}
The resulting spectra for the northern and southern regions of the spherical halo and far outer halo are shown in Figure~\ref{fig:M31_symmetry}. For reference, we also overlay the spectra for the full M31-related components (from Figure~\ref{fig:M31_components}). The spectra for the arc components are very similar to the results shown in Figure~\ref{fig:M31_components}, and therefore we do not show them here. The corresponding best-fit parameters for the halo components are reported in Table~\ref{tab:symmetry_test}. All components are significantly detected (with a significance $>5\sigma$).
The spherical halo region is slightly brighter in the north than the south. The best-fit spectra for the two components have similar spectral shapes and are qualitatively consistent with that of the full template. We note that we have elected to define north and south with respect to the plane of the MW. However, if the spherical halo component is in fact physically associated with the M31-system, then it may be just as well to cut the two halves with respect to the major axis of M31 ($38^\circ$), which may increase the symmetry between north and south. However, our primary objective here is to simply quantify the gross properties of the residual emission, and a more detailed determination of the morphology is left for a follow-up study.
The far outer halo region shows a significant spectral variation between the north and south. The northern component has a high spectral curvature, identical to the spectral shape that results when fitting the full template, and is generally brighter than the southern component.
The correlation matrix for the fit is shown in Figure~\ref{fig:symmetry_correlation}. The southern components for both the spherical halo and far outer halo have a stronger anti-correlation with the IEM, compared to the northern components. In particular, the southern components have relatively strong anti-correlations with ICA5 and H~{\sc i}\ A7. We also note that the southern component of the spherical halo has some anti-correlation with the arc template, whereas the northern component does not. The normalizations of the diffuse components are mostly in agreement with those obtained for the fit with the full M31-related templates. However, the IC A6-A7 component obtains a best-fit normalization of 0.42 $\pm$ 0.38, which may not be very physical, and is in contrast to the values obtained for the other fits in FM31. These results highlight one major shortcoming of this test; that is, the northern and southern regions correlate differently with the IEM, and this can potentially lead to inaccuracies regarding the actual symmetry of the tentative signal. This is especially problematic for the excess in FM31, since the corresponding emission lies well below the foreground/background emission.
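For reference, a correlation matrix of this kind is the covariance matrix of the fitted parameters normalized to unit diagonal; a minimal sketch of the conversion (the function name is ours):
\begin{verbatim}
import numpy as np

def correlation_from_covariance(cov):
    # rho_ij = C_ij / sqrt(C_ii * C_jj)
    sigma = np.sqrt(np.diag(cov))
    return cov / np.outer(sigma, sigma)
\end{verbatim}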
The fit with the north and south M31-related templates further shows the importance of the MW modeling and also that the excess is likely to contain a significant MW component. In particular, the excess emission associated with the far outer halo is likely to be related to the MW. Indeed, the Galactic disk region directly above FM31 has many complications, and it is known to contain extended excess $\gamma$-ray emission of unknown origin~\citep{Acero:2016qlg}. In addition, the region (in projection) also contains an extended high-velocity cloud known as Complex H~\citep{hulsbosch1975studies,blitz1999high,Lockman:2003zs,Simon:2005vh}, which has been postulated to be either a dark galaxy of the Local Group or an example of a cold accretion flow onto the MW~\citep{Simon:2005vh}. Here we only point out a couple of these associated difficulties, but our primary goal is to quantify the rough properties of the excess emission.
A portion of the excess emission is also likely related to the M31 system, and in particular, the emission associated with the spherical halo region. We note that of the four halo components, the overall intensity is highest for the northern spherical halo. Given the significant modeling uncertainties, we make the simplifying conclusion that the excess emission in FM31 is significantly detected and has a total radial extension upwards of $\sim$120--200 kpc from the center of M31. The lower limit corresponds to the boundary of the spherical halo, and the upper limit corresponds to the boundary of the far outer halo. This conclusion encapsulates the possibility that the excess emission may have contributions from both M31 and the MW, and it also refers to the emission associated with the arc template, the nature of which remains unclear.
\section{The Smooth Component of the Residual Emission in FM31 and Dark Matter} \label{sec:smooth_residual_emission}
The dominant component of the residual emission in FM31 has a total radial extension upwards of $\sim$120--200 kpc from the center of M31, corresponding to the excess between $\sim$3--20 GeV in the fractional count residuals. It is plausible that a portion of the signal may be related to M31's DM halo. In general, the exact properties of M31's DM halo remain highly uncertain, i.e.\ the geometry, extent, and substructure content. Here we make some simplifying assumptions to get a rough sense of the consistency between the observed signal and a possible DM interpretation. In particular, we check for consistency with the DM interpretation of the excess $\gamma$-ray emission observed in the Galactic center~\citep{Goodenough:2009gk, Hooper:2010mq,Hooper:2011ti, Abazajian:2012pn,Hooper:2013rwa,Gordon:2013vta,Huang:2013pda,Abazajian:2014fta,Zhou:2014lva,Calore:2014xka,Abazajian:2014hsa,Calore:2014nla,Huang:2015rlu,TheFermi-LAT:2015kwa,Daylan:2014rsa,Carlson:2016iis,Fermi-LAT:2017yoi,Karwin:2016tsw,TheFermi-LAT:2017vmf,agrawal2017point}. This by no means encompasses all possibilities, and more detailed evaluations are left for future studies.
In addition to M31's DM halo, we also consider the contribution from the MW's DM halo along the line of sight, since this component has not been explicitly accounted for in our analysis. If such a component actually exists, then it may be at least partially absorbed by the isotropic component, as well as the other components of the IEM, but it will not necessarily be fully absorbed, and a portion of such a signal could be contained in the M31-related components.
The left panel of Figure~\ref{fig:M31_radial_profile} shows the radial profile of the $\gamma$-ray intensity for the M31-related components. Red square markers show the fit with the full M31-related templates, including the arc north and south with PLEXP. Purple circle markers show the fit with the M31-related templates divided into north and south components (from Figure~\ref{fig:M31_symmetry}). The individual intensities of the divided north and south components are somewhat higher than the intensity of the combined template because the tentative signal in these regions correlates differently with the IEM components (see Figure~\ref{fig:symmetry_correlation} and the corresponding discussion in Section~\ref{sec:symmetry}). The intensity of the M31-related emission is far lower in the outer regions than towards the inner galaxy. Furthermore, the signal is not detected in the TR. This is consistent with the hypothesis that the emission originates (at least partially) from the M31 system.
In the figure, we compare the radial dependence of the observed intensity to the predicted intensity for a DM signal. Plots of the corresponding $J$-factors and a description of all parameters for the predicted $\gamma$-ray flux due to DM annihilation are given in Appendix~\ref{sec:DM}. For the \emph{DM attribute quantity}, Eq.~(\ref{eq1}),
we use the best-fit values as determined from the GC excess in~\citet{Karwin:2016tsw}. The uncertainty bands for each of the three intensity profiles come from the uncertainty in the \emph{DM attribute quantity} (as described in Appendix~\ref{sec:DM}). The black band shows the corresponding intensity profile for the MW DM component along the line of sight. Note that in general there is also expected to be an additional contribution from the local DM filament between M31 and the MW.
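For orientation, the predicted annihilation flux factorizes into a particle-physics term and the astrophysical $J$-factor, $J = \int\!\int \rho^2\,dl\,d\Omega$. The following minimal numerical sketch evaluates the line-of-sight integral for a smooth NFW profile (all parameter values are illustrative and are not those adopted in Appendix~\ref{sec:DM}):
\begin{verbatim}
import numpy as np

RHO_S, R_S = 0.4, 16.0  # GeV cm^-3, kpc (illustrative)
KPC_TO_CM = 3.086e21

def rho_nfw(r):
    x = r / R_S
    return RHO_S / (x * (1.0 + x) ** 2)

def dJ_dOmega(psi_deg, d=785.0, n=4000):
    # Integrate rho^2 along the line of sight at angle psi from
    # the halo center, for a halo at distance d (kpc); returns
    # GeV^2 cm^-5 (differential in solid angle).
    psi = np.deg2rad(psi_deg)
    s = np.linspace(1e-3, 2.0 * d, n)  # path length, kpc
    r = np.sqrt(d**2 + s**2 - 2.0 * d * s * np.cos(psi))
    return np.trapz(rho_nfw(r) ** 2, s) * KPC_TO_CM
\end{verbatim}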
\begin{figure*}[tbh!]
\centering
\includegraphics[width=0.49\textwidth]{radial_profile_J.pdf}
\includegraphics[width=0.49\textwidth]{GC_schematic.pdf}
\caption{\textbf{Left:} Radial intensity profile for the M31-related components. Red square markers show the results from the north and south arc template with PLEXP. The profiles for the PL arc fit are basically the same. Purple circle markers show the results from the fit with the M31-related templates divided into north and south components (from Figure~\ref{fig:M31_symmetry}). For reference, we compare the radial profile to expectations for DM annihilation in the line of sight. Note that this also includes the contribution from the MW's DM halo in the line of sight, which has not been accounted for in our analysis, and may be at least partially embedded in the isotropic component and Galactic diffuse components. Likewise, the M31-related components may contain a significant contribution from the MW's extended halo. Details regarding the DM profiles are given in Appendix~\ref{sec:DM}. \textbf{Right:} Spectral shape comparison to the Galactic center excess (for an arbitrary normalization), as observed in~\citet{TheFermi-LAT:2015kwa}. Also shown is a prediction for CRs interacting with the ionized gas of the circumgalactic medium from~\citet{Feldmann:2012rx}. Note that the prediction is for a MW component, but we are primarily interested in a spectral shape comparison.
}
\label{fig:M31_radial_profile}
\end{figure*}
We find that the radial intensity profile of the positive residual emission in FM31 is roughly consistent with a cold DM scenario that includes a large boost factor due to substructures. However, the exact partitioning of individual contributions to the signal remains unclear, i.e.\ primary emission from M31's DM halo, secondary emission in M31, emission from the local DM filament between M31 and the MW, and emission from the MW's DM halo along the line of sight. We note that for the radial intensity profile in Figure~\ref{fig:M31_radial_profile} we have not included a MW prediction for the high substructure model. Our main intention here is to get a rough sense of the consistency with a DM interpretation, but in general the MW high-substructure prediction would also be relevant, and it would imply that a significant portion of the MW halo signal would need to be almost fully absorbed by the isotropic component and other components of the IEM.
We again stress that the properties of the excess emission observed towards the outer halo have a strong dependence on the modeling of the IEM. This is partially reflected by the large uncertainty in the radial profile between the two different fit variations, as can be seen in the left panel of Figure~\ref{fig:M31_radial_profile}. We also stress that the excess in FM31 is likely to contain a significant contribution from the MW. In particular, the emission associated with the far outer halo is more likely to be related to the MW than the M31 system. But still, the nature of this emission remains unclear.
The right panel in Figure~\ref{fig:M31_radial_profile} shows a spectral shape comparison with the excess emission observed in the Galactic center~\citep{TheFermi-LAT:2015kwa}. The band for the Galactic center excess shows the systematic $+$ statistical uncertainty (although it is dominated by the systematics), and it is shown for an arbitrary normalization. We find that the spectra of the M31-related components are qualitatively consistent with the uncertainty band of the Galactic center excess. We note that the spectrum of the far outer halo component has a higher curvature at low energies. If this is indeed a real feature of the signal (and not just a systematic effect), then it could be related to secondary processes. If the DM produces some fraction of leptons, then the leptons may generate secondary $\gamma$-ray emission from IC and Bremsstrahlung, due to interactions with the interstellar radiation fields and gas~\citep{Cirelli:2013mqa,Lacroix:2014eea,Abazajian:2014hsa}. For M31, the secondary emission may have a dependence on the radial distance from the center of M31, since the stellar halo and gaseous halo also have a radial dependence. However, this possibility would need to be quantified to get a better sense of the effect.
Also plotted in the right panel of Figure~\ref{fig:M31_radial_profile} is the isotropic component. The intensity of the M31-related components is below that of the isotropic component by a factor of $\sim$5. There is a bump in the isotropic spectrum around $\sim$10 GeV (as is more clearly seen in Figure~\ref{fig:Isotropic_Sytematics}), and this energy roughly corresponds to the peak emission of the M31-related components. This suggests that the isotropic emission may include a contribution originating from similar processes in the extended halo of the MW. As it pertains to DM in particular, this issue is significantly complicated and is beyond the scope of this work, but related discussions can be found in \citet{Cuoco:2010jb}, \citet{Cholis:2013ena}, \citet{Fornasa:2015qua}, \citet{Ajello:2015mfa}, and \citet{Ackermann:2015tah}.
\begin{figure*}[tbh!]
\centering
\includegraphics[width=0.33\textwidth]{zoom_2_Cube_Helix_Brem.pdf}
\includegraphics[width=0.33\textwidth]{zoom_1_Cube_Helix_Brem.pdf}
\includegraphics[width=0.33\textwidth]{zoom_0_Cube_Helix_Brem.pdf}
\includegraphics[width=0.33\textwidth]{M33_zoom_2_Cube_Helix_Brem.pdf}
\includegraphics[width=0.33\textwidth]{M33_zoom_1_Cube_Helix_Brem.pdf}
\includegraphics[width=0.33\textwidth]{cold_view_Brem_Cube_Helix.pdf}
\caption{Residual maps showing the structured emission integrated in the energy range 1--100 GeV. The color scale corresponds to counts/pixel, and the pixel size is $0.2^\circ \times 0.2^\circ$. The images are smoothed using a $1^\circ$ Gaussian kernel. This value corresponds to the PSF (68\% containment angle) of \textit{Fermi}-LAT, which at 1 GeV is $\sim$$1^\circ$. Maps are shown in the cubehelix color scheme~\citep{green2011colour}. In the top row contours for the IRIS 100 $\mu$m map of M31 are overlaid, and three zoom levels ($2^\circ$, $7^\circ$, full field) centered at M31 are shown. The white circle ($1^\circ$) shows the position of M33. The bottom row shows two zoom levels ($1^\circ$, $3^\circ$) centered at M33, and the H~{\sc i}\ integrated intensity map (units of K) of M33 is overlaid. In the third panel we show the M31 zoom 0 map rescaled, in order to provide a sense of the relative intensity towards the MW disk. \textbf{We stress that these maps have not subtracted any Galactic H~{\sc i}-related emission.}}
\label{fig:positive_residuals_full}
\end{figure*}
\begin{figure}[tbh!]
\centering
\includegraphics[width=0.45\textwidth]{distribution_all_Brem.pdf}
\caption{Pixel distribution of the smoothed residual map (1 GeV -- 100 GeV) after removing the H~{\sc i}-related components, as shown in Figure~\ref{fig:positive_residuals_full}. The yellow dashed lines are at 0 and 4 counts.}
\label{fig:map_detatils}
\end{figure}
\begin{figure}[tb!]
\centering
\includegraphics[width=0.49\textwidth]{FM31_Rich_Brem_CubeHelix.pdf}
\caption{Overlay of M31-related observations from other wavelengths on the structured $\gamma$-ray emission in FM31. We stress that this is only done as a qualitative gauge of M31's outer halo. In the figure we have not subtracted any Galactic H~{\sc i}-related emission, and we do not expect the M31-related observations to outshine the MW emission, as discussed in the text. Contours for the IRIS 100 $\mu$m map of M31 are overlaid. The solid cyan circle ($0.4^\circ$) shows the boundary of the FM31 inner galaxy component, and the black dashed circle ($8.5^\circ$) shows the outer boundary of the FM31 spherical halo component, as detailed in Section~\ref{sec:M31_components}. H~{\sc i}\ emission contours from the HI4PI all-sky survey (based on EBHIS and GASS)~\citep{bekhti2016hi4pi}, integrated over the velocity range $-600\ \mathrm{km \ s^{-1}} \leq V_{\rm LSR} \leq -95\ \mathrm{km \ s^{-1}}$, are overlaid. M31's confirmed globular clusters are shown with black stars. M31's population of dwarf galaxies is shown with open black triangles. The M31 cloud can be seen (although obscured by globular clusters). We note that the spherical halo serendipitously encloses the M31 cloud, as well as a majority of M31's globular cluster population and dwarf galaxies. H~{\sc i}\ contours corresponding to M33 can be seen in the lower-left corner. The hook-shaped gas cloud to the right of M33 is Wright's cloud. The red gas contours towards the top of the map are clouds of Complex H. The black H~{\sc i}\ contours towards the top of the field correspond to the plane of the MW, and likewise for the bright (white) $\gamma$-ray emission. To the far right of the field a bright arm of emission extends to higher latitudes. Although not considered when making the overlay, the M31-related observations can be seen to trace the left boundary of the arm. This may be an observational bias, due to foreground gas and dust.
\textbf{We stress that these maps have not subtracted any Galactic H~{\sc i}-related emission.}}
\label{fig:FM31_rich}
\end{figure}
We note that the DM could be decaying (see~\citet{Blanco:2018esa} and references therein). In this case the $\gamma$-ray signal would be morphologically more consistent with the excess observed in FM31 without requiring a large boost from substructures, since it scales as the DM density, as opposed to the square of the density for annihilation. Here we restrict the interpretation to annihilating DM, also in the context of the GC excess. We leave a more complete DM study, including decaying DM, to a followup work.
We also note that aside from DM, another possible interpretation of the signal, if it truly originates from the M31 system, would be that it arises from CR interactions with the ionized gas of M31's circumgalactic medium. We do not rule out this possibility; however, if the emission is dominated by CR interactions with the ionized gas, then this would imply that the CR spectrum and distribution in M31's outer galaxy are significantly different from those measured locally in the MW.
Additionally, the observed intensity of the M31-related components would imply a relatively high emissivity in M31's outer regions compared to the local MW measurements. However, from a study of the $\gamma$-ray emission from a sample of high velocity clouds and intermediate velocity clouds in the halo of the MW, \citet{Tibaldo:2015ooa} concluded that the $\gamma$-ray emissivity per H atom of the clouds decreases as a function of distance from the disk, with indications of a $\sim$50\%-80\% decline of the CR density within a few kpc.
Likewise, from an analytical study of the MW, \citet{Feldmann:2012rx} estimate that the CR density in the outer halo may be up to 10\% of that found in the disk. Their predicted $\gamma$-ray\ spectrum is shown in Figure~\ref{fig:M31_radial_profile}, right panel, with a green forward-hatch band. Note that the predicted intensity level in their model is based on the prediction for a MW signal, but we are mostly interested in a spectral shape comparison. The study in~\citet{Feldmann:2012rx} uses a distribution of H~{\sc ii}\ gas derived using a high resolution hydrodynamical simulation, along with reasonable estimates for the distribution of CRs in the outer halo of the MW. The spatial extent of the CR halo is the greatest modeling uncertainty. The two CR distributions used in their calculation fall to half of their density (not including the density within the disk itself) by 60 kpc and 360 kpc, respectively. These distributions define their uncertainty band in the figure.
Considering the radial extent, spectral shape, and intensity of the M31-related components, it is seemingly unlikely that the corresponding emission is dominated by CR interactions with the ionized gas of M31's circumgalactic medium.
\section{The Structured $\gamma$-ray Emission in FM31 and Complementary M31-related Observations} \label{sec:gas_related_emission}
Although the M31-related components are detected with high statistical significance and for multiple IEMs (Appendix~\ref{sec:different_IEMs}), the corresponding intensity lies below that of the isotropic emission, and therefore the signal has a strong dependence on the systematic uncertainties of the isotropic component. In addition, our analysis has demonstrated that the characterization of H~{\sc i}\ along the line of sight is a significant systematic uncertainty for analysis of the M31 field, including the contribution from the DNM. Overall, $\gamma$-ray observations of M31's outer halo are significantly complicated by confusion with the Galactic and isotropic emission, due to the halo's large extension on the sky.
To gauge the full extent of the uncertainty pertaining to the H~{\sc i}-related components, and to help mitigate the uncertainty pertaining to the isotropic component, in this section we supplement our analysis by observing the structured $\gamma$-ray emission in FM31 in a (semi) model-independent way. As a qualitative gauge, we also compare this emission to some of the main tracers of M31's outer disk and halo.
We observe the $\gamma$-ray emission in a (semi) model-independent way by removing the H~{\sc i}-related A5--A8 components from the model (including the Bremsstrahlung component). In addition, we remove the two point sources closest to the M31 disk (3FGL J0040.3+4049 and 3FGL J0049.0+4224), and we remove the new point sources that we find with our point source finding procedure, since most of these sources are found to correlate with the diffuse structures in the residuals (see Figure~\ref{fig:TS_map}). All other sources are held fixed to their best-fit values obtained in the baseline fit (with IC scaled). This effectively amounts to removing only the known smooth diffuse sources and point sources from the data, or equivalently, observing only the structured emission.
The resulting count residuals (data $-$ model) integrated between 1--100 GeV are shown in Figure \ref{fig:positive_residuals_full}. The color scale corresponds to counts/pixel, and the pixel size is $0.2^\circ \times 0.2^\circ$. The images are smoothed using a $1^\circ$ Gaussian kernel. This value roughly corresponds to the PSF (68\% containment angle) of \textit{Fermi}-LAT, which at 1 GeV is $\sim$$1^\circ$. The corresponding pixel distribution is shown in Figure~\ref{fig:map_detatils}. All of the pixels have positive counts, which is why we set the lower limit of the plot range to zero. Maps are shown in the cubehelix color scheme~\citep{green2011colour}. Contours for the disk regions of M31 and M33~\citep{gratier2010molecular} are overlaid. Bright emission corresponding to M31's inner galaxy can be observed. The emission can be seen to extend continuously along M31's major axis in the north-east\footnote{For M31-related directions, north points up, and east points to the left.} direction, which then continues to extend upward until blending with the bright emission of the MW plane. This feature is lopsided, as the south-west side shows a more distinct cutoff away from the inner galaxy. The large arc feature observed in the residuals is also clearly visible in the emission.
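A minimal sketch of this map-smoothing step is given below; the array contents, grid size, and the reading of the $1^{\circ}$ kernel width as the Gaussian $\sigma$ are illustrative assumptions, not the actual pipeline.
\begin{verbatim}
import numpy as np
from scipy.ndimage import gaussian_filter

# Hypothetical residual counts map (data - model), 1-100 GeV, on a grid of
# 0.2 deg x 0.2 deg pixels (140 x 140 pixels covers a 28 deg x 28 deg field);
# in the actual analysis this map comes from the fitted model.
pixel_size_deg = 0.2
residual_counts = np.random.poisson(lam=2.0, size=(140, 140)).astype(float)

# Smooth with a 1 deg Gaussian kernel, roughly the LAT 68% containment angle
# at 1 GeV; gaussian_filter takes sigma in pixels, so convert deg -> pixels.
sigma_pixels = 1.0 / pixel_size_deg
smoothed = gaussian_filter(residual_counts, sigma=sigma_pixels)
\end{verbatim}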
We have found that the M31-related components are roughly consistent with arising from DM annihilation. Since there is still a high level of uncertainty regarding the actual nature of DM, especially on galactic scales, we cannot rule out the possibility that the smooth residual emission may in fact have a DM origin. The same also applies for some of the structured emission in FM31. We, therefore, consider the main tracers of M31's outer disk and halo, since these are some of the few observational handles available when searching for a DM signal from the outer regions of the M31 system.
In Figure~\ref{fig:FM31_rich} we overlay the boundaries for the M31 inner galaxy (solid cyan circle) and spherical halo (dashed black circle) components. We also overlay the M31 disk, the M31 cloud \citep{blitz1999high,kerp2016survey}, M33, Wright's cloud \citep{wright1979tail}, M31's population of globular clusters \citep{galleti20042mass,huxor2008globular,peacock2010m31,Mackey:2010ix,veljanoski2014outer,huxor2014outer}, M31's population of satellite galaxies~\citep{McConnachie:2012vd,martin2013pandas,collins2013kinematic,Ibata:2013rh,pawlowski2013dwarf}, and clouds of Complex H \citep{hulsbosch1975studies,blitz1999high,Lockman:2003zs,Simon:2005vh}. The spherical halo component is found to enclose 61\% (22/36) of M31's dwarf galaxy population, which increases to 72\% (26/36) if including the dwarfs which are within $\sim$$1^\circ$ of the spherical halo boundary. We stress that this is only done as a qualitative gauge of M31's outer halo. We do not expect these systems to outshine the local MW emission. In particular, we do not expect to detect the individual M31 dwarfs, since they are mostly undetected in the MW. We also do not expect to detect the individual globular clusters. We do note, however, that we find features in the data that are positionally coincident with some of these tracers, and most prominently with the M31 cloud. Further investigation is left for a follow-up study.
\section{Summary, Discussion, and Conclusion} \label{sec:fianl}
The goal of this work is to search for extended $\gamma$-ray emission originating beyond the galactic disk of M31, and to examine the implications for CRs and DM. There are two primary motivations for this search. First, CR interactions with M31's circumgalactic medium and/or stellar halo could generate a detectable signal in $\gamma$-rays. Second, M31's DM halo has a large extension on the sky and could produce a detectable signal within currently allowed DM scenarios, which would be complementary to other targets, and specifically, the Galactic center. Our primary field of interest (FM31) is a $28^\circ \times 28^\circ$ square region, which amounts to a projected radius of $\sim$200 kpc from the center of M31. Our study complements previously published results on M31~\citep{Fermi-LAT:2010kib,ogelman2010discovery,Pshirkov:2015hda,Pshirkov:2016qhu,Ackermann:2017nya} and is the first to explore the farthest reaches of the M31 system in $\gamma$-rays.
Because of the extended nature of the signal we are investigating, modeling the bright foreground of the MW is the biggest challenge in performing this analysis. The IEM provided by the FSSC cannot be used as a primary foreground model for this study, as it \emph{is not} intended for the analysis of extended sources\textsuperscript{\ref{caveats}}
\citep{Acero:2016qlg}. We construct specialized interstellar emission models for the analysis of FM31 by employing the CR propagation code GALPROP, including a self-consistent determination of the isotropic component. Additionally, we use a template approach to account for inaccuracies in the foreground model relating to the neutral gas along the line of sight.
The parameters of the GALPROP model are tuned to the measured local interstellar spectra of CRs, including the latest AMS-02 measurements. We have adopted the best-fit parameters from the tuning procedure performed in~\citet{Boschini:2017fxq,Boschini:2018zdv}, where GALPROP and HelMod are implemented in an iterative manner, thereby accounting for solar modulation in a physically motivated way when fitting to the local CR measurements.
The total interstellar emission model consists of individual components for $\pi^0$-decay, IC, and Bremsstrahlung, and the components are defined in Galactocentric annuli. In total there are 8 annuli, but for FM31 only annulus 5 (the local annulus) and beyond contribute to the foreground emission. FM31 has significant emission associated with H~{\sc i}\ gas, but very little emission from H$_2${} gas. A uniform spin temperature of 150 K is assumed for the baseline IEM. The foreground emission from H~{\sc ii}\ and Bremsstrahlung is subdominant. Our model also accounts for the DNM. The anisotropic formalism is employed for the calculation of the IC component. To model the point sources in the region, we employ the 3FGL as a starting point, and because of the larger statistics of our data set, we account for additional point sources self-consistently with the M31 IEM by implementing a point source finding procedure, which is based on a wavelet transform algorithm.
We calculate the isotropic component self-consistently with the M31 IEM. The main calculation is performed over the full sky in the following region: $|b| \geq 30^\circ, \ 45^\circ \leq l \leq 315^\circ$. To better determine the normalization of the isotropic component we use a tuning region (TR) directly below FM31, outside of the virial radius. The best-fit normalization is found to be 1.06 $\pm$ 0.04, and this remains fixed for all other fits with the M31 IEM. The isotropic component anti-correlates with the IC components, and we also use the TR to initially constrain the normalizations of the IC components (A5 and A6-A7) for the fit in FM31. The fit in the TR yields a model that describes the data well across the entire region and at all energies. The best-fit normalizations of the IEM components in the TR are all in reasonable agreement with the GALPROP predictions.
For the initial baseline fit in FM31 we freely scale the normalizations of the H~{\sc i}\ and H$_2${} $\pi^0$-related components concurrently with the point sources. The normalizations of the isotropic and IC components (A5 and A6-A7) remain fixed to their best-fit values obtained in the TR. The top of FM31 has a minor contribution from IC A8, which is also freely scaled in the fit. Lastly, the H~{\sc ii}\ and Bremsstrahlung components remain fixed to their GALPROP predictions. Note that the Bremsstrahlung component has a normalization of 1.0 $\pm$ 0.6 in the TR, consistent with the GALPROP prediction.
The baseline fit in FM31 results in positive residual emission in the fractional count residuals between $\sim$3--20 GeV. The residual emission in this corresponding energy range is fairly smooth and extends over the entire field. The spatial residuals also show structured excesses and deficits, primarily at lower energies ($\sim$1--3 GeV). Because of this poor data-model agreement, additional freedom is given to the fit, including freely scaling the IC components in FM31 and rescaling the diffuse components in smaller subregions. The latter fit is performed in order to allow for any un-modeled spatial variation in the CR density, ISRF density, and/or spin temperature. We find that the general features of the residual emission persist even with these variations.
A significant fraction of the structured excess emission in FM31 is found to be spatially correlated with the H~{\sc i}\ column density and the foreground dust, including regions where the dust is relatively cold. This may be indicative of a spatially varying spin temperature, which is not properly accounted for by the rescaling in the smaller subregions. Correspondingly, the structured residual emission may be related to inaccuracies in the modeling of the DNM, which in general is determined as part of an all-sky procedure. A part of the shell of Loop III is also present in FM31, while Loops II and IIIs cover it completely. This may imply that some of the gas-related emission in the region is produced by a population of particles with a spectrum that is harder than that of the old CR population. Note that the H~{\sc i}\ $\pi^0$-related $\gamma$-ray component is dominant in FM31 for energies below $\sim$5 GeV.
We, therefore, refine the baseline IEM by constructing a template to account for potential mis-modeling of these components. The template is obtained by selecting the excess emission in FM31 that correlates with H~{\sc i}\ tracers. We refer to this as the arc template. This procedure accounts for any un-modeled H~{\sc i}\ (or other Galactic gas), as well as any mis-modeling in its line of sight distance, spin temperature, and spectral index variations.
We find that the specialized IEMs for the analysis of FM31, both the baseline model and the baseline model with the arc template, yield an extended excess at the level of $\sim$3--5\% in the $\sim$3--20 GeV energy range. We have also tested a number of additional systematic variations to the fit. With the M31 IEM we allowed for additional freedom by varying the index of the IC components and the H~{\sc i}-related components using a PL scaling. The fit was also performed with two alternative IEMs, namely, the IG and FSSC IEMs. Each alternative IEM has its own self-consistently derived isotropic component and additional point sources. In addition, we tested systematic variations to the spectra of 3FGL sources (although the point sources are not a major uncertainty for this analysis). In total we perform 9 main variations of the fit (see Figure~\ref{fig:residuals_all}), using 3 different IEMs (although all IEMs share similar underlying H~{\sc i}\ maps). The excess is observed for all of the physically motivated IEMs intended for extended source analysis.
Using our benchmark model (the M31 IEM) we have demonstrated that the excess is robust against the systematic studies of the MW foreground emission that we have considered, and that it significantly decreases outside of FM31 (as evidenced by the lack of a similar excess in the TR). This indicates that the excess originates at least partially from outside of the MW and it is significant towards M31. However, we do not rule out the possibility that the signal may also include a MW component, as discussed below.
We note that apart from the structured residual emission correlated with the foreground gas and dust, which is accounted for with the arc template, other structured excesses and deficits in FM31 are found to be correlated with the major axis of the M31 disk. Likewise, a portion of the H~{\sc i}\ column densities in the outer Galaxy (A6 and A7) is found to be correlated with M31's major axis as well. This is an indication that some of the gas which is currently assigned to the MW may actually reside in the M31 system, as was also pointed out in~\citet{Ackermann:2017nya}. This will be fully addressed in a forthcoming work.
A component of the residual emission in FM31 is observed to be positionally coincident with the projected position of M33, and a portion of this emission may have an actual physical association; however, further investigation has been left for future studies. Aside from the structured excesses and deficits, which are observed primarily in the lower energy range ($\sim$1--3 GeV), the majority of the excess emission is roughly uniformly distributed across FM31, corresponding to the positive residual emission observed in the fractional count residuals between $\sim$3--20 GeV.
To determine whether the excess presents a spherically symmetric gradient about the center of M31, which would lend support to the hypothesis that it originates from there, we perform a further fit in FM31 by including three symmetric uniform templates centered at M31. This also allows us to quantify the spectrum and gradient of the positive residual emission. The templates are fit concurrently with the other components of the baseline IEM, including the arc template.
The inner disk (inner galaxy) has a radial extension of 0.4$^\circ$ (5.5 kpc projected radius). This is the best-fit morphology as determined in~\citet{Ackermann:2017nya}, and it corresponds to the bright $\gamma$-ray emission towards M31's inner galaxy. The intermediate ring (spherical halo) has a radial extension from $0.4^\circ < r \leq 8.5^\circ$ (117 kpc projected radius). This extension excludes most of the residual emission associated with the arc template, while also enclosing a majority of M31's globular cluster population and stellar halo, as well as the M31 cloud. The outer ring (far outer halo) covers the remaining extent of FM31, corresponding to a total projected radius of $\sim$200 kpc, and likewise it begins to approach the MW plane towards the top of the field. We find that all templates are significantly detected (with a significance of $\geq 5 \sigma$). Furthermore, the M31-related components are able to flatten the positive residual emission in the fractional count residuals.
For the fit with the arc template and M31-related components, the best-fit normalizations of the IEM components are overall in good agreement with the GALPROP predictions, and they also agree with the best-fit normalizations obtained for the all-sky fit in the determination of the isotropic component. The total integrated flux for the H~{\sc i}\ A5 component plus the arc north and south components is 185.6 $\pm$ 12.9 ph cm$^{-2}$ s$^{-1}$, consistent with that of the baseline fit (with IC scaled). In turn, the corresponding local average emissivity is consistent with the measurements made in \citet{abdo2009fermi}, \citet{ackermann2012fermi}, and \citet{Casandjian:2015hja}.
The normalization of the H~{\sc i}\ A6 component is consistent with the GALPROP prediction. The normalization of the H~{\sc i}\ A7 component is a bit high at 2.8 $\pm$ 0.4 (as for all fits in FM31), but this component may contain a fraction of gas that actually resides in the M31 system, as was already discussed, and will be further discussed below. The normalizations of the IC A5 and A6-A7 components are consistent with the all-sky average obtained in the isotropic calculation (Table~\ref{tab:norm_isotropic}). The normalization of the IC A8 component is high, which is true for all fits in FM31, but this component is subdominant and only contributes along the top of the field, corresponding to the Galactic plane.
The spectrum and intensity for the inner galaxy are consistent with previously published results. We note however that the spectrum derived between 1--100 GeV is softer than that derived between 300 MeV -- 300 GeV (although consistent within errors). This is due to the energy range used for the calculation. The spherical halo and far outer halo have intensities that are much dimmer than the inner galaxy, and present a mild intensity gradient, tapering off with distance from the center of M31. Their spectra are significantly different from all the other extended components in FM31. They peak between $\sim$5--10 GeV, and drop off below and above these energies more steeply than all other contributions. We find it difficult to reconcile these spectra with the possibility that the excess emission originates solely within the MW, further setting it apart from known Galactic sources. Beyond these general features, the spectra for the two outer annuli differ from each other with the far outer halo presenting a harder spectrum at low energies.
To further test the symmetry of the residual emission in FM31, we also perform a fit in which we divide the spherical halo and far outer halo templates into north and south components, allowing the spectral parameters of each component to vary independently (although all components are fit simultaneously). The cut is made at the midpoint of FM31 along the horizontal direction (parallel to the Galactic plane), corresponding to a latitude of $-21.5^\circ$. The fit is otherwise performed just as for the fit with the full M31-related templates (including the arc north and south). We find that all components are significantly detected (with a significance $>5\sigma$). The results for this test further demonstrate the importance of the MW modeling and that the excess is likely to have a significant MW component. In particular, the emission associated with the far outer halo is more likely to be related to the MW than the M31 system. Even so, the nature of this emission remains unclear.
Given the approximately uniform spatial distribution of the excess emission (as most clearly indicated by the fit with the full M31-related templates), understanding its interplay with the isotropic component is crucial. We have investigated this issue and concluded that the excess emission is robust within the systematic uncertainties in the isotropic component we have considered. Our treatment of the isotropic component can primarily be found in Section~\ref{sec:tuning}, Figure~\ref{fig:Isotropic_Sytematics}, Appendix~\ref{sec:IG_IEMs}, and Appendix~\ref{sec:FSSC_IEM}. We note, however, that the isotropic emission has a bump-like feature in the energy range that somewhat overlaps with the peak in the spectrum of the M31-related components (as is most clearly seen in Figure~\ref{fig:Isotropic_Sytematics}). This suggests that the isotropic emission may include a component that originates from similar processes in the extended halo of the MW.
These results show that if the excess emission originates from the M31 system (at least partially), its extension reaches a distance upwards of $\sim$120--200 kpc from the center of M31. This is consistent with the expectation for a DM signal, as the virial radius for the DM halo extends at least this far. To test this interpretation, we compare these results with the predictions for a DM signal that originates from the M31 halo, with a spectrum and annihilation cross-section consistent with a DM interpretation of the GC excess. We also consider the contribution from the MW's DM halo along the line of sight, since this component has not been explicitly accounted for in this analysis. If such a component actually exists, then it may be at least partially embedded in the isotropic component, as well as the other components of the IEM, but it will not necessarily be fully absorbed. Note that in general there is also expected to be some contribution from the local DM filament between M31 and the MW.
We consider different assumptions for the amount of DM substructure in M31 (and the MW), and we find that the observed excess emission is consistent with a cold DM scenario that includes a large boost factor due to substructures. However, the exact partitioning of the individual contributions to the signal remains unclear, i.e.\ primary emission from M31's DM halo, secondary emission in M31, emission from the local DM filament between M31 and the MW, and emission from the MW's DM halo along the line of sight.
This is an intriguing finding; however, its implications are far-reaching, and a better understanding of the MW foreground is crucial before drawing any stronger conclusions. Another crucial aspect is complementarity with other DM targets. Although these results are consistent with other observations in $\gamma$-rays, namely the GC excess and the constraints from dwarf spheroidal galaxies, they imply that a large boost factor from substructures would contribute to a DM signal from the MW halo. As already stated, this contribution has not been accounted for in this analysis and might be at least partially embedded in the isotropic component as well as other components of the MW foreground. Likewise, the M31-related components might contain some contribution from the MW DM halo along the line of sight, as well as some contribution from the local DM filament between M31 and the MW. From our substructure calculations we estimate that the intensity of a MW DM contribution in FM31 may be on the order of $\sim$1--10\% of the isotropic intensity. Investigating this possibility in more detail requires a dedicated analysis which is beyond the scope of this work.
The CR halo of M31 might extend tens to hundreds of kpc from the center of M31. It is possible that some of the emission in FM31 results from CR interactions with the ionized gas of M31's circumgalactic medium and/or stellar halo, which also extend well beyond the galactic disk. However, based on the radial extent, spectral shape, and intensity of the M31-related components, it is seemingly unlikely that the corresponding emission is dominated by these types of CR interactions.
We have also investigated the structured residual emission in FM31, as well as the emission correlated with the H~{\sc i}\ $\gamma$-ray maps, and compared them to different tracers of M31's outer disk and halo. These tracers include the M31 cloud, and M31's populations of globular clusters and satellite galaxies. We find features in the data that are positionally coincident with some of these tracers, and most prominently with the M31 cloud. This is a further indication that some of the structured emission observed in FM31 originates from M31 rather than the MW. This in turn implies that the total $\gamma$-ray emission from the M31 system extends well beyond the inner regions of the galactic disk. The M31 system is very rich, and further analysis of these findings is beyond the scope of this paper. Our primary focus in this analysis is the more significant smoother component of the signal.
In summary, we present the first search for extended emission from M31 in $\gamma$-rays out to a distance of $\sim$200 kpc from its center. We find evidence for an extended excess that appears to be distinct from the conventional MW foreground, having a total radial extension upwards of 120--200 kpc from the center of M31. We discuss plausible interpretations for the excess emission but emphasize that uncertainties in the MW foreground, and in particular modeling of the H~{\sc i}-related components, have not been fully explored and may impact the results. The results also have a close link with the isotropic component (and likewise the IC components), which may be inevitable considering the nature of the signal under investigation. We find that a DM interpretation provides a good description of the observed emission and is consistent with the GC excess DM interpretation. However, a better understanding of the systematics, and complementarity with other DM searches, as discussed in the paper, are critical to settling the issue.
\section*{Acknowledgements}
The authors thank Tsunefumi Mizuno, Gulli J\'ohannesson, Alex Drlica-Wagner, and Troy Porter for many useful comments made at the preparation stage of the manuscript. The authors are also pleased to acknowledge conversations with Ketron Mitchell-Wynne, Sean Fillingham, Tim Tait, Philip Tanedo, Mike Cooper, James Bullock, Manoj Kaplinghat, Kevork N. Abazajian, Sebastian Trojanowski, Ferdinand Badescu, Volodymyr Takhistov, Deano Farinella, and Dan Hooper. A majority of the data analysis has been performed on UCI's HPC, and CK thanks Harry Mangalam for his assistance on numerous occasions. CK also thanks James Chiang for his assistance with the Fermi Science Tools. The work of CK and SM is supported in part by Department of Energy grant DESC0014431. SSC is supported by National Science Foundation Grant PHY-1620638 and a McCue Fellowship. IM acknowledges partial support from NASA grant NNX17AB48G.
The \textit{Fermi}-LAT Collaboration acknowledges generous ongoing support from a number of agencies and institutes that have supported both the development and the operation of the LAT as well as scientific data analysis. These include the National Aeronautics and Space Administration and the Department of Energy in the United States; the Commissariat \`a l'Energie Atomique and the Centre National de la Recherche Scientifique/Institut National de Physique Nucl\'eaire et de Physique des Particules in France; the Agenzia Spaziale Italiana and the Istituto Nazionale di Fisica Nucleare in Italy; the Ministry of Education, Culture, Sports, Science, and Technology (MEXT); the High Energy Accelerator Research Organization (KEK) and the Japan Aerospace Exploration Agency (JAXA) in Japan; and the K.~A.~Wallenberg Foundation, the Swedish Research Council, and the Swedish National Space Board in Sweden.
Additional support for science analysis during the operations phase is gratefully acknowledged from the Istituto Nazionale di Astrofisica in Italy and the Centre National d'\'Etudes Spatiales in France. This work was performed in part under DOE Contract DE-AC02-76SF00515.
\section{Introduction}
The seemingly unrelated regression (SUR) model consists of a system of linear
multiple regression equations such that each equation has a different
continuous dependent variable with a potentially different set of exogenous
explanatory variables (covariates) and the errors are correlated across
equations \citep{Zellner-1962}. When the conditions of the SUR model apply,
estimators obtained from SUR are more efficient relative to ordinary least
squares estimators. The optimality feature and other theoretical properties
of the SUR estimator within the frequentist framework are well studied in
\citet{Srivastava-Dwivedi-1979}, \citet{Srivastava-Giles-1987} and
\citet{Fiebig-2001}. The Bayesian approach to estimating the SUR model was
introduced in \citet{Zellner-1971}, where the author analytically derived the
conditional posterior densities of the parameters. Given the conditional
posteriors, the model can then be estimated using a Markov chain Monte Carlo
(MCMC) technique, known as Gibbs sampling \citep{Geman-Geman-1984,
Casella-George-1992}. Since the introduction in \citet{Zellner-1971}, the
literature on Bayesian analysis of SUR has grown considerably in various
directions, including estimation \textit{via} MCMC
\citep{Percy-1992,Griffiths-Chotikapanich-1997,Griffiths-Valenzuela-2006} and
the direct Monte Carlo approach \citep{Zellner-Ando-2010,Ando-Zellner-2010},
prediction in the SUR model \citep{Percy-1992}, and several model extensions
that include restricted SUR \citep{Steel-1992}, SUR with serially correlated
errors and time-varying parameters \citep{Chib-Greenberg-1995}, and
semiparametric inference in the SUR model \citep{Koop-etal-2005}.
The existing literature on SUR models, including the articles cited above,
has been based on the assumption that the covariates are measured correctly.
Nonetheless, in practice there can emerge situations where one or more of the
covariates are recorded with error, thus giving rise to SUR with measurement
error (hereafter SURME). Modeling measurement error within a SUR structure,
or more generally in a multi-equation system, has largely gone unnoticed in
the literature (both frequentist and Bayesian); the only exceptions are
\citet{Carroll-etal-Book-2006,Carroll-etal-2006}, discussed in the next
paragraph. In contrast, there has been considerable work on single-equation
models with measurement error. Within a linear regression framework, it is
well known that measurement error in the data leads to bias and inconsistency
in the ordinary least squares (OLS) estimator (see for instance
\citet{Cheng-VanNess-1999}, \citet{Fuller-1987},
\citet{Wansbeek-Meijer-2000}, \citet{Rao-etal-2008} and
\citet{Hu-Wansbeek-2017}). To achieve consistency of the OLS estimator, side
assumptions are required, such as known
measurement error variance or known \textit{reliability ratio}.\footnote{If
$w$ and $z$ are two random variables such that $w=z+u$ and the error $u$ is
independent of $z$, then the reliability ratio $R_{z}$ is defined as the true
variance divided by the total variance, \textit{i.e.},
$R_{z}=Var(z)/(Var(z)+Var(u))$. By definition $0\leq R_{z}\leq 1$.} However,
a consistent estimator of the regression parameters can be constructed
without the side assumptions when replicated observations on the mismeasured
covariates are available \citep{Shalabh-2003}. Measurement error in nonlinear
models is discussed in
\citet{Carroll-etal-2006} along with the Bayesian analysis of linear and
non-linear measurement error models.
Within the multi-equation framework, \citet{Carroll-etal-2006} consider a
combination of linear mixed measurement error model and SUR model to
understand the properties of measurement error in food frequency
questionnaire data for protein and energy. They adopt the frequentist
estimation approach and use an adaptive method based on the weighted Akaike
information criterion (AIC) to select the best fitting model, a form of model
averaging which is popular in the Bayesian literature.
\citet{Carroll-etal-2006} find that a fully parameterized model in which
measurement errors in the two nutrients are modeled jointly offers no gain
in efficiency compared to fitting each model separately. However, when some
parameters are set to zero resulting in a reduced model, considerable gains
in efficiency are attained. We may adopt the frequentist approach to
estimating the SURME model with \emph{structural} measurement error, but it
is fraught with difficulty because the number of parameters becomes larger
than the number of normal equations derived from the likelihood function. In
such
cases, side assumptions can be used to identify the model as done in linear
regression, but even then deriving the maximum likelihood estimators for
the SURME model is a challenging task. Besides, ignoring measurement error in the
data can lead to a poor model fit.
In this paper, we introduce two novel methods---a pure Bayesian algorithm and
a mean field variational Bayes (MFVB) technique---to estimate the SURME model
where each equation can potentially have a different covariate that is
measured with error. Both approaches employ a classical structural form of
measurement error and the link between the covariate measured with error and
the other covariates (with no measurement error) is modeled through an
exposure equation. Identification of parameters is achieved by placing a
prior distribution on the measurement error variance. The pure Bayesian
approach is analytically simpler and produces tractable conditional
distributions, which enables the use of Gibbs sampling. However, the MCMC
draws of the parameters corresponding to the covariate measured with error
tend to be highly correlated. To reduce autocorrelation in the MCMC draws,
one may consider \emph{thinning}, i.e., using every $l$-th draw when
estimating the parameters. The merits of thinning are debated: while some
authors, such as \citet{Owen-2017}, recommend it, others, such as
\citet{Link-Eaton-2012}, advise against its use. We therefore explore other
methods and arrive at a more elegant solution to the problem of high
autocorrelation, namely, the MFVB approach to estimating the SURME model.
We illustrate both techniques in multiple simulation studies and compare
the results to a standard SUR model, where we ignore or do not model the
measurement error. In the first set of simulation studies, data are generated
from a SURME model using different values of the variance of the true
unobserved variable, while holding the reliability ratio fixed. In the second
set of simulations, data are generated using different values of reliability
ratio, holding the variance of the true unobserved variable at a fixed value.
The results suggest that both the proposed methods perform well and correctly
highlight the importance of modeling measurement error within the SUR
structure when variables are measured with error. In addition, the SURME
model is implemented in an application drawn from the health literature and
estimated using the two proposed methods. Specifically, weight and high
density lipoprotein (\emph{HDL}) are jointly modeled as a function of several
covariates and blood pressure, which is common to both equations and
considered to have measurement error. Blood pressure is modeled as a function
of the covariates in the exposure equation. Model selection exemplifies the
practical utility of the SURME model compared to the standard SUR model.
The remainder of the paper is organized as follows. Section 2 presents the
SURME model, derives the joint posterior density and proposes a Gibbs
sampling algorithm to estimate the model. Section 3 develops the MFVB
approximation of the MCMC algorithm. Section 4 demonstrates the two
algorithms in several Monte Carlo simulation exercises and Section 5 presents
an application drawn from the health literature. Section 6 concludes.
\section{The SURME Model and Estimation via Gibbs sampling}
The seemingly unrelated regression with measurement error (SURME) model
incorporates measurement error for covariates in the SUR model and can be
expressed in terms of the following equations,
\begin{equation}
y_{mi}=x_{mi}^{\prime }\beta _{m}+z_{mi}\gamma _{m}+\varepsilon _{mi},
\qquad m=1,...,M; \; i=1,...,N, \label{sec2:eq_1}
\end{equation}
where the response $y_{mi}$ is a scalar, $x_{mi}^{\prime }$ is $\left(
1\times k_{m}\right) $ vector of covariates, $z_{mi}$ is a true unobserved
scalar covariate that is prone to measurement error, and the subscripts $m$
and $i$ denote the equation number and individual/observation, respectively.
Stacking the equations for each $i$, we can write model (\ref{sec2:eq_1}) as
follows,
\begin{equation}
y_{i}=X_{i}\beta +Z_{i}\gamma +\varepsilon _{i}, \qquad i=1,...,N,
\label{sec2:eq_2}
\end{equation}
where $y_{i}=\left( y_{1i},...,y_{Mi}\right) ^{\prime }$ and $\gamma =\left(
\gamma _{1},...,\gamma _{M}\right) ^{\prime }$ are vectors of dimension
$\left( M\times 1\right)$, and $\beta =\left( \beta _{1},...,\beta
_{M}\right) ^{\prime }$ is of dimension $(K \times 1)$, where $K =
k_{1}+\cdots + k_{M}$. The matrices,
\begin{equation*}
X_{i}=\left(
\begin{array}{ccc}
x_{1i}^{\prime } & \cdots & 0 \\
 & \ddots &  \\
0 & \cdots & x_{Mi}^{\prime }
\end{array}
\right)
\qquad \mathrm{and} \qquad
Z_{i}=\left(
\begin{array}{ccc}
z_{1i} & \cdots & 0 \\
 & \ddots &  \\
0 & \cdots & z_{Mi}
\end{array}
\right),
\end{equation*}
are of dimension $(M \times K)$ and $(M \times M)$, respectively. In
addition, the error $\varepsilon_{i}$ is assumed to be independently and
identically distributed (\emph{i.i.d.}) as a normal distribution, i.e.,
$\varepsilon_{i} \sim N(0, \Sigma_{\varepsilon})$ for $i=1,\cdots,N$, where
the covariance,
\begin{displaymath}
\Sigma_{\varepsilon} = \left( \begin{array}{ccc}
\sigma_{11} & \cdots & \sigma_{1M} \\
\vdots & \ddots & \vdots \\
\sigma_{M1} & \cdots & \sigma_{MM}
\end{array} \right),
\end{displaymath}
is a symmetric matrix that permits nonzero correlation across equations (or
first subscript) for any given individual (or second subscript) and ties each
independent regression into a system of equations, hence the phrase seemingly
unrelated regression. Measurement error in reference to model
(\ref{sec2:eq_2}) arises because $Z_{i}$ is not observed; instead, we observe
$W_{i}$, which is the sum of the true unobserved quantity $Z_{i}$ and a
measurement error term $u_{i}$. This definition implies a \emph{classical
measurement error} \citep{Fuller-1987}. Additionally, we assume that the true
unobserved quantity $Z_{i}$ follows a distribution, so that the measurement
error model is of the \emph{structural form}. This can be represented as
follows,
\begin{equation}
\widetilde{W}_{i}=\widetilde{Z}_{i}+\widetilde{u}_{i}, \qquad
\widetilde{u}_{i}\sim N_{M}\left( 0,\sigma _{u}^{2}I_{M}\right), \qquad
\textit{classical structural form}, \label{sec2:eq_3}
\end{equation}
where for algebraic simplification, we use the notations
$\widetilde{W}_{i}=\left( w_{1i},...,w_{Mi}\right) ^{\prime }$,
$\widetilde{Z}_{i}=\left( z_{1i},...,z_{Mi}\right) ^{\prime }$,
$\widetilde{u}_{i}=\left( u_{1i},...,u_{Mi}\right) ^{\prime }$, then
$W_{i}=diag(\widetilde{W}_{i})$, $ Z_{i}=diag(\widetilde{Z}_{i})$,
$u_{i}=diag(\widetilde{u}_{i})$ are $\left( M\times M\right) $ diagonal
matrices and $I_{M}$ is a ($M \times M$) identity matrix.
An interesting addition to equation~\eqref{sec2:eq_3} is to relate the
primary explanatory variable of interest (here $Z_{i}$) to other covariates
($X_{i}$), giving rise to the \emph{exposure model}. The term ``exposure
model'' comes from epidemiology, where the primary explanatory variable is
affected by exposure to ``toxicants'' or ``risk factors''. Therefore, the
potential links between the latent variable $Z$ and the other covariates $X$
can be expressed as follows,
\begin{equation}
\widetilde{Z}_{i}= X_{i} \omega + \widetilde{\varepsilon}_{z,i}, \qquad
\widetilde{\varepsilon}_{z,i} \sim N_{M}\left( 0, \sigma _{Z}^{2}I_{M} \right),
\qquad \textit{exposure model}. \label{sec2:eq_4}
\end{equation}
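As a concrete illustration of this data-generating process, the following minimal Python sketch simulates synthetic data from equations~\eqref{sec2:eq_2}--\eqref{sec2:eq_4}; all dimensions and parameter values are illustrative assumptions only.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
N, M, k = 500, 2, 3              # observations, equations, covariates/equation
K = M * k                        # total number of covariates

beta  = rng.normal(size=K)       # stacked (beta_1', ..., beta_M')'
gamma = np.array([0.8, -0.5])    # coefficients on the mismeasured covariate
omega = rng.normal(size=K)       # exposure-equation coefficients
Sigma_eps = np.array([[1.0, 0.5],
                      [0.5, 1.0]])   # cross-equation error covariance
s2z, s2u = 1.0, 0.25             # variances of true covariate and meas. error

y = np.empty((N, M)); W = np.empty((N, M)); Z = np.empty((N, M))
X = np.zeros((N, M, K))          # block-diagonal X_i of Eq. (2)
for i in range(N):
    for m in range(M):
        X[i, m, m * k:(m + 1) * k] = rng.normal(size=k)
    Z[i] = X[i] @ omega + rng.normal(scale=np.sqrt(s2z), size=M)  # Eq. (4)
    W[i] = Z[i] + rng.normal(scale=np.sqrt(s2u), size=M)          # Eq. (3)
    eps  = rng.multivariate_normal(np.zeros(M), Sigma_eps)
    y[i] = X[i] @ beta + Z[i] * gamma + eps                       # Eq. (2)
\end{verbatim}
Here the diagonal matrix $Z_{i}$ is stored as the vector $\widetilde{Z}_{i}$, so the product $Z_{i}\gamma$ becomes the element-wise product \texttt{Z[i] * gamma}.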
The three equations~\eqref{sec2:eq_2}, \eqref{sec2:eq_3} and
\eqref{sec2:eq_4} together define our SURME model and the resulting
likelihood is derived as follows,
\begin{align}
\begin{split}
& f(y,W,Z|X,\Delta) = \prod_{i=1}^{N} f(y_{i}, W_{i}, Z_{i}|X, \Delta )\\
& = \prod_{i=1}^{N} \bigg\{ f(y_{i}| W_{i}, Z_{i},X, \Delta ) \times
f(W_{i}|Z_{i}, X,\Delta) \times f(Z_{i}|X,\Delta) \bigg\}\\
& = \prod_{i=1}^{N} \bigg\{ f(y_{i}| Z_{i},X, \Delta ) \times
f(W_{i}|Z_{i}, X,\Delta) \times f(Z_{i}|X,\Delta) \bigg\}\\
& = \prod_{i=1}^{N} \bigg\{ (2\pi)^{-M/2} \, |\Sigma_{\varepsilon}|^{-1/2}
\exp\Big[-\frac{1}{2} (y_{i} - X_{i}\beta - Z_{i}\gamma)'
\Sigma_{\varepsilon}^{-1}(y_{i} - X_{i}\beta - Z_{i}\gamma) \Big] \\
& \hspace{0.53in} \times (2\pi)^{-M/2} \, (\sigma_{u}^{2})^{-M/2}
\exp\Big[-\frac{1}{2 \sigma_{u}^{2}} (\widetilde{W}_{i} - \widetilde{Z}_{i})'
(\widetilde{W}_{i} - \widetilde{Z}_{i}) \Big] \\
& \hspace{0.53in} \times (2\pi)^{-M/2} \, (\sigma_{Z}^{2})^{-M/2}
\exp\Big[-\frac{1}{2 \sigma_{Z}^{2}} (\widetilde{Z}_{i} - X_{i}\omega)'
(\widetilde{Z}_{i} - X_{i}\omega) \Big] \bigg\},
\end{split}
\label{sec2:eq_5}
\end{align}
where $\Delta \equiv (\beta, \gamma, \Sigma_{\varepsilon}, \omega,
\sigma_{Z}^{2},\sigma_{u}^{2})$ and as mentioned earlier, $\widetilde{W}_{i}$
and $\widetilde{Z}_{i}$ are column vectors that contain the diagonal elements
of the matrices $W_{i}$ and $Z_{i}$, respectively.
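For concreteness, the complete-data likelihood in (\ref{sec2:eq_5}) can be evaluated on the log scale as in the following sketch, which reuses the array shapes of the simulation snippet above; it is an illustration only and not part of the estimation algorithm.
\begin{verbatim}
import numpy as np
from scipy.stats import multivariate_normal, norm

def surme_loglik(y, W, Z, X, beta, gamma, Sigma_eps, omega, s2z, s2u):
    """Complete-data log-likelihood of Eq. (5), summed over observations."""
    M = y.shape[1]
    mean_y = X @ beta + Z * gamma                    # X_i beta + Z_i gamma
    ll = multivariate_normal.logpdf(y - mean_y, mean=np.zeros(M),
                                    cov=Sigma_eps).sum()
    ll += norm.logpdf(W, loc=Z, scale=np.sqrt(s2u)).sum()          # f(W | Z)
    ll += norm.logpdf(Z, loc=X @ omega, scale=np.sqrt(s2z)).sum()  # f(Z | X)
    return ll
\end{verbatim}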
Before proceeding with estimation, we add a few words on identification
issues that typically arise with measurement error models. In linear
regression with measurement error, identification of the parameters requires
additional assumptions. Such assumptions can be a constant measurement error
variance, a known reliability ratio, or other conditions as presented in
\citet{Cheng-VanNess-1999}. The same identification conditions are also
applicable to the proposed SURME model under the existing distributional
assumptions. Nonetheless, we follow a purely Bayesian approach and employ
prior distributions to identify the parameters of the model \citep[see][Chap.
5]{Zellner-1971}.
The Bayesian estimation method combines the likelihood of the model with
suitable prior distributions to obtain the joint posterior distribution. We
utilize the following prior distributions:
\begin{equation}
\begin{split}
& \beta \sim N_{K} \left( \beta_{0},B_{0}\right), \quad
\gamma \sim N_{M}\left( \gamma_{0}, G_{0}\right), \quad
\Sigma_{\varepsilon}^{-1} \sim W_{M} \left( \nu_{0}, S_{0}\right), \\
& \omega \sim N_{K}\left( \omega _{0},O_{0}\right), \quad
\sigma_{Z}^{2} \sim IG\left( \delta _{1},\delta _{2}\right), \quad
\sigma_{u}^{2} \sim IG\left( \delta _{3},\delta _{4}\right),
\end{split}
\label{sec2:eq_6}
\end{equation}
where $W_{M}$ denotes a Wishart distribution of dimension $M$ and $IG$
denotes an inverse gamma distribution. Here we note that if one is not
interested in the exposure equation, it can be dropped from the model. In
such a case, $\tilde{Z}_{i} \sim N(\mu, \sigma_{Z}^{2} I_{M})$ and $\mu$ can
be given a normal prior as $\mu \sim N(\mu_{0}, \sigma_{\mu}^{2} I_{M})$.
Coming back to the SURME model, the joint posterior distribution can be
obtained by combining the likelihood \eqref{sec2:eq_5} with the prior
distributions \eqref{sec2:eq_6} as follows,
\begin{allowdisplaybreaks}
\begin{equation}
\begin{split}
p\left(\Delta,Z \vert y, X, W\right) & \propto
\prod_{i=1}^{N} \Bigg\{ \left\vert
\Sigma_{\varepsilon}\right\vert^{-1/2} \exp \left[
-\frac{1}{2}\left( y_{i}-X_{i}\beta -Z_{i}\gamma \right)^{\prime}
\Sigma_{\varepsilon}^{-1}\left(y_{i} - X_{i}\beta - Z_{i}\gamma
\right) \right] \\
& \quad \times \left( \sigma_{u}^{2}\right)^{-M/2}\exp
\left[ -\frac{1}{2\sigma_{u}^{2}}\left( \widetilde{W}_{i} -
\widetilde{Z}_{i}\right)^{\prime}\left( \widetilde{W}_{i}-\widetilde{Z}_{i}\right)
\right] \\
& \quad \times \left( \sigma_{Z}^{2}\right)^{-M/2}\exp
\left[ -\frac{1}{2\sigma_{Z}^{2}}\left( \widetilde{Z}_{i}- X_{i}
\omega \right)^{\prime} \left( \widetilde{Z}_{i} - X_{i} \omega \right)
\right] \Bigg\} \\
& \quad \times \left\vert
B_0\right\vert^{-1/2} \exp \left[
-\frac{1}{2}\left( \beta -\beta_{0} \right)^{\prime}
B_{0}^{-1}\left(\beta - \beta_{0}\right) \right] \\
& \quad \times \left\vert
G_0\right\vert^{-1/2} \exp \left[
-\frac{1}{2}\left( \gamma -\gamma_{0} \right)^{\prime}
G_{0}^{-1}\left(\gamma - \gamma_{0}\right) \right] \\
& \quad \times \left\vert
O_0\right\vert^{-1/2} \exp \left[
-\frac{1}{2}\left( \omega -\omega_{0} \right)^{\prime}
O_{0}^{-1}\left(\omega - \omega_{0}\right) \right] \\
& \quad \times \left\vert
\Sigma_{\varepsilon}\right\vert^{-\frac{\nu_0 -M-1}{2}} \exp \left[
-\frac{1}{2} \text{tr} \left( S_{0}^{-1} \Sigma^{-1}_{\varepsilon} \right) \right] \\
& \quad \times \left( \sigma_{Z}^{2}\right)^{-\delta_1 - 1}\exp
\left[ -\frac{\delta_2}{ \sigma_{Z}^{2}} \right] \times \left( \sigma_{u}^{2}\right)^{-\delta_3 - 1}\exp
\left[ -\frac{\delta_4}{ \sigma_{u}^{2}} \right].
\end{split}
\label{sec2:eq_7}
\end{equation}
\end{allowdisplaybreaks}
As is typical with the Bayesian approach, the joint posterior density
(\ref{sec2:eq_7}) is not tractable, and the parameters are sampled using MCMC
techniques. To this purpose, the conditional posterior densities of the
parameters are derived (see Appendix A in the supplementary material) and
Gibbs sampling is employed to estimate the model as exhibited in
Algorithm~\ref{alg:algorithm1}. Note that some of the conditional posteriors
are conditioned on a subset of parameters, but these are full conditionals
that just do not depend on the full set of parameters. Conditional posteriors
that depend on a subset of parameters have also been referred to as reduced
conditional posteriors and Gibbs sampling as partially collapsed Gibbs
sampling \citep[see][]{Liu-1994,vanDyk-Park-2008}.
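For example, the $\beta$ update in Step 1 of Algorithm~\ref{alg:algorithm1} follows from collecting the terms in (\ref{sec2:eq_7}) that involve $\beta$ and completing the square. Writing $y_{i}^{\ast}=y_{i}-Z_{i}\gamma$, we have
\begin{equation*}
p(\beta|\cdot) \propto \exp \left[ -\frac{1}{2} \sum_{i=1}^{N} \left( y_{i}^{\ast}-X_{i}\beta \right)^{\prime} \Sigma_{\varepsilon}^{-1} \left( y_{i}^{\ast}-X_{i}\beta \right) -\frac{1}{2} \left( \beta -\beta_{0} \right)^{\prime} B_{0}^{-1} \left( \beta -\beta_{0} \right) \right] \propto \exp \left[ -\frac{1}{2} \left( \beta -\overline{\beta} \right)^{\prime} B_{1}^{-1} \left( \beta -\overline{\beta} \right) \right],
\end{equation*}
with $B_{1}$ and $\overline{\beta}$ as defined in Step 1; the conditionals for $\gamma$, $\omega$, $\Sigma_{\varepsilon}^{-1}$, $\sigma_{Z}^{2}$ and $\sigma_{u}^{2}$ follow from analogous calculations.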
\begin{table*}[!t]
\begin{algorithm}[Gibbs sampling for SURME model]
\label{alg:algorithm1} \rule{\textwidth}{0.5pt}
\begin{enumerate}
\item Sample $\beta|\gamma, \Sigma_{\varepsilon},Z,y \sim N_{K}\left( \overline{\beta },B_{1}\right)$,
where,
\newline
$B_{1}^{-1}=\left[ \displaystyle \sum_{i=1}^{N} X'_{i} \Sigma _{\varepsilon
}^{-1}X_{i}+B_{0}^{-1}\right] $, $\overline{\beta }=B_{1}\left[
\displaystyle \sum_{i=1}^{N} X_{i}^{\prime }\Sigma _{\varepsilon
}^{-1}y_{i}^{\ast }+B_{0}^{-1}\beta _{0}\right]$, and $y_{i}^{\ast
}=y_{i}-Z_{i}\gamma $.
\item Sample $\gamma|\beta,\Sigma_{\varepsilon},Z,y \sim
N_{M}\left( \overline{\gamma },G_{1}\right) $,
where, \newline
$G_{1}^{-1}=\left[ \displaystyle \sum_{i=1}^{N} Z_{i}^{\prime}\Sigma
_{\varepsilon }^{-1}Z_{i}+G_{0}^{-1} \right] $,
$\overline{\gamma}=G_{1}\left[ \displaystyle \sum_{i=1}^{N} Z_{i}^{\prime}
\Sigma _{\varepsilon }^{-1}\tilde{y}_{i} + G_{0}^{-1} \gamma _{0} \right]$,
and $\tilde{y}_{i}=y_{i}-X_{i}\beta$.
\item Sample $\Sigma_{\varepsilon}^{-1}| \beta, \gamma, Z,y \sim W_{M}
\left(\nu_{1},S_{1}\right) $, where, \newline $\nu_{1}=\nu_{0}+N$ and
$S_{1}^{-1}=\left[ S_{0}^{-1} + \displaystyle \sum_{i=1}^{N}\left(
y_{i}-X_{i}\beta -Z_{i}\gamma \right) \left( y_{i}-X_{i}\beta
-Z_{i}\gamma \right)^{\prime }\right]$.
\item Sample $\tilde{Z}_{i}|\beta,\gamma,\Sigma _{\varepsilon},\omega,
\sigma_{Z}^{2},\sigma_{u}^{2},W,y
\sim N_{M}\left( M_{1,i},M_{2}\right)$, $\forall
i=1,...,N$, where, \newline $M_{2}^{-1}=\left[ \Psi +\left(
\frac{1}{\sigma_{Z}^{2}}+\frac{1}{\sigma_{u}^{2}}\right)
I_{M}\right]$, with $\Psi =\Gamma \odot \Sigma_{\varepsilon }^{-1}$,
$\Gamma =\gamma \gamma^{\prime }$, \newline and $M_{1,i}=M_{2}\left[
diag(\gamma )\Sigma_{\varepsilon}^{-1}\left(y_{i}-X_{i}\beta \right) +
\frac{\widetilde{W}_{i}}{\sigma_{u}^{2}} + \frac{X_{i} \omega}
{\sigma_{Z}^{2}} \right]$, where $\odot$ denotes the Hadamard
(element-wise) product.
\item Sample $\omega|Z,\sigma_{Z}^{2} \sim
N_{K}\left( \omega_1, \Sigma_{\omega} \right)
$, where, \newline $\Sigma_{\omega}^{-1} = \left[ \frac{1}{\sigma
_{Z}^{2}} \displaystyle \sum_{i=1}^{N} X^{\prime}_{i} X_{i} + O^{-1}_0
\right] $ and $\omega_1= \Sigma_{\omega} \left[ \frac{1}{\sigma
_{Z}^{2}} \displaystyle \sum_{i=1}^{N} X^{\prime}_i \tilde{Z}_{i} +
O^{-1}_0 \omega_0 \right] $.
\item Sample $\sigma_{Z}^{2}|Z,\omega \sim IG\left( \delta_{1}^{\ast },
\delta_{2}^{*}\right) $, where, \newline $\delta_{1}^{\ast} = \delta_{1}
+\frac{NM}{2}$ and $\delta_{2}^{\ast}=\delta_{2}+\frac{1}{2}
\displaystyle \sum_{i=1}^{N} \left( \tilde{Z}_{i}- X_{i} \omega
\right)^{\prime}\left( \tilde{Z}_{i}- X_{i} \omega \right) $.
\item Sample $\sigma _{u}^{2}|Z,W \sim IG\left(\delta_{3}^{\ast},
\delta_{4}^{\ast}\right)$, where, \newline $\delta_{3}^{\ast}
= \delta_{3}+\frac{NM}{2}$ and $\delta_{4}^{\ast}
=\delta_{4}+\frac{1}{2}\displaystyle \sum_{i=1}^{N}
\left(\tilde{W}_{i}-\tilde{Z}_{i}\right)^{\prime} \left(
\tilde{W}_{i}-\tilde{Z}_{i}\right) $.
\end{enumerate}
\rule{\textwidth}{0.5pt}
\end{algorithm}
\end{table*}
The sampling algorithm, presented in Algorithm~\ref{alg:algorithm1}, shows
that $\beta$ and $\gamma$ are sampled from an updated multivariate normal
distribution. A standard result is obtained for the precision matrix
$\Sigma_{\varepsilon}^{-1}$, which is sampled from an updated Wishart
distribution. All three parameters ($\beta,\gamma,\Sigma_{\varepsilon}$)
follow their respective distributions marginally of $\omega$,
$\sigma_{u}^{2}$ and $\sigma _{Z}^{2}$. The true unobserved quantity $Z$ is
drawn from an updated multivariate normal distribution conditional on all the
remaining model parameters. Similarly, $\omega $ is sampled from an updated
multivariate normal distribution conditional on $\left( Z,\sigma
_{Z}^{2}\right) $. The two variance parameters are drawn from updated inverse
gamma distributions with $\sigma _{Z}^{2}$ conditioned on $\left(Z,\omega
\right) $ and $\sigma _{u}^{2}$ conditioned on $\left(W, Z \right)$. Note
that if we drop the exposure equation from the SURME model,
Algorithm~\ref{alg:algorithm1} only requires a slight modification. In this
context, $\mu$ replaces $X_{i}\omega$ and is sampled from an updated normal
distribution as $\mu|Z,\sigma_{Z}^{2} \sim N_{M}(\bar{l}, \bar{\Lambda})$,
where $\bar{l} = \bar{\Lambda} \Big( \sum_{i=1}^{N} \widetilde{Z}_{i} /
\sigma_{Z}^{2} + \frac{\mu_{0}}{\sigma_{\mu}^{2}} \Big)$ and
$\bar{\Lambda}^{-1} = \Big( \frac{N}{\sigma_{Z}^{2}} +
\frac{1}{\sigma_{\mu}^{2}}\Big) I_{M}$ are the posterior mean and posterior
precision, respectively.
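For concreteness, the following sketch implements two of the updates in Algorithm~\ref{alg:algorithm1} (step 1 for $\beta$ and step 3 for $\Sigma_{\varepsilon}^{-1}$) for a single Gibbs pass. This is a minimal illustration in Python, not the paper's replication code; the array layout (a stacked array of the $X_{i}$'s, a matrix of residuals) and the function names are our assumptions.
\begin{verbatim}
import numpy as np
from scipy.stats import wishart

def draw_beta(X, y_star, Sigma_eps_inv, B0_inv, beta0, rng):
    # Step 1: beta | gamma, Sigma_eps, Z, y ~ N_K(beta_bar, B1).
    # X: (N, M, K) stack of design matrices; y_star: (N, M) rows y_i - Z_i gamma.
    prec = B0_inv.copy()                  # builds B1^{-1}
    rhs = B0_inv @ beta0
    for Xi, yi in zip(X, y_star):
        prec += Xi.T @ Sigma_eps_inv @ Xi
        rhs += Xi.T @ Sigma_eps_inv @ yi
    B1 = np.linalg.inv(prec)
    return rng.multivariate_normal(B1 @ rhs, B1)

def draw_Sigma_inv(resid, nu0, S0_inv, rng):
    # Step 3: Sigma_eps^{-1} | beta, gamma, Z, y ~ W_M(nu1, S1).
    # resid: (N, M) rows y_i - X_i beta - Z_i gamma.
    nu1 = nu0 + resid.shape[0]
    S1 = np.linalg.inv(S0_inv + resid.T @ resid)
    return wishart.rvs(df=nu1, scale=S1, random_state=rng)
\end{verbatim}
The remaining steps follow the same pattern: build the updated precision (or shape) from the relevant full conditional and draw from the corresponding normal, Wishart or inverse-gamma distribution.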
We note that the model presented in this paper utilizes the structural
measurement error model which assumes that $Z$ follows a distribution. Hence,
the distribution of $Z$ was introduced as a part of the model. However, in
the measurement error literature, there is another form of measurement error
known as \emph{functional} form. The functional measurement error model
assumes that the true unobserved quantity $Z$ is fixed. In our modeling and
estimation framework, we can easily incorporate the functional form of
measurement error by modeling the distribution of $Z$ as a part of the
subjective prior information \citep{Zellner-1971}. This implies that the
joint posterior distribution \eqref{sec2:eq_6} will be unchanged and
derivations of the conditional posterior distributions will proceed in
exactly the same way as described in Appendix~A of the supplementary file. To
reiterate, the fundamental difference in analyzing SUR model with structural
and functional forms of measurement error lies in the interpretation given to
the distribution of $Z$, the true unobserved quantity.
In the MCMC estimation of the SURME model, one consideration that arises is
that $Z$ and $\gamma$ are both unknown, and drawing them conditional on each
other leads to high autocorrelation in the MCMC draws. This is a general
problem that occurs when two or more unknown variables/parameters that appear
in product form are drawn conditional on each other. To reduce
the autocorrelation in MCMC draws (and consequently reduce the inefficiency
factors) some authors\footnote{See for instance \citet{Jeliazkov-2013} for
the case of latent variables in a non parametric VAR specification.} propose
to improve mixing by sampling $\gamma $ from the marginal distribution and
then sampling $Z|\gamma $ or \textit{vice versa}. However, deriving the
marginal posterior distribution of $\gamma$ (or of $Z$) is not
straightforward and the marginalization trick does not improve the results in
our modeling context.\footnote{Many thanks to Ivan Jeliazkov and the
participants of the UCI seminar for the suggestion to sample $\gamma$
marginally of $Z$ and then sample $Z|\gamma$. See Appendix~E in the
supplementary material. However, the several tests we conducted did not
improve our initial results with standard Gibbs sampling.}
As a solution to reduce autocorrelation, many researchers have employed
\emph{thinning} to improve the mixing of the draws. The thinning of MCMC
draws has been criticized by some authors, including
\citet{MacEachern-Berliner-1994} and \citet{Link-Eaton-2012}, but others such
as \citet{Geyer-1991} acknowledge that thinning can increase statistical
efficiency. In a recent paper, \citet{Owen-2017} shows that the usual advice
against thinning can be misleading. We employ thinning to improve the mixing
properties of the MCMC draws of $\gamma$ in our simulation studies and
application. However, given the controversy around thinning, we also explore
other methods and develop the MFVB approximation to estimate the SURME model.
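To illustrate, the inefficiency factor and the effect of thinning can be computed directly from the saved draws. The sketch below uses a sticky AR(1) chain as a stand-in for the $\gamma$ draws; truncating the autocorrelation sum at the first non-positive lag is a common heuristic and an assumption on our part.
\begin{verbatim}
import numpy as np

def autocorr(x, lag):
    x = x - x.mean()
    return (x[:-lag] * x[lag:]).sum() / (x * x).sum()

def inefficiency_factor(draws, max_lag=200):
    # IF = 1 + 2 * sum of positive autocorrelations; IF near 1 = good mixing.
    total = 0.0
    for lag in range(1, max_lag + 1):
        rho = autocorr(draws, lag)
        if rho <= 0:
            break
        total += rho
    return 1.0 + 2.0 * total

rng = np.random.default_rng(0)
chain = np.empty(51_000)
chain[0] = 0.0
for t in range(1, chain.size):           # AR(1) stand-in for a sticky chain
    chain[t] = 0.98 * chain[t - 1] + rng.normal()
print(inefficiency_factor(chain))        # large
print(inefficiency_factor(chain[::100])) # close to 1 after thinning by 100
\end{verbatim}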
\section{The mean field variational Bayes (MFVB) approximation}
Variational Bayes is an alternative to MCMC methods that provides a
locally-optimal, exact analytical solution to an approximation of the
posterior distribution. The parameters of the approximate distribution are
selected to minimize the Kullback-Leibler divergence (a distance measure)
between the approximation and the posterior. The MFVB approximation is a
deterministic optimization approach and so is particularly useful for big
data sets and/or models with large sparse covariance matrices. Besides, it is
similar to Gibbs sampling for conjugate models. Some recent articles on MFVB
approach include \citet{Bishop-2006}, \citet{Ormerod-Wand-2010},
\citet{Pham-etal-2013}, \citet{Lee-Wand-2016} and \citet{Blei-etal-2017}.
Suppose, $y$ denotes an observed data vector and $\theta$ is a parameter
vector defined over the parameter space $\Theta$. Following the Bayes
theorem, the posterior distribution can be written as:
\begin{equation*}
p\left(\theta|y\right) =\frac{p\left( \theta,y\right) }{p\left(
y\right) } = \frac{p\left( y|\theta \right) p\left( \theta \right) }{
p\left(y\right) },
\label{sec3:eq_1}
\end{equation*}
where $p\left( y\right) =\int_{\Theta }p\left( \theta ,y\right) d\theta$ is
the marginal likelihood. Let $q$ be an arbitrary density function over
$\Theta $. Then, the logarithm of the marginal likelihood is,
\begin{equation}
\begin{split}
\log p\left(y\right) & = \log p(y) \int_{\Theta} q(\theta) d\theta
= \int_{\Theta} q(\theta) \log p(y) d\theta \\
& = \int_{\Theta} q(\theta) \log \left\{ \frac{p(\theta,y)/q(\theta)}
{p(\theta|y)/q(\theta)} \right\} d\theta \\
& = \int_{\Theta }q\left( \theta \right) \log \left\{ \frac{p\left( \theta
,y\right) }{q\left( \theta \right) }\right\} d\theta +\int_{\Theta }q\left(
\theta \right) \log \left\{ \frac{q\left( \theta \right) }
{p\left( \theta|y\right) }\right\} d\theta \\
& = \log \underline{p}\left(y,q\right) + KL(q,p) \\
&= \log \underline{p}\left(y,q\right) + E_{q\left( \theta \right) }\left[ \log q\left( \theta \right) \right]
-E_{q\left( \theta \right) }\left[ \log p\left( \theta ,y\right) \right]
+ \log p\left( y\right),
\label{sec3:eq_2}
\end{split}
\end{equation}
where $\log \underline{p}\left(y,q\right) = E_{q(\theta)} \left[ \log \left(
\frac{ p\left( \theta, y\right)}{ q\left( \theta \right)} \right) \right]$
denotes the lower bound on the marginal log-likelihood and $KL(q,p)=
E_{q\left( \theta \right) }\left[ \log q\left( \theta \right) \right]
-E_{q\left( \theta \right) }\left[ \log p\left( \theta |y\right) \right]$ is
the Kullback-Leibler divergence $q\left(\theta \right) $ and $p\left(
\theta|y\right)$. Since $\log p(y)$ is a constant, the minimization of
$KL(q,p)$ is equivalent to maximizing the scalar quantity $\log
\underline{p}\left(y,q\right)$, typically known as evidence lower bound
(ELBO) or variational lower bound. In practice, the maximization of the ELBO
is often preferred to minimization of the KL divergence since it does not
require knowledge of the posterior.
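The decomposition in \eqref{sec3:eq_2} can be checked numerically: on any model where $p(y)$ is available in closed form, a Monte Carlo estimate of the ELBO must lie below $\log p(y)$ by exactly the KL term. A toy check in Python (the conjugate normal model and the particular, deliberately suboptimal $q$-density are illustrative assumptions):
\begin{verbatim}
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
y = 1.3                     # one observation; y|theta ~ N(theta,1), theta ~ N(0,1)
m, s = 0.4, 0.9             # an arbitrary Gaussian q-density q(theta) = N(m, s^2)
theta = rng.normal(m, s, 100_000)
elbo = np.mean(norm.logpdf(y, theta, 1) + norm.logpdf(theta, 0, 1)
               - norm.logpdf(theta, m, s))
log_py = norm.logpdf(y, 0, np.sqrt(2))   # exact marginal: y ~ N(0, 2)
print(elbo, log_py, log_py - elbo)       # the gap is KL(q, p) >= 0
\end{verbatim}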
The MFVB approximates the posterior distribution $p\left( \theta|y\right) $
by the product of the $q$-densities,
\begin{equation}
q\left( \theta \right) =\prod_{j=1}^{P}q_{j}\left( \theta _{j}\right).
\label{eq.10}
\end{equation}
Each \textit{optimal} $q$-density minimizes the Kullback-Leibler divergence
and is given by,
\begin{equation}
q_{j}\left( \theta _{j}\right) \propto \exp \left[ E_{q\left(
-\theta _{j}\right) }\left\{ \log p\left( \theta_{j}| \Omega \right)
\right\} \right], \quad j=1,\ldots,P, \label{eq.11a}
\end{equation}
where $E_{q\left( -\theta _{j}\right)}$ denotes the expectation with respect
to $\prod_{k\neq j}q_{k}\left( \theta _{k}\right)$, $ \Omega\equiv \left\{
y,\theta _{1},...,\theta _{j-1},\theta _{j+1},...,\theta _{P}\right\} $ is
the set containing all random vectors in the model except $\theta _{j}$, and
$p\left(\theta _{j}|\Omega\right) $ are the full conditional distributions of
the parameters.
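In practice, \eqref{eq.11a} translates into a coordinate ascent loop in which each $q$-density is refreshed in turn while the others are held fixed. A generic skeleton of such a loop (the dictionary interface and the relative-increase stopping rule, mirroring the criterion used later, are our assumptions):
\begin{verbatim}
import numpy as np

def cavi(update_fns, params, elbo_fn, tol=1e-7, max_iter=10_000):
    # update_fns[name](params) returns the new variational parameters of
    # q_name given the current parameters of all the other q-densities.
    elbo_old = -np.inf
    for it in range(max_iter):
        for name, update in update_fns.items():  # one pass over all factors
            params[name] = update(params)
        elbo = elbo_fn(params)
        if np.isfinite(elbo_old) and elbo - elbo_old < tol * abs(elbo_old):
            break                                # relative increase negligible
        elbo_old = elbo
    return params, elbo, it
\end{verbatim}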
For the SURME model, we now consider an MFVB approximation based on the
following factorization:
\begin{equation*}
q\left( \beta ,\gamma ,\Sigma _{\varepsilon },\omega ,\sigma _{Z}^{2},\sigma
_{u}^{2},\tilde{Z}\right) =q\left( \beta \right) q\left( \gamma \right)
q\left( \Sigma _{\varepsilon }\right) q\left( \omega \right) q\left( \sigma
_{Z}^{2}\right) q\left( \sigma _{u}^{2}\right) \prod_{i=1}^{N}q\left( \tilde{Z}_{i}\right). \label{eq.11}
\end{equation*}
These optimal $q$-densities can be derived, as presented in Appendix~B of the
supplementary file, to have the following form,
\begin{equation}
\begin{alignedat}{3}
q\left( \beta \right) & = f_{N_{K}}\left( \mu _{q\left( \beta \right)
},\Sigma _{q\left( \beta \right) }\right)
& \qquad
q\left( \gamma \right) & = f_{N_{M}}\left( \mu _{q\left( \gamma \right)
},\Sigma _{q\left( \gamma \right) }\right) \\
q\left( \Sigma _{\varepsilon }^{-1}\right) & = f_{W_{M}}\left( \nu
_{1},B_{q(\Sigma )}\right)
& \qquad
q\left( \omega \right) & = f_{N_{K}}\left( \mu _{q\left( \omega \right) },\Sigma
_{q\left( \omega \right) }\right) \\
q\left( \tilde{Z}_{i}\right) & = f_{N_{M}} \left( \mu _{q\left( \tilde{Z}_{i}
\right) },\Sigma _{q\left( \tilde{Z}_{i}\right) }\right)
& \qquad
q\left( \sigma _{Z}^{2}\right) & = f_{IG}\left( \delta _{1}^{\ast
},B_{q(\sigma _{Z}^{2})}\right) \label{eq.12} \\
q\left( \sigma _{u}^{2}\right) & = f_{IG} \left( \delta _{3}^{\ast
},B_{q(\sigma _{u}^{2})}\right), &
\end{alignedat}
\end{equation}
where $f$ denotes the density function of the distribution given in the
subscript. The parameters of the optimal densities are updated according to
Algorithm~\ref{alg:algorithm2}. When the exposure equation is dropped, $q(\omega)$ is
replaced with $q(\mu)$ and the optimal density is an updated normal
distribution. Convergence of Algorithm~\ref{alg:algorithm2} is assessed using
the evidence lower bound $\ell$ on the marginal log-likelihood (see
Appendix~C in the supplementary material) that is guaranteed to reach a local
optimum based on the convexity property. This algorithm belongs to the family
of coordinate ascent variational inference (CAVI) and iteratively optimizes
each factor of the mean field variational density, while holding the
remaining fixed \citep[see][]{Bishop-2006, Blei-etal-2017}.
\begin{table*}[!t]
\begin{algorithm}[MFVB algorithm for SURME model]
\label{alg:algorithm2} \rule{\textwidth}{0.5pt}
\begin{small}
\begin{enumerate}
\item Initialize $\delta_{1}^{\ast}$, $\delta_{3}^{\ast}$, $B_{q(\sigma
_{Z}^{2})}$, $B_{q(\sigma_{u}^{2})}$, $\mu_{q\left( \beta \right)}$,
$\mu_{q\left( \gamma \right)}$, $\mu_{q\left( \omega \right) }$,
$\Sigma_{q\left( \beta \right)}$, $B_{q(\Sigma)}$, $\Sigma_{q\left(
\gamma\right)}$, $\Sigma_{q\left( \omega \right)}$, $\mu_{q\left(
\tilde{Z}_{i}\right) }$, $\Sigma_{q\left( \tilde{Z}_{i}\right) }$ (for
$i=1,...,N$).
\item Cycle:
\begin{enumerate}
\item $\Sigma_{q\left( \beta \right)} \leftarrow \left[ \sum_{i=1}^{N}
X_{i}^{\prime }\left( \nu_{1}B_{q(\Sigma )}\right) X_{i} +
B_{0}^{-1}\right]^{-1}$
\item $\mu_{q\left( \beta \right)} \leftarrow \Sigma_{q\left( \beta \right)}\left[
\sum_{i=1}^{N} X_{i}^{\prime} \left(\nu_{1}B_{q(\Sigma )}\right)
\left( y_{i} - diag(\mu_{q\left( \tilde{Z}_{i}\right) }) \mu_{q\left(
\gamma \right) }\right) + B_{0}^{-1}\beta_{0} \right] $
\item $\Sigma_{q\left( \gamma \right)} \leftarrow \left[ \sum _{i=1}^{N}
\left( \Sigma_{q\left( \tilde{Z}_{i}\right)} + \mu_{q\left(
\tilde{Z}_{i}\right)} \mu_{q\left( \tilde{Z}_{i}\right)}^{\prime }
\right) \odot \left( \nu_{1}B_{q(\Sigma )}\right) + G_{0}^{-1}\right]
^{-1}$
\item $\mu_{q\left(\gamma \right)} \leftarrow \Sigma_{q\left(\gamma \right) }
\left[\sum_{i=1}^{N} diag(\mu_{q\left( \tilde{Z}_{i}\right)}) \left(
\nu_{1}B_{q(\Sigma)}\right) \left(y_{i} - X_{i}\mu_{q\left( \beta
\right) }\right) + G_{0}^{-1}\gamma_{0}\right]$
\item $ B_{q(\Sigma)} \leftarrow\Bigg[ S_{0}^{-1} + \sum_{i=1}^{N} \bigg[ \left(
y_{i}-X_{i} \mu_{q\left( \beta \right)} - diag(\mu_{q\left(
\tilde{Z}_{i}\right) }) \mu_{q\left( \gamma \right) }\right) \\
\qquad \times \left( y_{i}-X_{i}\mu_{q\left( \beta \right)} -
diag(\mu_{q\left(\tilde{Z}_{i}\right)}) \mu_{q\left( \gamma \right)
}\right)^{\prime } + X_{i}\Sigma_{q\left( \beta \right) }X_{i}^{\prime} \\
\qquad + \left( \mu_{q\left(\tilde{Z}_{i}\right) } \mu_{q\left(
\tilde{Z}_{i}\right)}^{\prime }\right) \odot \Sigma _{q\left( \gamma
\right) } + \Sigma_{q\left( \tilde{Z}_{i}\right)} \odot \left(
\Sigma_{q\left( \gamma \right)} + \mu_{q\left( \gamma \right) }
\mu_{q\left( \gamma \right)}^{\prime }\right) \bigg] \Bigg]^{-1}$
\item $B_{q(\sigma_{Z}^{2})} \leftarrow \delta_{2} + \frac{1}{2} \sum_{i=1}^{N}
\left\{ \parallel \mu_{q\left( \tilde{Z}_{i}\right)} - X_{i}
\mu_{q\left( \omega \right)} \parallel^{2} + \text{tr}\left[
\Sigma_{q\left( \tilde{Z}_{i}\right)} \right] \right\}$
\item $B_{q(\sigma_{u}^{2})} \leftarrow \delta_{4} + \frac{1}{2} \sum_{i=1}^{N}
\left\{ \parallel \tilde{W}_{i} - \mu_{q\left( \tilde{Z}_{i}\right)}
\parallel^{2} + \text{tr}\left[ \Sigma_{q\left( \tilde{Z}_{i}\right)}
\right] \right\} $
\item $\Sigma_{q\left( \omega \right) } \leftarrow \bigg[ \left(
\frac{\delta_{1}^{\ast}}{B_{q(\sigma_{Z}^{2})}}\right)
\sum_{i=1}^{N}X_{i}^{\prime }X_{i} + O_{0}^{-1}\bigg]^{-1}$
\item $\mu_{q\left( \omega \right)} \leftarrow \Sigma_{q\left( \omega \right)
}\left[\left( \frac{\delta_{1}^{\ast}}{B_{q(\sigma_{Z}^{2})}}\right)
\sum_{i=1}^{N} X_{i}^{\prime }\mu_{q\left( \tilde{Z}_{i}\right)} +
O_{0}^{-1}\omega_{0}\right] $
\item $\Sigma_{q\left( \tilde{Z}_{i}\right)} \leftarrow \left[ \left\{
\Sigma_{q\left( \gamma \right)} + \mu_{q\left(\gamma \right)}
\mu_{q\left( \gamma \right)}^{\prime}\right\} \odot \left(
\nu_{1}B_{q(\Sigma)} \right) + \left(
\frac{\delta_{1}^{\ast}}{B_{q(\sigma_{Z}^{2})}} +
\frac{\delta_{3}^{\ast}}{B_{q(\sigma_{u}^{2})}}\right)
I_{M}\right]^{-1}$
\item $\mu_{q\left( \tilde{Z}_{i}\right)} \leftarrow
\Sigma_{q\left(\tilde{Z}_{i}\right)} \bigg[ diag(\mu_{q\left( \gamma
\right) })\left( \nu_{1}B_{q(\Sigma )}\right) \left( y_{i}-X_{i}\mu
_{q\left( \beta \right)} \right) + \left(
\frac{\delta_{3}^{\ast}}{B_{q(\sigma_{u}^{2})}}\right) \widetilde{W}_{i} \\
\quad + \left( \frac{\delta_{1}^{\ast}}{B_{q(\sigma_{Z}^{2})}}
\right) X_{i}\mu_{q\left( \omega \right) }\bigg]$
\end{enumerate}
\item[]until the increase in the ELBO $(\ell )$ is negligible ($\approx 10^{-7}$).
\end{enumerate}
\end{small}
\rule{\textwidth}{0.5pt}
\end{algorithm}
\end{table*}
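As an illustration of these deterministic updates, the sketch below implements steps 2(a)--2(b) of Algorithm~\ref{alg:algorithm2}, i.e., the refresh of $\Sigma_{q(\beta)}$ and $\mu_{q(\beta)}$. The data containers and names are assumptions, and only the $q(\beta)$ block is shown.
\begin{verbatim}
import numpy as np

def update_q_beta(X, y, mu_gamma, mu_Ztilde, nu1, B_qSigma, B0_inv, beta0):
    # X: (N, M, K); y, mu_Ztilde: (N, M); E[Sigma_eps^{-1}] = nu1 * B_qSigma.
    E_Sig_inv = nu1 * B_qSigma
    prec = B0_inv.copy()                 # Sigma_q(beta)^{-1}, step 2(a)
    rhs = B0_inv @ beta0
    for Xi, yi, zi in zip(X, y, mu_Ztilde):
        prec += Xi.T @ E_Sig_inv @ Xi
        rhs += Xi.T @ E_Sig_inv @ (yi - zi * mu_gamma)  # diag(z_i) @ gamma
    Sigma_q_beta = np.linalg.inv(prec)
    mu_q_beta = Sigma_q_beta @ rhs       # step 2(b)
    return mu_q_beta, Sigma_q_beta
\end{verbatim}
Each remaining step of the cycle is coded the same way, with draws replaced by the corresponding moments of the other $q$-densities.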
The MFVB technique provides computational advantages compared to MCMC because
it is deterministic and does not require a large number of iterations
\citep{Pham-etal-2013,Lee-Wand-2016}. Besides, existing works including
\citet{Bishop-2006}, \citet{Ormerod-Wand-2010}, \citet{Faes-etal-2011},
\citet{Pham-etal-2013}, and \citet{Lee-Wand-2016} suggest that the accuracy
scores of the MFVB approximation, relative to MCMC, generally exceed
$95-97\%$ and rarely goes below $90\%$. Given these advantages, the MFVB
approach can be gainfully utilized for large data models. However, some
authors have reported that covariance matrices from variational approximation
may be typically ``too small'' relative to the sampling distribution of the
maximum likelihood estimator. In this regard, \citet{Blei-etal-2017} opine
that underestimation of the variance should be judged in relation to the task
at hand. However, evidence from empirical research indicate that variational
inference typically do not suffer in accuracy.
\section{Monte Carlo simulation studies}
This section examines the performance of the two proposed methods in
multiple simulation studies. The first set of simulations (Case I) employs
different values of $\sigma _{Z}^{2}$ to generate the simulated data. The
second set of simulations (Case II) uses different values of the reliability
ratio, defined as $R_{Z}=\sigma _{Z}^{2}/(\sigma _{Z}^{2}+\sigma _{u}^{2})$. In both
sets of simulations, we use a two-equation structure represented as follows,
\begin{equation}
\begin{split}
y_{1i} & = \beta_{11}+x_{1i2}\beta_{12} + x_{1i3}\beta_{13} + z_{1i}
\gamma_{1} + \varepsilon_{1i} \label{eq.m1}, \\
y_{2i} & = \beta_{21}+x_{2i2}\beta_{22} + x_{2i3}\beta_{23} + z_{2i}
\gamma_{2} + \varepsilon_{2i},
\end{split}
\end{equation}
where the first, second and third subscripts in $x_{mij}$ denote the equation
number ($m=1,2$), observation ($i=1,\cdots,N$) and variable number ($j=2,3$),
respectively. The first variable is common to both the equations (i.e.,
$x_{1i2}=x_{2i2}$ for all $i=1,...,N$) and the remaining covariates are
exclusive to the respective equations. Moreover, we assume the error prone
covariate $Z_{i}=diag(z_{1i},z_{2i})$ for all $i$ is unobserved, but is
defined by an exposure model as follows,
\begin{equation}
\begin{split}
z_{1i} & = \omega _{11}+x_{1i2}\omega _{12}+x_{1i3}\omega_{13}+ \varepsilon _{z1i},
\label{eq.m2} \\
z_{2i} & = \omega_{21}+x_{2i2}\omega_{22}+x_{2i3}\omega_{23}+\varepsilon _{z2i}.
\end{split}
\end{equation}
The unobserved $Z_{i}$ is related to the observed $W_{i}=diag(w_{1i},w_{2i})$
by the equations below,
\begin{equation}
\begin{split}
w_{1i} & = z_{1i} + u_{1i}, \label{eq.m3} \\
w_{2i} & = z_{2i} + u_{2i}.
\end{split}
\end{equation}
Note that the estimation of the SURME model solely relies on $W$ and the role
of $Z$ is limited to generating values for $(W,y)$.
To proceed with data generation, we assign specific values to the parameters
$\beta $, $\gamma $, $\omega$, $\Sigma _{\varepsilon }$, $\sigma _{Z}^{2}$,
$\sigma _{u}^{2}$ and generate $N=300$ observations in each simulation study
for all the variables in the model. Let $\beta _{11}=3$, $\beta _{12}=5$,
$\beta _{13}=4$, $\beta _{21}=4$, $\beta _{22}=3.8$, $\beta _{23}=3$,
$\gamma _{1}=4$, $\gamma _{2}=4$, $\omega_{11}=1.5$, $\omega_{12}=0.75$,
$\omega_{13}=0.3$, $\omega_{21}=1.5$, $\omega_{22}=1.05$, and
$\omega_{23}=0.45$.
For all values of $i$, the error vector
$\varepsilon _{i}=\left( \varepsilon _{1i},\varepsilon _{2i}\right) ^{\prime }$
is generated from a bivariate normal distribution $N(0_{M},\Sigma
_{\varepsilon })$, where $\Sigma _{\varepsilon }=[1$ $0.5$; $0.5$ $1]$.
Values for the common covariate ($x_{1i2}=x_{2i2}$) are generated from
$U(0,2)$ and values for the exclusive covariates $ x_{1i3}$ and $x_{2i3}$ are
generated from $U(0,4)$, where $U$ denotes a uniform distribution. Values
for $\widetilde{Z}_{i}=\left( z_{1i},z_{2i}\right) ^{\prime }$ are generated
as $\widetilde{Z}_{i}\sim N(X_i\omega ,\sigma_{Z}^{2}I_{M})$, and the
$\widetilde{W}_{i}$'s are generated as
$\widetilde{W}_{i}=\widetilde{Z}_{i}+\widetilde{u}_{i}$, where
$\widetilde{u}_{i}\sim N(0_{M},\sigma _{u}^{2}I_{M})$. The above setting remains the same in the
following subsections, with change occurring only in values of $\sigma
_{u}^{2}$ (through $R_{Z}$) or $ \sigma _{Z}^{2}$.
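A compact sketch of this data-generating process (array shapes and the function interface are our assumptions; parameter values are those listed above):
\begin{verbatim}
import numpy as np

def simulate(N=300, sigma_z2=1.0, Rz=0.8, seed=0):
    rng = np.random.default_rng(seed)
    sigma_u2 = sigma_z2 * (1 - Rz) / Rz        # noise-to-true ratio (1-Rz)/Rz
    beta = np.array([[3.0, 5.0, 4.0], [4.0, 3.8, 3.0]])
    gamma = np.array([4.0, 4.0])
    omega = np.array([[1.5, 0.75, 0.30], [1.5, 1.05, 0.45]])
    Sigma_eps = np.array([[1.0, 0.5], [0.5, 1.0]])
    x2 = rng.uniform(0, 2, N)                  # covariate common to both equations
    x13, x23 = rng.uniform(0, 4, (2, N))       # equation-specific covariates
    X = (np.stack([np.ones(N), x2, x13], 1), np.stack([np.ones(N), x2, x23], 1))
    z = np.stack([Xm @ om for Xm, om in zip(X, omega)], 1)
    z += rng.normal(0, np.sqrt(sigma_z2), (N, 2))      # true (latent) Z_i
    w = z + rng.normal(0, np.sqrt(sigma_u2), (N, 2))   # observed W_i
    eps = rng.multivariate_normal(np.zeros(2), Sigma_eps, N)
    y = np.stack([Xm @ bm for Xm, bm in zip(X, beta)], 1) + z * gamma + eps
    return X, w, y                             # estimation only ever sees W, not Z
\end{verbatim}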
In Case I, we investigate the performance of the proposed algorithms in two
simulation studies where the reliability ratio $R_{Z}$ is fixed ($R_{Z}=0.8$)
and $\sigma _{Z}^{2}$ is gradually decreased. Specifically, two values are
considered $\sigma_{Z}^{2}=\{1, 0.0625 \}$. The definition of $R_{Z}$ is used
to generate the corresponding values for
$\sigma_{u}^{2}=\sigma_{Z}^{2}(1-R_{Z})/R_{Z}$, which leads to a
noise-to-true variance ratio $(1-R_{Z})/R_{Z}$ of 25\%. In Case II, we again
examine the performance of the proposed algorithms in two simulation studies
by keeping $\sigma_{Z}^{2}$ fixed ($\sigma_{Z}^{2}=0.0625$) and using two
values of reliability ratio $R_{Z}= \{0.8, \; 0.5714 \} $. The chosen values
are similar to those used in \citet{Pham-etal-2013} and lead to
noise-to-true variance ratios of 25\% and 75\%, respectively. We could define
a noise-to-true variance ratio of $100\%$, $150\%$ or more, but in those cases we
would be dealing more with outliers than with measurement errors.
Bayesian procedures require prior distribution on the parameters of the
model. For the SURME model, we stipulate the following priors: $\beta \sim
N_{K}\left( \beta _{0},B_{0}\right) $ with $ \beta _{0}=\iota _{K}$,
$B_{0}=I_{K}$, where $\iota _{d}$ denotes a $\left( d\times 1\right)$ vector of
ones; $\gamma \sim N_{M}\left( \gamma_{0},G_{0}\right) $ with
$\gamma_{0}=\iota_{M}$, $G_{0}=I_{M}$; $\omega \sim N_{K}\left( \omega_{0},
O_{0}\right) $ with $\omega_{0}=\iota_{K}$, $O_{0}=I_{K}$;
$\Sigma_{\varepsilon}^{-1} \sim W_{M}\left( \nu_{0}, S_{0}\right)$ with
$\nu_{0}=50$ and $S_{0}= \nu_{0}[1$ $0.5$; $0.5$ $1]$; $\sigma_{Z}^{2}\sim
IG\left( \delta_{1},\delta_{2}\right) $ and $\sigma_{u}^{2}\sim IG\left(
\delta_{3},\delta _{4}\right)$ with
$\delta_{1}=\delta_{2}=\delta_{3}=\delta_{4}=1/100$. All these priors are
proper yet specify vague information about the parameters, mainly for the
measurement error $u_{i}$ and the error prone covariate $Z_{i}$. In addition,
the same priors are used in all the simulations to highlight the effect of
changing $R_{Z}$ or $\sigma _{Z}^{2}$ in estimation of the parameters and
consequently on the performance of the algorithms.
The MCMC results are obtained from $50,000$ draws, after a burn-in of $1,000$
draws. We replicate these simulations $100$ times and report the means over
these $100$ replications of the posterior means\footnote{To save time, we
only run $51,000$ draws per replication. A higher number of MCMC draws, such as
$100,000$ or $200,000$, substantially increases the computing time without any
gain in precision. As an example, for $\sigma_{Z}^{2}=0.0625$ and
$R_{Z}=0.909$, the MFVB takes only $11.52$ seconds per replication. If
$51,000$ (resp. $101,000$ and $201,000$) draws are used, the computing time
per replication for the Gibbs sampling of the BSURME model is about $65.64$
(resp. $142.07$ and $352.29$) seconds using a MacBook Pro, 2.8 GHz Core i7
with 16 GB 1600 MHz DDR3 RAM.\label{footnote_label_1}}. We also compare the
results with the usual frequentist SUR estimation and the standard Bayesian
estimation of SUR model. The Gibbs sampling algorithm for the latter is
presented in Appendix~D of the supplementary material.
\subsection{Case I: Altering $\sigma_{Z}^{2}$}
Amongst the first set of simulation studies labeled Case I,
Table~\ref{Case1:Sim1FreqSUR} presents the results from the frequentist
estimation of SUR\footnote{Without any prior information on the measurement
error, the SUR model for $M$ equations is the following: $y_{i}=X_{i}\beta
+W_{i}\gamma +\varepsilon _{i}$ , $\varepsilon _{i}\sim N\left( 0,\Sigma
_{\varepsilon }\right) $ , $i=1,..,N $ where $W_{i}$ is the covariate with
measurement error.} model for the case $R_{Z}=0.8$ and $\sigma _{Z}^{2}=1$.
Results show that estimates are strongly biased mainly for the intercepts
$\beta _{11}$ and $\beta _{21}$, and for $\gamma _{1}$ and $\gamma _{2}$. The
relative biases ($\hat{\beta} / \beta - 1$) of the intercepts (resp. the
$\gamma$'s) are $38.1\%$ and $28.8\%$, (resp. $-19.8\%$ and $-19.6\%$). The
$\gamma$'s are strongly under-estimated. In contrast, the slope coefficients
$\beta _{12}$, $\beta _{13}$ and $\beta _{23}$ are less contaminated by the
measurement error and have a lower dispersion of the estimated coefficients
than the intercepts. The relative bias of $\beta _{22}$ ($22.3\%$) is close
(in absolute value) to that of $\gamma$'s. Elements of the
variance-covariance matrix $\Sigma _{\varepsilon }$ are strongly
over-estimated with a relative error of $317\%$ and $314\%$ for the variances
and $4.6\%$ for the covariance $\sigma_{12}$. This leads to a strong
underestimation of the correlation coefficient, $\rho_{\varepsilon_1
\varepsilon_2} =0.126$, far from the true value $\left(0.5\right)$. The
Bayesian estimation of SUR model, presented in Table~G1 of the supplementary
material, gives similar results. The posterior means of the coefficients
(resp. posterior standard errors) are similar to the frequentist coefficient
estimates (resp. standard errors) of the SUR model. The $95\% $ highest
posterior density intervals (HPDI) are also close to the 95\% confidence
interval of the frequentist estimation. The estimated correlation coefficient
$\rho_{\varepsilon_1 \varepsilon_2} =0.125$ is similar to the frequentist
estimate. We also report Geweke's convergence diagnostic (CD), which tests
for the equality of means of the first and last part of a Markov chain on the
basis of samples drawn from the stationary distribution of the chain. In more
than $98\%$ of cases, Geweke's CD (under the null hypothesis, $CD \sim
N(0,1)$) does not reject the null hypothesis at the $5\%$ level, which suggests that a
sufficiently large number of draws has been taken. Moreover, inefficiency
factors (reported in Table G1) are also close to 1, which confirms that the
chain is mixing well.
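For reference, a bare-bones version of Geweke's diagnostic can be computed as follows; this simplified sketch uses naive variance estimates where spectral estimates are standard for the test, so it is an approximation.
\begin{verbatim}
import numpy as np

def geweke_cd(draws, first=0.1, last=0.5):
    # z-score comparing the means of the first 10% and last 50% of the chain;
    # under convergence (H0), CD is approximately N(0, 1).
    a = draws[: int(first * len(draws))]
    b = draws[-int(last * len(draws)):]
    se2 = a.var(ddof=1) / len(a) + b.var(ddof=1) / len(b)
    return (a.mean() - b.mean()) / np.sqrt(se2)
\end{verbatim}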
In the upper panel of Table~G2 (of the supplementary material), we see how
the Bayesian estimation of SURME model improves the results. The intercepts
$\beta_{11}$ and $\beta_{21}$ are now less biased compared to those of the SUR
model in Table~G1. Their relative errors are $-3.5\%$ and $-7.1\%$,
respectively. This is also true for the other slope coefficients $\beta$.
Moreover, the model neatly corrects the measurement errors and results in
better estimates of $\gamma_{1}$ and $\gamma_{2}$, their relative errors
being $2.1\%$ and $2.6\%$, respectively. We also note that the
variance-covariance matrix is precisely estimated leading to a correlation
coefficient $\rho_{\varepsilon_1 \varepsilon_2} =0.494$. The parameters
$\sigma_{Z}^{2}$ and $\sigma_{u}^{2}$ are well estimated with small posterior
standard deviations and small relative errors ($-2.6\%$ and $0.6\%$,
respectively). But inefficiency factors are large indicating strong
autocorrelation in MCMC draws, particularly for $\gamma_1$ and $\gamma_2$
whose inefficiency factors are $8.62$ and $10.52$, respectively. In more than
$90\%$ of cases, the Geweke's CD confirms that a sufficiently large number of
draws has been taken. The improvement obtained with a SURME (as compared to
the SUR) is interesting and emphasizes the need to model measurement error.
The lower panel of Table~G2 (in the supplementary material) presents the
results of the exposure equation from the SURME model. They show that the
biases are negligible, the posterior standard deviations are small and so are
the inefficiency factors.
Overall, the SURME model is well estimated, but the high autocorrelation in
MCMC draws of $\gamma$ needs additional consideration. According to
\citet{Owen-2017}, the problem of high autocorrelation can be dealt with by
thinning, which itself can be optimized according to the cost of computing
the quantities of interest (after advancing the Markov chain) and the speed
at which autocorrelations decay. As shown in Table~G3 (see the supplementary
material), autocorrelations between the successive draws of $\gamma_1$ and
$\gamma_2$, denoted $\rho_{\tau} (\gamma_1)$ and $\rho_{\tau} (\gamma_2)$,
are close to one and the rate of decay is very slow. For example, $\rho_{1}
(\gamma_1)= 0.98$, $\rho_{10} (\gamma_1)= 0.82$ and $\rho_{1} (\gamma_2)=
0.98$, $\rho_{10} (\gamma_2)= 0.87$. The autocorrelations of some latent
variables $Z_i$ are slightly higher ($0.995$) than those of the $\gamma$'s,
but are not reported for the sake of brevity. The cost of computing
$\tilde{Z}_i$ is on average $2.71$, and an autocorrelation of $0.995$ leads to
an optimal thinning factor of $k=86$ (see Appendix~F and Table~F1 in the
supplementary material). Henceforth, we use a thinning of factor $k=100$ for
all simulations.
We re-estimate the Bayesian SUR and SURME models with a thinning of 100, but
only report the results for SURME. The results, presented in the upper panel
of Table~\ref{Case1:Sim1SURMEthin}, show that the posterior means and
standard deviations are close (or identical) to those of Table G2 (in the
supplementary material). Values of the inefficiency factors are small and are
all within $(1.004, 1.23)$. Specifically, the reduction in the inefficiency
factors is substantial for the $\gamma$'s (\textit{e.g.}, $1.11$ \textit{versus}
$8.62$ for $\gamma_1$ and $1.23$ \textit{versus} $10.52$ for $\gamma_2$). The
lower panel of Table~\ref{Case1:Sim1SURMEthin} presents the results for the
exposure equation in the SURME model. Once again, the results show that the
biases are negligible, the posterior standard deviations are small and the
inefficiency factors and Geweke's CD suggest good mixing of the MCMC draws.
Specifically, the autocorrelations of $\gamma_1$ and $\gamma_2$ are now small
($\rho_{1} (\gamma_1)= 0.15$, $\rho_{1} (\gamma_2)= 0.27$) and quickly
converge towards zero ($\rho_{10} (\gamma_1)= -0.007$, $\rho_{10} (\gamma_2)=
-0.003$) confirming a good mixing of Markov chains (see Table~G4 in the
supplementary material).
\begin{sloppypar}
To compare models, we employ the deviance information criterion or
DIC\footnote{Note that there does not exist any model adequacy measure that
takes into account measurement error in a multi-equation setup. This is an
open area of research and the only related work is \citet{Cheng-etal-2014},
where they propose a coefficient of determination for linear regression
models with measurement error.} proposed by \citet{Spiegelhalter-etal-2002},
and further studied in \citet{Celeux-etal-2006} and
\citet{Spiegelhalter-etal-2014}. Following \citet{Chan-Grant-2016}, we
compute the integrated likelihood for the SUR and SURME model with a thinning
factor of 1 and 100. This is used to calculate the marginal likelihood, which
is then utilized in DIC and the effective number of parameters $p_{D}$. For
the SUR model, we get negative estimates of $p_{D}$ which is indicative of
either a poor fit between the model and data or a conflict between the prior
and data. Different variations on the prior yield negative $p_{D}$, so it is
more likely due to a poor fit between the SUR model and data. When $p_{D}<0$,
the DICs are not adequate for evaluating the complexity and the fit of a
model \citep{Celeux-etal-2006}. On the other hand, for the SURME model we get
a positive estimate of $p_{D}$, indicative of a better fit (see Appendix~E in
the supplementary material for further discussion of the method and Table~G5
for the results).
\end{sloppypar}
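For completeness, once the integrated likelihood can be evaluated, the DIC and $p_{D}$ follow directly from the MCMC output, as in the sketch below; computing the integrated likelihood itself, as in \citet{Chan-Grant-2016}, is the demanding part and is not shown.
\begin{verbatim}
import numpy as np

def dic(loglik_draws, loglik_at_post_mean):
    # loglik_draws: log L(theta^(g)) for each retained draw g;
    # loglik_at_post_mean: log L at the posterior mean of theta.
    D_bar = -2.0 * np.mean(loglik_draws)       # posterior mean deviance
    p_D = D_bar + 2.0 * loglik_at_post_mean    # p_D < 0 flags a poor fit
    return D_bar + p_D, p_D
\end{verbatim}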
We next discuss the results from the MFVB approach, which on average takes about
$145$ cycles to reach the maximum of the evidence lower bound $\ell$; the
algorithm is terminated when the relative increase in the evidence lower
bound $\ell$ is less than $10^{-7}$. The results from the MFVB estimation of
SURME model are presented in Table~\ref{Case1:Sim1MFVB}, which shows that the
MFVB approach gives better results compared to Gibbs sampling. All parameters
have similar or lower biases, mainly for the intercepts $\beta_{11}$,
$\beta_{21}$ and for $\gamma$. However, the relative biases of the $\gamma$'s
are now reduced ($0.7\%$ and $-0.3\%$) as compared to Bayesian estimation of
SURME model. The estimates for $\sigma_{Z}^{2}$ and $\sigma_{u}^{2}$ show
that the model accurately estimates the variances and their relative biases
are small ($0.4\%$ and $-3.5\%$, respectively). The standard deviations of all
the parameters are smaller compared to those from MCMC estimation, leading to
slightly narrower $95\%$ credible intervals (as compared to the $95\%$
HPDI)\footnote{When calibrating this Monte Carlo study, we found a
significant underestimation of the variances of the coefficients $\gamma
_{1}$ and $\gamma_{2}$, echoing the previous discussion around the work of
\citet{Blei-etal-2017} (Section 3). After several trials (and to avoid
embarking on more complex approaches such as linear response variational
Bayes \citep{Giordano-etal-2018} or $\alpha $-variational inference
\citep{Yang-etal-2018}), we decided to use the following simple trick to
correct this undervaluation: $\sigma_{\gamma_{j}}$ is replaced by
$\sigma_{\gamma_{j}} \times \sqrt{MK/ E_{q\left( \sigma _{Z}^{2} \right) }}$,
for $j=1,..,M$ (see Section~B2 of the supplementary material).}.
Additionally, estimates of $\sigma _{mm^{\prime }}$ are closer to their
theoretical values and the estimated correlation coefficient
$\rho_{\varepsilon_1 \varepsilon_2} =0.488$ is close to 0.5, the actual
value. The lower panel of Table~\ref{Case1:Sim1MFVB} presents the results
from the exposure equation which emphasizes the accurate estimation of the
$\omega$ parameters. The MFVB approximation of both the classical structural
form and the exposure model shows that there are definite advantages in
adopting the MFVB approach to estimate measurement error models as compared
to the pure Bayesian method.
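The rescaling described in the footnote above amounts to a one-line post-processing step. A sketch (our reading of the formula, using the inverse-gamma mean $B_{q(\sigma_Z^2)}/(\delta_1^{\ast}-1)$ for $E_{q(\sigma_Z^2)}$):
\begin{verbatim}
import numpy as np

def corrected_gamma_sd(Sigma_q_gamma, M, K, delta1_star, B_q_sigmaZ2):
    # Rescale the MFVB posterior sd of gamma by sqrt(M*K / E_q[sigma_Z^2]).
    E_sigmaZ2 = B_q_sigmaZ2 / (delta1_star - 1.0)   # mean of IG(delta1*, B)
    return np.sqrt(np.diag(Sigma_q_gamma)) * np.sqrt(M * K / E_sigmaZ2)
\end{verbatim}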
We next decrease the variance $\sigma_{Z}^{2}$ from $\sigma_{Z}^{2}=1$ to
$\sigma_{Z}^{2}=0.0625$ leading to $\sigma_{u}^{2}= 0.0156$. The results are
presented in Tables~G6 to G9 of the supplementary material. Results from the
frequentist and Bayesian estimation of the SUR model again reveal strong
over-estimation of the intercepts and of $\beta_{22}$, and strong under-estimation
of the slopes $\gamma$ of the error prone covariate $Z_{i}$. However,
over-estimation of the variances $\sigma_{11}$ and $\sigma_{22}$ ($\simeq
19\%$ for both) is largely reduced compared to the case
$\sigma_{Z}^{2}=1$, but leads to a slightly under-estimated correlation
coefficient $\rho =0.42$. When we incorporate the measurement error in the
model, \textit{i.e.}, SURME model with a very small variance of
$\sigma_{Z}^{2}$, the Bayesian estimates show a less accurate estimation of
the intercepts (increasing the negative relative biases $-29.4\%$ and
$-25.6\%$), of the $\gamma$'s ($15\%$ and $16.8\%$) and of all the $\beta$'s.
Moreover, inefficiency factors rise to about $2$ indicating a relative loss
of efficiency due to slightly correlated samples. To neutralize this effect,
we can increase the thinning appropriately.\footnote{We relaunched the
simulations for this case with a thinning factor of $120$ and found inefficiency
factors close to $1$. For brevity, the results are omitted but are available
upon request.} Results for the exposure equation in the SURME model do not seem
to be affected by the strong decrease of the variance $\sigma_{Z}^{2}$. The
use of the MFVB approximation significantly attenuates the biases observed
with the Bayesian estimation of SURME. The relative biases for the intercepts
are now $-17\%$ and $-11\%$, and those for the $\gamma$'s are $8.6\%$ and
$7\%$, respectively. The relative errors for the variances $\sigma _{mm},\,
(m =1,2)$ reduce to approximately $-2.5\%$ and we get an estimated
correlation coefficient $\rho =0.52$. The MFVB approximation accurately
estimates parameters of the exposure equation. Once again, the MFVB method
reveals its advantages in estimating a SUR model with measurement error
although this advantage tends to be attenuated when a very small variance
$\sigma_{Z}^{2}$ occurs.
In summary, for a fixed measurement error of $25\%$, increasing the variance
$\sigma _{Z}^{2}$ of the error prone covariate $Z_{i}$ strongly biases the
estimated variances $\sigma _{mm} \, (m=1,2)$ as well as the whole set of
coefficients (intercepts and slopes) in the SUR model irrespective of the
method of estimation. But, taking into account the measurement errors through
SURME model neutralizes the negative effects of the increasing uncertainty on
the error prone covariate $Z_{i}$ and thus strongly reduces, or even
eliminates, the biases to obtain satisfactory estimates. This conclusion is
further reinforced with the use of the MFVB approximation.
\subsection{Case II: Altering $R_{Z}$}
We now investigate the performance of the proposed algorithms where
$\sigma_{Z}^{2}=0.0625$ and the reliability ratio $R_{Z}$ is gradually
decreased. Specifically, we consider $R_{Z}=\left\{\, 0.8, \,
0.5714\right\}$, which leads to $\sigma_{u}^{2}=\sigma
_{Z}^{2}(1-R_{Z})/R_{Z} = \left\{0.0156, \, 0.0469\right\} $ and
noise-to-true variance ratio $(1-R_{Z})/R_{Z}$ of $\left\{25\%, \,
75\%\right\}$. In the previous subsection, we have already studied the case
where $\sigma_{Z}^{2}=0.0625$ and $R_{Z}=0.8$, therefore the focus is only on
the case where the reliability is reduced to 0.5714.
The results presented in Tables~G14-G17 of the supplementary material are
poorer than those of $R_{Z}=0.8$ for both the frequentist and Bayesian
estimates of SUR model, with stronger over-estimation of the intercepts
($84.3\%$ and $25.6\%$) and stronger under-estimation of the slopes $\gamma$
($-42.5\%$). The relative biases of the intercepts are larger than in the
previous cases and the same is true for the $\gamma$'s and even more obvious
for the $\sigma_{mm} \, (m=1,2)$ (approximately $42\%$). The estimated
correlation coefficient $\rho =0.38$ is far from the true value $0.5$. The
Bayesian estimates of SURME model show a significant improvement, reducing
the biases for the intercepts ($33\%$ and $-10.8\%$) and the $\gamma$'s
($16.2\%$ and $18.4\%$), but with slightly larger posterior standard
deviations. The variance-covariance matrix $\Sigma_{\varepsilon}$ is well
estimated, with small relative biases of $-7\%$ and $-9\%$ for $\sigma_{mm}\,
(m=1,2)$. The estimated correlation coefficient turns out to be $\rho =0.53$.
Both $\sigma_{Z}^{2}$ and $\sigma_{u}^{2}$ are also close to the true values
(their relative biases are $-15.2\%$ and $15.1\%$, respectively). Once again,
the improvement with the MFVB approximation is more noticeable as we get
better results for the parameters with slightly smaller standard deviations.
The relative biases for the intercepts are $-16.6\%$ and $-5.7\%$, and those
of the $\gamma$'s are $8.5\%$ and $6.5\%$. For the slope coefficients, the
relative biases range between $-11\%$ and $-2.5\%$. Both $\sigma_{Z}^{2}$ and
$\sigma_{u}^{2}$ are also better estimated (their relative biases are
respectively $-5.6\%$ and $4.4\%$). This is also true for the $\sigma_{mm}\,
(m=1,2)$ (with small relative biases of $-1.9\%$ and $-1.6\%$) leading to an
estimated correlation coefficient $\rho =0.51$.
To summarize, a change in the reliability ratio $R_{Z}$ --- for example,
increasing the measurement error from $25\%$ to $75\%$ --- strongly biases
the whole set of coefficients (intercepts and slopes), including the
estimated variances $\sigma_{mm} \, (m=1,2)$ in the SUR model. This is true
both for the frequentist and Bayesian approach. On the other hand, accounting
for measurement error through SURME model largely eliminates the negative
effects of this alteration and strongly reduces the biases in SUR estimation.
Moreover, the use of MFVB approximation improves the results beyond those
obtained with the Bayesian estimation of the SURME model.\footnote{To obtain results
with the Bayesian estimation of the SURME equivalent to those obtained with
the MFVB approximation, it would be necessary to greatly increase the number
of MCMC draws, resulting in a substantial cost in terms of computing time.
Going from $51,000$ draws to $201,000$ draws leads to a relative increase in
computing time per replication from $16$ to $86$ times that of MFVB (see note
\footref{footnote_label_1}). There is therefore an obvious trade-off against
the Bayesian estimation of the SURME and in favor of the MFVB approximation.}
Finally, we present in Table~\ref{Table:SummaryRelErrors} the relative errors
of parameters for all the cases\footnote{For $\sigma_{Z}^{2}=1$ and
$R_{Z}=0.5714$, results are given in Tables~G13-G18. Last, Table~G25 gives a
summary of DICs and $p_D$s.} of $\sigma_{Z}^{2}=\left\{1, \, 0.0625\right\} $
and $R_{Z}=\left\{0.8, \, 0.5714\right\}$. At a glance, this table allows us
to compare and contrast all of the previously discussed results and another
case provided in the supplementary material. To reiterate, the results show
that for a fixed measurement error, increasing the variance of the error
prone covariate $Z_{i}$ strongly biases the whole set of coefficients
(intercepts and slopes) as well as the estimated variances $\sigma_{mm} \,
(m=1,2)$ in the SUR model, irrespective of the method of estimation. This is
also the case when, for a fixed variance $\sigma_{Z}^{2}$, the reliability
ratio $R_{Z}$ is reduced. Fortunately, taking into account the measurement
error through the SURME model attenuates or even neutralizes the undesirable
effects of the increasing uncertainty on the error prone covariate $Z_{i}$ or
reducing the reliability ratio. This conclusion is further reinforced with
the use of the MFVB approximation.
\section{Application}
\begin{sloppypar}
The statistical literature on modeling measurement error has often drawn
applications from health and epidemiology studies where certain variables
such as urinary sodium chloride \citep{Liu-Liang-1992} and blood pressure
\citep{Kannel-etal-1986} are treated as measured with error. In this context,
\citet{Carroll-etal-Book-2006} utilizes measurement error in systolic blood
pressure (\emph{SBP}) on several occasions to illustrate different kinds of
measurement error models and estimation methods. The idea is that long-term
\emph{SBP} is extremely difficult to measure and hence all recorded
observations from clinic visits on \emph{SBP} have measurement error. We draw
motivation from \citet{Carroll-etal-2006} and
\citet{Tao-etal-2011}\footnote{The Association for the Advancement of Medical
Instrumentation (AAMI) and the British Hypertension Society recommend an
absolute mean deviation (between oscillometric and invasive measurements of
systolic blood pressure) of less than $5$ mmHg and a standard deviation of less
than $8$ mmHg. Using $6640$ systolic blood pressure measures from $270$
participants, \citet{Tao-etal-2011} find large measurement errors of
$>10$ mmHg (i.e., the oscillometric measurement overestimates the real SBP) in
$28.78\%$ of the sample when SBP values are around $90$ mmHg. They also
find that when SBP is more than $150$ mmHg, most of the measurement errors
are negative (i.e., the oscillometric measurement underestimates the real SBP).
In their study, \citet{Tao-etal-2011} found an absolute mean deviation of
$1.98$ mmHg but a standard deviation of $14.87$ mmHg, practically doubling
the recommended norm for measurement errors. As the authors say (p.~288),
\textquotedblleft\textit{If oscillometric measurement underestimates the real
BP around the critical value (90 mmHg), the physician may give a wrong
treatment. If the oscillometric measurement overestimates the real BP around
90 mmHg, the error may lead to an under-diagnosis and a delayed treatment
response to perioperative hypotension and significantly increases the risk of
dying}''.} and present an application of SURME model where the primary
objective is to model the measurement error in \emph{SBP} and explore the
possibility of a better model fit relative to a standard SUR model.
\end{sloppypar}
The current study utilizes data from the National Health and Nutrition
Examination Survey (NHANES) for 2007-2008, a widely used survey designed to
assess health and nutritional status of civilians, non-institutionalized
adults and children in the United States. NHANES collects data by
interviewing individuals at home, who then report to mobile examination
centers (MECs) to complete the health examination component of the survey.
The MEC's provide a standardized environment for the collection of high
quality data, thus favoring dependable statistical estimation and
interpretations. The survey is unique in the sense that it combines
interviews and physical examinations of the respondents.
The dependent variables in the model are log of weight and high density
lipoprotein ($HDL$), which is also known as `good cholesterol'. The
covariates that are common to both equations and assumed to be measured
without error are as follows: age, gender, smoking status, hours of sedentary
activities, sleep disorder and low density lipoprotein plus 20 percent of
Triglyceride ($LDL20T$). The variable `height' is only expected to affect
weight, not $HDL$, and is therefore only included in the log weight equation.
Observed $SBP$ is assumed to have measurement error and transformed as $\ln
(SBP-50)$ to avoid scaling problems, as done in \citet{Carroll-etal-2006}.
The third reading on $SBP$ is used as data and the first two readings are
utilized to form priors on relevant parameters. Focusing on adults and
removing missing observations on all variables of interest leaves us with a
total of $N=1,001$ observations. Table~\ref{Table:AppDescStat} presents the
definition and descriptive statistics of all the variables used in the study.
To estimate the different SUR models with and without measurement error, we
utilize the following relatively vague priors on the parameters: $\beta \sim
N_{15}\left( \beta _{0},B_{0}\right) $ with $\beta _{0}=0_{15}$, $B_{0}=10
I_{15}$, $\gamma \sim N_{2}\left( \gamma _{0},G_{0}\right) $ with $\gamma
_{0}=0_{2}$, $G_{0}=10 I_{2}$, $\Sigma _{\varepsilon }\sim IW_{2}\left( \nu
_{0},S_{0}\right) $ with $\nu _{0}=10$ and $S_{0}=10 I_{2}$, $\omega \sim
N\left( \omega _{0},O_{0} \right) $ with $\omega_{0}=0_{15}$, $O_{0}=I_{15}$,
$\sigma _{Z}^{2}\sim IG\left( 50,10\right) $ and $\sigma _{u}^{2}\sim
IG\left( 50,5\right)$. The prior distribution for $\sigma _{Z}^{2}$ is
specified such that the prior mean ($0.2$) is close to the mean difference
between first and second readings on transformed \emph{SBP} ($0.024$).
Similarly, the prior distribution for measurement error variance $\sigma
_{u}^{2}$ is stipulated such that prior mean ($0.1$) is near the mean
difference in variance from first and second readings on transformed
\emph{SBP} ($0.002$). Note that some of the parameters only appear in the
measurement error model and the priors are used accordingly.
We first look at the results for the Bayesian estimation of SUR model
presented in Table~\ref{Table:AppBSUR} from 400,000 draws after a burn-in of
50,000 draws with a thinning factor of 100 (optimized following the approach
in \citet{Owen-2017}). The posterior estimates show that $\ln (age)$ is not
statistically different from zero in both the $\ln (weight)$ and $HDL$
equations. \emph{Male} indicator variable positively affects $\ln (weight)$,
but negatively affects \emph{HDL}. \emph{Height} has a strong positive effect
on $\ln (weight)$ and this is typically anticipated for all adults. Smoking
daily or some days is negatively associated with $\ln (weight)$. This outcome
is not surprising since smoking is well known to reduce appetite. On the
contrary, smoking seems to have no significant effect on $HDL$. Number of
hours of sedentary activities is positively associated with $\ln (weight)$
and negatively associated with \emph{HDL}. The result confirms the generally
held belief that being inactive increases weight and is negatively associated
with good cholesterol. Sleep disorder is also known to be positively
associated with weight gain, and this is confirmed in our findings, but it has
no effect on \emph{HDL}. \emph{LDL20T} has a positive (negative) effect on
$\ln (weight)$ (\emph{HDL}), which is expected since \emph{LDL} is commonly
referred to as `bad cholesterol' and is associated with weight gain. Transformed
\emph{SBP} has a positive effect on $\ln (weight)$, and also a positive effect on
\emph{HDL}, the latter statistically different from zero at the 90\% probability
level. As the first equation is a log-log specification, the coefficient of
$\ln(SBP-50)$ is an elasticity. Then, a $10\%$ increase in the transformed
\emph{SBP} leads to a $0.967\%$ growth of weight. The second equation is a
semi-log specification, so the elasticity of \emph{HDL} relative to the
transformed \emph{SBP} at the mean of the sample is: $0.1012/1.32 = 0.076$. A
$10\%$ increase in the transformed $SBP$ leads to a $0.76\%$ growth of $HDL$.
The estimated correlation coefficient of the residuals between the two
equations is $\rho_{\varepsilon_1 \varepsilon_2} =-0.304$. Inefficiency
factors are close to 1, suggesting good mixing of the draws, and Geweke's CD
($CD \sim N(0,1)$) confirms that a sufficiently large number of draws has
been taken.
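Explicitly, the semi-log elasticity computed above scales the coefficient by the sample mean of the dependent variable,
\begin{equation*}
\eta_{HDL} \;=\; \frac{\partial\, HDL}{\partial \ln (SBP-50)} \times \frac{1}{\overline{HDL}} \;=\; \frac{0.1012}{1.32} \;\approx\; 0.076,
\end{equation*}
so a $10\%$ increase in the transformed \emph{SBP} raises \emph{HDL} by about $0.76\%$ at the sample mean.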
The results from the Bayesian estimation of SURME model (which accounts for
the measurement error in the covariate \emph{SBP}) is presented in the upper
panel of Table~\ref{Table:AppBSURME}. A quick glance shows that the results
for the covariates measured without error are similar to those in
Table~\ref{Table:AppBSUR}, except for the intercept and the male indicator in
the $\ln (weight)$ equation. The $\ln (height)$ coefficient in the $\ln
(weight)$ equation is now slightly higher, but its $95\%$ credible interval
overlaps with that of the SUR model. Posterior estimates corresponding to
transformed $SBP$ increase in both equations (from $0.097$ to $0.141$ in the
$\ln(weight)$ equation and from $0.101$ to $0.152$ in the $HDL$ equation)
leading to the following elasticities at the mean of the sample: $0.141$ and
$0.115 (= 0.152/1.32)$, for weight and $HDL$, respectively. However, as the
posterior standard errors become larger (from $0.027$ to $0.044$ in the
$\ln(weight)$ equation and from $0.056$ to $0.092$ in the $HDL$
equation)---as in the Monte Carlo study---the $95\%$ HPDI of the posterior
means of $\ln(SBP-50)$ overlap even if the distribution moves to the right
when we go from SUR to SURME (see Figure~\ref{Fig:sbpWEIGHTeq}). In
particular, the $95\%$ HPDI of the posterior means of $\ln(SBP-50)$ are
$\left[ 0.051;0.143\right] $ and $\left[ 0.009;0.194\right] $ in the $\ln
(weight)$ and the $HDL$ equations, respectively, in the SUR model, and
$\left[ 0.067;0.213\right]$ and $\left[ 0.005;0.304\right] $ in the $\ln
(weight)$ and the $HDL$ equations in the SURME model. In the $HDL$ equation,
the coefficient of \emph{SBP} is positive but statistically equivalent to
zero. Posterior estimate of measurement error variance is $0.029$, which
leads to an estimated reliability ratio of about $59.78\%$ and a
noise-to-true variance ratio of $67.27\%$. The posterior variances of the
disturbances from $\ln (weight)$ and $HDL$ are close to those of SUR model
and lead to an error correlation $\rho_{\varepsilon_1 \varepsilon_2} =-0.25$.
Inefficiency factors are close to 1 and Geweke's CD confirms, for most
parameters, that a sufficiently large number of draws has been taken.
The lower panel of Table~\ref{Table:AppBSURME} presents the results for the
exposure equation from the Bayesian estimation of SURME model. In the first
equation, only three variables have a positive effect on $\ln(SBP-50)$:
$\ln(age)$, LDL20T and $\ln (height)$ (only at the $10\%$ probability level).
In the second equation, four variables have a positive effect on
$\ln(SBP-50)$: $\ln(age)$, LDL20T, male and smokers (the last two variables
are different from zero only at the $10\%$ probability level).
We next estimate the SURME model using the MFVB approach, which takes $1565$
cycles to reach the maximum of the evidence lower bound $\ell$. The results,
presented in the upper panel of Table~\ref{Table:AppMFVB}, show that for the
transformed \emph{SBP} both the coefficient ($0.159$) and the probability
interval ($\left[ 0.076;0.242\right]$) are similar to those obtained from the
Bayesian estimation of the SURME model. In the $HDL$ equation, the marginal
effect of $0.186$ is higher compared to the Bayesian estimate, with a $95\%$
probability interval of $\left[0.03;0.34\right]$. Taking measurement errors
into account using MFVB allows us to obtain significantly larger and more
accurate elasticities of weight ($0.159$) and $HDL$ ($0.1414 = 0.186/1.32$)
with respect to the transformed $SBP$ compared to the other method.
Posterior estimate of measurement error variance from the MFVB approach is
$0.029$, which is similar to the Bayesian estimate, and leads to an estimated
reliability ratio of about $60.2\%$ and to a noise-to-true variance ratio of
$66.1\%$. The posterior variances of the disturbances from $\ln (weight)$ and
$HDL$ are close to the Bayesian estimates and lead to the same correlation
between the errors of the two equations $\rho_{\varepsilon_1 \varepsilon_2}
=-0.25$. The lower panel of Table~\ref{Table:AppMFVB} presents results for
the exposure model estimated with the MFVB approximation method. In the
first equation, four variables have a positive effect on $\ln(SBP-50)$:
$\ln(age)$, smokers, $LDL20T$ and $\ln (height)$. Similarly, in the second
equation, four variables have a positive effect on $\ln(SBP-50)$: $\ln(age)$,
$male$, $smokers$ and $LDL20T$. The exposure equation in the SURME model
allows us to characterize the implicit links between the true systolic blood
pressure and the ``risk factors'' such as age, gender, smoking and ``bad
cholesterol'' ($LDL20T$).
Figure~\ref{Fig:sbpWEIGHTeq} gives the posterior densities of the parameter
corresponding to $\ln(SBP-50)$ in the $\ln(weight)$ equation from the
Bayesian estimation of SUR model, SURME model and the MFVB estimation of
SURME model. We note a shift of the marginal effect of $\ln(SBP-50)$ on
$\ln(weight)$ to the right of the distribution from a mode established around
$0.097$ for SUR model to a mode of $0.141$ for SURME model but with a wider
dispersion. The estimated probability density function (\emph{pdf}) with MFVB
is slightly to the right and centered around the mode ($0.159$) but with a
surface under the curve globally equivalent to that of Bayesian estimation of
SURME model. In Figure~\ref{Fig:sbpHDLeq}, we observe similar shifts in the
posterior density of the parameter corresponding to $\ln(SBP-50)$ in the
$HDL$ equation, when we move from SUR ($0.101$) to SURME ($0.152$) or to MFVB
($0.187$) estimation of SURME model.
We note that similar to the Monte Carlo study, the MFVB approach to
estimating the SURME model improves the results compared to those from Gibbs
sampling. The results actually lend credibility to the proposed MFVB
algorithm since coefficient estimates for variables which do not have
measurement error are almost unaltered. However, when measurement error in
$SBP$ is ignored as in the SUR model, the posterior estimates are
underestimated relative to the MFVB estimates. So, accounting for measurement
error potentially corrects or reduces the bias in parameter estimates
\citep[see][]{Carroll-etal-2006}.
\section{Conclusion}
The paper considers a SURME model (seemingly unrelated regression where some
covariates have classical measurement error of the structural form) and
introduces two novel estimation methods: a pure Bayesian algorithm based on
MCMC and a second algorithm based on mean field variational Bayes
approximation. The proposed algorithms use a prior distribution on
measurement error variance to resolve identification issues in the model. In
the MCMC estimation, Gibbs sampling is employed to sample the parameters from
the conditional posterior distributions. While most of the conditional
posterior densities have the standard form and are easily derived, the
conditional posterior density for the true unobserved quantity associated
with covariates having measurement error requires extensive attention to
arrive at a manageable form. We also note that the proposed SURME model as
explained is based on the structural form of measurement error, but the
functional form of measurement error can be easily incorporated by
introducing the distribution of the true unobserved quantity as a part of the
subjective prior information. The expressions for the joint and conditional
posteriors will remain unchanged. However, estimating the SURME model using
MCMC leads to high autocorrelation in the draws corresponding to the
covariate measured with error. While this is easily dealt with using
\emph{thinning}, the paper also proposes the MFVB approach as an alternative
to get around the problem of high autocorrelation.
The proposed estimation algorithms are illustrated in multiple Monte Carlo
simulation studies. While the first set of two simulations (labeled Case~I)
investigates the effect on the estimates of varying the variance of the true
unobserved variable (for a fixed reliability ratio), the second set of two
simulations examines the effect of a changing reliability ratio (for a fixed
variance of the true unobserved variable). The results from all the
simulations show that the Bayesian and MFVB estimation of SURME model reduce
the biases to obtain satisfactory estimates as compared to estimates from SUR
model. Moreover, the MFVB approach turns out to be an excellent alternative
to MCMC, which suffers from poor mixing in the presence of latent variables.
Besides, the MFVB approach has slightly better estimation accuracy and can be
advantageous with large data sets.
The proposed models and techniques are also implemented in a health study
where the two dependent variables, log of weight and high density lipoprotein
(\emph{HDL}), are regressed on a set of covariates measured without error and
on systolic blood pressure (\emph{SBP}) known to have measurement error. The
model is estimated using the two algorithms and the results obtained reveal
that the sign of the estimated coefficients are mostly consistent with what
is typically found in the literature. Specifically, \emph{SBP} has a positive
effect on both $\ln(weight)$ and \emph{HDL}, measurement error variance is
small with an estimated reliability ratio of about $60\%$ and a noise-to-true
variance ratio of $66\%$. To offer a baseline comparison, a SUR model that
ignores measurement error in \emph{SBP} is also estimated using Gibbs
sampling. Comparing the results across models, we see that posterior
estimates for covariates without measurement error are almost identical, but
that of \emph{SBP} is lower and hence underestimated both in the weight and
the $HDL$ equations.
The combination of SUR and measurement error models is attractive and the
proposed model can be generalized in several directions. One straightforward
extension is the introduction of multiple covariates with measurement error
in each SUR equation. However, the challenge here is to keep track of
measurement errors arising from different covariates. The proposed SURME
model can also be modified by introducing classical measurement error in the
response variable or nonclassical measurement error models, where the errors
may be correlated with the latent true values. Beyond the SUR models, these
Bayesian approaches may be useful for measurement error in simultaneous
equation models. We leave these possibilities for future research.
In high-resolution γ-ray spectroscopy, efficiency is a significant attribute.
While a large detection efficiency is beneficial for data collection, it is the precise value (and its energy dependence) which is crucial for data analysis.
The γ-ray energy range of interest heavily depends on the experiment.
Most γ-rays observed in nuclear physics stem from transitions between excited states of nuclei.
These commonly have energies between \SI{100}{\keV} and \SI{3}{\MeV}, and thus γ-ray spectroscopy is often performed in this energy region.
Several research areas have come into focus which require γ-ray detection at energies around \SI{10}{\MeV} and higher, for example studies of the Pygmy Dipole Resonance (PDR) and radiative capture reactions for nuclear astrophysics.
For the PDR, the decay behavior of $J^\pi=1^-$ states at energies below the neutron separation energy is studied \cite{Savran2013}.
This includes direct decays to the ground state, i.e., γ-ray transitions around \SIrange{5}{10}{\MeV}.
For radiative capture reactions, direct transitions from the entry state at the sum of center-of-mass energy and Q-value to the ground state must be investigated \cite{Netterdon2015}.
This translates to γ-ray energies up to \SI{15}{MeV}.
The higher the γ-ray energy, the harder becomes a reliable experimental determination of the efficiency.
Standard sources provide calibration up to \SI{3.6}{\MeV} only.
From there on, only fewer and more complex methods can be used, see \cref{c:excal}.
Our areas of research require precise efficiency calibration at energies hardly accessible experimentally.
Simulations can address this need for fast, easy, and reliable calibration at any γ-ray energy.
Interactions of γ-rays with matter are known well enough; and given geometries and materials, Monte-Carlo simulations with particle transport codes like \textsc{Geant4} \cite{Agostinelli2003} can provide full-energy-peak (FEP), single-escape-peak (SEP), double-escape-peak (DEP), and coincidence efficiencies.
\textsc{Geant4} provides a simulation framework, but no ready-to-use executable -- one must implement each specific setup.
G4Horus provides a ready-to-use \textsc{Geant4} based application for simulating the efficiency of γ-ray detectors.
It is used at the Institute for Nuclear Physics, University of Cologne, to simulate the efficiency of the HPGe-detector array HORUS, see \cref{c:horus}.
It provides everything required to simulate the efficiency, that includes especially detector and target chamber geometries and a predefined workflow that requires minimal knowledge and effort from the user.
\subsection{\texorpdfstring{γ}{Gamma}-ray spectroscopy with HORUS}\label{c:horus}
Located at the \SI{10}{MV} FN-Tandem accelerator at the Institute for Nuclear Physics, University of Cologne, the γ-ray spectrometer HORUS (High-efficiency Observatory foR Unique Spectroscopy) is used to investigate the structure of nuclei and measure cross sections to answer questions in nuclear astrophysics.
It consists of up to 14 HPGe detectors, six of which are equipped with active anti-Comp\-ton BGO shields \cite{Netterdon2014a}.
Signals from the detectors are processed by XIA's Digital Gamma Finder 4C Rev.\,F, which allows for acquisition of so-called \emph{listmode} data, where coincident hits in different detectors can be correlated \cite{Pickstone2012}.
For example, γγ coincidences can be used to investigate quadrupole and octupole states \cite{Pascu2015} or low-spin structures \cite{Fransen2004}.
Passivated Implanted Planar Silicon (PIPS) particle detectors can be added with the SONIC detector chamber \cite{Pickstone2017}.
They are used in coincidence with the HPGe detectors to select events with a specific excitation energy, which eliminates other unwanted feeding transitions.
The resulting spectra are used for lifetime measurements with the DSAM technique \cite{Hennig2015} or to investigate the Pygmy Dipole Resonance \cite{Pickstone2015}.
In addition, high energetic γ-rays, which are emitted after capture of protons or α-particles, can be used to determine total and partial cross sections for nuclear astrophysics \cite{Netterdon2014a, Mayer2016}.
HORUS has no default, fixed configuration. For every experiment, the detectors and target chambers are optimized to match the experimental requirements.
\subsection{Experimental efficiency calibration}\label{c:excal}
The full-energy-peak efficiency can be determined experimentally using standardized calibration sources and known reactions.
Standard sources of not-too-short lived radioactive isotopes provide easily accessible calibration points up to \SI{3.6}{\MeV} and thus are commonly used for both energy and efficiency calibration.
Sources with known activity made from, e.g., \isotope[152]{Eu} and \isotope[226]{Ra}, are excellent for the γ-ray-energy range up to \SI{3}{\MeV}.
As their half-lives span decades, they only need to be procured once.
\isotope[56]{Co} emits usable γ-rays up to \SI{3.6}{\MeV}.
Due to its half-life of \SI{77}{\day}, sources need to be re-activated about every year via the (p,n) reaction on an enriched \isotope[56]{Fe} target.
More exotic isotopes can extend the coverage up to \SI{5}{\MeV}.
The energy range covered by the 69 nuclides included in the IAEA xgamma standard \cite{iaea-xgamma} ends at \SI{4.8}{\MeV} with the isotope \isotope[66]{Ga}.
The Decay Data Evaluation Project (DDEP) \cite{DDEP} lists several more exotic nuclei.
Here, the highest transition at \SI{5}{\MeV} also stems from \isotope[66]{Ga}.
With an almost negligible intensity of \SI{0.00124\pm0.00018}{\percent}, it is, however, not well suited for calibration purposes.
While the energy range covered by \isotope[66]{Ga} is expedient, the short half-life of \SI{9.5}{\hour} is not and requires the source to be produced anew for each project -- increasing the already high workload of the main experiment.
Decay measurements of short-lived isotopes in target position can extend the energy range up to \SI{11}{\MeV}.
The decay of \isotope[24]{Al} with a half-life of \SI{2}{\s}, created by pulsed activation of \isotope[24]{Mg}, is a feasible way to obtain calibration lines up to \SI{10}{\MeV} \cite{Wilhelm1996, Pickstone2017}.
Neither the IAEA nor the DDEP currently includes \isotope[24]{Al} in its list of recommended nuclides; thus, there can be doubts about the accuracy of the existing decay intensity data.
This method is even more involved than the methods mentioned before, as a pulsing device must be set up at the accelerator injection and linked to the data acquisition.
In addition, this method releases neutrons close to the HPGe detectors, which might be damaged.
Direct γ-ray emissions from capture reactions can also be used for efficiency calibration.
Emissions from neutron capture reactions, mostly \isotope[14]{N}(n,γ)\isotope[15]{N}, have been used successfully \cite{Molnar2002, Belgya2008, MIYAZAKI2008}.
As this method requires neutrons, which are neither trivial to procure nor healthy for HPGe detectors, we have made no efforts to introduce this method at HORUS.
We have previously used direct γ-ray emissions from the proton capture resonance of \isotope[27]{Al}(p,γ)\isotope[28]{Si} at $E_p = \SI{3674.4}{\keV}$ \cite{Netterdon2014a}.
As the measurements take about a day, the intensity uncertainties are high, and angular distributions must be corrected for, we no longer perform these regularly.
The \isotope[27]{Al}(p,γ)\isotope[28]{Si} reaction has many resonances, however only few have been measured extensively, e.g., at $E_p = \SI{992}{\keV}$ \cite{Scott1975}.
There are also several resonant proton capture reactions on other light isotopes, e.g., on \isotope[23]{Na}, \isotope[39]{K}, \isotope[11]{B}, \isotope[7]{Li} \cite{Elekes2003,Zijderhand1990,Ciemaa2009}, and \isotope[13]{C} \cite{Kiener2004}.
Unfortunately, these comparatively low-lying resonances are hard to reach with the high-energy FN-Tandem accelerator -- they might be perfectly accessible for other groups.
Alternatively, given enough calibration points, extrapolation using fitted functions can be used. This process can produce diverging results, depending on the distance from the highest calibration point and on the choice of fit function \cite{Molnar2002}, but is reasonably accurate otherwise and low-effort.
To summarize: a thorough γ-ray efficiency calibration uses up more time and effort the higher the γ-ray energy of interest.
\section{Purpose}\label{c:purpose}
We developed G4Horus to provide several services to support experiments at HORUS.
The goals in order of importance are:
1) Provide accurate full-energy-peak efficiency.
The difficult access to calibration points at high energies as described in \cref{c:excal} leaves a gap which Monte-Carlo simulations can fill.
Simultaneously, they can provide the single- (SEP) and double-escape-peak (DEP) efficiency with and without active veto signal from the BGO anti-Compton shields.
2) Require minimum effort and domain-specific knowledge from the user.
\textsc{Geant4} does not offer a ready-to-use application and even to get \emph{just} the efficiency, a full implementation of all components is required.
All users should be able to use the software without having to worry about knowing \textsc{Geant4} and without spending more time than necessary.
3) Adapt to all experimental configurations.
The HORUS setup is highly configurable with many different detectors, target chambers, and other equipment.
Users should be able to reproduce their individual configuration from predefined modular parts.
4) Guide new developments.
Experimental requirements continuously change.
Simulations can help to make informed decisions for adaptations to the setup.
5) Provide coincidence and other high-level data.
With simulations, coincidence efficiencies can be checked, and the correctness of the analysis-software procedure confirmed. They can also be used to develop and test new experimental setups and analysis methods.
\section{Implementation}
Monte Carlo simulations of γ-ray detectors are well established \cite{Hardy2002, Soderstrom2011, Baccouche2012}.
For \textsc{Geant4}, the three main components geometry, physics, and actions must be implemented.
The main difficulty is summarized well in \cite{Giubrone2016}:
\enquote{The accuracy of \textsc{Geant4} simulations is heavily dependent on the modeled detector geometry. Characterizing a detector is difficult, especially if its technical characteristics are not well known.}
This especially applies to HPGe detectors, where the manufacturer often only provides the most basic information, e.g., crystal size and weight.
X-ray imaging is a non-destructive method to obtain excellent geometry data for the crystal \cite{Chuong2016}; however, not the full volume of the crystal might be \emph{active} volume, see \cref{c:geocoax}.
Passive materials between the source and the detector must be implemented accurately as well.
Users of Monte-Carlo simulation software commonly manufacture the desired shapes by writing code to create, intersect, and position basic shapes.
This seems excessively complicated compared to industry standard engineering tools.
In our case, the complex shapes of the CNC-milled target chambers are difficult or even impossible to implement with standard \textsc{Geant4} tools.
Instead, we use CAD files directly, see \cref{c:chambergeo}.
\subsection{Geometry}
\subsubsection{Target chambers and CAD-based geometry}\label{c:chambergeo}
In general, geometry in \textsc{Geant4} is implemented by writing \texttt{C++} code.
Basic shapes like boxes and spheres are created, rotated, intersected, and placed manually without visual interfaces.
While this is feasible for simple volumes, more complicated structures might be drastically reduced in detail or simply skipped and not implemented at all.
Such a simplified geometry might be acceptable or even desired for faster execution in some cases.
However, investigations of, e.g., background caused by passive components, are meaningless without all physical structures placed completely and accurately.
The target chambers used at HORUS are, like most modern mechanical structures, created using Computer Aided Design (CAD) software, and then build with Computer Numerical Control (CNC) milling machines or even 3D printers.
We think that not using these CAD-files, which already exist \emph{anyway}, is a massive waste of time and effort, independent of the complexity of the models.
Even if these do not exist yet, it should be significantly faster and less error prone to re-create them with a CAD program instead of writing \texttt{C++}-\textsc{Geant4} code.
There are several concepts for creating \textsc{Geant4} compatible volumes from CAD models.
If the shape has been constructed with Constructive Solid Geometry (CSG), the underlying configuration of basic components can be translated to basic \textsc{Geant4} shapes and Boolean operations.
In principle, this is the favorable solution, as it is simple yet elegant and might offer the best performance during simulation.
If the CSG configuration is not known, it is sometimes possible to recreate it with CSG decomposition \cite{Lu2017a}.
Complex volumes can also be converted to a tessellated shape, where the surface is represented by a triangle mesh, called \texttt{G4TessellatedSolid} in \textsc{Geant4} \cite{Poole2012}.
Alternatively, the whole volume can be split into many tiny tetrahedrons (\texttt{G4Tet}) using a Delaunay-based algorithm \cite{Si2015}.
A hybrid approach, that is building a simple structure with CGS and adding complex details with tessellated meshes, is also conceivable.
Converted shapes can be stored in the \texttt{GDML} (Geometry Description Markup Language) format.
The idea of using these CAD files in \textsc{Geant4} is not new, but there is no widely adopted solution.
A conversion can either be performed with plugins in the CAD program, a standalone software, or as a dependency in the \textsc{Geant4} application itself.
For example, plugins have once been developed for \emph{FreeCAD} \cite{FreeCADGDML, Pinto2019} and \emph{CATIA} \cite{Belogurov2011}.
Notable standalone projects are \emph{cad-to-geant4-converter} \cite{Tykhonov2015}, \emph{STEP-to-ROOT} \cite{Stockmanns2012a}, \emph{SW2GDMLconverter} \cite{Vuosalo2016}, and \emph{McCad-Salome} \cite{Lu2017a}.
Some projects seem to be abandoned, having received their last update several years ago.
We had success with \emph{CADMesh} \cite{Poole2012a} to integrate our geometry.
The CADMesh software package supports creating tessellated and tetrahedral meshes in \textsc{Geant4} at runtime, which enables fast iteration and a flexible geometry selection.
The sequence of operations is as follows:
We receive the original geometry as STEP file from the mechanical workshop, which includes every detail as its own object.
First, we use FreeCAD to reduce the complexity by deleting minor components that have little to no impact on the efficiency.
This should provide both a smoother mesh conversion as well as a faster simulation.
To assess which components can be deleted, we reasoned that objects that are not in the direct path between source and detector are less critical, for example the connectors in the footer of SONIC, see \cref{f:targetchamber}.
In addition, objects that are either tiny (screws) or made from low-Z material (gaskets, isolation) are also expendable in our case.
This might not hold when investigating the efficiency at very low γ-ray energies or in the X-ray regime, or scenarios where charged particles pass through.
Ideally, one could even remove the screw holes entirely, which would both be closer to reality in terms of material budget and yield a less complex model.
Second, we group objects made from the same material, e.g., aluminum, together and save them in a single STEP file.
Third, the STEP geometry is converted to an STL mesh.
While FreeCAD can perform this conversion, we experienced several problems with this process, mostly stuck tracks during simulation.
Instead, we used the online STEP-to-STL converter of a 3D-print-on-demand service without issues.
An honorable mention at this point is the \emph{MeshLab} software for mesh processing and editing.
Once CADMesh loads the STL shape as tessellated volume, it can be assigned its material and placed like any other shape.
An example of this process is shown in \cref{f:targetchamber}.
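For illustration, the final loading step could look as follows; this is a minimal sketch assuming the CADMesh~2 interface, with the file name, material, and mother volume \texttt{worldLV} as placeholders:
\begin{verbatim}
// Load the tessellated mesh converted from STEP
auto mesh = CADMesh::TessellatedMesh::FromSTL(
    "chamber_alu.stl");
G4VSolid* solid = mesh->GetSolid();

// Assign a material and place like any other solid
auto alu = G4NistManager::Instance()
               ->FindOrBuildMaterial("G4_Al");
auto chamberLV =
    new G4LogicalVolume(solid, alu, "ChamberAlu");
new G4PVPlacement(nullptr, G4ThreeVector(), chamberLV,
                  "ChamberAlu", worldLV, false, 0);
\end{verbatim}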
\begin{figure}
\centering
\includegraphics[width=0.49\columnwidth, height=0.495\columnwidth]{figures/cad-full.jpg}
\includegraphics[width=0.49\columnwidth, height=0.495\columnwidth]{figures/cad-red.jpg}
\includegraphics[width=0.49\columnwidth, height=0.495\columnwidth]{figures/cad-geant4.jpg}
\includegraphics[width=0.49\columnwidth, height=0.495\columnwidth]{figures/sonic2.jpg}
\caption{\label{f:targetchamber} Example for using CAD geometry in \textsc{Geant4}. The original highly-detailed CAD file (t.l.) is reduced to its main components (t.r.) and converted to an STL mesh. CADMesh then loads this mesh, which can then be assigned a material and placed like a regular solid in \textsc{Geant4} (b.l.). This process can recreate the real-life geometry (b.r.) quickly and accurately.}
\end{figure}
\subsubsection{Detector geometry}\label{c:hpgegeo}
Several types of detectors are implemented in G4Horus, which are derived from a common \texttt{Detector} class.
This base class provides basic operations to be placeable by the \texttt{Setup} class, such that they can be mounted appropriately, see \cref{c:setup}.
\texttt{PIPS} particle detectors directly derive from this base class.
For HPGe detectors, several different crystal types exist.
A common \texttt{HPGe} base class provides implementation of the cylindrical aluminum hull, while the derived \texttt{HPGe\-Coaxial}, \texttt{HPGeClover}, and \texttt{HPGeHexagonal} classes implement the respective inner structures.
Initial parameters for most HPGe detectors were taken from the manufacturer data sheets and gathered in \texttt{DetectorLibrary}, a factory class that instantiates the correct detector from its identifier.
While all our HPGe detectors used here are technically coaxial detectors, the \texttt{HPGeCoaxial} class implements the unaltered detector shape, a cylinder with a drilled hole from the back.
Data sheets provided by the manufacturer are reasonably detailed and include diameter, length, volume and distance to the end cap.
Educated guesses had to be made sometimes for the dimensions of the hole drilled for the cooling finger.
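A minimal sketch of such an unaltered shape in \textsc{Geant4}; the dimensions are placeholders and do not correspond to any specific detector:
\begin{verbatim}
auto ge = G4NistManager::Instance()
              ->FindOrBuildMaterial("G4_Ge");

// Full cylinder, 74 mm diameter, 70 mm long
auto cylinder = new G4Tubs("Crystal", 0., 37. * mm,
                           70. / 2. * mm, 0., 360. * deg);
// Borehole, 10 mm diameter, 50 mm deep
auto borehole = new G4Tubs("Hole", 0., 5. * mm,
                           50. / 2. * mm, 0., 360. * deg);
// Subtract the hole so it opens at the back face
auto crystal = new G4SubtractionSolid(
    "CoaxialCrystal", cylinder, borehole, nullptr,
    G4ThreeVector(0., 0., -(70. - 50.) / 2. * mm));
auto crystalLV =
    new G4LogicalVolume(crystal, ge, "CoaxialCrystalLV");
\end{verbatim}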
The crystals implemented by \texttt{HPGeHexagonal} are cut to semi-hexagonal conical shapes and encapsulated in hermetically closed aluminum cans of the same form \cite{Thomas1995}.
This type is used also in EUROBALL \cite{Simpson1997} and it is the predecessor to the six-fold segmented encapsulated MINIBALL \cite{Warr2013} and 36-fold segmented AGATA \cite{Akkoyun2012} detectors.
The dimensions of each crystal are identical apart from the length, which can vary slightly and is noted in the data sheets.
The implementation was tested with \isotope[226]{Ra}, \isotope[56]{Co}, and \isotope[27]{Al}(p,γ)\isotope[28]{Si} calibration data \cite{Mayer2016}.
In addition, a calibration data set with \isotope[226]{Ra}, \isotope[56]{Co}, \isotope[66]{Ga}, and \isotope[24]{Al} was used from an experiment with the SONIC-V3-ΔEE target chamber.
For most classic coaxial detectors, only minor changes, e.g., to the dead layer thickness, were necessary to reproduce the absolute FEP efficiency.
While we tried to bring the efficiency shape in line over the whole energy range, we focused less on the low energy part than described in, e.g., \cite{Chuong2016}.
Some of the encapsulated, hexagonal detectors show an experimental efficiency which is up to \SI{30}{\percent} lower than expected from simulations.
We have investigated this issue in more detail and studied the impact on the simulation accuracy at high energies, see \cref{c:geocoax}.
BGO shields for active Compton suppression were implemented with two different types of cone-shaped, lead front pieces (\emph{noses}).
Energy deposited in these detectors is converted to a veto signal afterwards.
For determining the HPGe FEP efficiency, it is not required to record veto detector data, and they can be used passively.
The two HPGe Clover detectors of the Cologne Clover Counting Setup \cite{Scholz2014a} with four crystals each were implemented with dimensions from prior work.
\subsubsection{Setup geometry}\label{c:setup}
For our experiments, detectors are placed around the target in the center.
The base class \texttt{Setup} is the abstract concept of an experimental setup which provides the common detector placement logic.
The individual setups derive from this base class and provide the Θ and φ coordinates of the mounting points as well as physical structures, if needed.
The main experimental setup covered in this project is the high-efficiency γ-ray spectrometer HORUS \cite{Netterdon2014a}.
It provides 14 mounting points, labeled \texttt{Ge00} to \texttt{Ge13}, for HPGe detectors and BGO anti-Compton shields, see \cref{f:horus}.
In the center of HORUS, different target chambers can be installed.
Two different target chambers for nuclear astrophysics were implemented, one with conventional and one with CAD geometry.
Different versions of the SONIC target chamber are available via CAD geometry.
The SONIC-V3 target chamber has 12 mounting points for PIPS detectors, and its ΔE-E variant additional 12 positions to accommodate thinner PIPS detectors to form ΔE-E telescopes \cite{Pickstone2017}.
For each experiment, the user builds the geometry in \texttt{DetectorConstruction} using \texttt{PlaceDetector(id, po\-si\-tion, distance, filters)}.
Within a single line, a detector is identified by its id, mounted to a named position, and equipped with passive filter materials.
See \cref{f:section} for a schematic view and distance definition.
The whole process of creating all required geometry information is thus reduced to a handful of clearly arranged lines of code, and can be done within minutes:
\begin{verbatim}
auto horus = new Horus(worldLV);
horus->PlaceDetector(
"609502", "Ge00", 7. * cm,
{{"G4_Cu", 2. * mm}, {"G4_Pb", 1. * mm}}
);
horus->PlaceDetector("73954", "Ge01", 7. * cm);
// ...
auto sonic = new SonicV3(worldLV);
sonic->PlaceDetector("PIPS", "Si00", 45.25 * mm);
sonic->PlaceDetector("PIPS", "Si01", 45.25 * mm);
// ...
\end{verbatim}
This method requires recompilation on any geometry change.
While it is possible to build a messenger system to set up the geometry at runtime with \textsc{Geant4} macros, the resulting improvement in usability is currently not deemed worth the loss of direct control and flexibility.
This is a subjective matter and we might revisit this decision in the future.
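The placement logic behind this call is conceptually simple; the following sketch illustrates the idea, where the names and sign conventions are schematic and not the literal \texttt{Setup} code:
\begin{verbatim}
// Schematic: mount a detector at (theta, phi, distance)
void PlaceAt(G4LogicalVolume* detectorLV,
             G4LogicalVolume* worldLV,
             double theta, double phi,
             double distance, double halfLength)
{
    // Unit vector from the target to the mount point
    G4ThreeVector dir(std::sin(theta) * std::cos(phi),
                      std::sin(theta) * std::sin(phi),
                      std::cos(theta));

    // Rotate the detector axis to face the target;
    // the exact signs depend on the local frame
    auto rot = new G4RotationMatrix();
    rot->rotateZ(-phi);
    rot->rotateY(-theta);

    new G4PVPlacement(rot, (distance + halfLength) * dir,
                      detectorLV, "Detector", worldLV,
                      false, 0);
}
\end{verbatim}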
\begin{figure}
\includegraphics[width=\columnwidth]{figures/horus.png}
\caption{\label{f:horus} Full virtual assembly of SONIC@HORUS. 14 HPGe detectors (blue germanium crystals with transparent black aluminum enclosures) and 6 BGO anti-Compton shields (red, with black lead noses) pointed at the target chamber (grey). Note that the z-axis points in beam direction, and the y-axis points down. Copper filters (orange) are installed in front of the detectors to reduce the number of low-energy γ-rays hitting the detectors.}
\end{figure}
\begin{figure}
\begin{tikzpicture}[>=Latex, font=\sffamily, scale=\columnwidth/252.0pt]
\node [anchor=north west,inner sep=0] (img) at (0,-0.5) {\includegraphics[width=\columnwidth]{figures/section.png}};
\node at (2,-0.2) {anti-Compton Shield};
\draw [-] (2,-0.4) -- (2,-0.7);
\node at (5,-0.35) {Lead Nose};
\draw [-] (5,-0.55) -- (4.7,-1.2);
\node at (7.5,-0.2) {Target Chamber};
\draw [-] (7.5,-0.4) -- (7.5,-0.7);
\node at (1,-4) {Detector Hull};
\draw [-] (1,-3.8) -- (1,-2.5);
\node at (4.5,-4) {Germanium Crystal};
\draw [-] (4,-3.8) -- (3,-2.5);
\node at (2.5,-4.5) {Cooling Finger};
\draw [-] (2.5,-4.3) -- (2,-2);
\node at (5.9,-3.4) {Energy Filter};
\draw [-] (5.9,-3.2) -- (5.3,-2.6);
\node at (8.1,-2) {Target};
\node at (5.8,-1.6) {d\textsubscript{HPGe}};
\draw [thick, |<->|] (7.55,-1.85) -- (3.55,-1.85);
\draw [thick, |<->|] (7.55,-2.15) -- (5.335,-2.15);
\node at (5.8,-2.45) {d\textsubscript{BGO}};
\end{tikzpicture}
\caption{\label{f:section} Schematic view of a HPGe detector and its anti-Compton shield. The distances $d_\text{HPGe}$ and $d_\text{BGO}$ are measured from the target position to the front of the detector or shield with filters equipped. For the anti-Compton shields, different nose sizes are available to match the opening angle at different distances.}
\end{figure}
\subsection{Physics}
Interactions of γ-rays are known well enough for most simulation purposes between \SI{20}{keV} and \SI{20}{\MeV}.
A predefined physics list can supply all these interactions without hassle.
It is not necessary to assemble the physics from its smallest components.
Most physics lists use the same standard electromagnetic physics, which, given the geometrical uncertainties, should be sufficient for this use case --- there should be no advantage in using the specialized high precision models for X-rays and low energy γ-rays.
G4Horus uses the \texttt{Shielding} physics list by default, because it includes the radioactive decay database.
\subsection{Actions}
All actions are initially dispatched by the \texttt{Action\-Ini\-tial\-iza\-tion} management class.
It parses the parameters passed to the executable and selects the appropriate primary generator, run action, and event action class.
Primary particles can either be generated by the basic \textsc{Geant4} \texttt{ParticleGun} to generate single, mono-energetic γ-rays for efficiency simulation or by specialized generators for, e.g., pγ-reactions.
One out of three output formats can be selected:
The simplest output type is the histogram, created with the ROOT-compatible classes from \textsc{Geant4} and filled with the deposited energy for each detector.
If coincidence data is required, \texttt{ntuples} can be used.
Here, a table-like structure with a row for each detector is filled with a column for each event, also implemented with the ROOT-compatible classes from \textsc{Geant4}.
For simple efficiency simulations, this is extraordinarily inefficient as almost all entries will be zero.
Even with compression and zero-suppression, several gigabytes of data are accumulated quickly.
Instead, \emph{binary event files} can be used to store events.
They are normally produced by the sorting code \emph{SOCOv2} \cite{SOCOv2} as an intermediate event storage from raw experimental data.
Its data types, an output management class, and the respective actions have been implemented in G4Horus.
The format is well suited for the sparse data produced here, and a full simulation will produce only a few hundred megabytes of data.
The simulated data can be analyzed with the same procedure as real experimental data with the same or similar workflows.
All components are built with multi-threading in mind.
The main servers at the Institute for Nuclear Physics in Cologne provide 32 or 56 logical cores each, which the simulations can use to full capacity.
The executable can either run in visual mode, where the geometry can be examined in 3D, or batch mode for the actual simulation.
\subsection{Automated data evaluation}
The main mission is the reliable and robust efficiency determination, which extends to simulation evaluation.
For this, a ROOT-script is included to automatically extract full-energy, single-escape, and double-escape-peak efficiencies for all simulated energies.
As energy resolution is neither included in the Monte-Carlo simulation itself nor currently added in the application, the full energy peak is a single isolated bin in the spectrum.
For the single- and double-escape peaks, the Compton background is subtracted.
In case the single- and double-escape peak efficiencies must be determined with active Compton suppression, the vetoed spectra are created from \texttt{ntuple} data first.
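The FEP extraction itself then reduces to a one-bin lookup; a minimal ROOT sketch, assuming the simulated γ-ray energy lies within the histogram binning and the number of primaries is known:
\begin{verbatim}
// FEP efficiency: counts in the single bin at E_gamma
// divided by the number of simulated primaries
double FEPEfficiency(TH1* h, double eGamma,
                     double nPrimaries)
{
    return h->GetBinContent(h->FindBin(eGamma)) /
           nPrimaries;
}
\end{verbatim}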
\section{Dead regions and possible aging effects}\label{c:geocoax}
During extensive simulations of several experiments, it was found that for several hexagonally cut N-type HPGe crystals, the simulated efficiency is higher than the actual measured efficiency, up to \SI{30}{\percent} in some cases.
This issue was investigated further.
The shape of the crystal cannot be the issue, as its dimensions and especially its weight are well documented.
The dead-layer at the front of the detector was also excluded, as matching the required efficiency reduction leads to unrealistic thicknesses of \SI{10}{\mm} (instead of \SI{0.3}{\micro\m}) as well as strong deviations in the shape of the efficiency curve.
As the detectors in question were built over 20 years ago, aging effects might play a role.
The detector was used and stored cooled for most of the time but heated many times to anneal neutron induced damage.
While the dead layer at the front is created due to boron doping and should be immobile, the lithium doping of the core may have diffused further into the detector over time, creating a dead layer around the cooling finger.
Other groups have reported deviations from the manufacturer's crystal dimension specifications and aging effects.
For example, Berndt and Mortreau discovered that their cooling finger diameter is \SI{14}{\mm} instead of the declared \SI{10}{mm} by scanning the detector with highly collimated sources \cite{Berndt2012}.
Huy \emph{et al.} could trace an efficiency reduction back to an increase in the lithium dead layer of their p-type coaxial detector \cite{Huy2007}.
See also \cite{Sarangapani2017, Boson2008} and references therein.
We simulate a possible dead layer increase by splitting the geometry of the hexagonal cut HPGe crystal (radius $r_{C}$ and height $h_{C}$) in an active and inactive part.
Here, we made the simplest possible assumption: A cylinder with radius $r_{I}$ and height $h_{I}$ around the cylindrical borehole with
radius $r_{B}$ and height $h_{B}$, see \cref{f:deadhex-sketch}.
\begin{figure}
\begin{tikzpicture}[font=\small, scale=\columnwidth/6.2cm]
\tikzset{>=latex}
\draw[semithick] (0,-1.5) arc (-90:90:1.5/2.7 and 1.5);
\draw[semithick] (0,-1.5) arc (270:90:1.5/2.7 and 1.5);
\draw[semithick] (5,-1.5) arc (-90:90:1.5/2.7 and 1.5);
\draw[semithick] (5,-1.5) arc (270:90:1.5/2.7 and 1.5);
\draw[semithick] (0,-1.5) -- (5,-1.5);
\draw[semithick] (0,+1.5) -- (5,+1.5);
\draw[|<->|,thin] (0,-1.6) -- (5,-1.6) node [midway, below, yshift=0.9mm] {$h_C$};
\draw[dashed,color=gray] (0,-0.7) arc (-90:90:0.7/2.7 and 0.7);
\draw[dashed,color=gray] (0,-0.7) arc (270:90:0.7/2.7 and 0.7);
\draw[dashed,color=gray] (4.3,-0.7) arc (-90:90:0.7/2.7 and 0.7);
\draw[dashed,color=gray] (4.3,-0.7) arc (270:90:0.7/2.7 and 0.7);
\draw[dashed,color=gray] (0,-0.7) -- (4.3,-0.7);
\draw[dashed,color=gray] (0,+0.7) -- (4.3,+0.7);
\draw[|<->|,thin] (0,-0.8) -- (4.3,-0.8) node [midway, below, yshift=0.9mm] {$h_I$};
\draw[semithick] (0,-0.4) arc (-90:90:0.4/2.7 and 0.4);
\draw[semithick] (0,-0.4) arc (270:90:0.4/2.7 and 0.4);
\draw[semithick] (4,-0.4) arc (-90:90:0.4/2.7 and 0.4);
\draw[semithick] (4,-0.4) arc (270:90:0.4/2.7 and 0.4);
\draw[semithick] (0,-0.4) -- (4,-0.4);
\draw[semithick] (0,+0.4) -- (4,+0.4);
\draw[|<->|,thin] (0,-0.5) -- (4,-0.5) node [midway, below, yshift=0.9mm] {$h_B$};
\draw[dotted] (0,0) -- (5,0);
\draw[|<->|,thin] (5,0) -- (5,1.5) node [midway, right] {$r_C$};
\draw[|<->|,thin] (4.3,0) -- (4.3,0.7) node [at end, above] {$r_I$};
\draw[|<->|,thin] (3.7,0) -- (3.7,0.4) node [midway, left] {$r_B$};
\end{tikzpicture}
\caption{\label{f:deadhex-sketch}
Sketch of a HPGe crystal with radius $r_C$ and height $h_{C}$ with its borehole with radius $r_{B}$ and height $h_{B}$.
Around this hole, we assume an inactive zone with radius $r_{I}$ and height $h_{I}$.
}
\end{figure}
A quick approximation for $r_{I}$ and $h_{I}$ as a function of the relative active volume $A=\frac{\text{Active Volume}}{\text{Total Volume}}$ can be made in two steps:
First, the back part with the bore hole, i.e., three cylinders with the same height
\begin{equation}
A = \frac{r_C^2 - r_{I}^2}{r_C^2 - r_B^2} \Rightarrow r_{I} = \sqrt{r_C^2-A(r_C^2-r_B^2)},
\end{equation}
where a normal cylindrical shape $C$ for the whole crystal is assumed.
Second, the front part:
\begin{equation}
A = 1 - \frac{(h_{I}-h_B)r_{I}^2}{(h_C-h_B)r_C^2} \Rightarrow h_{I} = h_B + (1-A)(h_C-h_B)\frac{r_C^2}{r_{I}^2}
\end{equation}
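These two approximations translate directly into code; a small sketch (the variable names are ours, not those used in G4Horus):
\begin{verbatim}
#include <cmath>

struct InactiveZone { double rI, hI; };

// Inactive-zone radius and height for a given relative
// active volume A, following the two equations above
InactiveZone Approximate(double A, double rC, double hC,
                         double rB, double hB)
{
    const double rI =
        std::sqrt(rC * rC - A * (rC * rC - rB * rB));
    const double hI =
        hB + (1. - A) * (hC - hB) * rC * rC / (rI * rI);
    return {rI, hI};
}
\end{verbatim}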
Simulations exploring a large range of $A$ are compared to experimental values for one detector in \cref{f:deadhex-efficiency}.
\begin{figure}
\includegraphics[width=\columnwidth]{figures/deadhex/efficiency.pdf}
\includegraphics[width=\columnwidth]{figures/deadhex/effdiv.pdf}
\includegraphics[width=\columnwidth]{figures/deadhex/effscaling.pdf}
\caption{\label{f:deadhex-efficiency}
a) Experimental and simulated full-energy-peak efficiency for a hexagonally cut encapsulated HPGe detector.
b) Experimental and simulated full-energy-peak efficiency divided by a reference simulation ($A=\SI{85}{\percent}$).
c) Scale and d) shape quality indicators for different values of active volume $A$.
Notice how the simulation for $A=\SI{100}{\percent}$ overestimates the real performance by a significant amount; simply scaling an efficiency to the experimental values will not yield accurate results.
The relative differences between the simulations also increase drastically with γ-ray energy.
Once all geometry parameters are optimized, the minima for SCAL and EWSD should be at the same position.
}
\end{figure}
The simulation should reproduce the scale and shape of the efficiency curve.
\texttt{curve\_fit} from the scipy-optimize library was used to find the scaling factor $p$ for each simulation to the measured data points.
Values between the \SI{100}{\keV}-spaced simulation points were interpolated linearly.
An ideal value would be $p=1$, i.e., no scaling. To derive the best value for $A$, this can be reformulated as a smooth minimizable function
\begin{equation}
\text{SCAL}(A) = (1-p)^2.
\end{equation}
In addition, the shape of the curve is extraordinarily important, especially with respect to deriving efficiencies at \SI{10}{\MeV}.
To give more weight to the fewer calibration points at high energies, we define the energy-weighted squared deviation of the scaled curve
\begin{equation}
\text{EWSD}(A) = \frac{\sum_i{E_{\gamma_i} (\epsilon_{exp}(E_{\gamma_i})-p\epsilon_{sim}(E_{\gamma_i}))^2}}{\sum_i E_{\gamma_i}},\label{eq:ewsd}
\end{equation}
which is another minimizable function of $A$ and related to the covariance / uncertainty of the scaling factor.
Note that other scaling factors for the energy could also be used, e.g., $E_{\gamma_i}^3$.
With this approach the single free variable $A$ can be determined by minimizing both SCAL and EWSD, see \cref{f:deadhex-efficiency}.
\section{Results}
The goals described in \cref{c:purpose} could be achieved.
Efficiencies can be simulated with satisfactory accuracy, including SEP and DEP efficiencies with and without veto; an example is shown in \cref{f:efficiency}.
In version 1.0 \cite{jan_mayer_2020_3692475}, 22 HPGe detectors and 5 target chambers are implemented and can easily be combined into the individual setup with minimal knowledge of \textsc{Geant4} or HPGe detector geometries.
Adding new or tweaking existing detectors is possible with a central data file.
There is a procedure in place to add new experimental setups and target chambers as well as detector types.
We have used this simulation environment to make informed decisions about extensions to the existing setup, e.g., adding passive shielding to reduce the number of low energetic γ-rays.
\begin{figure}
\includegraphics[width=\columnwidth]{figures/efficiency/efficiency.pdf}
\caption{\label{f:efficiency}
Example for simulated single escape efficiencies with and without active Compton suppression \cite{Mayer2016}. The escape-peak efficiency can also be tested in-beam with transitions from common contaminants like oxygen and carbon by scaling their intensity to the full-energy-peak efficiency.
}
\end{figure}
The software has been used for several experiments with good results, even though some detectors still require manual parameter tweaking to reproduce the experimental values accurately.
This project was released as Open-Source and is available from \url{https://github.com/janmayer/G4Horus} \cite{jan_mayer_2020_3692475}.
We invite everyone to adapt the project or scrounge parts of the code for other projects.
While our developments are focused on the HORUS setup, the code can be used for other, unrelated experiments employing γ-ray detectors surrounding a target.
Experimental setups can be added by deriving them from the \texttt{Setup} class and specifying the detector Θ and φ angles in the constructor.
Typical HPGe detectors can be added by appending their individual parameter sets to the Detector Library.
If the existing detector templates are insufficient, more can be added by deriving them from the \texttt{Detector} class and overriding
the provided virtual methods.
Target chambers can be implemented with the usual \textsc{Geant4} methods or with CAD-based models as described in \cref{c:chambergeo}.
\section{Outlook}
A large problem with \textsc{Geant4} is the geometry implementation.
While using code is a step up over digital punch cards used in MCNP, it is decades behind other simulation integrations as seen in, e.g., finite element analysis.
In the future, it would be advisable to find a modern solution that is ready for everyday production usage.
Due to its massive advantages in development speed, ease of use, and flexibility, CAD based simulation geometry could be officially supported by the \textsc{Geant4} collaboration.
To reduce the slowdown of simulations, a hybrid approach might be feasible: Convert structures to simple shapes where possible and use tessellated shapes for the remnants.
In a new Monte Carlo code, only tessellated shapes could be supported and used exclusively with GPUs.
For G4Horus, we continue to make improvements to the description of our detectors as well as add new functionality like better support for pγ- and γγ-coincidence measurements.
\section{Acknowledgments}
We would like to thank D. Diefenbach and S. Thiel from our development workshop for accelerators and accelerator experiments for designing the target chambers and their help with the CAD models, Dr. J. Eberth for the fruitful discussions about HPGe detectors, and C. Müller-Gatermann and the accelerator crew for their help with the experiments and source production.
Supported by the DFG (ZI 510/8-1, ZI-510/9-1).
This manuscript has been authored in part by UT-Battelle, LLC, under contract DE-AC05-00OR22725 with the US Department of Energy (DOE). The US government retains and the publisher, by accepting the article for publication, acknowledges that the US government retains a nonexclusive, paid-up, irrevocable, worldwide license to publish or reproduce the published form of this manuscript, or allow others to do so, for US government purposes. DOE will provide public access to these results of federally sponsored research in accordance with the DOE Public Access Plan (\url{http://energy.gov/downloads/doe-public-access-plan}).
\section{Introduction}
Square-roots of operators appear in a large number of contexts in theoretical physics, and also play an important role in operator theory. In some cases, it is practical to calculate the operator square-root (OSR) using explicit formulas or by diagonalizing the operator. Often, however, there is only a very limited set of analytical tools to treat them, typically in the form of perturbative expansions.
This is not because OSRs represent a niche problem. Indeed one of the earliest appearances was near the beginning of quantum mechanics in the square-root of the Klein-Gordon equation \cite{1926ZPhy...37..895K,1926ZPhy...40..117G,doi:10.1063/1.1703882,doi:10.1063/1.530015,namsrai1998square,shakeri2008numerical,HAAS_2013}. Even for such an old problem it may prove useful to have a larger analytical toolbox. Another prominent example of OSRs occurs in the Holstein-Primakoff spin representation \cite{PhysRev.58.1098,Auerbach1994}, which is the usual starting point for spin-wave theory calculations. A third important OSR shows up in the context of quantum information in the form of the fidelity function \cite{Nielsen2010,GU_2010,bengtsson2017geometry,jozsa1994fidelity,barnum1996noncommuting,Raginsky_2001,Peters_2004,_yczkowski_2005,zanardi2007mixed,Paunkovi__2008,Mendon_a_2008,Wang_2009,Quan_2009,Marian_2012} and the Bures metric \cite{bures1969extension,uhlmann1976transition,_yczkowski_2005,Mendon_a_2008,Marian_2012,bengtsson2017geometry}, both used to quantify the closeness of two quantum states. The purpose of the current paper is, however, not to review all examples of OSRs, but to introduce a non-perturbative approximation of OSRs.
Our method is inspired by several flow equation approaches to many-body problems. For instance, the Wegner flow equation approach \cite{Wegner1994}, which was applied to various problems \cite{Wegner1994,Kehrein2007,PhysRevB.97.060201,quito2016localization,bach2010rigorous,wegner2001flow,mielke1998flow,lenz1996flow,kehrein1995flow,wegner2006flow,gubankova1998flow,ragwitz1999flow,kehrein1994flow,wegner1998flow,Kelly_2020}, allows for a non-perturbative diagonalization of a Hamiltonian using flow equations for its couplings. In this approach, the problem of diagonalization is recast in terms of differential equations. Similar methods have recently been used by some of us to find effective Floquet Hamiltonians \cite{Vogl_2019}, and various approximations to the time evolution operator \cite{Vogl_2019HJ}. Differential equation approaches have also been used in the method of unitary integration for the Liouville-Bloch equation \cite{PhysRevLett.81.4785} and Lindblad equation \cite{Rau_2002}. We aim to use a similar approach to approximate an operator square-root.
The application that may be of most current interest is the Holstein-Primakoff (HP) OSR. The HP representation is typically used in the context of spin (local moment) models to represent deviations around a well-defined spin order in terms of a single species of boson per lattice site. It allows for a perturbative expansion in the number operator of such bosons, and ultimately leads to linear \cite{toth2015linear} and non-linear \cite{RevModPhys.85.219} spinwave descriptions of quantum magnets \cite{Auerbach1994}. However, for many systems of interest a ground-state spin ordering may be unknown or fail to exist, such as in frustrated systems \cite{Schmidt2017}, spin liquids \cite{balents2010spin,Broholm_2020,Zhou_2017,Knolle_2019}, and one-dimensional systems \cite{Giamarchi2004}. In such cases, the perturbative expansion often proves inaccurate or inconvenient.
Instead, more symmetric spin representations such as Schwinger bosons \cite{Auerbach1994,auerbach2011schwinger} and slave particle approaches \cite{RevModPhys.78.17,Shindou_2009} are commonly used, but require the introduction of auxiliary fields. Other fermionic approaches include the Jordan-Wigner representation of spin-$1/2$ operators \cite{1928ZPhy...47..631J}, and its generally complicated-to-use generalizations to higher dimensions \cite{PhysRevLett.63.322,PhysRevLett.71.3622} and higher spin \cite{PhysRevLett.86.1082,Dobrov_2003,PhysRevB.71.092404}. An important, equivalent alternative to the HP representation that also uses a single boson species, but avoids the square-root, is the Dyson-Maleev representation \cite{PhysRev.102.1217,Maleev1958,Itoi_1994,RevModPhys.63.375}. However, it has the drawback of generically breaking hermiticity. This is by no means an exhaustive list of spin representations --- indeed, other representations can be found in Refs.\cite{Villain1974,Villain1975,PhysRevB.19.4780,10.1088/0305-4470/13/2/014,GARBACZEWSKI197865,Zhou_1999}. Since each of the available approaches has its own unique advantages and drawbacks, we will in this paper derive expressions for spin operators that i) involve only one boson species satisfying the canonical bosonic commutation relation, ii) preserve hermiticity, and iii) do not include square-roots of operators or other non-polynomial functions of operators.
Some of the expressions we derive were previously found to finite order in Ref.~\cite{Lindgard1974,batyev1986antiferromagnet} by a matching matrix elements (MME) method and have also been usefully applied in \cite{PhysRevB.93.224402} to capture effects beyond the reach of a $1/S$ expansion. Unlike the normal Taylor expansion of the HP OSR, the MME expansion and our result are able to correctly describe the symmetry in a Heisenberg model with easy-plane anisotropy as we will see later in the text. This is a long standing problem and was discussed using a slightly different approach in \cite{Tsuru1986}. Our expansion thus naturally captures the same physics. However, unlike previous works, we present results to all orders and show that the expressions are exact when truncated to an appropriate order that depends on the spin length $S$. This feature was missed in all previous discussions we are aware of, since they focused entirely on reproducing commutation relations of spin operators. We, however, use a slightly softer exactness criterion. Namely, we require only that the operators are block-diagonal with physical and unphysical subspace blocks. In the commutator language we require that the commutators are reproduced up to a term that acts exclusively on the unphysical subspace, without coupling to the physical subspace. This is akin to allowing an inaccessible ``dark sector'' in the spin operator algebra. A more detailed discussion of this rationale is given in the main text.
Our hope is that such a representation may prove useful in describing spectral features not readily captured by conventional spin-wave theory, as is the case in e.g. the triangular-lattice antiferromagnet Ba$_3$CoSb$_2$O$_9$ \cite{Kamiya2018,PhysRevB.93.224402}, and quantum spin liquid candidates. Among the latter, the Kitaev spin liquid \cite{Kitaev_2006,Yang_2008,Vidal_2008,Schmitt_2015,willans2011site,pedrocchi2011physical,burnell20112,janvsa2018observation,gorshkov2013kitaev,halasz2016resonant,hickey2019emergence,wang2010reduced,cui2010quantum,halasz2014doping,wang2010realization,abasto2009thermal,schmoll2017kitaev,bolukbasi2012rigorous,kells2009finite,dusuel2008perturbative} is receiving particularly intense attention, since it hosts anyonic excitations of interest to topological quantum computing. While the ideal model is solvable \cite{Kitaev_2006}, and its dynamics known \cite{PhysRevLett.112.207203}, the description of realistic candidate materials \cite{Takagi2019,doi:10.7566/JPSJ.89.012002,trebst2017kitaev} require additional Hamiltonian terms, which generically breaks integrability. Some such candidates include $\alpha$-RuCl$_3$ \cite{kim2015kitaev,Banerjee2016,sandilands2016spin,Banerjee2017,Do2017,ran2017spin,glamazda2017relation,wolter2017field,Banerjee2018,yu2018ultralow,lampen2018anisotropic,Balz2019,eichstaedt2019deriving,laurell2020dynamical},
CrI$_3$ \cite{xu2018interplay,stavropoulos2019microscopic,lee2020fundamental,aguilera2020topological,rodriguez2020phonon} and honeycomb iridium oxides \cite{chaloupka2010kitaev,kimchi2011kitaev,singh2012relevance,simutis2018chemical}.
The manuscript is structured as follows. In the next section of the paper we discuss how to compute square-roots of operators by using a differential equation approach. In section \ref{expansion_sqrtn} we show how this formalism may be used to find a series expansion for $\sqrt{a^\dag a}$ near $a^\dag a\approx 0$ in terms of integer powers of $(a^\dag a)$, which is an unexpected result because $\sqrt{x}$ cannot be expanded in integer powers of $x$ near $x=0$. Of course, since $a^\dag a$ is an operator, $a^\dag a\approx 0$ is a shorthand for "in the part of the Hilbert space where matrix elements are close to zero". We will use similar shorthands throughout the text. This shows that a Taylor series may not always be ideal for finding power series expansions of operator functions. In section \ref{sec:KG} we then apply the method to the Klein-Gordon particle in a magnetic field with small or zero mass --- such as in graphene. In section \ref{sec:HP} we present our main application to the Holstein-Primakoff representation of spin operators. We stress that the results we obtain are exact expressions for spin operators that are polynomial in bosonic operators. Lastly, we present our conclusion.
\section{General Formalism}
The goal of this section is to find an operator differential equation that can be used to calculate the square-root of a sum of two operators, $\sqrt{O_1+O_2}$, where $O_1$ and $O_2$ are both operators defined on the same complex Hilbert space $\mathcal{H}$. We will make two simplifying assumptions. First, we assume that a square-root of one of the operators, $O_1$, is known or easy to calculate. Second, we assume that the two operators commute, $[O_1,O_2]=0$. Both these assumptions also have to be made in order for a Taylor expansion in $O_2$ to be viable (the more generic case is more involved, see Appendix~\ref{general_expansion} for details). For instance, one could have $O_1=c\mathbb{1}$ with $c\in \mathbb{C}$, and $O_2$ any other operator. It should be noted that the OSR of an operator $O_1$ can have multiple branches, which may seem like an ambiguity. However, the choice of branch will be encoded in the initial conditions for the differential equations we derive, and should be informed by the problem at hand. Different branch choices can lead to different physics --- e.g. a branch with complex eigenvalues could not be used to describe a Hermitian Hamiltonian. The way we will compute $\sqrt{O_1+O_2}$ is by introducing the second operator $O_2$ in infinitesimal steps. To keep track of the steps we introduce a dummy parameter $s$ and define
\begin{equation}
O_{\sqrt{\;}}(s):=\sqrt{O_1+sO_2}.
\end{equation}
Using the assumption $[O_1,O_2]=0$ we find that sending $s\to s+\delta s$ gives
\begin{equation}
O_{\sqrt{\;}}(s+\delta s)=O_{\sqrt{\;}}(s)+\frac{\delta s}{2O_{\sqrt{\;}}(s)}O_2
\end{equation}
if $\delta s$ is infinitesimal, where we Taylor expanded the right-hand side to first order in $\delta s$.
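One can verify this step directly by squaring, using that $O_2$ commutes with $O_{\sqrt{\;}}(s)$ (the latter being a function of the commuting operators $O_1$ and $O_2$):
\begin{equation}
\left(O_{\sqrt{\;}}(s)+\frac{\delta s}{2O_{\sqrt{\;}}(s)}O_2\right)^2=O_1+sO_2+\delta s\,O_2+\mathcal{O}(\delta s^2).
\end{equation}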
A Taylor expansion of $O_{\sqrt{\;}}(s+\delta s)$ on the left-hand side then gives us the differential equation
\begin{equation}
\frac{d O_{\sqrt{\;}}(s)}{ds}=\frac{1}{2O_{\sqrt{\;}}(s)}O_2
\label{cumbersomeDGL}
\end{equation}
that makes it possible to find $O_{\sqrt{\;}}(s)$ by introducing $O_2$ via infinitesimal steps. Note that this also means that $\sqrt{O_1}$ for the branch used needs to be invertible, or at least the limit of an invertible operator as we will see in the upcoming section.
The issue with this equation is that calculating the inverse of an operator is difficult. That is, we cannot easily make an ansatz for $O_{\sqrt{\;}}(s)=\sum_n C_n(s)\hat O_n$ as a sum of operators with $s$-dependent coefficients and solve this equation, because calculating the inverse of the ansatz is difficult.
This issue can be resolved with a little bit of extra work. We define
\begin{equation}
O_{\sqrt{\;}}^{-1}(s):=\frac{1}{O_{\sqrt{\;}}(s)}.
\label{cumbersome_variation}
\end{equation}
In this case Eq.~\eqref{cumbersomeDGL} becomes
\begin{equation}
\frac{d O_{\sqrt{\;}}(s)}{ds}=\frac{1}{2}O_{\sqrt{\;}}^{-1}(s)O_2,
\label{rel_sqrttoderiv}
\end{equation}
and we now need to find a differential equation for $O_{\sqrt{\;}}^{{-1}}(s)$, which can be obtained by Taylor expanding $O_{\sqrt{\;}}^{{-1}}(s+\delta s)$ in a similar way to above,
\begin{equation}
\frac{dO_{\sqrt{\;}}^{-1}( s)}{ds}=-\frac{1}{2}(O_{\sqrt{\;}}^{-1}(s))^3 O_2.
\label{diffforinversesqrt}
\end{equation}
One may insert Eq.~\eqref{rel_sqrttoderiv} in Eq.~\eqref{diffforinversesqrt}, and we find after rearranging that
\begin{equation}
\frac{1}{2}O_2 \frac{d^2O_{\sqrt{\;}}(s)}{ds^2}=-\left(\frac{dO_{\sqrt{\;}}(s)}{ds}\right)^3.
\label{good_dgl}
\end{equation}
The equation in this form is now useful to find the coefficients $C_n$ for an ansatz $O_{\sqrt{\;}}(s)=\sum_n C_n(s)\hat O_n$ because powers of this operator are trivial to compute.
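Explicitly, differentiating Eq.~\eqref{rel_sqrttoderiv} and inserting Eq.~\eqref{diffforinversesqrt} yields
\begin{equation}
\begin{aligned}
\frac{d^2 O_{\sqrt{\;}}}{ds^2}&=\frac{1}{2}\frac{dO_{\sqrt{\;}}^{-1}}{ds}O_2=-\frac{1}{4}\left(O_{\sqrt{\;}}^{-1}\right)^3O_2^2,\\
\left(\frac{dO_{\sqrt{\;}}}{ds}\right)^3&=\frac{1}{8}\left(O_{\sqrt{\;}}^{-1}\right)^3O_2^3,
\end{aligned}
\end{equation}
so that multiplying the first relation by $\frac{1}{2}O_2$ and using commutativity reproduces Eq.~\eqref{good_dgl}.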
\section{Expanding the square-root of the number operator}
\label{expansion_sqrtn}
We may now use equation \eqref{good_dgl} to find an expansion of
$\sqrt{a^\dag a}$. In the language of the previous section, for this case $O_2=a^\dag a$ and $O_1=0^+\mathbb{1}$, where $0^+$ signifies a dummy variable that is eventually taken to zero in a directed limit. One can make the ansatz
\begin{equation}
\sqrt{sa^\dag a}\approx \sum_n C_n(s)(a^\dag)^n a^n
\label{ansatz}
\end{equation} and compare coefficients of $(a^\dag)^n a^n$ to find a set of differential equations for $C_n$. If we truncate at third order we find
\begin{widetext}
\begin{equation}
\begin{aligned}
C_0^\prime&=0\\
\frac{C_1''(s)}{2}&=-C_1'(s)^3\\
\frac{C_2''(s)}{2}&=-6 C_1'(s) C_2'(s) \left[C_1'(s)+C_2'(s)\right]-C_1'(s)^3-2 C_2'(s)^3\\
\frac{C_3''(s)}{4}&=-36 C_2'(s) C_3'(s) \left[C_1'(s)+C_2'(s)+C_3'(s)\right]-3 C_1'(s) C_2'(s) \left[C_1'(s)+4 C_2'(s)\right]\\
&-9 C_1'(s) C_3'(s) \left[C_1'(s)+2 C_3'(s)\right]-10 C_2'(s)^3-12 C_3'(s)^3
\end{aligned}
\label{diffeqeqsqrt}
\end{equation}
\end{widetext}
and initial conditions
\begin{equation}
C_{0,1,2,3}(0)=0;\quad C_1'(0)=\frac{1}{2\sqrt{0^+}};\quad C_{2,3}^\prime(0)=0.
\label{initSQRT}
\end{equation}
The initial conditions were found by comparison to the infinitesimal case, which is accurately described by a first-order Taylor series. Note that the term $\frac{1}{2\sqrt{0^+}}$ represents a directional limit that has to be taken at the end; until then, $0^+$ can be treated as a dummy variable.
If we solve the equations, set $s=1$, and take the limit $0^+\to 0$, we find that
\begin{equation}
\sqrt{a^\dag a}\approx a^\dag a+\frac{\sqrt{2}-2}{2}a^{\dag ^2} a^2+\frac{3-3 \sqrt{2}+\sqrt{3}}{6} a^{\dag ^3} a^3.
\end{equation}
One should note that this expression can be put in terms of powers of $\hat n= a^\dag a$ and is valid near $ a^\dag a=0$.
More precisely, in what sense does this expansion converge to the correct operator? The answer is that by including terms up to $\left( a^{\dag}\right)^n a^n$ the $n+1$ lowest eigenvalues are reproduced exactly; higher eigenvalues are approximated increasingly accurately as well.
It is important to stress that the square-root $\sqrt{x}$ is non-analytic near $x=0$. Yet we were able to find an expansion in terms of powers of $x$ that is valid near $x=0$.
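This is straightforward to verify in a truncated Fock space; the short Python sketch below (an illustration we add here, with an example truncation dimension) confirms that the third-order expansion reproduces the four lowest eigenvalues $0$, $1$, $\sqrt{2}$, $\sqrt{3}$ exactly:
\begin{verbatim}
import numpy as np

dim = 8                                        # truncated Fock space
a  = np.diag(np.sqrt(np.arange(1, dim)), k=1)  # annihilation operator
ad = a.T                                       # creation operator

# coefficients of the third-order expansion quoted above
C = [0.0, 1.0, (np.sqrt(2)-2)/2, (3-3*np.sqrt(2)+np.sqrt(3))/6]
approx = sum(C[n]*np.linalg.matrix_power(ad, n) @ np.linalg.matrix_power(a, n)
             for n in range(4))

print(np.diag(approx)[:4])           # 0, 1, sqrt(2), sqrt(3): exact
print(np.sqrt(np.arange(dim))[:4])   # eigenvalues of sqrt(a^dag a)
\end{verbatim}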
\section{Application to the Klein-Gordon square-root}\label{sec:KG}
The method for finding a non-perturbative expansion of an operator square-root can of course also be used for the Klein-Gordon square-root Hamiltonian for relativistic particles. Let us for instance consider the 2D Hamiltonian
\begin{equation}
H=\sqrt{m^2+p^2}+V(x,y).
\end{equation}
If this system is subjected to a constant magnetic field given by $\vect A=\frac{B}{2}(-y,x)$, one may introduce the magnetic field by the minimal substitution $p_i\to\Pi_i=p_i-A_i$ and define creation and annihilation operators, $a=\sqrt{\frac{1}{2B}}(\Pi_x+i\Pi_y)$, to find the Hamiltonian
\begin{equation}
H=\sqrt{4|B|S}\sqrt{1+\frac{a^\dag a}{2S}}+V(x,y),
\end{equation}
where we introduced a short-hand $S=\frac{m^2+|B|}{4|B|}$. The operator now bears a striking resemblance to the square-root that appears in the Holstein-Primakoff spin representation, which we will discuss later. A straightforward Taylor expansion of the square-root in terms of $1/S$ already yields corrections
\begin{equation}
H\approx \sqrt{4|B|S}-\frac{1}{4}\sqrt{\frac{|B|}{S}}+\frac{1}{4}\sqrt{\frac{1}{|B|S}}\Pi^2+V(x,y)
\end{equation}
to what one would expect from the non-relativistic limit of large mass
\begin{equation}
\sqrt{m^2+\Pi^2}\approx m+\frac{\Pi^2}{2m}.
\end{equation}
This approximation lifts the restriction to large masses inherent in the non-relativistic limit, provided one considers strong magnetic fields.
However, we can do better without introducing further complications. That is, we can make the ansatz $\sqrt{1+sa^\dag a}=\sum_n C_n(s)(a^\dag)^n a^n$, which means that we can employ the first two differential equations from \eqref{diffeqeqsqrt} to approximate the square root. For this we have to choose slightly different initial conditions than previously, $C_0(0)=1$, $C_1(0)=0$, $C_1'(0)=1/2$, and let $s$ run up to $s=1/(2S)$. The initial conditions are again found by comparison to a first-order Taylor expansion. The result we find is
\begin{equation}
H\approx \sqrt{4|B|S}\left[1+\left(\sqrt{1+\frac{1}{2S}}-1\right)a^\dag a\right]+V(x,y).
\label{approx_kgsqrt}
\end{equation}
This new approximation is more reliable for small $|B|$, $m$ and level number $n$. This is seen most easily in the case of $V(x,y)=0$, where it is easy to check that it reproduces the lowest two energy levels $n=0,1$ exactly (recall that $\hat n=a^\dag a=\Pi^2/(2|B|)-1/2$ takes the value $n$ on the $n^{\rm th}$ Landau level).
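For $V(x,y)=0$ this is easy to check numerically. The following Python sketch (our illustration, with example parameter values, $\hbar=c=1$, and $\Pi^2\to|B|(2n+1)$ on the $n^{\rm th}$ Landau level) compares Eq.~\eqref{approx_kgsqrt} to the exact relativistic Landau levels:
\begin{verbatim}
import numpy as np

m, B = 1.0, 0.5                            # example parameters
S = (m**2 + abs(B))/(4*abs(B))
n = np.arange(5)
exact  = np.sqrt(m**2 + abs(B)*(2*n + 1))  # exact Landau levels
C1 = np.sqrt(1 + 1/(2*S)) - 1
approx = np.sqrt(4*abs(B)*S)*(1 + C1*n)    # truncated square root
print(exact[:2], approx[:2])               # n = 0, 1 reproduced exactly
print(exact[2:], approx[2:])               # mild deviations for higher n
\end{verbatim}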
The advantage of this approximation over an exact solution is that a quadratic $V(x,y)$ can be added while an analytic solution of the approximate problem remains possible, because it is still a harmonic oscillator. Note that in this case we obtain an approximation that is non-perturbative in $1/S$.
\section{Resummed Holstein-Primakoff expansion}\label{sec:HP}
We will now turn to our most interesting application --- an expansion for the square-root in the Holstein-Primakoff representation of a spin operator.
\subsection{Review of the method}
The Holstein-Primakoff representation \cite{PhysRev.58.1098} of spin-$S$ operators is given as
\begin{equation}
\begin{aligned}
&S^+=\hbar \sqrt{2S} \sqrt{1-\frac{a^\dagger a}{2S}}\, a\\
&S^- = \hbar \sqrt{2S} a^\dagger\, \sqrt{1-\frac{a^\dagger a}{2S}}\\
&S^z = \hbar(S - a^\dagger a)
\end{aligned}.
\label{holstein-primakoff}
\end{equation}
A few notes are due. For finite $S$ only finitely many bosonic excitations correspond to physical states. That is, bosonic excitations correspond to spin projections, i.e., $S^z$ can only take eigenvalues in $\{ -S, -S+1, \dots ,S\}$. Hence, for spin $S$ we have the restriction $a^\dagger a \leq 2S$, which is also signaled by the fact that the square root becomes imaginary for higher occupation numbers.
This means that the Hilbert space is a Fock space, $F(\mathcal{H})=\bigoplus_{n=0}^\infty \mathcal{S}\mathcal{H}^{\otimes n}$, where $\mathcal{S}$ is the symmetrization operator and $\mathcal{H}^{\otimes n}$ denotes the $n$-fold tensor product of the single-particle Hilbert space $\mathcal{H}$. For spin $S$ the physical part of the Hilbert space is restricted such that it has the basis $\{\ket{0},\dots,\ket{2S}\}$.
\subsection{Exactness of the Holstein Primakoff approximation}
To see that the Holstein-Primakoff representation is an exact description of spin operators, it is enough to check that it fulfills the correct spin algebra, for instance $[S^+,S^-]=2\hbar S^z$.
This reasoning is slightly restrictive, so let us soften it a bit. The key feature of the Holstein-Primakoff representation is that the spin operators $S^{+,-,z}$ reproduce the exact spin operators on the physical part of the Hilbert space and at the same time have no elements that couple to the unphysical part of the Hilbert space.
That is, in the occupation basis they have the form
\begin{equation}
S^{+,-,z}=\begin{pmatrix}
S^{+,-,z}_{phys}&0\\
0& S^{+,-,z}_{unphys}
\end{pmatrix}.
\label{Holstein_spin_op_block}
\end{equation}
In particular, for $S=1/2$ the explicit form of $S^+$ in the occupation basis is
\begin{align}
S^+ &= \left( \begin{array}{cc|ccccc}
\color{lightblue}0 & \color{lightblue} 1 & & & & &\\
\color{lightblue}0 & \color{lightblue} 0 & & & & &\\\hline\rule{0pt}{2.6ex}
&& \color{redd}0 &\color{redd}i\sqrt{3}&\color{redd}0&\color{redd}\cdots&\color{redd}0\\
&& \color{redd}\vdots&&\color{redd}\ddots&&\color{redd}\vdots\\
&& \color{redd}0&\color{redd}\cdots&&&\color{redd}0
\end{array}\right).
\end{align}
One sees that it splits into the physical (highlighted in blue) and unphysical (red) Hilbert spaces as in Eq.~\eqref{Holstein_spin_op_block}.
The physical block is just the conventional $S^+$ matrix for spin 1/2. Importantly there is no coupling between physical and unphysical parts of the Hilbert space. This is what makes the method exact.
Note that, because of this block structure, a spin Hamiltonian exactly written in the bosonic language will also separate into physical and unphysical blocks because the product of block diagonal matrices stays block diagonal. That is, the Hamiltonian is block diagonal of the form
\begin{equation}
H=\begin{pmatrix}
H_{phys}&0\\
0&H_{unphys}
\end{pmatrix}.
\end{equation}
One can now see that, upon diagonalizing the Hamiltonian, one finds the exact physical eigenvalues alongside spurious unphysical ones.
\subsection{Usual Approach: Taylor expansion}
While the expressions in Eq.~\eqref{holstein-primakoff} provide an exact way to represent the spin operators, this is not too useful by itself because the square roots are impractical to work with. One usually performs a Taylor expansion around large $S$, using $1/S$ as the expansion parameter,
\begin{equation}
\begin{aligned}
S^+\approx \hbar \sqrt{2S}\left(1-\frac{1}{4S}a^\dag a-\frac{1}{32S^2}(a^\dag a+a^{\dag^2}a^2)\right.\\
-\left.\frac{1}{128 S^3}(a^{\dagger } a+3 a^{\dagger ^2} a^2+a^{\dagger^3 } a^3)\right)a.
\end{aligned}
\label{Taylor_sp}
\end{equation}
This approach is most often also used in the case of $S=\frac{1}{2}$, where it is slightly surprising that it is justified. To see why, recall that, as mentioned above, for smaller spins only states with few bosonic excitations, e.g. $\{|0\rangle,|1\rangle\}$ for spin $\frac{1}{2}$, are physical. Therefore, acting in this part of the Hilbert space, $a^\dag a|_{phys}\leq 1$ and the expansion is valid.
Although the expansion is useful it is not exact when truncated at any finite order.
The spin operators $S^{+,-}$ no longer separate into physical and unphysical blocks, but couple physical and unphysical parts of the Hilbert space. For example, for spin 1/2 the spin operator $S^+$ in Eq.~\eqref{Taylor_sp} has the form
\begin{align}
S^+ &\approx \left( \begin{array}{cc|ccccc}
\color{lightblue}0 & \color{lightblue} 1 & & & & &\\
\color{lightblue}0 & \color{lightblue} 0 & \frac{5}{8\sqrt{2}} & & & &\\\hline\rule{0pt}{2.6ex}
&& \color{redd}0 &\color{redd}i\sqrt{3}&\color{redd}0&\color{redd}\cdots&\color{redd}0\\
&& \color{redd}\vdots&&\color{redd}\ddots&&\color{redd}\vdots\\
&& \color{redd}0&\color{redd}\cdots&&&\color{redd}0
\end{array}\right).
\end{align}
One may see that the physical and unphysical parts of the Hilbert space get coupled by the term $\frac{5}{8\sqrt{2}}$.
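This matrix element is easy to reproduce numerically; a minimal Python sketch (our illustration, $\hbar=1$, truncated Fock space) evaluates the third-order expansion of Eq.~\eqref{Taylor_sp} at $S=1/2$:
\begin{verbatim}
import numpy as np

dim, S = 6, 0.5
a  = np.diag(np.sqrt(np.arange(1, dim)), k=1)
ad = a.T
n1 = ad @ a
n2 = np.linalg.matrix_power(ad, 2) @ np.linalg.matrix_power(a, 2)
n3 = np.linalg.matrix_power(ad, 3) @ np.linalg.matrix_power(a, 3)
Sp = np.sqrt(2*S)*(np.eye(dim) - n1/(4*S) - (n1 + n2)/(32*S**2)
                   - (n1 + 3*n2 + n3)/(128*S**3)) @ a
print(Sp[1, 2], 5/(8*np.sqrt(2)))   # <1|S^+|2>: the unwanted coupling
\end{verbatim}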
Generically a spin Hamiltonian using this approximate bosonic language when expressed in occupation number space has the form
\begin{equation}
H=\begin{pmatrix}
H_{phys}&\Delta\\
\Delta^\dag &H_{unphys}
\end{pmatrix},
\end{equation}
where $\Delta$ is the small coupling between the physical and unphysical parts of the Hilbert space. It leads to unphysical contributions in the physical eigenvalues. The method is no longer exact.
\subsection{Improved Expansion}
As mentioned, we can improve on the expansion. One may use the differential equation \eqref{good_dgl} to find such an improved expansion of the square root. As before, one may use the ansatz \eqref{ansatz} to introduce $-\frac{a^\dagger a}{2S}$ in infinitesimal steps. However, because we need to decrease the terms under the square root rather than increase them, one has to replace $d/ds\to -d/ds$ in \eqref{diffeqeqsqrt}. The second thing that changes compared to before are two of the initial conditions,
\begin{equation}
\begin{aligned}
C_0(0)=1;\quad C_1'(0)=-\frac{1}{4},
\end{aligned}
\end{equation}
while the other initial conditions in \eqref{initSQRT} remain unchanged.
The solution of the differential equations \eqref{diffeqeqsqrt} for $s=\frac{1}{S}$ with the new initial conditions gives an improved Holstein-Primakoff expansion up to third order, which we will not present here.
Rather, with additional work, one may find that it is possible to construct higher-order terms by the same scheme. After analysing additional orders a pattern emerges. We find that the full expansion is given as
\begin{equation}
\begin{aligned}
&S^+\approx \hbar \sqrt{2S}\left[\sum_{n=0}^{n_{\mathrm{max}}} Q_n a^{\dag^n} a^n\right]a;\quad Q_0=1,\\ &Q_n=\frac{1}{n!}A_n-\sum_{m=0}^{n-1}\frac{1}{(n-m)!}Q_m;\quad A_n=\sqrt{1-\frac{n}{2S}},
\end{aligned}
\label{new_expans_spin}
\end{equation}
where we prove later that this amounts to exact expressions for spin operators. It should be noted that during the review process of this manuscript equivalent expressions in closed form were also found by elegant alternative means via a Newton series expansion \cite{konig2020newton}.
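The exactness claim can be checked numerically for any small $S$. The following Python sketch (our illustration, $\hbar=1$; \texttt{np.emath.sqrt} returns complex values for $n>2S$) builds the coefficients $Q_n$ recursively and verifies that, with $n_{\mathrm{max}}=2S$, the physical block of $S^+$ is exact and does not couple to the unphysical block:
\begin{verbatim}
import numpy as np
from math import factorial

def splus_resummed(S, dim):
    nmax = int(round(2*S))
    A = lambda n: np.emath.sqrt(1 - n/(2*S))
    Q = [1.0 + 0j]
    for n in range(1, nmax + 1):
        Q.append(A(n)/factorial(n)
                 - sum(Q[m]/factorial(n - m) for m in range(n)))
    a  = np.diag(np.sqrt(np.arange(1, dim)), k=1)
    ad = a.T
    poly = sum(Q[n]*np.linalg.matrix_power(ad, n) @ np.linalg.matrix_power(a, n)
               for n in range(nmax + 1))
    return np.sqrt(2*S)*poly @ a

S, dim = 1.5, 8                   # spin 3/2: physical states |0>,...,|3>
Sp = splus_resummed(S, dim)
n  = np.arange(1, dim)
Sp_exact = np.diag(np.sqrt(2*S)*np.emath.sqrt(1-(n-1)/(2*S))*np.sqrt(n), k=1)
print(np.allclose(Sp[:4, :4], Sp_exact[:4, :4]))  # physical block exact
print(np.allclose(Sp[:4, 4:], 0))                 # no coupling out of it
\end{verbatim}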
Let us for now truncate at $n_{\mathrm{max}}=1$ to find
\begin{equation}
\begin{aligned}
&S^+\approx \hbar \sqrt{2S}\left[1+\left(\sqrt{1-\frac{1}{2 S}}-1\right)a^\dag a\right]a,
\end{aligned}
\label{trunc_splus}
\end{equation}
and discuss the case of spin $S=\frac{1}{2}$ to most easily see what kind of improvement we achieved. One may note that an expansion around large $S$ gives back the results for the Taylor expansion. In that sense our new expansion is a resummation of the Taylor series.
In the occupation basis we find
\begin{align}
S^+ &= \left( \begin{array}{cc|ccccc}
\color{lightblue}0 & \color{lightblue} 1 & & & & &\\
\color{lightblue}0 & \color{lightblue} 0 & & & & &\\\hline\rule{0pt}{2.6ex}
&& \color{redd}0 &\color{redd}-\sqrt{3}&\color{redd}0&\color{redd}\cdots&\color{redd}0\\
&& \color{redd}\vdots&&\color{redd}\ddots&&\color{redd}\vdots\\
&& \color{redd}0&\color{redd}\cdots&&&\color{redd}0
\end{array}\right). \label{eq:splusnew:matrix}
\end{align}
Therefore the spin operator reproduces the physical matrix elements of $S^+$, and the physical block does not couple to the unphysical block, as in Eq.~\eqref{Holstein_spin_op_block}.
One can also show more explicitly that there is no coupling between the physical and unphysical parts of the Hilbert space,
\begin{equation}
\begin{aligned}
&\langle 0| S^+|1\rangle=\hbar,\\
&\langle n\neq 0| S^+|1\rangle=\langle n| S^+|0\rangle=\langle 0|S^+|n\neq 1\rangle=0.
\end{aligned}
\end{equation}
In the same sense as before, this method therefore \emph{allows us to reproduce the exact eigenvalues of the Hamiltonian}; it is in this sense exact.
Of course, this first truncated expression is not exact for higher spins $S$ because couplings to the non-physical states reappear. We can obtain exact expressions also for $S>1/2$ by setting $n_{\mathrm{max}}=2S$. Similarly to the $S=1/2$ case these expressions reproduce all physical matrix elements. The proof is given in the Appendix~\ref{proof_no_coupling_to_non_phys}, but is essentially the same as for spin $1/2$. A list of explicit expressions for spin operators up to $S=3$ are given in Appendix~\ref{app:higherspin}.
\subsection{Commutator properties and exactness for the improved expansion}
One may ask what happens to commutators. Here the spin 1/2 case again is instructive,
\begin{equation}
[S^+,S^-]\approx 2\hbar S^z-3 \hbar^2 \left(S \left(2 \sqrt{4-\frac{2}{S}}-4\right)+1\right) a^{\dag^2}a^2.
\end{equation}
While the commutator is not exactly reproduced, we can immediately recognize that this is not important, because the extra term ${a^\dag}^2a^2$ does not couple the unphysical and physical parts of the Hilbert space and solely affects the unphysical part. It is therefore of no physical consequence.
This additional term was often understood as rendering the expressions for spin operators approximate \cite{Lindgard1974}. After all, the most commonly used criterion for ruling out whether an operator can be expressed in a certain way is to check the commutation relations. Here we stress that this criterion can be softened: it can be enough to reproduce the commutation relations up to the addition of a term that acts solely in the unphysical part of the Hilbert space and does not couple to the physical part of the Hilbert space.
In some cases more stringent exactness criteria have been applied, such as requiring that all non-physical matrix elements vanish \cite{Zhou_1999}. This type of criteria can simplify formal quantum statistical treatments, since one does not have to be careful about excluding non-physical states in sums over states. However, in practice these approaches are cumbersome because the associated expansions are infinite and more complicated. Therefore this is only an advantage at the purely formal level.
\subsection{Additional properties of the expansion and comparison to other expansions}
One may wonder how this expansion compares to a more conventional Dyson-Maleev expansion with $S^+=\hbar a$ and $S^-=\hbar a^\dag(2S-a^\dag a)$. Our method has the advantage that $S^+$ and $S^-$ are treated on the same footing and therefore are related by conventional Hermitian conjugation. This guarantees that the approach will not break hermiticity in the conventional sense, unlike the Dyson-Maleev expansion.
Next one may wonder whether an additional perturbative expansion around classical spin configurations may be stacked on top of the expansion, as is done for the more conventional $1/S$ expansion in non-linear spin-wave theory \cite{RevModPhys.85.219}. One may therefore be tempted to identify $\delta=\left(\sqrt{1-\frac{1}{2 S}}-1\right)$ in Eq.~\eqref{trunc_splus} as an expansion parameter, since it corresponds to fluctuation corrections around a classical ground state. That is, one would write $S^+\approx \hbar \sqrt{2S}\left[1+\delta a^\dag a\right]a$. This, however, is not possible, as becomes clear if one considers that $S^z=\frac{1}{2}[S^+,S^-]$ (in units where $\hbar=1$). Then, one can write
\begin{equation}
\begin{aligned}
S^z=&\frac{1}{2}[S^+,S^-]\approx \underbrace{S+2S\left( 2 \delta +\delta ^2 \right) a^{\dagger } a}_{S^z}+3 \delta ^2 S \left(a^{\dagger }\right)^2 a^2.
\end{aligned}
\end{equation}
We find that the physical part of $S^z$ has contributions from different orders of $\delta$ (indeed, $2S(2\delta+\delta^2)=2S[(1+\delta)^2-1]=-1$ exactly, so the underbraced part equals $S-a^\dag a=S^z$). This of course means that any expansion in $\delta$ will treat $S^z$ and $S^{x,y}$ on unequal footing, even at low orders in such an expansion. This, for instance, will result in an unphysical breaking of symmetries in a Heisenberg model or similar, even at the lowest order of the expansion. Therefore, $\delta$ cannot be used as an expansion parameter. Additionally, there is no other obvious choice of expansion parameter, and ad-hoc expansions in powers of $a$ also lead to unphysical results in non-linear spin-wave theories. It therefore seems that the expansion does not allow for an additional perturbative expansion in terms of fluctuations around a classical spin configuration. A mean-field treatment must include all the terms needed to accurately describe spin $S$ for each of the operators $S^+$ and $S^-$.
\subsection{Symmetries and exact properties in the improved expansion}
To study symmetries in the new expansion we consider the Hamiltonian for the Heisenberg model with easy-plane single-ion anisotropy,
\begin{equation}
H=\sum_i \left[ J\vect S_i \cdot \vect S_{i+1}+D(S^x_i)^2 \right].
\end{equation}
Let us first recognize that for $S=1/2$ the single-ion anisotropy $(S^x_i)^2$ should result in a trivial number $(S^x_i)^2=1/4$ that does not affect the spin-wave excitation spectrum. However, in the usual Taylor expansion with $S^+\approx a-\frac{1}{2}a^\dag a^2$ one finds that
\begin{equation}
\begin{aligned}
(S^x)^2&=\frac{1}{4}+\frac{1}{16} \left(2 {a^{\dagger }}^2+2 a^{\dagger } a-2 a^{\dagger } a^3-3 {a^{\dagger }}^2 a^2\right.\\
&\left.+{a^{\dagger }}^2 a^4-2 {a^{\dagger }}^3 a+2 {a^{\dagger }}^3 a^3+{a^{\dagger }}^4 a^2+2 a^2\right),
\end{aligned}
\end{equation}
which has unphysical contributions in the physical part of the Hilbert space, e.g. $a^\dagger a$. In other words, the Taylor expansion introduces unphysical artifacts.
In the new expansion for $S=1/2$, however, we have $S^+=a-a^\dag a^2$ and find that
\begin{equation}
\begin{aligned}
&(S^x)^2=\frac{1}{4}+\frac{1}{4} \left( {a^{\dagger }}^2 a^2+{a^{\dagger }}^2 a^4+2 {a^{\dagger }}^3 a^3+{a^{\dagger }}^4 a^2\right).
\end{aligned}
\end{equation}
The additional non-constant terms we find have non-zero contributions only in the non-physical part of the Hilbert space, and do not couple to the physical part of the Hilbert space. This can easily be verified explicitly by computing the operator in the occupation number basis using Eq.~\eqref{eq:splusnew:matrix}. The non-physical terms are therefore of no consequence for physical states, and could just as well be dropped. This means that the new expansion properly reproduces the fact that $(S^x_i)^2$ contributes only a trivial scalar for spin $1/2$.
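A direct check in a truncated Fock space confirms this; the Python sketch below (our illustration, $\hbar=1$) verifies that $(S^x)^2$ built from $S^+=a-a^\dag a^2$ is $1/4$ on the physical block and does not couple $\{|0\rangle,|1\rangle\}$ to higher states:
\begin{verbatim}
import numpy as np

dim = 8
a  = np.diag(np.sqrt(np.arange(1, dim)), k=1)
ad = a.T
Sp = a - ad @ a @ a                 # resummed S^+ for S = 1/2
Sx = (Sp + Sp.T)/2
Sx2 = Sx @ Sx
print(np.allclose(Sx2[:2, :2], 0.25*np.eye(2)))  # trivial constant 1/4
print(np.allclose(Sx2[:2, 2:], 0))               # no unphysical coupling
\end{verbatim}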
Next we recall that the Hamiltonian is invariant under the symmetry generated by $g=\sum_i S_i^x$, i.e., $C=[H,g]=0$. Again we will only check spin $1/2$ for simplicity, but similar results hold for higher spins. Let us first see what happens if we use the usual Taylor expansion approach to compute the commutator. We find that
\begin{equation}
\begin{aligned}
C&=\sum_i\frac{1}{16} a_{i+1}^{\dagger } a_{i+1} \left(2 a_i^{\dagger }+a_i^{\dagger } a_i^2-{a_i^{\dagger }}^2 a_i-2 a_i\right)\\
&+\frac{3}{32} {a_{i+1}^{\dagger }}^2 a_{i+1}^2 \left(2 a_i^{\dagger }+a_i^{\dagger } a_i^2-{a_i^{\dagger }}^2 a_i-2 a_i\right)\\
&+(i)\leftrightarrow(i+1)
\end{aligned},
\end{equation}
where $(i)\leftrightarrow(i+1)$ is a shorthand for the same terms with $i$ and $i+1$ switched. Here we can see that the operators in the first line couple the physical two site states in $A_p=\{\left|0\right\rangle_i\left|0\right\rangle_{i+1},\left|1\right\rangle_i\left|0\right\rangle_{i+1},\left|0\right\rangle_i\left|1\right\rangle_{i+1},\left|1\right\rangle_i\left|1\right\rangle_{i+1}\}$ to non-physical two-site states in $A_{np}=\left\{\left|n\right\rangle_{i}\left|m\right\rangle_{i+1}| (n>1)\lor (m>1) \right\}$, which are the states where at least one of the two sites is more than single occupied. The symbol $\lor$ denotes the inclusive ``$\mathrm{or}$'' (disjunction) operator.
For the new expansion, on the other hand, we find that
\begin{equation}
\begin{aligned}
&C=\frac{3}{4}\sum_i\left({a_{i+1}^{\dagger }}^2 a_{i+1}^2 \left[a_i^{\dagger }-a_i+a_i^{\dagger } a_i^2-{a_i^{\dagger }}^2 a_i\right]\right.\\
&\left.+{a_i^{\dagger }}^2 a_i^2 \left[a_{i+1}^{\dagger }-a_{i+1}+a_{i+1}^{\dagger } a_{i+1}^2-{a_{i+1}^{\dagger }}^2 a_{i+1}\right]\right).
\end{aligned}
\end{equation}
From the overall factors ${a_{j}^{\dagger }}^2 a_{j}^2$ one may see that only matrix elements involving the unphysical states $A_{np}$ are non-zero; the operator does not couple to the physical states in $A_p$. We can hence conclude that, unlike the Taylor expansion, the bosonic expressions for the spin operators in the new expansion do not break symmetries present in the original spin-operator language.
\section{Conclusion}
We were able to demonstrate the surprising result that the square-root of an operator $\sqrt{\hat O}$ may be expanded in an integer power series around $\hat O=0$. We believe that the approach can be usefully applied to other operator square-roots in theoretical physics and that the observation is useful for finding better expansions of other operator functions where a Taylor expansion fails.
The methods described in this paper allowed us to find a significant non-perturbative improvement on the Taylor expansion for the Holstein-Primakoff realization of spin operators. We expect these results to be useful to better treat spin models in different mean field approaches if there is no clear classical spin configuration around which one could expand. We therefore hope that the approach will prove useful for the study of spin liquid phases.
\acknowledgments
We thank C. D. Batista and G. Marmorini for useful discussions. M.V. and G.A.F. gratefully acknowledge partial support from the National Science Foundation through the Center for Dynamics and Control of Materials: an NSF MRSEC under Cooperative Agreement No. DMR-1720595, and also from NSF Grant No. DMR-1949701. PL was supported by the Scientific Discovery through Advanced Computing (SciDAC) program funded by the US Department of Energy, Office of Science, Advanced Scientific Computing Research and Basic Energy Sciences, Division of Materials Sciences and Engineering. SO was supported by U.S. DOE, Office of Science, Basic Energy Sciences, Materials Sciences and Engineering Division.
Complex systems far from equilibrium can rarely be described by well-established potentials or thermodynamic functions
\cite{Prigogine1977,San1991,Elder1992,Grossmann_Kosterlitz,Tribelsky1996,Costa_Kosterlitz,Liang2013,Dunkel2013,Slomka2017}.
Real world problems such as the Navier-Stokes (NS) equation \cite{Jolly1990,Anderson2006,Cross2009,Dunkel2013,Slomka2017} and artificial deep neural networks (DNN) \cite{doi:10.1146/annurev-conmatphys-031119-050745,Saxe11537,chaudhari2018stochastic,Fenge2015617118} are examples of such systems. However, the questions if, how and under what circumstances proper stochastic potentials can be constructed for such systems have been addressed recently by Ao {\em et~al.} \cite{Ao_2004,Ao_Thouless, Ao_2008,Yuan_Lyapunov,Yuan_Exploring,Zhu2006} These authors suggest that a stochastic system can possess a Lyapunov functional which describes some fluctuation dissipation properties of the system. There are two fundamentally distinct parts of the dynamics, a diffusive and a transverse process, both operating on the potential. This decomposition is unique near stationary points and is determined by the stochastic structure. The transverse process can lead to vorticity without detailed balance \cite{Ao_Thouless}.
The methodology can be extended to nonlinear partial differential equations (PDEs) where the dynamical variables are labelled by continuous spatial coordinate(s). In an earlier work \cite{Chen23227}, a noisy one-dimensional stabilized Kuramoto-Sivashinsky (SKS) equation \cite{Misbah1994,Brunet2007,Pradas2011} was used to demonstrate the application of this. The SKS equation is derived formally \cite{Jolly1990} from an NS equation and it can describe a variety of physical phenomena with bifurcation instabilities \cite{Malomed1984,Kevrekidis1990,Goldstein1991,Knobloch1995}. The PDE exhibits nonlinear stationary cellular structures with additional complications such as vacillating breathing (VB) oscillations \cite{Misbah1994}. The absence of a conventional potential function \cite{Kerszberg1983,Obeid_Kosterlitz,Cross2016,Saxena_Kosterlitz} makes it a useful system for such stochastic studies.
In the following, we first review our earlier work \cite{Chen23227} on how to obtain a global potential landscape from a topological web of fixed points interconnected by low-lying eigenmodes. This result is then verified by direct stochastic simulations. The transverse dynamics near the fixed points and the nonlinear evolution of these are explored. A universal class of vortex like circulations is found near a range of cellular structures. The amplitude of circulation can grow or shrink with time and this is resilient to random noise. In a VB mode, a growing oscillation together with the nonlinearity exhibits limit cycles which cause periodic phase drifting of the cells themselves. We discuss our findings and their significance as a systematic alternative to explore nonlinearities.
\section{Stochastic Decomposition}
The noisy SKS equation is a one-dimensional nonlinear stochastic PDE which is periodic under $x\rightarrow x+L$ \cite{Hyman1986,Christiansen1997,Lan2008},
\begin{eqnarray}
\label{SKS}
\partial_t u(x, t) & = & -\hat{L}(x)\,u(x, t) + [\partial_x u(x, t)]^2 + \xi(x, t)\\
\label{linear0}
\hat{L}(x) & = & [\alpha + \partial_x^2 + \partial_x^4]
\end{eqnarray}
where $\xi(x, t)$ is an additive external Gaussian noise with $\langle\xi(x, t)\rangle=0$ and
\begin{equation}\label{diffusion}
\langle\xi(x, t)\xi(x', t')\rangle = 2\epsilon D(x, x')\delta(t-t').
\end{equation}
Here, $\epsilon$ is the noise strength and the diffusion matrix $D(x, x')$ is symmetric and semi positive definite.
Following the work of Ao \cite{Ao_2004} and subsequent studies \cite{Ao_Thouless,Ao_2008}, one can recast the equation into the form \cite{Ao_Thouless,Ao_2008},
\begin{eqnarray}\label{matrix}
\partial_t u(x, t) =&-&\int \text{d}x' \,\left[D(x, x') + Q (x, x'; \{u(x,t)\})\right] \cr
&\times& \frac{\delta}{\delta u(x')}\Phi(\{u(x,t)\}) + \xi(x, t).
\end{eqnarray}
This can be understood as multiplication of infinite dimensional matrices. The multiplication of two matrices of continuous degrees of freedom is weighted by $\text{d}x$, and $\delta/\delta u(x')$, written below as $\partial_{\vec{u}}$, is the functional derivative of the global potential $\Phi(\{u(x,t)\})$. We adopt a convention in which a boldface symbol indicates a matrix or vector labelled by $x$, while the same symbol in normal face indicates the corresponding matrix element so that Eq.~(\ref{matrix}) becomes
\begin{eqnarray}\label{matrix0}
\partial_t\, \vec{u}(t) & = & - [\vec{D} + \vec{Q}]\cdot{\partial}_{\vec{u}}\Phi[\vec{u}(t)] + \bvec{\xi}(t).
\end{eqnarray}
Here $\vec{u}(t)$ is the state vector with components labelled by $x$ and both the semi-positive definite $\vec{D}=\vec{D}^{\dag}$ and the anti-symmetric $\vec{Q}=-\vec{Q}^{\dag}$ are square matrices defined by Eq.~(\ref{matrix}). With this decomposition, $\Phi[\vec{u}]$ becomes a Lyapunov functional for Eq.~(\ref{SKS}) which characterizes the dynamical properties of the system \cite{Yuan_Lyapunov,Ao_2004,Ao_Thouless,Smelyanskiy1997,Zhu2006}.
\subsection{Equation for the Global Potential}
We now briefly summarize the main conclusions of \cite{Chen23227}. For homogeneous and spatially uncorrelated noise, we set $\vec{D} = \vec{I}$ with matrix elements
\begin{equation}\label{diffusion0}
I(x, x') = \delta(x-x').
\end{equation}
Letting $\vec{L} = \vec{L}^{\dag}$ be the linear operator in Eq.~(\ref{linear0}) with
\begin{equation}\label{linear0_1}
L(x, x') = \hat{L}(x)\delta(x - x'),
\end{equation}
the linear term on the right-hand side of Eq.~(\ref{SKS}) corresponds to $- {\partial}_{\vec{u}}\Phi_{0}[\vec{u}(t)] $ with
\begin{equation}\label{potential00}
\Phi_{0}[\vec{u}] = \frac{1}{2}\,\vec{u}^{\dag}\,\vec{L}\,\vec{u}.
\end{equation}
The nonlinear term is recovered by setting $\vec{Q} = \vec{G}$ where
\begin{equation}\label{transfer0}
G(x,x';\{u(x)\}) = u_x(x)[\hat{L}^{-1}(x')\partial_{x'}\delta(x-x')].
\end{equation}
However, to make $\vec{Q}$ antisymmetric we must adjust $\Phi$ and these are related by \cite{Chen23227}
\begin{equation}\label{DeltaPhi}
\left[\vec{G} - \vec{Q}\right]\partial_{\vec{u}}\Phi_0 -\left[\vec{I} + \vec{Q}\right]\partial_{\vec{u}}[\Phi- \Phi_0] = 0.
\end{equation}
Eq.~(\ref{DeltaPhi}) can be solved formally by defining a force $\vec{F}$ as the gradient of the potential
\begin{equation}\label{DeltaQPhiF}
\vec{F} = -\partial_{\vec{u}}\Phi = -\left[\vec{I} + \vec{Q}\right]^{-1}\left[\vec{I} + \vec{G}\right]\vec{L} \,\vec{u}
\end{equation}
which must have vanishing curl,
\begin{equation}\label{DeltaQ}
\partial_{\vec{u}} \times \vec{F} \equiv \frac{\delta F(x', \{u\})}{\delta u(x)} - \frac{\delta F(x, \{u\})}{\delta u(x')} = 0.
\end{equation}
Eq.~(\ref{DeltaQ}) determines $\vec{Q}$ and ensures that $\Phi(\{u\})$ is a path independent integral over the field variables,
\begin{equation}\label{DeltaPhi1}
\Phi(\{u\}) = -\int \text{d}x\left\{\int_{0}^{u(x)}{\cal D}v\, F(x; \{v\})\right\}.
\end{equation}
These formal results strongly suggest the existence of a global potential for the entire system, although the nonlinearity in Eq.~(\ref{DeltaQPhiF}) is a major obstacle to its construction.
\subsection{Near Stationary States}\label{sec3}
We carry out the same procedure starting from a nontrivial fixed point solution $a(x)$ of Eq.~(\ref{SKS})
\begin{eqnarray}\label{La}
\hat{L}(x)a(x)=[\partial_x a(x)]^{2}
\end{eqnarray}
where $\tilde{u}(x)=u(x)-a(x)$ is the deviation from $a(x)$. The linear part of Eq.~(\ref{SKS}) is obtained from a slightly different potential
\begin{eqnarray}\label{potential1}
\Phi_{0}(\vec{u}:\vec{a})&=&\Phi(\vec{a}) + \frac{1}{2}\,\vec{\tilde{u}}^{\dag}\,\vec{L}\,\vec{\tilde{u}},
\end{eqnarray}
and the nonlinear part by the replacement $\vec{G}\rightarrow\tilde{\vec{G}}$ in Eq.~(\ref{transfer0}) where
\begin{eqnarray}\label{transfer1}
\tilde{G}(x,x';\{u:a\})&=& \cr
[\tilde{u}_{x}(x)&+& 2a_{x}(x)]\hat{L}^{-1}(x')\partial_{x'}\delta(x-x'),
\end{eqnarray}
Note that at $\vec{u}=0$ (i.e., $\tilde{\vec{u}}=-\vec{a}$) one has $\tilde{\vec{G}}\neq 0$, in contrast to $\vec{G}$ of Eq.~(\ref{transfer0}). It is convenient to define $\vec{A} \equiv \tilde{\vec{G}}\,\vec{L} = \vec{A}_{0} + \vec{A}_{1}$ where
\begin{eqnarray}\label{transfer_a}
A_{0}(x,x';\{a\}) & = & 2a_x(x)\partial_{x'}\delta(x-x') \cr
A_{1}(x,x';\{\tilde{u}\}) & = & \tilde{u}_{x}(x)\partial_{x'}\delta(x-x').
\end{eqnarray}
At a fixed point, $\vec{A}\rightarrow \vec{A}_{0}$ and expanding Eq.~(\ref{DeltaQPhiF}) in powers of $\tilde{u}$ we have
\begin{eqnarray}\label{DeltaQPhi1_a}
\vec{F}_{1} & = & -\vec{R}_{0}\,\tilde{\vec{u}} + O(\tilde{u}^2), \cr
\vec{R}_{0} & = & \left[\vec{I} + \vec{Q}_{0}\right]^{-1}\left[\vec{L} + \vec{A}_{0}\right].
\end{eqnarray}
Here the subscripts indicate orders in powers of $\tilde{u}(x)$.
We obtain an equation for $\vec{Q}_{0}$ by observing that $\partial_{\vec{u}}\times\vec{F}_{1} = 0 \Rightarrow \vec{R}_{0}=\vec{R}_{0}^{\dag}$ so that
\begin{eqnarray}\label{DeltaQ2_a}
[\vec{L}+\vec{A}_{0}]\vec{Q}_{0}+\vec{Q}_{0}[\vec{L}+\vec{A}_{0}^{\dag}]=\vec{A}_{0}-\vec{A}_{0}^{\dag}.
\end{eqnarray}
Eq.~(\ref{DeltaQ2_a}) is known as a continuous Lyapunov equation \cite{Mori2002,Jbilou2006,Hached2018} for which there exist efficient numerical algorithms \cite{Ao_Thouless,Chen23227}. From Eq.~(\ref{DeltaQPhi1_a}) the potential to ${\cal O}(\tilde{u}^{2})$ is
\begin{eqnarray}\label{DeltaPhi_a}
\Phi_{2}(\vec{u}: \vec{a}) & = & \frac{1}{2}\,\vec{\tilde{u}}^{\dag}\,\vec{R}_{0}\,\vec{\tilde{u}} + \Phi(\vec{a}).
\end{eqnarray}
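Eq.~(\ref{DeltaQ2_a}) can be handed directly to standard Lyapunov solvers. As a minimal illustration (with random, well-conditioned stand-ins for $\vec{L}$ and $\vec{A}_0$, not the actual SKS matrices), the Python sketch below uses \texttt{scipy.linalg.solve\_continuous\_lyapunov} and checks that the solution is antisymmetric, as required for $\vec{Q}_0$:
\begin{verbatim}
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

rng = np.random.default_rng(0)
N = 16                                   # toy dimension
B = rng.standard_normal((N, N))
Lmat = B @ B.T + np.eye(N)               # symmetric stand-in for L
A0 = 0.1*rng.standard_normal((N, N))     # small stand-in for A_0

M = Lmat + A0
# (L + A0) Q0 + Q0 (L + A0)^T = A0 - A0^T; scipy solves a x + x a^H = q
Q0 = solve_continuous_lyapunov(M, A0 - A0.T)
print(np.allclose(M @ Q0 + Q0 @ M.T, A0 - A0.T))  # residual check
print(np.allclose(Q0, -Q0.T))                     # Q0 is antisymmetric
\end{verbatim}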
\section{Topology and Global Landscape}
Knowing the potential near individual fixed points allows us to explore the global properties of the system. When $L\rightarrow\infty$ and $ \alpha < 1/4$, the SKS equation has a continuous band of periodic stationary states \cite{Misbah1994,Brunet2007}, part of which is stable. When $L<\infty$ the states can be labelled by the wave number $\kappa=2\pi k/L$ with integer $k$, centered around a critical wave number $\kappa_c = 1/\sqrt{2}$. However, in the presence of external noise some states are more stable than others, which can be understood as a natural consequence of a global potential. In the following, we show how the potential differences between these fixed points can be inferred from the topology spanned by a network of interconnected fixed points. The analysis is supplemented by direct stochastic simulations.
\subsection{Potential Difference Between Stationary States}
If we extrapolate $\Phi_2$ of Eq.~(\ref{DeltaPhi_a}) to a neighboring fixed point $u(x) = b(x)$, the potential difference between them, assuming that a single valued potential exists, would be approximately $\Phi_2(\vec{b}: \vec{a})$ of Eq.~(\ref{DeltaPhi_a}). Since the same procedure applies in the opposite direction from $b(x)$ to $a(x)$, the potential difference should be
\begin{eqnarray}\label{DeltaPhi3}
\Delta\Phi_{ba}=\frac{1}{2}[\Phi_{2}(\vec{b}: \vec{a}) - \Phi_{2}(\vec{a}: \vec{b})]=\Phi(\vec{b})-\Phi(\vec{a}).
\end{eqnarray}
This approach can be refined by noticing that the entire set of fixed points forms an interconnected web \cite{Chen23227}. There is always a pair of dominant eigenmodes of $\vec{R}_{0}$ leaving from one state and flowing towards another state. These modes can be identified as having the largest amplitude with the wave number of the destination state, together with an eigenvalue with a vanishing real part. This novel topology suggests that Eq.~(\ref{DeltaPhi3}) should be confined to the subspace of the interconnected modes only so that the dominant contribution to the landscape is from the low-lying modes flowing between the nodes. Define $\vec{v}^{\sigma}_{ba}$ ($\sigma=\pm$) to be the eigenmodes of $\vec{R}_{0}$ at state $a$ flowing to state $b$ with eigenvalue $\lambda^{\sigma}_{ba}$. An improved version of Eq.~(\ref{DeltaPhi3}) is
\begin{eqnarray}\label{DeltaPhi4}
\Delta\Phi_{ba}\approx \sum_{\sigma=\pm}\frac{1}{4}\,\vec{c}^{\dag}\,[\vec{v}^{\sigma}_{ba}\lambda^{\sigma}_{ba}\vec{v}^{\sigma\dag}_{ba} -\vec{v}^{\sigma}_{ab}\lambda^{\sigma}_{ab}\vec{v}^{\sigma\dag}_{ab}]\,\vec{c},\;\;\;\vec{c}\equiv \vec{b} - \vec{a}.
\end{eqnarray}
Knowing the pairwise potential differences, one can map out the global potential difference between any two states by following a path between them. However, this potential difference is path dependent and, to make the result path independent as it must be, we include the whole set of pairs to obtain $\Phi(\kappa)$ as a function of $\kappa$ by a least-squares fit to a low-order polynomial. A more detailed discussion is in the supplementary information (SI) \cite{supplementary}. Also in the SI \cite{supplementary} we correct an error in our earlier work where there is an erroneous factor $h$ in the expression $(h\,a_{k-k'})$ in Eqs.~(34) and (35) of \cite{Chen23227}.
\subsection{Verification by Stochastic Simulations}
\begin{figure}[!ht]
\begin{tabular}{ccc}
\includegraphics[width=0.35\textwidth]{figure1a.eps} &
\includegraphics[width=0.35\textwidth]{figure1b.eps} \\
(a) & (b) \\
\includegraphics[width=0.35\textwidth]{figure1c.eps} &
\includegraphics[width=0.35\textwidth]{figure1d.eps} \\
(c) & (d) \\
\end{tabular}
\protect\caption{Global potentials (a), (c) and corresponding probability distributions (b), (d) for $L = 512$, $\alpha = 0.20$. (a) Global potentials $\Phi(\kappa)$ of Eq.~(\ref{DeltaPhi4}) from $4^{\rm th}$ order polynomial fits over the whole topology of stationary states for two grid spacings $h$. (b) Probability distributions $P(\kappa)$ using $\Phi(\kappa)$ of (a). (c), (d) Simulated potential $\Phi_s(\kappa)$ and distribution $P_s(\kappa)$ for $h = 0.32$ and several values of $\epsilon$ and $\Delta T_s$.}\label{Fig1}
\end{figure}
The global landscape $\Phi(\kappa)$ can be verified by comparing with $\Phi_{s}(\kappa)$ from direct stochastic simulations for the probability distribution $P(\kappa)$ in the presence of strong external noise with the algorithm of \cite{Saxena_Kosterlitz}. We expect $P(\kappa)$ is a Boltzmann-like distribution \cite{Ao_Thouless}, $P(\kappa) = P_{0}(\kappa)\exp[-\Phi(\kappa)/\epsilon]$ where $\epsilon$ is the noise strength of Eq.~(\ref{diffusion}) and $P_{0}(\kappa)$ is a slowly varying function of $\kappa$, although there is no rigorous proof of this. In a simulation with external stochastic noise, there is also the question of the meaning of occupying a stationary state $\kappa$.
Suppose the system is initially in some arbitrary state and the simulation is performed in the presence of external stochastic noise for some arbitrarily chosen time $t_{0}$. One can define the probability of being in the state $\kappa$ by the overlap of this state with the stationary solution of the noiseless SKS equation with wave number $\kappa$. A closely related method is to expand the simulated state at $t_{0}$ as a linear superposition of periodic solutions of the {\it noiseless} SKS equation and define its wave number as that of the periodic solution of the SKS equation of largest magnitude. Neither approach is satisfactory because neither accurately reproduces the theoretical potential $\Phi(\kappa)$.

A third and better method is to switch off the noise at some sufficiently long time and then evolve the system in the absence of noise for a time $\Delta T_{s}$ to a stationary state of wave number $\kappa$. By repeating this many times, a simulated $P_{s}(\kappa)$ of a Boltzmann form is obtained with a simulated potential $\Phi_{s}(\kappa)$ which is a close match to the theoretical $\Phi(\kappa)$. However, the detailed shape of $P_{s}(\kappa)$ does depend on the time $\Delta T_{s}$ allowed for the chosen state to evolve to a stationary state. When an effective noise strength $\tilde{\epsilon} = \sqrt{\epsilon/\Delta T_{s}}$ is used to characterize the distribution $P_{s}(\kappa)$, we obtain a consistent $\Phi_{s}(\kappa)$ which is independent of the separate values of $\epsilon$ and $\Delta T_{s}$. The simulations agree reasonably well with the theoretical predictions up to an overall scale factor $\Phi(\kappa)/\Phi_{s}(\kappa)\sim 10$. Using $\alpha = 0.20$ as an example, a least-squares polynomial fit and a stochastic simulation are compared in Fig.~\ref{Fig1}. More simulation details can be found in the SI \cite{supplementary}.
\section{Vorticity near Fixed Points}
Another essential feature, which is a more distinct characteristic of the stochastic dynamics, is the transverse component described by the antisymmetric $\vec{Q}$ in Eq.~(\ref{matrix0}). When $\vec{Q}$ is large there is a large deviation from the gradient diffusion process. Vortex like circulation or ``vorticity'' can be a prominent feature of the dynamics. This can be explored near a steady state when $\vec{Q}\rightarrow \vec{Q}_{0}$ is essentially a constant matrix (the subscript $0$ and the overhead tilde on $\vec{u}$ are dropped in the following for simplicity).
\subsection{Oscillating Pair Decomposition}\label{subsection4a}
We are free to choose any convenient basis to represent the state vector. When $\vec{Q}$ is large, we choose the eigenvectors which partially diagonalize $\vec{Q}$ into a direct sum of pairs of $2\times 2$ antisymmetric matrices. Let $\bvec{q}_{i}=q_{i}\,(i\bvec{\sigma}_{y})$ where $q_{i}>0$ is the $i^{{\rm th}}$ eigenvalue and $\bvec{\sigma}_{y}$ is a Pauli matrix so that $\vec{Q} = \bvec{q}_{1}\oplus \bvec{q}_{2}\oplus\cdots \oplus\bvec{q}_{N/2}$. Denote the corresponding eigenvectors by $\vec{e}_{i\sigma}$ where $i=1,2, \dots, N/2$ and $\sigma=1, 2$ so that $\vec{e}^{\dag}_{i\sigma}\,\vec{Q}\,\vec{e}_{j\sigma'} = \delta_{ij}(\vec{q}_{i})_{\sigma\sigma'}$.
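In practice this pair decomposition can be obtained from the real Schur form: for a real antisymmetric matrix the Schur factorization is, up to numerical round-off, block diagonal with exactly the $2\times 2$ blocks $q_i\,(i\bvec{\sigma}_y)$. A minimal Python sketch (our illustration on a random antisymmetric matrix of even dimension):
\begin{verbatim}
import numpy as np
from scipy.linalg import schur

rng = np.random.default_rng(2)
N = 6
M = rng.standard_normal((N, N))
Q = M - M.T                          # toy antisymmetric matrix

T, E = schur(Q, output='real')       # Q = E T E^T, E orthogonal
print(np.round(T, 3))                # 2x2 antisymmetric diagonal blocks
q1 = abs(T[0, 1])                    # eigenvalue q_1 of the first pair
e1, e2 = E[:, 0], E[:, 1]            # basis vectors e_{1,sigma}
\end{verbatim}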
Following \cite{Ao_2008}, we define $\vec{S}+\vec{T} \equiv [\vec{I}+\vec{Q}]^{-1} $ so that $\vec{S}$ is a symmetric ``dissipative'' matrix and $\vec{T}$ is an antisymmetric ``transfer'' matrix. Now Eq.~(\ref{matrix0}) can be written as
\begin{eqnarray}\label{matrix1}
[\vec{S} + \vec{T}]\,\partial_t \vec{u}(t) & = & - {\partial}_{\vec{u}}\Phi[\vec{u}(t)] + \bvec{\zeta}(t)
\end{eqnarray}
where the new ``canonical'' noise $\bvec{\zeta}(t) = [\vec{S}+\vec{T}]\,\bvec{\xi}(t)$ has zero mean and variance
\begin{equation}\label{diffusion1}
\langle\bvec{\zeta}(t)\bvec{\zeta}^{\dag}(t')\rangle = 2\epsilon \,\vec{S}\,\delta(t-t').
\end{equation}
Both $\vec{S}$ and $\vec{T}$ are diagonal in the same basis as $\vec{Q}$. Now, let $\vec{1}$ be the $2\times 2$ unit matrix so that, in the $i^{{\rm th}}$ subspace, $\vec{s}_{i} = s_{i}\vec{1}$ with $s_{i} = 1/(1+q_{i}^{2}) > 0$ and $\vec{t}_{i}$ is a $2\times 2$ antisymmetric matrix where $-(\bvec{t}_{i})_{12} =(\bvec{t}_{i})_{21}=t_{i} = q_{i}/(1+q_{i}^{2})$. When $q_{i}\gg 1$, all matrix elements are very small and $t_{i}/s_{i} \gg 1$. Since $\vec{S}$ relates dissipation to fluctuations by Eq.~(\ref{diffusion1}), a small $\vec{s}_{i}$ allows for oscillations of $\vec{u}_{i}$ in the $i^{{\rm th}}$ subspace by the transfer matrix $\vec{t}_{i}$ (cf. below). Note, when continuous matrices are discretized, the matrix element of $\vec{I}$ is not always $1$ but it can always be re-scaled so that this subtlety does not change the essence of our analysis.
The eigenstates can be labelled by $s_{1} \leq s_{2}\leq \dots \leq s_{N/2}$ and, in the $i^{{\rm th}}$ subspace, the lowest approximation to Eq.~(\ref{matrix1}) is
\begin{equation}\label{lc0}
(\vec{s}_{i}+\vec{t}_{i})\,\partial_{t}\vec{u}_{i}(t) = -\vec{r}_{ii}\,\vec{u}_{i}(t) + \bvec{\zeta}_{i}(t),
\end{equation}
where $(\vec{r}_{ij})_{\sigma\sigma'} = \vec{e}^{\dag}_{i\sigma}\,\vec{R}\,\vec{e}_{j\sigma'}$. When $\vec{s}_{i}\rightarrow 0$, the variance of the noise $\langle\bvec{\zeta}_{i}(t)\bvec{\zeta}^{\dag}_{i}(t')\rangle \rightarrow 0$ so that $\vec{u}_{i}$ of Eq.~(\ref{lc0}) oscillates with frequency $\omega_{i}\approx q_{i}\sqrt{\text{det}(\vec{r}_{ii})}$ when $\text{det}(\vec{r}_{ii}) > 0$. This oscillation either decays to a stable fixed point or grows away from an unstable fixed point. In either case, this creates vortex motion as discussed below.
\begin{figure}[!ht]
\begin{tabular}{ccc}
\includegraphics[width=0.30\textwidth]{figure2a.eps} &
\includegraphics[width=0.30\textwidth]{figure2b.eps} &
\includegraphics[width=0.30\textwidth]{figure2c.eps} \\
(a) $\alpha = 0.17$ & (b) $\alpha = 0.15$ & (c) $\alpha = 0.12$ \\
\end{tabular}
\protect\caption{Vortex like motions in periodic structures with $L = 512$, $h = 0.32$, $\kappa = 0.6995$ for various $\alpha$. The system is initially in the $1^{\rm{st}}$ subspace of $\vec{Q}$ by Eq.~(\ref{lc0}). (a)-(c) Small deviations $P_{\sigma} = \vec{e}^{\dag}_{1\sigma}\cdot\tilde{\vec{u}}(t)$ ($\sigma=x,y$) from a stable state decay to zero for $\alpha = 0.17, 0.15, 0.12$ respectively.}\label{Fig2}
\end{figure}
A typical example of vortex motion near a steady state in the stable region is shown in Fig.~\ref{Fig2}, where $L = 512$, $h = 0.32$, wavenumber $\kappa = 0.6995$ and $\alpha = 0.17, 0.15, 0.12$. We choose to restrict the motion to the $1^{\rm{st}}$ subspace of $\vec{Q}$ from Eq.~(\ref{lc0}). Small initial deviations from the stationary state are chosen as the Fourier space eigenstates of $\vec{Q}$, $\tilde{\vec{u}}_{0} = \vec{e}_{1\sigma}$ ($\sigma = 1, 2$ or $x,y$ for convenience). These states evolve according to Eq.~\cmmnt{(\ref{SKSF})}(S10) in the SI \cite{supplementary}. The specific parameters chosen are: time step $\Delta t = 0.003$, number of iterations $10^6$ and data is recorded every $100^{{\rm th}}$ time step. The state vector is projected on to the $i^{{\rm th}}$ subspace by ${\rm{P}}_{\sigma}(t) = \vec{e}^{\dag}_{i\sigma}\cdot\tilde{\vec{u}}(t)$. More detailed discussion is found in the SI \cite{supplementary}.
\subsection{Overlap with exact eigenstates}
\begin{figure}[!ht]
\includegraphics[width=0.5\textwidth]{figure3.eps}
\protect\caption{Overlap $P_{1}$ between two four-dimensional degenerate subspaces, the $1^{\rm{st}}$ eigenstate $\vec{e}_{1\sigma}$ of $\vec{Q}$ and $\vec{V}_{j\sigma}$ ($\sigma = 1,2,3,4$) of $(\vec{D}+\vec{Q})\vec{R}$, as a function of the Fourier component $k$ for different values of the control parameter $\alpha$.}\label{Fig3}
\end{figure}
Here we investigate how accurately the two-state truncation represents the real many-dimensional system as the control parameter $\alpha$ is reduced. There are higher-order corrections to Eq.~(\ref{lc0}) from other pairs when $\bvec{r}_{ii}\gg\bvec{r}_{ij}\neq 0$, in which case the system is equivalent to a set of weakly coupled harmonic oscillators. A displacement $\vec{u}_{i}(t)$ drives the $j^{{\rm th}}$ pair by a force $\sim\vec{r}_{ji}\,\vec{u}_{i}(t)$, which adds to the right-hand side of Eq.~(\ref{lc0}) a perturbation $\vec{r}_{ij}\,\vec{u}_{j}(t)$.
Taking this into account, a second-order perturbation calculation, neglecting the random noise, yields
\begin{equation}\label{lc1a}
(\vec{s}_{i}+\vec{t}_{i})\,\partial_{t}\vec{u}_{i}(t) = -\left[\vec{r}_{ii}-\sum_{j\neq i}\vec{r}_{ij}\,\frac{1}{(\vec{s}_{j}+\vec{t}_{j})\partial_{t}+\vec{r}_{jj}}\,\vec{r}_{ji}\right]\,\vec{u}_{i}(t).
\end{equation}
Writing ${\bf u}_{i}(t)={\bf u}_{i}(0)\,{\rm exp}(\lambda_{i}t)$ gives the secular equation
\begin{equation}\label{lc1}
\text{det}\left[(\vec{s}_{i}+\vec{t}_{i})\lambda_{i} + \vec{r}_{ii} - \sum_{j\neq i}\vec{r}_{ij}\,\frac{1}{(\vec{s}_{j}+\vec{t}_{j})\lambda_{i}+\vec{r}_{jj}}\,\vec{r}_{ji}\right] = 0,
\end{equation}
which can be evaluated iteratively. The real part of $\lambda_{i}$ is the damping or growth rate while the imaginary part, when it exists, gives the oscillation frequency $\omega_{i}$.
This approximation is a step in the right direction, but it is not sufficient when quasi-degenerate modes are involved. We can numerically diagonalize the $N\times N$ matrix $(\vec{D}+\vec{Q})\,\vec{R}$ in Eq.~(\ref{matrix0}), which yields all eigenvalues $\Lambda_{j\sigma}$ and eigenvectors $\vec{V}_{j\sigma}$, $(j=1,\dots,N/2)$. The overlap between the two subspaces spanned by $\vec{e}_{i\sigma}$ and $\vec{V}_{j\sigma}$ can be obtained from the $2\times 2$ matrix $\vec{p}_{ij}$ with elements $(\vec{p}_{ij})_{\sigma\sigma'} = \vec{e}^{\dag}_{i\sigma}\vec{V}_{j\sigma'}$. An absolute measure of overlap is obtained from
\begin{equation}\label{overlap}
0\leq P_{ij}=\text{Tr}(\vec{p}_{ij}^{\dag}\vec{p}_{ij})/\text{Tr}(\vec{1}) \leq 1.
\end{equation}
An estimate of the overlap is obtained from $P_{i}={\rm max}_{j}(P_{ij})\equiv P_{ij_{m}}$, which identifies the correct eigenvalue as $\lambda_{i}=\Lambda_{j_{m}}$. $P_{i}$ is a measure of the isolation of the subspace from the larger environment: the larger $P_{i}$ is, the better the two-state approximation to the dynamics near the fixed point. If two pairs of $\vec{e}_{i\sigma}$ and $\vec{V}_{j\sigma}$ are degenerate, it is convenient to compute the overlap between the two four-dimensional subspaces via the corresponding $4\times 4$ overlap matrix. Numerical results are shown in Fig.~\ref{Fig3}.
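As a concrete illustration (with random orthonormal vectors standing in for the actual eigenmodes), the overlap measure of Eq.~(\ref{overlap}) can be computed as follows:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(3)
N = 10
# random orthonormal stand-ins for e_{i sigma} and V_{j sigma}
E, _ = np.linalg.qr(rng.standard_normal((N, 2)))  # columns e_{i1}, e_{i2}
V, _ = np.linalg.qr(rng.standard_normal((N, 2)))  # columns V_{j1}, V_{j2}

p = E.conj().T @ V                     # 2x2 overlap matrix p_ij
P = np.trace(p.conj().T @ p).real/2    # 0 <= P_ij <= 1
print(P)
\end{verbatim}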
\subsection{Drifting of Steady States and Limit Cycles}
This general analysis can be applied to perturbations and vorticity about a periodic stationary state. Some of the analysis is most conveniently done in Fourier space but we return to real space to ensure that $\vec{\tilde{u}}$ is real. Algebraic and computation details can be found in the SI \cite{supplementary}.
The stochastic decomposition allows for a relatively simple identification of vortex modes and observation of their evolution in a nonlinear system. When $\text{Re}\,\lambda_{i} > 0$ the $i^{\text{th}}$ mode is unstable and its amplitude increases with time. Some modes are saturated by the nonlinearity and form a quasi limit cycle when their amplitude is sufficiently large. This behavior is seen clearly for values of control parameter region for which VB modes exist \cite{Misbah1994} (cf. below). Also, other interesting phenomena related to drifting of the periodic stationary states are seen.
In a VB mode, every cell oscillates out of phase by $\pi$ with its neighbors resulting in a quasi stationary periodic cellular structure which drifts uniformly in coordinate space. We find that this phenomenon can be attributed to the following generic pattern. Initially, the system is in a periodic state with a maximum at $x=0$. The eigenmodes of a small perturbation about this state are found to alternate between stable and unstable modes, when numbered from the smallest eigenvalue of the $\vec{S}$ matrix (with the minor complication that these modes are two-fold degenerate). We impose the $i^{\text{th}}$ unstable mode as an initial perturbation. The amplitude of this mode increases which causes uniform drifting of the quasi stationary periodic state. When the growth of this mode is saturated by the nonlinearity, it changes to a decaying mode which continues to drift. A careful analysis shows that the original mode is projected on to a mode which is stable relative to the new drifting stationary state. However, part of the amplitude also becomes a new unstable mode which begins to grow. The quasi steady state itself evolves back to its initial state, thus completing a limit cycle. We find that this pattern is quite robust against small external noise. More results are shown in the SI \cite{supplementary}.
\section{Discussions}
The topology of multiple interconnected fixed points in a global potential landscape subject to nonlinearity and random fluctuations was discussed in this work. The stochastic decomposition provides new insights into the dynamics near stationary points. From very general considerations, we predict the existence of vortex-like limit cycles near stationary solutions when the dynamics has a significant transverse component (large $\vec{Q}$ in Eq.~(\ref{matrix0})). This prediction agrees with numerical simulations, and it explains and reproduces in detail the VB mode in the SKS equation. The limit cycles appear for certain values of the control parameter when the strength of the random fluctuations is sufficiently small.
These intriguing phenomena, which are generic in out-of-equilibrium nonlinear stochastic systems, may be useful for increasing our understanding of vorticity and turbulence in related systems. In addition to problems of natural origin, artificial ones such as DNN fall into this class; for example, the statistical mechanics of deep learning \cite{doi:10.1146/annurev-conmatphys-031119-050745} and pattern formation in semantic development \cite{Saxe11537} are very similar to the stochastic dynamics studied here. Even though stochastic gradient descent in the learning process usually explicitly uses a cost function, a large anisotropy in the noise spectrum leads to a different canonical potential by the same decomposition used here, cf. \cite{chaudhari2018stochastic}. This results in limit cycles \cite{chaudhari2018stochastic} and an unusual inverse Einstein relation \cite{Fenge2015617118} near local minima. Extension and application of the ideas and methods introduced here therefore seem worthy of further study.
\begin{acknowledgments}
This work was supported in part by the National Natural Science Foundation of China No. 16Z103060007 (PA). JMK thanks the Shanghai Center for Quantitative Life Sciences and Shanghai University for their hospitality while a portion of this work was begun.
\end{acknowledgments}
\section{Fourier Transform}\label{sec4}
Numerical analysis of the stabilized Kuramoto-Sivashinsky (SKS) equation is most conveniently carried out in Fourier space. Following earlier work \cite{Chen23227}, we consider a one dimensional periodic lattice of length $L=Nh$ and approximate it by an {\it even} number $N$ of points with grid spacing $h$. Define the discrete forward and inverse Fourier transforms of the field $u(x)$ on a set of $N$ discrete points, $x_{n}=nh$, as
\begin{eqnarray} \label{FT1}
u_{k} & = & h\sum_{n}\,u(x_{n})\,e^{-i\frac{2\pi k\,n}{N}} = \text{fft}[u(x_{n})h](k)\\
\label{FT2}
u(x_{n}) & = & \frac{1}{L}\sum_{k}\,u_{k}\,e^{i\frac{2\pi k\,n}{N}}\;\;\;\;\;\; = \text{ifft}[u_{k}/h](n).
\end{eqnarray}
The operations $\text{fft}[\dots]$ and its inverse $\text{ifft}[\dots]$ are identical to the fast Fourier transform routines in MatLab$^{\copyright}$ with indices
\begin{eqnarray*}
n &=& 0,1, \dots, N-1 \\
k &=& 0,1,\dots,N/2-1;-N/2,\dots,-1
\end{eqnarray*}
and periodic boundary conditions $x_{N} \equiv x_{0}$ are imposed. Differential operators in $x$ space are algebraic in $k$ space so that the first derivative with respect to $x$ maps to
\begin{equation}
\partial_{x} \rightarrow d_k = (e^{i\kappa h} -e^{-i\kappa h})/2h = d^{\ast}_{-k} = -d_{k}^{\ast}
\end{equation}
where $\kappa=2\pi k/L$ is the wavenumber of the $k^{{\rm th}}$ Fourier component (so that $\kappa h = 2\pi k/N$). The linear operator $\hat{L}(x)$ of Eq.~\cmmnt{(\ref{linear0})}(2) maps to
\begin{multline}\label{L_k}
\hat{L}(x) \rightarrow L_{k} = \alpha + (e^{i\kappa h} -2 + e^{-i\kappa h})/h^{2}
+ (e^{2i\kappa h} - 4e^{i\kappa h} +6 -4e^{-i\kappa h} + e^{-2i\kappa h})/h^4.
\end{multline}
Note that equation numbers without the prefix ``S'' in the SI refer to the main paper \cite{mainwork}.
We define the Fourier transform of a square matrix $M(x_{n},x_{n'})$ by discrete forward and inverse transforms in $k$ space as
\begin{eqnarray}\label{FTmatrik}
M_{k,-k'} &=& h^{2}\sum_{n,n'}\,M(x_{n}, x_{n'})\,e^{-i\frac{2\pi(kn - k'n')}{N}} \\
\label{FTmatrikinv}
M(x_{n}, x_{n'}) &=& \frac{1}{L^{2}}\sum_{k,k'}M_{k,k'}\,e^{i\frac{2\pi(kn - k'n')}{N}}.
\end{eqnarray}
Note that, in the transform, the column index $k'$ has the opposite sign to that of the row index $k$ in the exponents. In terms of the two-dimensional fast Fourier transform MatLab$^{\copyright}$ routines $\text{fft2}[\dots]$ and its inverse $\text{ifft2}[\dots]$, Eqs.~(\ref{FTmatrik}) and (\ref{FTmatrikinv}) are
\begin{eqnarray}\label{FTmatrik1}
M_{k,-k'} &=& \text{fft2}[M(x_{n}, x_{n'})\,h^{2}], \\
\label{FTmatrix}
M(x_{n}, x_{n'}) &=& \text{ifft2}[M_{k,-k'}/\,h^{2}].
\end{eqnarray}
In discrete $x$ space, the product of two matrices is weighted by $h$ and the unit matrix $\vec{I}$ has elements $I_{n,n'}=\delta_{n,n'}/h$. In discrete $k$ space the product of two matrices is weighted by $1/L$ so that
\begin{equation}\label{weight}
\begin{array}{ccc}
\vec{K} = \vec{M}\,\vec{N} & \;\Rightarrow \; & K_{kk'} =\frac{1}{L} \sum_{k''}M_{kk''}N_{k''k'}
\end{array}
\end{equation}
and the unit matrix $\vec{I}$ becomes $I_{kk'} = L\delta_{kk'}$. Note that matrix elements in Fourier space are complex numbers so that the Fourier transform of a real symmetric matrix in $x$ space is a Hermitian matrix in $k$ space. The transpose operation is followed by complex conjugation as usual. Sometimes, it is advantageous to work in $x$ space where the eigenvectors of real symmetric and anti-symmetric matrices are real.
\section{Stationary States of the SKS equation}
To obtain stationary periodic solutions of the SKS equation in the noiseless limit, we use the semi-implicit algorithm of \cite{Saxena_Kosterlitz}. The Fourier transform $N_{k}(t)$ of the nonlinear term $[u_{x}(x, t)]^{2}$ is found by combining the inverse and forward Fourier transforms as
\begin{equation*}
N_{k}(t) = \text{fft}[\,(\text{ifft}[d_{k}u_{k}(t)/h])^{2}h\,](k),
\end{equation*}
so that, at time $t+\Delta t$
\begin{equation}\label{SKSF}
u_{k}(t + \Delta t) = \frac{u_{k}(t) + \Delta t N_{k}(t)}{1 + \Delta t L_{k}}.
\end{equation}
In the noiseless limit, a stationary state is achieved numerically in ${\cal N}_{t}\sim 10^{6}$ iterations with a fairly small time step, $\Delta t\sim 3\times10^{-4}$; the final state depends on the choice of initial state. If the initial state is dominated by a single wavenumber, $k=\pm k_{0}$ or $\kappa_{0} = 2\pi k_{0}/L$, the final state consists of components $k_{n} = \pm nk_{0}$, $n=0,1,2,\dots$, where the harmonics decay rapidly for $n>1$, and it has the same periodicity as the initial state. When $ \alpha < 0.25$, there is a band of stationary states clustered around the critical (linearly fastest growing) wavenumber $\kappa_{c} = 1/\sqrt{2}$, which corresponds to $\min_{k} (L_{k})$.
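A minimal sketch of this relaxation (our own illustration; \texttt{d\_k} and \texttt{L\_k} as constructed above) is:
\begin{verbatim}
dt = 3e-4;  Nt = 1e6;  k0 = 15;
x  = (0:N-1).'*h;
u  = 0.1*cos(2*pi*k0*x/L);           % single-wavenumber initial state
uk = fft(u*h);
for it = 1:Nt
    ux = real(ifft(d_k.*uk/h));      % u_x(x,t)
    Nk = fft((ux.^2)*h);             % nonlinear term N_k(t)
    uk = (uk + dt*Nk)./(1 + dt*L_k); % semi-implicit update, Eq. (SKSF)
end
u_stat = real(ifft(uk/h));           % stationary profile
\end{verbatim}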
\section{Constructing the Global Potential}
To avoid confusion, we drop subscripts indicating the order in powers of $\tilde{u}$ when in Fourier space. The Fourier transform of $\vec{A}_{0}$ of Eq.~\cmmnt{(\ref{transfer_a})}(17) is
\begin{eqnarray}\label{A_k}
\begin{array}{cc}
A_{kk'}=2d_{k-k'}d_{k'}^{\ast}\,a_{k-k'}; & a_{k} = \text{fft}[a(x)h](k).
\end{array}
\end{eqnarray}
When $(k-k')\notin[-N/2,N/2-1]$, we replace $(k-k')\rightarrow (k-k'\pm N)$ so that $(k-k')$ lies inside the defined range.
Note that there is an error in Eq.~(34) of our earlier work \cite{Chen23227} where there is an extra factor $h$. The consequences of this error are addressed and corrected in this SI.
$\vec{Q}_{0}$ is found numerically by solving Eq.~\cmmnt{(\ref{DeltaQ2_a})}(19) using a standard MatLab$^{\copyright}$ routine, $X = \text{lyap}(A,Q)$, which solves the continuous Lyapunov equation \cite{Jbilou2006,Hached2018}, $AX+XA^{T}+Q=0$. When implementing the code, special care must be taken in Fourier space with the $1/L$ weight associated with the product of two matrices in Eq.~(\ref{weight}). For example, the inverse of a matrix $\vec{C}$ is calculated as $L^{2}\,\text{inv}(C)$ by the matrix inversion routine of MatLab$^{\copyright}$, so that $\left[\vec{I} + \vec{Q}_{0}\right]^{-1}\Rightarrow L\,\text{inv}(I + Q_{0}/L)$. Note that the weight in $x$ space is $h$, so that $L\rightarrow 1/h$ when we transform matrices back to $x$ space. Eigenvalues and eigenmodes of $\vec{R}_{0}$ in Eq.~\cmmnt{(\ref{DeltaQPhi1_a})}(18) are computed by another standard MatLab$^{\copyright}$ routine, $[V,D] = \text{eig}(A)$. The pair-wise potential differences between the stationary states are found by using these results and Eq.~\cmmnt{(\ref{DeltaPhi4})}(22).
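In code these steps take the following schematic form (a sketch only: \texttt{A0k}, \texttt{D0k} and \texttt{R0k} stand for the Fourier-space matrices entering Eqs.~(17)-(19), and the rescaling by $L$ must be checked against the precise weighted form of Eq.~(19)):
\begin{verbatim}
Q0 = lyap(A0k/L, D0k);          % 1/L product weight absorbed into A
IQinv = L*inv(eye(N) + Q0/L);   % [I + Q0]^{-1} with k-space weights
[V, D] = eig(R0k);              % eigenmodes and eigenvalues of R0
\end{verbatim}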
\subsection{Eigenmodes, Eigenvalues, and Topology}
With the {\em corrected} expression for the Fourier transform $\vec{A}_{0}$ of Eq.~(\ref{A_k}), we can correct some observations in our earlier work \cite{Chen23227}. It was stated there that the number of unstable eigenmodes of $\vec{R}_{0}$ is not changed by the nonlinearity for every stationary state. In fact, only some of these become unstable when the wavenumber $\kappa\neq \kappa_{c}$, so that the size of the unstable region agrees with that predicted by the Eckhaus instability, which is a secondary instability of the periodic stationary states \cite{PhysRevE.49.166,PhysRevE.76.017204}. The remaining observations in \cite{Chen23227} remain correct. Modes with eigenvalues of small magnitude are found to lead to another stationary state inside the allowed range of $k$. This correspondence is identified by the dominant Fourier components, as summarized in TABLE~\ref{Tab1} (which should replace Table~1 in \cite{Chen23227}). Note that a factor $1/L$ must be applied to the listed values, as in Eq.~(\ref{weight}), when comparing them to those of $L_{k}$ of Eq.~(\ref{L_k}). Hence the low-lying eigenmodes with small negative eigenvalues are seen to connect stationary states, so that the topology of the web of inter-connected fixed points remains robust. This phenomenon leads to useful information about the global potential landscape, as discussed below.
\begin{table}[!h]
\centering
\begin{tabular}{|c|c|c|c|c|c|c|c|}
\hline
\begin{tabular}{c} $k$-index \\of largest\end{tabular} &
\multicolumn{7}{c|}{\begin{tabular}{ll}top row: & stationary state by wavenumber\\columns: & unstable eigenvalues below the state\end{tabular}} \\
\cline{2-8}
amplitude & $11$ & $12$ & $13$ & $14$ & $15$ & $16$ & $17$ \\
\hline
11
& 0.012 & 3.606 & 3.810 & 3.283 & 2.650 & 0.109 & -3.967 \\
\hline
12
& -1.664 & 0.000 & 26.524 & 29.903 & 1.981 & -0.352 & -4.584 \\
\hline
13
& -3.104 & 0.032 & 0.000 & 29.709 & 1.337 & -0.638 & -4.912 \\
\hline
14
& -3.583 & 0.307 & 0.745 & 0.000 & 0.626 & -0.657 & -4.740 \\
\hline
15
& -3.022 & 0.935 & 1.750 & 1.001 & 0.000 & -0.414 & -3.835 \\
\hline
16
& -1.400 & 1.931 & 4.736 & 2.469 & 32.090 & 0.000 & -2.097 \\
\hline
17
& 0.924 & 3.283 & 5.982 & 4.340 & 33.498 & 32.479 & 0.001 \\
\hline
\end{tabular}
\protect\caption{List of low-lying eigenmodes of $\vec{R}_{0}$ labeled by index of the largest component in $k$-space (1st column) and eigenvalues in columns below stationary states listed by wavenumber in units of $2\pi/L$. Conjugate $\{-k\}$ modes with identical eigenvalues are not shown. Data for $\alpha = 0.20$, $L=128$, $h=0.125$. This corrects Table~1 in the earlier work \cite{Chen23227}.}
\label{Tab1}
\end{table}
\subsection{Least Squares Polynomial Fitting}
With this novel topology in mind, we first evaluate the pairwise potential difference between two stationary states from Eq.~\cmmnt{(\ref{DeltaPhi4})}(22).
As a first attempt, we use the potential differences between pairs of neighboring stationary states to map out the potential $\Phi(\kappa)$ incrementally over the whole range of $\kappa$, assuming that this potential exists. However, this leads to a monotonically increasing potential as the wavenumber $\kappa$ increases, which implies that we must consider the topology of the set of all states. Note that Fig.~3 in \cite{Chen23227} is not accurate and is no longer meaningful.
There are many paths in state space connecting any chosen pair of fixed points and each path involves different intermediate states. Every path yields a different potential difference between the same two states from the pair-wise potential formula so that this procedure does not yield a potential but rather an action. The {\it true} potential difference must be path independent according to Eqs.~\cmmnt{(\ref{DeltaQ})}(12) and \cmmnt{(\ref{DeltaPhi1})}(13) and one way to overcome this problem of path dependence is to weight the paths by a fitting procedure. We assume the existence of a potential which is parameterized by a low-order polynomial in the wavenumber $\kappa$ and optimize the coefficients over all pair-wise potential differences. The resulting potential is insensitive to the order of the polynomial for $4^{\rm th}$, $5^{\rm th}$ and $6^{\rm th}$ order fits since these are almost identical, thus validating the fitting procedure. This new result, shown in Fig.~\ref{SFig1}, replaces Fig.~4 of \cite{Chen23227}. Note that the number of nearest neighbors used in \cite{Chen23227} as a fitting parameter is not needed here.
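A sketch of this fitting procedure (our own illustration: \texttt{kappas} lists the stationary-state wavenumbers, \texttt{dPhi(i,j)} the pairwise differences obtained from Eq.~(22), and \texttt{pairs} the pairs of connected states; the constant term drops out of all differences and is left undetermined):
\begin{verbatim}
Pord = 4;                              % polynomial order of the fit
npair = size(pairs,1);
A = zeros(npair,Pord); b = zeros(npair,1);
for m = 1:npair
    i = pairs(m,1); j = pairs(m,2);
    A(m,:) = kappas(i).^(1:Pord) - kappas(j).^(1:Pord);
    b(m)   = dPhi(i,j);
end
c  = A\b;                              % least-squares coefficients
kk = linspace(min(kappas),max(kappas),200).';
Phi_fit = (kk.^(1:Pord))*c;            % potential up to a constant
\end{verbatim}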
\begin{figure}[!ht]
\begin{tabular}{ccc}
\includegraphics[width=0.3\textwidth]{Sfigure1a.eps} &
\includegraphics[width=0.3\textwidth]{Sfigure1b.eps} &
\includegraphics[width=0.3\textwidth]{Sfigure1c.eps} \\
(a) $ \alpha = 0.22 $ & (b) $ \alpha = 0.20$ & (c) $ \alpha = 0.17$
\end{tabular}
\protect\caption{Global potentials vs. wavenumber ($\kappa$) with $L = 512$ from a $4^{\rm th}$ order polynomial fitting over the entire topology of the stationary states for several values of control parameter $\alpha$. Arrows mark the most probable wavenumber for each $\alpha$.}\label{SFig1}
\end{figure}
\section{Stochastic Simulations}
\subsection{Adding Noise to the SKS Equation}
The global potential landscape can be checked by direct, brute-force stochastic simulations. Of necessity, these simulations must be done with rather strong noise so that {\it all} stationary states have some finite probability of being visited during the simulation. With noise of strength $\epsilon$, the evolution Eq.~(\ref{SKSF}) is modified to
\begin{equation}\label{SKSF1}
u_{k}(t + \Delta t) = \frac{u_{k}(t) + \Delta t N_{k}(t) + \sqrt{2\epsilon \Delta t N\,h}\,\xi_{k}(t)}{1 + \Delta t L_{k}},
\end{equation}
where $\xi_{k}(t)$ is the Fourier transform of a Gaussian distributed random variable with zero mean, $\langle\xi(t)\rangle=0$, and uncorrelated in time and space so that
\begin{equation}\label{SKSF2}
\langle \xi_{k}(t)\xi^{\ast}_{k'}(t')\rangle = \delta_{k,k'}\delta_{t,t'}.
\end{equation}
Technically, for every $k\neq 0$, we use two sets of real normal noises $\xi^{(1)}_{k}, \xi^{(2)}_{k}$ so that $\xi_{k} = [\xi^{(1)}_{k}+i\,\text{sgn}(k)\,\xi^{(2)}_{k}]/\sqrt{2}$. Note that our Fourier transform convention of Eq.~(\ref{FT1}) has an overall factor $h$, so that the same factor appears in the numerator of Eq.~(\ref{SKSF1}), rather than in the denominator of the noise term (cf. \cite{Saxena_Kosterlitz}).
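In code, the Hermitian noise and one noisy step of Eq.~(\ref{SKSF1}) read (our own sketch; we also take the edge mode $k=-N/2$ real, a convention left implicit in the text):
\begin{verbatim}
g1 = randn(N/2-1,1);  g2 = randn(N/2-1,1);
xik             = zeros(N,1);
xik(1)          = randn;                 % k = 0
xik(N/2+1)      = randn;                 % k = -N/2
xik(2:N/2)      = (g1 + 1i*g2)/sqrt(2);  % k = 1,...,N/2-1
xik(N:-1:N/2+2) = (g1 - 1i*g2)/sqrt(2);  % Hermitian partners, k < 0
epsn = 0.3;                              % noise strength
uk = (uk + dt*Nk + sqrt(2*epsn*dt*N*h)*xik)./(1 + dt*L_k);
\end{verbatim}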
\subsection{Probability Distribution of Occupancies}
In a simulation with noise, the probability of the system being in a given stationary state is a delicate question because, by definition, a stationary state is a solution of the {\it noiseless} SKS equation, and in a simulation without noise the final stationary state is determined by the initial state of the system. To address this issue, we take a set of snapshots of the system evolving with external noise at periodic time intervals. Each snapshot is then used as an initial state which evolves {\it deterministically} for a time $\Delta T_{s}$ to some stationary state; the outcome depends on the choice of $\Delta T_{s}$. By performing this for every snapshot, we can define the probability distribution $P_{s}(\kappa)\propto \exp[-\Phi_{s}(\kappa)/\epsilon]$ (but cf. below) of finding the system in the stationary state with $\kappa = 2\pi |k|/L$. This scheme is used to verify the theoretical predictions from the global potential landscape point of view.
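Schematically (our own sketch; \texttt{snapshots} holds the recorded $k$-space states and \texttt{DTs} the relaxation time $\Delta T_{s}$):
\begin{verbatim}
DTs = 50;  nrel = round(DTs/dt);  nsnap = size(snapshots,2);
kap_s = zeros(nsnap,1);
for s = 1:nsnap
    uk = snapshots(:,s);
    for it = 1:nrel                        % noiseless relaxation
        ux = real(ifft(d_k.*uk/h));
        uk = (uk + dt*fft((ux.^2)*h))./(1 + dt*L_k);
    end
    [~,j] = max(abs(uk(2:N/2)));           % dominant component, k = j
    kap_s(s) = 2*pi*j/L;
end
% P_s(kappa): normalised histogram of kap_s; Phi_s = -eps*log(P_s)
\end{verbatim}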
The most stable state is expected to correspond to a minimum of the potential. The global potential $\Phi(\kappa)$ obtained from the stochastic decomposition is compared to $\Phi_{s}(\kappa)$ from the simulations. $P(\kappa)$ should be a standard Boltzmann distribution \cite{Ao_Thouless}, so one expects $P(\kappa) = P_{0}(\kappa)\exp[-\Phi(\kappa)/\epsilon]$, where $\epsilon$ is the noise strength of Eq.~\cmmnt{(\ref{diffusion})}(3) and $P_{0}(\kappa)$ varies slowly with $\kappa$. We now compare $\Phi_{s}(\kappa)$ with $\Phi(\kappa)$.
It turns out that $\Phi_{s}(\kappa)$, shown in Fig.~\ref{SFig2}, agrees with $\Phi(\kappa)$ both in the position of the minimum and in shape, provided the noise strength $\epsilon$ is replaced by a rescaled effective noise strength $\tilde{\epsilon} = \sqrt{\epsilon/\Delta T_{s}}$. Thus $\Phi(\kappa)$ correctly predicts the most stable state and the shape of the potential up to an overall scale factor. This holds independently of the individual values of $\epsilon$ and the relaxation time $\Delta T_{s}$, which supports the theoretical analysis, although neither the theoretical nor the numerical analysis is able to predict the absolute scale of the global potential landscape. This discrepancy remains to be investigated.
\begin{figure}[!ht]
\begin{tabular}{cc}
\includegraphics[width=0.245\textwidth]{Sfigure2a.eps}
\includegraphics[width=0.245\textwidth]{Sfigure2b.eps} &
\includegraphics[width=0.245\textwidth]{Sfigure2c.eps}
\includegraphics[width=0.245\textwidth]{Sfigure2d.eps} \\
(a) $\epsilon = 0.8$ & (b) $\epsilon = 0.3$ \\
\includegraphics[width=0.245\textwidth]{Sfigure2e.eps}
\includegraphics[width=0.245\textwidth]{Sfigure2f.eps} &
\includegraphics[width=0.245\textwidth]{Sfigure2g.eps}
\includegraphics[width=0.245\textwidth]{Sfigure2h.eps} \\
(c) $ \epsilon = 0.12 $ & (d) $ \epsilon = 0.08$
\end{tabular}
\protect\caption{Simulated potentials $\Phi_{s}(\kappa)$ (left), obtained from the probability distribution $P_s(\kappa)$ with $\tilde{\epsilon} = \sqrt{\epsilon/\Delta T_{s}}$ (right), for $L = 512$, $h = 0.32$, $\alpha = 0.20$, and different noise strengths $\epsilon$.}\label{SFig2}
\end{figure}
\section{Vortex Oscillation and Limit Cycle}
Vortex like oscillations and limit cycle motion of the eigenstates in $\kappa$ space are predicted from general considerations based on the stochastic dynamics, as discussed in the main work \cite{mainwork}, and are observed near stationary points. The matrix $\vec{Q}$ is found from Eq.~\cmmnt{(\ref{DeltaQ2_a})}(19) in Fourier space (we drop the subscript 0 for simplicity) and, to obtain real matrices, we work in coordinate space by using the transform of Eq.~(\ref{FTmatrix}). The eigenmodes and eigenvalues of $\vec{Q}$ are computed by a standard routine in MatLab$^{\copyright}$, $[V,E] = \text{eig}(A)$, which gives pairs of pure imaginary eigenvalues in the matrix $E$. To obtain real eigenvalues in the form $q_{i}\,(i\bvec{\sigma}_{y})$, i.e., the real $2\times 2$ antisymmetric matrices used in the main work \cite{mainwork}, we perform a rotation of $V$ by $Y=V*P$, where $P$ consists of $N/2$ diagonal $2\times 2$ blocks $(\vec{1}-i\,\bvec{\sigma}_{x})/\sqrt{2}$, with $\bvec{\sigma}_{x}$ and $\bvec{\sigma}_{y}$ the Pauli matrices. The eigenvectors obtained in this way form the basis for Eq.~\cmmnt{(\ref{lc0})}(25), labelled by the eigenvalues $\{q_{i}\}$ in descending order as discussed in Eqs.~\cmmnt{(\ref{matrix1})}(23)-\cmmnt{(\ref{lc0})}(25). Each of these pairs constitutes a $2D$ subspace in which the system can have circular oscillations about a stationary state.
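In code (our own sketch; \texttt{Qk} is the Fourier-space matrix obtained from Eq.~(19), and the columns of \texttt{V} must be ordered so that each conjugate pair $(+iq_{i},-iq_{i})$ is adjacent, which may require sorting the output of \texttt{eig}):
\begin{verbatim}
Qx = real(ifft2(Qk/h^2));         % Q in x space, real antisymmetric
Qa = (Qx - Qx.')/2;               % enforce antisymmetry numerically
[V, E] = eig(Qa);                 % eigenvalues in pairs +-i q_i
blk = [1, -1i; -1i, 1]/sqrt(2);   % (1 - i sigma_x)/sqrt(2)
P   = kron(eye(N/2), blk);        % N/2 diagonal 2x2 blocks
Y   = V*P;                        % P'*E*P: real blocks q_i*(i sigma_y)
\end{verbatim}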
The first few pairs in each steady state are the most likely to exhibit this vorticity. We evolve the system from an initial state with a finite amplitude of such a pair, which is in turn a small perturbation on the underlying steady state and quasi periodic oscillations or vortex motions around some fixed points are indeed observed. This behavior can occur at both stable and unstable fixed points. In the case of a fixed point which is unstable in both dimensions of the subspace, the oscillation moves away from the fixed point, leading to the possibility of a limit cycle when the nonlinearity becomes sufficiently large. An example of this is discussed in the main work \cite{mainwork} and more examples are discussed below.
\subsection{Typical Oscillations and Regions with Vorticity}
A typical vortex like motion is shown in Fig.~\ref{SFig3} (a),(b) near a stationary periodic state inside the Eckhaus stable region, with $L = 512$, $h = 0.32$, $\alpha = 0.20$ and $\kappa = 0.6995$. Motion in the $15^{\rm th}$ subspace is described by Eq.~\cmmnt{(\ref{lc0})}(25) and we impose a small initial deviation from the stationary state in the direction of the eigenvector by $\tilde{\vec{u}}_{0}=\pm\vec{e}_{15\sigma}$ ($\sigma=1, 2$) in Fourier space. Each state then evolves according to Eq.~(\ref{SKSF}) and the state vector is projected back onto the $i^{{\rm th}}$ subspace by
${\rm{P}}_{\sigma}(t) = \vec{e}^{\dag}_{i\sigma}\cdot\tilde{\vec{u}}(t)$ (we relabel $\sigma$ as $x,y$ for convenience). When the exponential damping factors on the trajectories are removed, quasi circular motions appear, as shown in Fig.~\ref{SFig3} (b). However, at high wavenumbers, shown in Fig.~\ref{SFig3} (c) for the $1^{\rm st}$ subspace, the vorticity disappears, so that trajectories in this subspace converge to the stable state but {\it without} vortex like circulation.
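The projections themselves can be computed as follows (our own sketch; \texttt{uks} is the stationary state in $k$ space, \texttt{e1}, \texttt{e2} the chosen eigenpair in $x$ space with the $x$-space weight $h$ in the inner products, and normalising by the radius is one simple way to strip the exponential damping):
\begin{verbatim}
i_sub = 15;
e1 = Y(:,2*i_sub-1);  e2 = Y(:,2*i_sub);
uk = uks + fft(0.01*real(e1)*h);        % small kick off the fixed point
nT = 2e4;  Px = zeros(nT,1);  Py = zeros(nT,1);
for it = 1:nT
    ux = real(ifft(d_k.*uk/h));
    uk = (uk + dt*fft((ux.^2)*h))./(1 + dt*L_k);
    du = real(ifft((uk - uks)/h));      % deviation in x space
    Px(it) = h*real(e1'*du);  Py(it) = h*real(e2'*du);
end
r = hypot(Px,Py);  plot(Px./r, Py./r);  axis equal
\end{verbatim}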
\begin{figure}[!ht]
\begin{tabular}{ccc}
\includegraphics[width=0.33\textwidth]{Sfigure3a.eps} &
\includegraphics[width=0.33\textwidth]{Sfigure3b.eps} &
\includegraphics[width=0.33\textwidth]{Sfigure3c.eps} \\
(a) $\kappa = 0.6995$ & (b) $\kappa = 0.6995$ & (c) $\kappa = 0.7363$
\end{tabular}
\protect\caption{Trajectories ${\rm{P}}_{\sigma}(t) = \vec{e}^{\dag}_{i\sigma}\cdot\tilde{\vec{u}}(t)$ in 2D dimensional subspace for $L = 512$, $h = 0.32$, $\alpha = 0.20$ with different wavenumber. (a) Small deviations from a stable state decay back to the origin with vortex like circulation for $\kappa = 0.6995$. (b) Vortex like motion seen by removing attenuation from the trajectories with the same parameters as (a). (c) Small deviations from a stable state decay to zero {\it without} vortex like circulation for $\kappa = 0.7363$.}\label{SFig3}
\end{figure}
\begin{figure}[!ht]
\includegraphics[width=0.5\textwidth]{Sfigure4.eps}
\protect\caption{Phase diagram of the observed dynamics. In a vacillating-breathing (VB) mode there is quasi periodic oscillation far from the fixed point, namely limit cycle motion. Within the stable region, the part below (and including) the green dotted-square line shows stable vortex like circulation, which is absent at large wavenumber.}\label{SFig4}
\end{figure}
We next examine parameter regions which exhibit vorticity. We also investigate numerically the degree of isolation of pairs of eigenmodes from Eq.~\cmmnt{(\ref{overlap})}(28) and Fig.~\ref{SFig4} gives an overall picture of this. In general, vorticity exists to the left of the dotted-square line.
It is of interest to note that this dividing line coincides with the minima of mode isolation. The degree of isolation or the pair approximation improves as $\alpha$ decreases, even for unstable states. The same happens on the right-hand side of the line, although vorticity is absent for large wavenumbers. At present, there is no analytical understanding of this.
\subsection{Vorticity Driven VB Modes and Pattern Drifting}
A vacillating-breathing (VB) mode \cite{Kerszberg1983,Obeid_Kosterlitz,Cross2016,Saxena_Kosterlitz}, in which each cell oscillates out of phase by $\pi$ with its neighbors, occurs around $\alpha\cong 0.1$ and $\kappa < \kappa_{c}$. This is the ideal region to show that the VB mode corresponds to the vortex motion discussed earlier, and also to study the mechanism by which the vorticity drives the phase drifting of a periodic stationary state. A typical scenario of VB oscillation is shown in Fig.~\ref{SFig5}.
\begin{figure}[!ht]
\begin{tabular}{c}
\includegraphics[width=0.8\textwidth]{Sfigure5a.eps} \\
(a)
\end{tabular}
\begin{tabular}{cc}
\includegraphics[width=0.4\textwidth]{Sfigure5b.eps} &
\includegraphics[width=0.4\textwidth]{Sfigure5c.eps} \\
(b) & (c)
\end{tabular}
\protect\caption{Vortex motions in VB oscillations decomposed into the eigenstates of $\vec{Q}_{0}$ with $L = 512$, $h = 0.32$, $\alpha = 0.12$ and cellular wavenumber $\kappa = 0.6136$. (a) Spatiotemporal portrait of the oscillations, showing no apparent vorticity in the dynamics. (b) Decaying circulation in the stable $3^{\rm rd}$ subspace. (c) Growing circulation in the unstable $5^{\rm th}$ subspace. Here ${\rm{P}}_{\sigma}(t) = \vec{e}^{\dag}_{i\sigma}\cdot\tilde{\vec{u}}(t)$.}\label{SFig5}
\end{figure}
In a VB mode, when an unstable vortex like circulation increases in size, this growth eventually ceases because of the nonlinearity and decays at some later time. We find that, simultaneously, the periodic cellular structure itself undergoes phase drifting, indicating that the drift is driven by the vorticity. In fact, closer examination shows that, as the cells drift, the overlap of the growing eigenstate with a shrinking eigenstate of the drifted stationary state increases; when the drift phase reaches $\pi$, the overlap is maximal. Clearly, the shrinking mode causes the vortex circulation to return to zero. Note that we only look at one pair of modes here; other modes also act, so that the drifting continues and forms a limit cycle when the phase reaches $2\pi$. The complete dynamics is shown in Fig.~\ref{SFig6}. The system is initially in the $3^{\rm rd}$ subspace and evolves by Eq.~\cmmnt{(\ref{lc0})}(25) with $L = 512$, $h = 0.32$, $\alpha = 0.12$, and $\kappa = 0.6381$. The projection onto the subspace, ${\rm{P}}_{\sigma}(t) = \vec{e}^{\dag}_{3\sigma}\cdot\tilde{\vec{u}}(t)$ ($\sigma=x,y$ for convenience), changes with time. From the perspective of the cross section, this is a vortex like motion in time, with the time step set to ${\rm{d}}t = 0.003$ and data recorded every 400 iterations. Saturation clearly sets in when the circulation reaches a certain magnitude, and the motion becomes a quasi limit cycle. As mentioned above, this happens while the cellular structure itself undergoes phase drifting in coordinate space. The behavior can be understood from the overlap, $P$, of the 4-dimensional degenerate subspace [$\vec{e}_{3\sigma}$ $\vec{e}_{4\sigma}$] with the stable and unstable eigenmodes of the complete $(\vec{D}+\vec{Q})\vec{R}$ matrix calculated at the respective stationary points with phase drift $\Delta\phi$. The overlaps with the stable and unstable eigenmodes are found to oscillate periodically with $\Delta\phi$.
\begin{figure}[!ht]
\begin{tabular}{cc}
\includegraphics[width=0.4\textwidth]{Sfigure6a.eps} &
\includegraphics[width=0.4\textwidth]{Sfigure6b.eps} \\
(a) & (b) \\
\includegraphics[width=0.4\textwidth]{Sfigure6c.eps} &
\includegraphics[width=0.4\textwidth]{Sfigure6d.eps} \\
(c) & (d)
\end{tabular}
\protect\caption{Vorticity driven VB oscillation explored in detail. (a) Periodic cell drifting in coordinate space for $u(x)$. (b) Vortex like dynamics in a two dimensional subspace. (c) The time evolution of the projection of the state vector onto an initially unstable subspace. (d) The overlap of the 4D degenerate subspace [$\vec{e}_{3\sigma}$ $\vec{e}_{4\sigma}$], as in Eq.~\cmmnt{(\ref{lc0})}(25), with the corresponding unstable (solid blue line) and stable (dotted red line) eigenmodes of the full $N\times N$ matrix $(\vec{D}+\vec{Q})\vec{R}$.}\label{SFig6}
\end{figure}
\section{Data Availability}
The necessary formulae and the steps to perform the computations are detailed in the main work and this Supplementary Information. No specific platform or special-purpose software package is needed. The data used to justify the results and conclusions of this work are presented in full within the body and Supplementary Information of the manuscript.
The code used in this paper is available on request from [email protected].
\clearpage
\section{Introduction}
For several decades wave equations have attracted a great deal of interest and have been studied extensively in the literature. They serve as models for various physical phenomena, and in the mathematical literature the study of wave equations is the natural first step towards shedding light on the investigation of hyperbolic partial differential equations.
We are interested in the time evolution of solutions to wave equations with various nonlinearities for low-regularity initial data. In this investigation it is important to control the nonlinearity in terms of the initial data. In other words, one has to prove that the presence of the nonlinearity amounts to nothing but a small perturbation. Such a perturbative method, for wave equations and even more general dispersive equations, is a typical approach to the study of Cauchy problems. The first well-known tool is the so-called Strichartz estimate \cite{keeltao,stri}
$$
\|e^{-it|\nabla|}P_1f\|_{L^q_tL^r_x(\mathbb R^{1+n})} \lesssim \|P_1f\|_{L^2_x}
$$
for any function $f$. Here $P_1f$ is the projection onto unit frequencies and $(q,r)$ is a suitable admissible pair. However, the linear estimate is not sufficient to control all of the frequency interactions between products of homogeneous solutions, especially when one is concerned with the well-posedness problem for low-regularity data. This problem requires one to consider delicately the following bilinear estimates
$$
\|e^{-it|\nabla|}f e^{-it|\nabla|}g\|_{L^2_tL^2_x} \lesssim \|f\|_{L^2_x}\|g\|_{L^2_x}.
$$
In fact, when the nonlinearity is of power type, nonlinear estimates reduce to bilinear estimates. Recently there has been a huge amount of progress on bilinear estimates of wave type in the works \cite{fosklai,klaima,klaima1,sleevar,tao,tao1,tao2,tataru,tataru1}, and the long-time behaviour of solutions to nonlinear wave equations, and even to more complicated systems such as the Maxwell-Klein-Gordon or the Yang-Mills equations, is well understood in the $(1+4)$-dimensional and higher-dimensional settings \cite{kriegersterbenztataru, kriegertataru,sterbenz2,ohtataru}.
However, global regularity is still open for most wave equations in low-dimensional settings such as $(1+3)$ or $(1+2)$ dimensions. At first glimpse this is obviously because of the weaker time decay of solutions in low dimensions: $\|e^{-it|\nabla|}P_1f\|_{L^\infty_x(\mathbb R^n)}\lesssim t^{-\frac{n-1}{2}}\|P_1f\|_{L^1_x}$. Moreover, at the nonlinear level, one can see that {\it resonant interactions} grow stronger as the spatial dimension decreases. Even further, when the nonlinearity possesses a singularity near the origin, one may encounter a more serious situation, since the singularity grows harsher in low dimensions. These problems raise the question of whether it is possible to establish global well-posedness and scattering for scale-invariant Sobolev data.
To overcome this difficulty we equip the Sobolev spaces with an extra weighted smoothness assumption with respect to the angular variables. Indeed, we invoke the infinitesimal rotation generators $\Omega_{ij}=x_i\partial_j-x_j\partial_i$. In the spirit of \cite{ster}, $\Omega_{ij}$ plays a crucial role in both the linear and the multilinear estimates. More precisely, one enjoys a significant improvement of the linear estimates. In the nonlinear estimates, the rotation operator helps to overcome the resonant interactions. Even more, the rotation can relax the harshness of the singularity, since the operator $\Omega_{ij}$ works very favourably in the low-output interactions. In this manner, it is possible to improve the bilinear estimates.
Now we turn to applications of the improved multilinear estimates on $\mathbb R^{1+3}$. We are concerned with a somewhat general class of quadratic nonlinear wave equations and Hartree-type nonlinear Dirac equations, which serve as {\it toy models} for several nonlinear wave and Dirac equations. The equations we present may seem {\it too primitive} at first glimpse; however, precisely because of the primitiveness and generality of a toy model, we can attack efficiently even more complicated systems, such as gauge-field-theoretic wave equations, which represent genuinely physical models.
\subsection{Quadratic nonlinear wave equations}
Firstly, we aim to investigate the global-in-time evolution of wave equations in $\mathbb R^{1+3}$ with quite a general quadratic nonlinearity given by
\begin{align}\label{main-wave}
\left\{
\begin{array}{l}
\Box u = |\nabla|^{-1}Q(\overline u,u), \\
(u,\partial_tu)|_{t=0} = (u_0,u_1),
\end{array}
\right.
\end{align}
where $u$ is a complex-valued function on $\mathbb R^{1+3}$ and $Q:(u,v)\mapsto Q(u,v)$ is a bilinear form which is a finite linear combination of the standard $Q$-type null forms
\begin{align*}
Q_{ij}(u,v) & = \partial_iu\partial_jv - \partial_ju\partial_iv, \, Q_{0}(u,v) = \partial_tu\partial_tv -\nabla u\cdot\nabla v,
\end{align*}
which give a cancellation by the angle between input frequencies.\footnote{In fact, the null form $Q_0$ gives a stronger cancellation, and we can overcome the singularity $|\nabla|^{-1}$ more easily by exploiting the $Q_0$ null form. However, for the generality of our result, we focus on the $Q_{ij}$ null form.} More precisely, the Fourier transform of $Q_{ij}(u,v)$ obeys
\begin{align*}
|\widehat{Q_{ij}(u,v)}|(\zeta) \lesssim \int_{\zeta=\xi+\eta}\angle(\xi,\eta)|\xi||\eta| |\widehat u(\xi)|\, |\widehat v(\eta)|\,d\xi d\eta.
\end{align*}
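For the reader's convenience, we record the elementary computation behind this bound: the symbol of $Q_{ij}$ is $\xi_i\eta_j-\xi_j\eta_i$, and
\begin{align*}
|\xi_i\eta_j - \xi_j\eta_i| \le |\xi\wedge\eta| = |\xi||\eta|\sin\angle(\xi,\eta) \lesssim \angle(\xi,\eta)|\xi||\eta|.
\end{align*}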
The wave equation \eqref{main-wave} has a scaling symmetry: if $u=u(t,x)$, $(t,x)\in\mathbb R^{1+3}$, is a solution of \eqref{main-wave}, then for any $\lambda>0$ the scaled function $\lambda^{-1}u(\lambda^{-1}t,\lambda^{-1}x)$ is also a solution of \eqref{main-wave}, and hence the scale-invariant Sobolev space for the initial data $(u_0,u_1)$ is $\dot H^\frac12 \times \dot H^{-\frac12}$, where $\dot H^s$ is the usual homogeneous Sobolev space. Now we define the angularly regular space $\dot H^s_\sigma$ by the norm $\|f\|_{\dot H^s_\sigma}=\|\langle\Omega\rangle^\sigma f\|_{\dot H^s}$, where $\langle\Omega\rangle^\sigma=(1-\Delta_{\mathbb S^2})^\frac\sigma2$ and $\Delta_{\mathbb S^2}$ is the Laplace-Beltrami operator on the unit sphere $\mathbb S^2\subset\mathbb R^3$. The inhomogeneous Sobolev space with angular regularity, $H^s_\sigma$, is defined in the obvious way. We state our first main result.
\begin{thm}\label{gwp-wave}
Let $\sigma=1$. Suppose that the initial datum $(u_0,u_1)\in \dot H^{\frac12}_\sigma\times\dot H^{-\frac12}_\sigma$ satisfies
$$
\|(u_0,u_1)\|_{\dot H^\frac12_\sigma\times\dot H^{-\frac12}_\sigma} = \|u_0\|_{\dot H^\frac12_\sigma}+\|u_1\|_{\dot H^{-\frac12}_\sigma}\ll1.$$
The Cauchy problem for the equation \eqref{main-wave} is globally well-posed and scatters to free solutions as $t\rightarrow\pm\infty$.
\end{thm}
\subsubsection{Application to the Maxwell-Klein-Gordon equations in the Coulomb gauge}
We would like to mention here briefly an application of Theorem \ref{gwp-wave}. The Maxwell-Klein-Gordon system is a physical model for the interaction of a spin $0$ particle with electromagnetic fields. We define the real-valued gauge potentials $A_\mu$, $\mu=0,1,\cdots,3$ on the Minkowski space $(\mathbb R^{1+3},\mathbf m)$, where the metric $\mathbf m$ is given by $\mathbf m = \textrm{diag}(-1,1,1,1)$. The covariant derivative is given by $\mathcal D_\mu = \partial_\mu+iA_\mu $. The electromagnetic field $F$ associated to the potential $A_\mu$ is defined by $F_{\mu\nu}=\partial_\mu A_\nu-\partial_\nu A_\mu$. Then the covariant form of the Maxwell-Klein-Gordon system presents
\begin{align}
\begin{aligned}
\partial_\mu F^{\mu\nu} &= -\textrm{Im}(\phi \overline{\mathcal D^\nu\phi}), \\
\mathcal D_\mu\mathcal D^\mu\phi &= 0,
\end{aligned}
\end{align}
where $\textrm{Im}(A)$ is the imaginary part of $A$.
Note that we adopt the usual summation convention with respect to repeated indices. The Maxwell-Klein-Gordon system has a gauge invariance. Indeed, if $(A_\mu,\phi)$ is a solution to the system, then for any real-valued smooth function $\Lambda$ on $\mathbb R^{1+3}$, the pair $(A_\mu+\partial_\mu\Lambda, e^{-i\Lambda}\phi)$ is also a solution to the system. This observation allows one to enjoy the gauge freedom. Now we impose the Coulomb gauge condition: $\textrm{div} A = \partial_j A^j=0$. Then, after an application of the projection $\mathbf P=-\frac{(\rm curl)^2}{\Delta}$, we see that the spatial parts of the gauge potentials obey the following wave equation
\begin{align}\label{model-mkg}
\Box A_j = -\textrm{Im}\,\mathbf P(\phi\overline{\mathcal D_j\phi}).
\end{align}
Then the quadratic nonlinearity in the wave equation \eqref{model-mkg} is a finite linear combination of $Q$-type null forms of the form $\Delta^{-1}\partial_k Q_{ij}(\phi,\overline\phi)$, which is exactly the type of nonlinearity in our toy model \eqref{main-wave}. We refer the readers to \cite{masterbenz} for more details on the Maxwell-Klein-Gordon system.
The Maxwell-Klein-Gordon system is one of the well-studied gauge-field-theoretic wave equations. In the $(1+4)$-dimensional setting, the global dynamics of solutions to the system were established by Oh and Tataru \cite{ohtataru,ohtataru1,ohtataru2}. However, the global behaviour of solutions to the system in $(1+3)$ dimensions at scale-invariant regularity is still open. The main drawback of the system is the strong singularity in the quadratic nonlinearity $|\nabla|^{-1}Q_{ij}(\phi,\overline\phi)$. Our first main result provides a partial answer to the question of the scattering of solutions to the Maxwell-Klein-Gordon system at the scale-invariant Sobolev regularity.
\subsection{Cubic Dirac equations}
Secondly, we would like to investigate the long-time behaviour of solutions to cubic Dirac equations with a Hartree-type nonlinearity
\begin{align}\label{main-dirac}
\left\{
\begin{array}{l}
-i\gamma^\mu\partial_\mu\psi+m\psi = [V_b*(\psi^\dagger\psi)]\gamma^0\psi,\\
\psi|_{t=0} = \psi_0,
\end{array}
\right.
\end{align}
where $V_b=V_b(x)$ is the Yukawa-type potential given by
$$
V_b(x) = \frac1{4\pi}\frac{e^{-b|x|}}{|x|},\quad b>0,
$$
and $m>0$ is a positive mass.
Recall that we adopt the summation convention. Here $\psi:\mathbb R^{1+3}\rightarrow\mathbb C^4$ is the Dirac spinor field and $\psi^\dagger$ is the complex conjugate transpose of $\psi$, i.e., $\psi^\dagger=(\psi^*)^T$. The Dirac gamma matrices $\gamma^\mu$ are the $4\times4$ complex matrices given by
\begin{align*}
\gamma^0 = \begin{bmatrix}
I_{2\times2} & \mathbf0 \\ \mathbf0 & -I_{2\times2}
\end{bmatrix}, \ \gamma^j = \begin{bmatrix}
\mathbf 0 & \sigma^j \\ -\sigma^j & \mathbf0
\end{bmatrix} ,
\end{align*}
with the Pauli matrices $\sigma^j$, $j=1,2,3$ given by
\begin{align*}
\sigma^1 = \begin{bmatrix}
0 & 1 \\ 1 & 0
\end{bmatrix}, \ \sigma^2 = \begin{bmatrix}
0 & -i \\ i & 0
\end{bmatrix}, \ \sigma^3=\begin{bmatrix}
1 & 0 \\ 0 & -1
\end{bmatrix}.
\end{align*}
As Theorem \ref{gwp-wave} we prove the global well-posedness and scattering for the scaling critical Sobolev data.
\begin{thm}\label{gwp-dirac}
Let $\sigma=1$. Suppose that the initial data $\psi_0\in L^2_\sigma$ satisfies $\|\psi_0\|_{L^2_\sigma}\ll1$. The Cauchy problem for the equation \eqref{main-dirac} is globally well-posed and scatters to free solutions as $t\rightarrow\pm\infty$.
\end{thm}
\subsubsection{Application to nonlinear Dirac equations}
Now we shall discuss an application of Theorem \ref{gwp-dirac}. First of all it is instructive to introduce the general form of the Dirac-Klein-Gordon system. Indeed, cubic Dirac equations of the form \eqref{main-dirac} can be obtained by uncoupling the Dirac-Klein-Gordon system
\begin{align}\label{dkg}
\left\{
\begin{array}{l}
(-i\gamma^\mu\partial_\mu+M)\psi = g \phi\Gamma\psi, \\
(\Box+m^2) \phi = -g \psi^\dagger\gamma^0\Gamma\psi.
\end{array}
\right.
\end{align}
Here $g$ is a coupling constant, and we put $g=1$ for simplicity. The $4\times4$ matrix $\Gamma$ can be chosen in various ways; typical examples are $\Gamma = I_{4\times4}, \gamma^0, -\gamma^0\gamma^1\gamma^2\gamma^3$ \cite{bjor}. From \eqref{dkg} one can obtain cubic Dirac equations of the form
\begin{align}\label{gen-dirac}
(-i\gamma^\mu\partial_\mu+M)\psi = V_b*(\psi^\dagger\gamma^0\Gamma\psi)\Gamma\psi.
\end{align}
We refer the readers to \cite{tes,tes1,cyang} for a more detailed derivation of \eqref{gen-dirac} from the system \eqref{dkg}. Recently the nonlinear Dirac systems \eqref{dkg} and \eqref{gen-dirac} with $\Gamma=I_{4\times4}$ have been extensively studied; see \cite{danfos,behe,candyherr,candyherr1,cholee,chohlee,choozlee,wang} and references therein. For the case $\Gamma=\gamma^\mu$, with the Klein-Gordon field $\phi$ replaced by the vector potential $A_\mu$ and $m=0$, the system \eqref{dkg} becomes the Maxwell-Dirac system \cite{dasfos1,gaoh}. In the case $\Gamma=I_{4\times4}$, it is crucial to exploit the null structure in the bilinear form $\psi^\dagger\gamma^0\psi$ to attain low-regularity well-posedness. If $\Gamma=\gamma^0$, however, one cannot enjoy such an advantage, and in consequence it is not easy to obtain global well-posedness for low-regularity data. Our second main result says that one can establish scattering even when it is not possible to take advantage of null structures.
We would like to mention the Cauchy problems for the boson star equation (or the semi-relativistic equation) with the Hartree-type nonlinearity on $\mathbb R^{1+3}$:
\begin{align}\label{boson-star}
\left\{
\begin{array}{l}
-i\partial_tu + \sqrt{m^2-\Delta}u = (V_b*|u|^2)u, \\
u|_{t=0} = u_0
\end{array}\right.
\end{align}
We refer to \cite{chooz,herrlenz,herrtes} for this well-studied equation. After an application of the Dirac projection operators (see Section \ref{sec:dirac-op}), our Dirac equation \eqref{main-dirac} takes the form \eqref{boson-star}. Thus, as a direct application of Theorem \ref{gwp-dirac}, we have
\begin{cor}\label{dirac-appli}
Let $\sigma=1$. Suppose that the initial data $u_0\in L^2_\sigma$ satisfies $\|u_0\|_{L^2_\sigma}\ll1$. The Cauchy problem for the equation \eqref{boson-star} is globally well-posed and scatters to free solutions as $t\rightarrow\pm\infty$.
\end{cor}
\noindent By Corollary \ref{dirac-appli} we improve the previous results on the Cauchy problem for \eqref{boson-star} and attain the scaling-critical regularity.
The rest of this paper is organised as follows. In the next section we give some preliminaries, which include half-wave decompositions, Dirac operators, multipliers, the definition and basic properties of $U^p$-$V^p$ spaces, and auxiliary estimates. Section 3 and Section 4 are devoted to the proofs of our main results, Theorem \ref{gwp-wave} and Theorem \ref{gwp-dirac}, respectively.
\subsection*{Notations}
\begin{enumerate}
\item
As usual different positive constants, which are independent of dyadic numbers $\mu,\lambda$, and $h$ are denoted by the same letter $C$, if not specified. The inequalities $A \lesssim B$ and $A \gtrsim B$ means that $A \le CB$ and
$A \ge C^{-1}B$, respectively for some $C>0$. By the notation $A \approx B$ we mean that $A \lesssim B$ and $A \gtrsim B$, i.e., $\frac1CB \le A\le CB $ for some absolute constant $C$. We also use the notation $A\ll B$ if $A\le \frac1CB$ for some large constant $C$. Thus for quantities $A$ and $B$, we can consider three cases: $A\approx B$, $A\ll B$ and $A\gg B$. In fact, $A\lesssim B$ means that $A\approx B$ or $A\ll B$.
The spatial and space-time Fourier transforms are defined by
$$
\widehat{f}(\xi) = \int_{\mathbb R^3} e^{-ix\cdot\xi}f(x)\,dx, \quad \widetilde{u}(\tau,\xi) = \int_{\mathbb R^{1+3}}e^{-i(t\tau+x\cdot\xi)}u(t,x)\,dtdx.
$$
We also write $\mathcal F_x(f)=\widehat{f}$ and $\mathcal F_{t, x}(u)=\widetilde{u}$. We denote the backward and forward wave propagation of a function $f$ on $\mathbb R^3$ by
$$
e^{-\theta it |\nabla|}f = \frac1{(2\pi)^3}\int_{\mathbb R^3}e^{ix\cdot\xi}e^{-\theta it|\xi|}\widehat{f}(\xi)\,d\xi,
$$
where $\theta\in\{+,-\}$.
\item
We fix a smooth function $\rho\in C^\infty_0(\mathbb R)$ such that $\rho$ is supported in the set $\{ \frac12<r<2\}$ and we let
$$
\sum_{\lambda\in2^{\mathbb Z}}\rho\left(\frac r\lambda\right) =1,
$$
and write $\rho_1=\sum_{\lambda\le1}\rho(\frac r\lambda)$ with $\rho_1(0)=1$. Now we define the standard Littlewood-Paley multipliers for $\lambda\in 2^{\mathbb N}$ and $\lambda>1$:
$$
P_\lambda = \rho\left(\frac{|-i\nabla|}{\lambda}\right),\quad P_1=\rho_1(|-i\nabla|).
$$
\end{enumerate}
\section{Preliminaries}
\subsection{Half-wave decomposition of the d'Alembertian}
We formulate nonlinear wave equations $\Box u =F$ as a first-order system, which clarifies the dispersive properties of a nonlinear wave. (See also \cite{huhoh}.) We first write
$$
\frac{\partial}{\partial t}\begin{bmatrix} u \\ \partial_tu \end{bmatrix} = \begin{bmatrix} 0 & 1 \\ \Delta & 0 \end{bmatrix} \begin{bmatrix}u \\ \partial_tu \end{bmatrix} + \begin{bmatrix} 0 \\ F \end{bmatrix}.
$$
We make use of the transform
$$
(u,\partial_tu) \rightarrow (u_+,u_-),\ (0,F) \rightarrow(F_+,F_-),
$$
where
$$
u_\pm = \frac12\left( u\mp\frac{1}{i|\nabla|}\partial_tu \right),\ F_\pm = \mp\frac{1}{2i|\nabla|}F,
$$
with $|\nabla|=\sqrt{-\Delta}$, which yields the following diagonal system
$$
\frac{\partial}{\partial t} \begin{bmatrix} u_+ \\ u_- \end{bmatrix} = \begin{bmatrix} -i|\nabla| & 0 \\ 0 & +i|\nabla| \end{bmatrix} \begin{bmatrix} u_+ \\ u_- \end{bmatrix} + \begin{bmatrix} F_+ \\ F_- \end{bmatrix}.
$$
This is equivalent to the following half-wave equations
\begin{align}\label{wave-decom}
(-i\partial_t+\theta|\nabla|)u_\theta = \theta\frac{1}{2|\nabla|}F,
\end{align}
where $\theta\in\{+,-\}$. Thus we conclude that the initial value problem for the equation \eqref{main-wave} is reduced to the following first-order system of nonlinear wave equations
\begin{align}
\left\{
\begin{array}{l}
(-i\partial_t+\theta|\nabla|)u_\theta = \theta|\nabla|^{-2}Q(\overline u,u), \\
u_{\theta}|_{t=0} = u_{0,\theta}.
\end{array}
\right.
\end{align}
\subsection{Dirac projection operators}\label{sec:dirac-op}
Recall the Dirac equations \eqref{main-dirac}
$$
-i\gamma^\mu\partial_\mu\psi + m\psi = [V_b*(\psi^\dagger\psi)]\gamma^0\psi.
$$ We would like to decompose the Dirac equations and obtain a similar form of a first-order system of half-wave equations as we have done in the previous section. To do this, we first introduce the projections for $\theta\in\{+,-\}$
\begin{align}
\Pi_\theta(\xi) = \frac12\left(I_{4\times4}+\theta\frac{\xi_j\gamma^0\gamma^j+m\gamma^0}{\langle\xi\rangle_m} \right),
\end{align}
where we used the summation convention and the gamma matrices $\gamma^\mu\in\mathbb C^{4\times4}$, $\mu=0,1,2,3$ are given by
\begin{align*}
\gamma^0 = \begin{bmatrix}
I_{2\times2} & \mathbf0 \\ \mathbf0 & -I_{2\times2}
\end{bmatrix}, \ \gamma^j = \begin{bmatrix}
\mathbf 0 & \sigma^j \\ -\sigma^j & \mathbf0
\end{bmatrix} ,
\end{align*}
with the Pauli matrices $\sigma^j\in\mathbb C^{2\times2}$, $j=1,2,3$ given by
\begin{align*}
\sigma^1 = \begin{bmatrix}
0 & 1 \\ 1 & 0
\end{bmatrix}, \ \sigma^2 = \begin{bmatrix}
0 & -i \\ i & 0
\end{bmatrix}, \ \sigma^3=\begin{bmatrix}
1 & 0 \\ 0 & -1
\end{bmatrix}.
\end{align*}
Now we define the Fourier multiplier by the identity $\mathcal F_x[\Pi_\theta f](\xi) = \Pi_\theta(\xi)\widehat{f}(\xi)$. A straightforward computation gives the identities $\Pi_\theta\Pi_\theta=\Pi_\theta$ and $\Pi_\theta\Pi_{-\theta}=0$. We also have $\psi=\Pi_+\psi+\Pi_-\psi$. Then we see that $(-i\gamma^\mu\partial_\mu+m)\Pi_\theta\psi=\gamma^0(-i\partial_t+\theta\langle\nabla\rangle_m)\Pi_\theta\psi$, and
hence we conclude that the initial value problem for the equation \eqref{main-dirac} is reduced to the following first-order system of nonlinear Klein-Gordon equations
\begin{align}\label{dirac-decom}
\left\{
\begin{array}{l}
(-i\partial_t+\theta\langle\nabla\rangle_m)\psi_\theta = \Pi_\theta[V_b*(\psi^\dagger\psi)\psi], \\
\psi_\theta|_{t=0}=\psi_{0,\theta},
\end{array}
\right.
\end{align}
where $\psi_\theta = \Pi_\theta\psi$.
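For completeness, we record the computation behind the projection identities quoted above. Writing $H(\xi)=\xi_j\gamma^0\gamma^j+m\gamma^0$, the relations $\gamma^0\gamma^j=-\gamma^j\gamma^0$, $(\gamma^0)^2=I_{4\times4}$, and $\gamma^j\gamma^k+\gamma^k\gamma^j=-2\delta^{jk}I_{4\times4}$ give
\begin{align*}
H(\xi)^2 = \xi_j\xi_k\gamma^0\gamma^j\gamma^0\gamma^k + m\xi_j(\gamma^0\gamma^j\gamma^0+\gamma^j) + m^2(\gamma^0)^2 = (|\xi|^2+m^2)I_{4\times4} = \langle\xi\rangle_m^2 I_{4\times4},
\end{align*}
so that $\Pi_\theta(\xi)=\frac12(I_{4\times4}+\theta H(\xi)/\langle\xi\rangle_m)$ squares to itself, and $\Pi_\theta(\xi)\Pi_{-\theta}(\xi)=\frac14(I_{4\times4}-H(\xi)^2/\langle\xi\rangle_m^2)=0$. Moreover, $H(\xi)\Pi_\theta(\xi)=\theta\langle\xi\rangle_m\Pi_\theta(\xi)$, which is the identity used in the reduction above.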
\subsection{Multipliers}\label{multi}
We define $\mathcal Q_\mu$ to be a finitely overlapping collection of cubes of diameter $\frac{\mu}{1000}$ covering $\mathbb R^3$, and let $\{ \rho_{\mathsf q}\}_{\mathsf q\in\mathcal Q_\mu}$ be a corresponding subordinate partition of unity.
For $\mathsf q\in\mathcal Q_\mu$, $d\in 2^{\mathbb Z}$ let
$$
P_{\mathsf q} = \rho_{\mathsf q}(-i\nabla),\quad C^{\theta}_d = \rho\left(\frac{|-i\partial_t + \theta|\nabla||}{d}\right).
$$
We define $C^\theta_{\le d}=\sum_{\delta\le d}C^\theta_\delta$.
Given $0 < \alpha \lesssim1$, we define $\mathcal C_{\alpha}$ to be a collection of finitely overlapping caps of radius ${\alpha}$ on the sphere $\mathbb S^2$. If $\kappa\in\mathcal C_{\alpha}$, we let $\omega_\kappa$ be the centre of the cap $\kappa$. Then we define $\{\rho_\kappa\}_{\kappa\in\mathcal C_{\alpha}}$ to be a smooth partition of unity subordinate to the conic sectors $\{ \xi\neq0 , \frac{\xi}{|\xi|}\in\kappa \}$ and denote the angular localisation Fourier multipliers by
$
R_\kappa = \rho_\kappa(-i\nabla).
$
\subsection{Analysis on the sphere}\label{an-sph}
We recall some basic facts from harmonic analysis on the unit sphere. We refer the readers to \cite{candyherr, ster} for most of the ingredients in this section.
We let $Y_{\ell}$ be the space of homogeneous harmonic polynomials of degree $\ell$ on $\mathbb R^3$. We then let $\{ y_{\ell,n} \}_{n=0}^{2\ell}$ be an orthonormal basis for $Y_{\ell}$ with respect to the inner product:
\begin{align}
\langle y_{\ell,n},y_{\ell',n'}\rangle_{L^2_\omega(\mathbb S^2)} = \int_{\mathbb S^2}{y_{\ell,n}(\omega)} \overline{y_{\ell',n'}(\omega)}\,d\omega.
\end{align}
Given $f\in L^2_x(\mathbb R^3)$, we have the orthogonal decomposition as follow:
\begin{align}
f(x) = \sum_{\ell}\sum_{n=0}^{2\ell}\langle f(|x|\omega),y_{\ell,n}(\omega)\rangle_{L^2_\omega(\mathbb S^2)}y_{\ell,n}\big(\frac{x}{|x|}\big).
\end{align}
For a dyadic number $N>1$, we define the spherical Littlewood-Paley decompositions by
\begin{align}\begin{aligned}\label{hn}
H_N(f)(x) & = \sum_{\ell}\sum_{n=0}^{2\ell}\rho\left(\frac\ell N\right)\langle f(|x|\omega),y_{\ell,n}(\omega)\rangle_{L^2_\omega(\mathbb S^2)}y_{\ell,n}\big(\frac{x}{|x|}\big), \\
H_1(f)(x) & = \sum_{\ell}\sum_{n=0}^{2\ell}\rho_{\le1}(\ell)\langle f(|x|\omega),y_{\ell,n}(\omega)\rangle_{L^2_\omega(\mathbb S^2)}y_{\ell,n}\big(\frac{x}{|x|}\big).
\end{aligned}\end{align}
Since $-\Delta_{\mathbb S^2}y_{\ell, n} = \ell(\ell+1)y_{\ell, n}$, by orthogonality one can readily get
$$\|\langle\Omega\rangle^\sigma f\|_{L^2_\omega({\mathbb S^2})} \approx \left\|\sum_{N\in2^{\mathbb N}\cup\{0\}}N^\sigma H_Nf\right\|_{L^2_\omega({\mathbb S^2})}.$$
\begin{lem}[Lemma 7.1 of \cite{candyherr}]\label{sph-ortho}
Let $N\ge1$. Then $H_N$ is uniformly bounded on $L^p(\mathbb R^3)$ in $N$, and $H_N$ commutes with all radial Fourier multipliers. Moreover, if $N'\ge1$, then either $N\approx N'$ or
$$
H_N\Pi_\theta H_{N'}=0.
$$
\end{lem}
As an application of Lemma \ref{sph-ortho}, the spherical harmonic projections $H_N$ commute with the Littlewood-Paley projections such as $P_\lambda$ and $C^\theta_d$. Furthermore, the orthogonality of the spherical harmonics still holds when one deals with the Dirac projections.
\subsection{Adapted function spaces}\label{ftn-sp}
We discuss the basic properties of function spaces of $U^p$ and $V^p$ type. We refer the readers to \cite{haheko,kochtavi} for more details. Let $\mathcal I$ be the set of finite partitions $-\infty=t_0<t_1<\cdots<t_K=\infty$ and let $1\le p<\infty$.
\begin{defn}
A function $a:\mathbb R\rightarrow L^2_x$ is called a $U^p$-atom if there exists a decomposition
$$
a=\sum_{j=1}^K\chi_{[t_{j-1},t_j)}(t)f_{j-1}
$$
with
$$
\{f_j\}_{j=0}^{K-1}\subset L^2_x,\ \sum_{j=0}^{K-1}\|f_j\|_{L^2_x}^p=1,\ f_0=0.
$$
Furthermore, we define the atomic Banach space
$$
U^p := \left\{ u=\sum_{j=1}^\infty \lambda_ja_j : a_j \, U^p\textrm{-atom},\ \lambda_j\in\mathbb C \textrm{ such that } \sum_{j=1}^\infty|\lambda_j|<\infty \right\}
$$
with the induced norm
$$
\|u\|_{U^p} := \inf\left\{ \sum_{j=1}^\infty|\lambda_j| : u=\sum_{j=1}^\infty \lambda_ja_j,\,\lambda_j\in\mathbb C,\, a_j \, U^p\textrm{-atom} \right\}.
$$
\end{defn}
We list some basic properties of $U^p$ spaces.
\begin{prop}[Proposition 2.2 of \cite{haheko}]
Let $1\le p<q<\infty$.
\begin{enumerate}
\item $U^p$ is a Banach space.
\item The embeddings $U^p\subset U^q\subset L^\infty(\mathbb R;L^2_x)$ are continuous.
\item For $u\in U^p$, $u$ is right-continuous.
\end{enumerate}
\end{prop}
\noindent We also define the space $U^p_\theta$ to be the set of all $u:\mathbb R\rightarrow L^2_x$ such that $e^{-\theta it|\nabla|}u\in U^p$ with the obvious norm
$
\|u\|_{U^p_\theta} := \|e^{-\theta it|\nabla|}u\|_{U^p}.
$
We define the $2$-variation of $v$ to be
$$
|v|_{V^2} = \sup_{ \{t_k\}_{k=0}^K\in\mathcal I } \left( \sum_{k=0}^K\|v(t_k)-v(t_{k-1})\|_{L^2_x}^2 \right)^\frac12
$$
Then the Banach space $V^2$ can be defined to be all right continuous functions $v:\mathbb R\rightarrow L^2_x$ such that the quantity
$$
\|v\|_{V^2} = \|v\|_{L^\infty_tL^2_x} + |v|_{V^2}
$$
is finite. Set $\|u\|_{V^2_\theta}=\|e^{-\theta it|\nabla|}u\|_{V^2}$. We recall basic properties of the space $V^2_\theta$ from \cite{candyherr, candyherr1, haheko}. In particular, we use the following lemma to prove the scattering result.
\begin{lem}[Lemma 7.4 of \cite{candyherr}]\label{v-scatter}
Let $u\in V^2_\theta$. Then there exists $f\in L^2_x$ such that $\|u(t)-e^{-\theta it|\nabla|}f\|_{L^2_x}\rightarrow0$ as $t\rightarrow\pm\infty$.
\end{lem}
The following lemma is on a simple bound in the high-modulation region.
\begin{lem}[Corollary 2.18 of \cite{haheko}]
Let $2\le q\le\infty$. For $d\in2^{\mathbb Z}$ and $\theta \in \{+, -\}$, we have
\begin{align}\label{bdd-high-mod}
\begin{aligned}
\|C^{\theta}_d u\|_{L^q_tL^2_x} \lesssim d^{-\frac1q}\|u\|_{V^2_\theta}.
\end{aligned}
\end{align}
\end{lem}
\begin{lem}[Lemma 2.2 of \cite{cholee}]\label{energy-ineq}
Let $u\in U^p$ be absolutely continuous with $1<p<\infty$. Then
\begin{align*}
\|u\|_{U^p} = \sup\left\{ \left| \int \langle u'(t),v(t)\rangle_{L^2_x}\,dt \right| : v\in C^\infty_0,\ \|v\|_{V^{p'}}=1 \right\}.
\end{align*}
\end{lem}
We define the Banach space associated with the homogeneous Sobolev space to be the set
$$
\dot F^{s,\sigma}_\theta = \left\{ u\in C(\mathbb R;\langle\Omega\rangle^{-\sigma}\dot H^s): \|u\|_{\dot F^{s,\sigma}_\theta}<\infty \right\},
$$
where the norm is defined by
$$
\|u\|_{\dot F^{s,\sigma}_\theta} = \bigg( \sum_{\lambda\in2^{\mathbb Z}}\sum_{N\ge1}\lambda^{2s}N^{2\sigma}\|P_\lambda H_Nu\|_{U^2_{\theta}}^2 \bigg)^\frac12.
$$
Similarly we define the Banach space $F^{s,\sigma}_\theta$ associated to the inhomogenous Sobolev space in the obvious way.
\begin{rem}
So far we have defined the adapted function spaces for the wave operator. However, with a slight modification we can also define the adapted function spaces for the Klein-Gordon-type operator, and all the lemmas above also hold for the Klein-Gordon operator. In consequence, for brevity we allow a slight abuse of notation and simply use $U^p_\theta$ and $V^p_\theta$ for both the wave and Klein-Gordon operators. We also refer the readers to \cite{candyherr}.
\end{rem}
\subsection{Auxiliary estimates}
We begin with very basic Sobolev estimates which is also known as the Bernstein inequality.
\begin{lem}
Let $0<\alpha\lesssim1$ and $\kappa\in\mathcal C_\alpha$. Let $\lambda>0$ be a dyadic number. For any test function $f$ on $\mathbb R^3$ we have
\begin{align}\label{bernstein}
\|R_\kappa P_\lambda f\|_{L^\infty_x} \lesssim (\lambda^3\alpha^2)^\frac1p \|f\|_{L^p_x}.
\end{align}
\end{lem}
To study the dispersive properties of solutions it is of great importance to exploit the so-called Strichartz estimates \cite{keeltao,stri}. In this paper we use an improved Strichartz estimate, which is obtained by spending extra regularity with respect to the angular variables. (See also \cite{choslee,ster}.)
\begin{prop}
For $\frac{1}{10}\ge\eta>0$, let $q_\eta=\frac{4}{1-\eta}$. We have the following improved Strichartz estimates obtained by imposing angular regularity:
\begin{align}\label{stri-ang}
\|e^{\theta it|\nabla|}P_\lambda H_N f\|_{L^2_tL^{q_\eta}_x} \lesssim \lambda^{1-\frac3{q_\eta}}N^{\frac12+\eta}\| P_\lambda H_Nf\|_{L^2_x}.
\end{align}
\end{prop}
\noindent
Note that the estimates hold when we replace the propagator $e^{-it|\nabla|}$ with $e^{-it\langle\nabla\rangle}$. The space-time estimates \eqref{stri-ang} show that one can obtain improved bounds when dealing with multilinear estimates. However, the singularity $|\nabla|^{-2}$ in the nonlinearity of the reduced system is too strong, and we cannot obtain the desired well-posedness for critical Sobolev data by simply using the estimate \eqref{stri-ang}. This is why the low-output frequency interaction becomes the most serious case. To overcome this problem, we apply almost orthogonal decompositions into conic sectors. The question is whether one can obtain better estimates by exploiting the localisation into conic sectors. The following lemma, also known as the {\it angular concentration estimate}, answers this question.
\begin{lem}[Lemma 8.5 of \cite{candyherr}]\label{ang-con}
Let $2\le p<\infty$, and $0\le s<\frac2p$. If $\lambda,N\ge1$, ${\alpha}\gtrsim\lambda^{-1}$, and $\kappa\in\mathcal C_{\alpha}$, then we have
$$
\|R_\kappa P_\lambda H_N f\|_{L^p_x(\mathbb R^3)} \lesssim ({\alpha} N)^s \|P_\lambda H_N f\|_{L^p_x(\mathbb R^3)}.
$$
\end{lem}
\noindent We refer the readers to \cite{sterbenz2} for the proof. Note that in the Bernstein inequality there is no harm in taking $f\rightarrow R_{\kappa'}P_{2\lambda}f$ with $\kappa'\in\mathcal C_{2\alpha}$. With a cube localisation $P_\mathtt q$ of size $\mu\le\lambda$, we use, in order, the Bernstein inequality, the angular concentration estimate, and then the improved Strichartz estimate. Here one should note that $\frac12-2\eta<\frac2{q_\eta}=\frac{1-\eta}{2}$. Then we see that
\begin{align*}
\|P_\mathtt q R_\kappa P_\lambda H_N u\|_{L^2_tL^\infty_x} & \lesssim (\mu^3\alpha^2)^\frac1q \sup_{\kappa\in\mathcal C_\alpha}\|R_\kappa P_\lambda H_N u\|_{L^2_tL^q_x} \\
& \lesssim (\mu^3\alpha^2)^\frac1q(\alpha N)^{\frac12-2\eta}\|P_\lambda H_Nu\|_{L^2_tL^q_x} \\
& \lesssim \mu^\frac3q\alpha^\frac2q\alpha^{\frac12-2\eta}N^{\frac12-2\eta}\lambda^{1-\frac3q}N^{\frac12+\eta}\|P_\lambda H_N u\|_{U^2_\theta}.
\end{align*}
The above argument will be often used in the proof of Theorem \ref{gwp-wave} and Theorem \ref{gwp-dirac}.
\section{Bilinear estimates: Proof of Theorem \ref{gwp-wave}}
Now we arrive at the proof of Theorem \ref{gwp-wave}. First we define the Duhamel integral
\begin{align*}
\mathfrak I^\theta[F] = \int_0^t e^{-\theta i(t-t')|\nabla|}F(t')\,dt'.
\end{align*}
Then $\mathfrak I^\theta[F]$ solves the equation
$$
(-i\partial_t+\theta|\nabla|)\mathfrak I^\theta[F] = F,
$$
with vanishing data at $t=0$.
To prove Theorem \ref{gwp-wave} it is enough to show the following bilinear estimates
\begin{align}
\left\|\mathfrak I^\theta[|\nabla|^{-2}Q(\overline u,v)]\right\|_{\dot F^{\frac12,1}_\theta} & \lesssim \|u\|_{\dot F^{\frac12,1}_{\theta_1}}\|v\|_{\dot F^{\frac12,1}_{\theta_2}}. \label{main-bi}
\end{align}
Then an application of the standard contraction argument gives the desired global solutions to the equation \eqref{main-wave} when we have the smallness assumptions on the initial datum:
$
\|(u_0,u_1)\|_{\dot H^\frac12_\sigma\times\dot H^{-\frac12}_\sigma} \ll1.
$
Moreover, the continuous embedding $U^2\subset V^2$ and Lemma \ref{v-scatter} imply the scattering in $U^2_\theta$ space.
By the $U^2$--$V^2$ duality of Lemma \ref{energy-ineq}, we obtain the following trilinear expression:
\begin{align*}
\left\|\mathfrak I^\theta[|\nabla|^{-2}Q(\overline u,v)]\right\|^2_{\dot F^{\frac12,1}_\theta} & \lesssim \sum_{\mu\in2^\mathbb Z}\sum_{N\ge1}(\mu^\frac12 N)^2 \|P_\mu H_N \mathfrak I^\theta[|\nabla|^{-2}Q(\overline u,v)]\|_{U^2_\theta}^2 \\
& \lesssim \sum_{\mu\in2^\mathbb Z}\sum_{N\ge1}(\mu^\frac12N)^2\sup_{\|P_\mu H_Nw\|_{V^2_\theta}\le1}\left| \int_{\mathbb R^{3+1}}P_\mu H_Nw\, |\nabla|^{-2}Q(\overline u,v) \,dxdt \right|^2.
\end{align*}
Thus our main bilinear estimates can be obtained provided that the following frequency-localised trilinear estimates holds:
\begin{lem}
Let $0<\eta\ll1$ be a small positive number. For some $\frac18<\delta\le\frac14$, we have
\begin{align}\label{main-bi-loc}
\begin{aligned}
&\left|\int_{\mathbb R^{1+3}} w_{\mu,N}\, |\nabla|^{-2}Q(\overline{u_{\lambda_1,N_1}},v_{\lambda_2,N_2}) \,dxdt\right| \\
& \qquad\qquad \lesssim (\min\{\lambda_1,\lambda_2\})^\frac12 \left( \frac{\min\{\mu,\lambda_1,\lambda_2\}}{\max\{\mu,\lambda_1,\lambda_2\}} \right)^\delta (\min\{N_1,N_2\})^{1-\eta} \|w_{\mu,N}\|_{V^2_\theta}\|u_{\lambda_1,N_1}\|_{U^2_{\theta_1}}\|v_{\lambda_2,N_2}\|_{U^2_{\theta_2}},
\end{aligned}
\end{align}
\end{lem}
\noindent where we put $w_{\mu,N}=P_\mu H_Nw, u_{\lambda_1,N_1}=P_{\lambda_1}H_{N_1}u,$ and $v_{\lambda_2,N_2}=P_{\lambda_2}H_{N_2}v$ for brevity. To obtain \eqref{main-bi-loc} we shall consider all possible frequency interactions.
In view of the standard Littlewood-Paley trichotomy one can easily see that the integral in \eqref{main-bi-loc} vanishes unless the following interactions hold:
\begin{align}
\min\{\mu,\lambda_1,\lambda_2\} & \lesssim \textrm{med}\{\mu,\lambda_1,\lambda_2\} \approx \max\{\mu,\lambda_1,\lambda_2\}, \\
\min\{N,N_1,N_2\} & \lesssim \textrm{med}\{N,N_1,N_2\} \approx \max \{N,N_1,N_2\}.
\end{align}
We first decompose the integrand in \eqref{main-bi-loc} with respect to the modulation as follows
\begin{align*}
&\int_{\mathbb R^{1+3}} P_\mu H_Nw\, |\nabla|^{-2}Q(\overline{u_{\lambda_1,N_1}},v_{\lambda_2,N_2}) \,dxdt \\
& = \sum_{d\in2^{\mathbb Z}}\int_{\mathbb R^{3+1}}C^\theta_d w_{\mu,N}|\nabla|^{-2}Q(C^{\theta_1}_{\ll d}\overline{u_{\lambda_1,N_1}},C^{\theta_2}_{\ll d}v_{\lambda_2,N_2})\,dtdx \\
& \qquad + \sum_{d\in2^{\mathbb Z}}\int_{\mathbb R^{3+1}}C^\theta_{\le d}w_{\mu,N}|\nabla|^{-2}Q(C^{\theta_1}_d\overline{u_{\lambda_1,N_1}},C^{\theta_2}_{\le d}v_{\lambda_2,N_2})\,dtdx \\
& \qquad + \sum_{d\in2^{\mathbb Z}}\int_{\mathbb R^{3+1}}C^\theta_{\le d}w_{\mu,N}|\nabla|^{-2}Q(C^{\theta_1}_{\le d}\overline{u_{\lambda_1,N_1}},C^{\theta_2}_{ d}v_{\lambda_2,N_2})\,dtdx \\
& =: \sum_{d\in2^{\mathbb Z}} \left( \mathcal I_0+\mathcal I_1+\mathcal I_2 \right).
\end{align*}
\subsection{Low modulation}\label{sec:low-mod}
Now we consider the low-modulation regime
$
d \lesssim \min\{\mu,\lambda_1,\lambda_2\}.
$
In this regime we pay special attention to the low-output interaction, i.e., $\mu\ll\lambda_1\approx\lambda_2$. The main difficulty is that, even though the presence of null forms can be exploited very favourably in the low-output interaction, the Fourier multiplier $|\nabla|^{-2}$ gives rise to a serious singularity, and the cancellation provided by the null structure alone is not sufficient to handle all such {\it bad interactions}. To overcome this problem we make full use of the angular momentum operators and exploit the angular concentration phenomenon via bilinear decompositions into conic sectors. The key point is that when one input frequency is localised in a conic sector of small angle, the other input frequency must also be localised in a conic sector of comparable size. On the other hand, in the high-output interaction the null structure no longer plays a crucial role, since only bilinear decompositions into rather wide-angle sectors are available. This is not problematic, however: there the Fourier multiplier $|\nabla|^{-2}$ is no longer a serious singularity; on the contrary, it provides strong decay. As a consequence, the high-output interaction is the easiest case in the proof.
\subsubsection{Case 1: $\mu\ll \lambda_1\approx\lambda_2$} There is no harm in setting $\lambda_1=\lambda_2=\lambda$ in our argument. Put $\alpha = (\frac{d\mu}{\lambda^2})^\frac12$. We first use an almost orthogonal decomposition into smaller cubes and angular sectors and obtain
\begin{align*}
\mathcal I_0 & \lesssim \sum_{\substack{ \mathtt q_1,\mathtt q_2\in \mathcal Q_\mu \\ |\mathtt q_1-\mathtt q_2|\lesssim\mu }}\sum_{\substack{ \kappa_1,\kappa_2\in\mathcal C_\alpha \\ |\kappa_1-\kappa_2|\lesssim\alpha }}\int_{\mathbb R^{3+1}}C^\theta_d w_{\mu,N}|\nabla|^{-2}Q(P_{\mathtt q_1}R_{\kappa_1}C^{\theta_1}_{\ll d}u_{\lambda,N_1},P_{\mathtt q_2}R_{\kappa_2}C^{\theta_2}_{\ll d}v_{\lambda,N_2})\,dtdx \\
& \lesssim \sum_{\substack{ \mathtt q_1,\mathtt q_2\in \mathcal Q_\mu \\ |\mathtt q_1-\mathtt q_2|\lesssim\mu }}\sum_{\substack{ \kappa_1,\kappa_2\in\mathcal C_\alpha \\ |\kappa_1-\kappa_2|\lesssim\alpha }}\|C^\theta_d w_{\mu,N}\|_{L^2_{t}L^2_x}\||\nabla|^{-2}Q(P_{\mathtt q_1}R_{\kappa_1}C^{\theta_1}_{\ll d}u_{\lambda,N_1},P_{\mathtt q_2}R_{\kappa_2}C^{\theta_2}_{\ll d}v_{\lambda,N_2})\|_{L^2_tL^2_x} \\
& \lesssim d^{-\frac12}\|w_{\mu,N}\|_{V^2_\theta} \bigg(\sum_{\substack{ \mathtt q_1,\mathtt q_2\in \mathcal Q_\mu \\ |\mathtt q_1-\mathtt q_2|\lesssim\mu }}\sum_{\substack{ \kappa_1,\kappa_2\in\mathcal C_\alpha \\ |\kappa_1-\kappa_2|\lesssim\alpha }}\||\nabla|^{-2}Q(P_{\mathtt q_1}R_{\kappa_1}C^{\theta_1}_{\ll d}u_{\lambda,N_1},P_{\mathtt q_2}R_{\kappa_2}C^{\theta_2}_{\ll d}v_{\lambda,N_2})\|_{L^2_tL^2_x}^2\bigg)^\frac12,
\end{align*}
where we used the simple bound \eqref{bdd-high-mod} for the high-modulation regime applied to $w_{\mu,N}$. Now we exploit the null structure in the bilinear form $Q$ and then use the H\"older inequality and the Bernstein inequality for $u_{\lambda,N_1}$. In the sequel we put $q=\frac{4}{1-\eta}$ for a small $\eta>0$. Then we have
\begin{align*}
\mathcal I_0 & \lesssim d^{-\frac12} \mu^{-2}\alpha \lambda^2 \|w_{\mu,N}\|_{V^2_\theta} \bigg(\sum_{\substack{ \mathtt q_1,\mathtt q_2\in \mathcal Q_\mu \\ |\mathtt q_1-\mathtt q_2|\lesssim\mu }}\sum_{\substack{ \kappa_1,\kappa_2\in\mathcal C_\alpha \\ |\kappa_1-\kappa_2|\lesssim\alpha }}\|P_{\mathtt q_1}R_{\kappa_1}C^{\theta_1}_{\ll d}u_{\lambda,N_1}\|_{L^2_tL^\infty_x}^2\|P_{\mathtt q_2}R_{\kappa_2}C^{\theta_2}_{\ll d}v_{\lambda,N_2}\|_{L^\infty_tL^2_x}^2\bigg)^\frac12 \\
& \lesssim d^{-\frac12} \mu^{-2}\alpha \lambda^2 \mu^\frac3q\alpha^\frac2q \|w_{\mu,N}\|_{V^2_\theta}\sup_{\kappa_1\in\mathcal C_\alpha}\|R_{\kappa_1}C^{\theta_1}_{\ll d}u_{\lambda,N_1}\|_{L^2_tL^q_x} \\
& \qquad\qquad \times \bigg(\sum_{\substack{ \mathtt q_1,\mathtt q_2\in \mathcal Q_\mu \\ |\mathtt q_1-\mathtt q_2|\lesssim\mu }}\sum_{\substack{ \kappa_1,\kappa_2\in\mathcal C_\alpha \\ |\kappa_1-\kappa_2|\lesssim\alpha }}\|P_{\mathtt q_2}R_{\kappa_2}C^{\theta_2}_{\ll d}v_{\lambda,N_2}\|_{L^\infty_tL^2_x}^2\bigg)^\frac12.
\end{align*}
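Here the prefactor $\mu^{-2}\alpha\lambda^2$ is the gain coming from the null structure: schematically, writing $q(\xi_1,\xi_2)$ for the symbol of $Q$, for frequencies $\xi_1\in\mathtt q_1\cap\kappa_1$, $\xi_2\in\mathtt q_2\cap\kappa_2$ with $|\kappa_1-\kappa_2|\lesssim\alpha$ one has
\begin{align*}
|q(\xi_1,\xi_2)| \lesssim \angle(\xi_1,\xi_2)\,|\xi_1|\,|\xi_2| \lesssim \alpha\lambda^2,
\end{align*}
while the multiplier $|\nabla|^{-2}$, evaluated at the output frequency $\approx\mu$, contributes the factor $\mu^{-2}$.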
The final step is an application of the angular concentration estimate of Lemma \ref{ang-con} with $s=\frac12-2\eta<\frac2q$ and then the improved Strichartz estimates \eqref{stri-ang}, which gives
\begin{align*}
\mathcal I_0 & \lesssim d^{-\frac12} \mu^{-2}\alpha \lambda^2 \mu^\frac3q\alpha^\frac2q(\alpha N_1)^{\frac12-2\eta} \|w_{\mu,N}\|_{V^2_\theta} \|C_{\ll d}^{\theta_1}u_{\lambda,N_1}\|_{L^2_tL^q_x}\|C^{\theta_2}_{\ll d}v_{\lambda,N_2}\|_{L^\infty_tL^2_x} \\
& \lesssim d^{-\frac12} \mu^{-2}\alpha \lambda^2 \mu^\frac3q\alpha^\frac2q(\alpha N_1)^{\frac12-2\eta}\lambda^{1-\frac3q} N_1^{\frac12+\eta} \|w_{\mu,N}\|_{V^2_\theta} \|u_{\lambda,N_1}\|_{U^2_{\theta_1}}\|v_{\lambda,N_2}\|_{U^2_{\theta_2}}.
\end{align*}
The summation with respect to $d\lesssim\mu$ yields
$$
\sum_{d:d\lesssim\mu}\mathcal I_0 \lesssim \mu^{-1+\frac5q-2\eta}\lambda^{\frac32-\frac5q+2\eta}N_1^{1-\eta}\|w_{\mu,N}\|_{V^2_\theta} \|u_{\lambda_1,N_1}\|_{U^2_{\theta_1}}\|v_{\lambda_2,N_2}\|_{U^2_{\theta_2}}.
$$
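We record the exponent count behind this summation: collecting the powers of $\alpha=(\frac{d\mu}{\lambda^2})^{\frac12}$ in the previous bound gives $\alpha^{\frac32+\frac2q-2\eta}$, so the total power of $d$ equals
\begin{align*}
-\frac12+\frac12\Big(\frac32+\frac2q-2\eta\Big)=\frac14+\frac1q-\eta>0,
\end{align*}
and the geometric sum over $d\lesssim\mu$ is dominated by the endpoint $d\approx\mu$, which yields the stated powers of $\mu$ and $\lambda$.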
If $N_1\gg N_2$, then we simply interchange the roles of $u_{\lambda,N_1}$ and $v_{\lambda,N_2}$ and obtain the same bound with $N_2$ in place of $N_1$. We now consider $\mathcal I_1$. As in the previous estimate, we use an almost orthogonal decomposition into cubes and angular sectors to get
\begin{align*}
\mathcal I_1 & \lesssim \sum_{\substack{ \mathtt q_1,\mathtt q_2\in \mathcal Q_\mu \\ |\mathtt q_1-\mathtt q_2|\lesssim\mu }}\sum_{\substack{ \kappa_1,\kappa_2\in\mathcal C_\alpha \\ |\kappa_1-\kappa_2|\lesssim\alpha }}\int_{\mathbb R^{3+1}}C^\theta_{\le d} w_{\mu,N}|\nabla|^{-2}Q(P_{\mathtt q_1}R_{\kappa_1}C^{\theta_1}_{ d}u_{\lambda,N_1},P_{\mathtt q_2}R_{\kappa_2}C^{\theta_2}_{\le d}v_{\lambda,N_2})\,dtdx.
\end{align*}
The next step is to exploit the null structure and use the H\"older inequality as in the previous estimate:
\begin{align*}
\mathcal I_1 & \lesssim \mu^{-2}\lambda^2\alpha \sum_{\substack{ \mathtt q_1,\mathtt q_2\in \mathcal Q_\mu \\ |\mathtt q_1-\mathtt q_2|\lesssim\mu }}\sum_{\substack{ \kappa_1,\kappa_2\in\mathcal C_\alpha \\ |\kappa_1-\kappa_2|\lesssim\alpha }} \int_{\mathbb R^{3+1}} C^\theta_{\le d} w_{\mu,N}P_{\mathtt q_1}R_{\kappa_1}C^{\theta_1}_{ d}u_{\lambda,N_1}P_{\mathtt q_2}R_{\kappa_2}C^{\theta_2}_{\le d}v_{\lambda,N_2}\,dtdx \\
& \lesssim \mu^{-2}\lambda^2\alpha \sum_{\substack{ \mathtt q_1,\mathtt q_2\in \mathcal Q_\mu \\ |\mathtt q_1-\mathtt q_2|\lesssim\mu }}\sum_{\substack{ \kappa_1,\kappa_2\in\mathcal C_\alpha \\ |\kappa_1-\kappa_2|\lesssim\alpha }} \| C^\theta_{\le d} w_{\mu,N}\|_{L^\infty_tL^2_x}\| P_{\mathtt q_1}R_{\kappa_1}C^{\theta_1}_{ d}u_{\lambda,N_1}\|_{L^2_tL^2_x}\|P_{\mathtt q_2}R_{\kappa_2}C^{\theta_2}_{\le d}v_{\lambda,N_2}\|_{L^2_tL^\infty_x} \\
& \lesssim \mu^{-2}\lambda^2\alpha \| C^\theta_{\le d} w_{\mu,N}\|_{L^\infty_tL^2_x} \bigg(\sum_{\substack{ \mathtt q_1,\mathtt q_2\in \mathcal Q_\mu \\ |\mathtt q_1-\mathtt q_2|\lesssim\mu }}\sum_{\substack{ \kappa_1,\kappa_2\in\mathcal C_\alpha \\ |\kappa_1-\kappa_2|\lesssim\alpha }} \| P_{\mathtt q_1}R_{\kappa_1}C^{\theta_1}_{ d}u_{\lambda,N_1}\|_{L^2_tL^2_x}^2\|P_{\mathtt q_2}R_{\kappa_2}C^{\theta_2}_{\le d}v_{\lambda,N_2}\|_{L^2_tL^\infty_x}^2\bigg)^\frac12.
\end{align*}
Then we use the Bernstein inequality for $v_{\lambda,N_2}$ and then Lemma \ref{ang-con} and the Strichartz estimates \eqref{stri-ang}
\begin{align*}
\mathcal I_1
& \lesssim \mu^{-2}\lambda^2\alpha \mu^\frac3q\alpha^\frac2q \|w_{\mu,N}\|_{V^2_\theta} \sup_{\kappa_2\in\mathcal C_\alpha}\|R_{\kappa_2}C^{\theta_2}_{\le d}v_{\lambda,N_2}\|_{L^2_tL^q_x} \bigg(\sum_{\substack{ \mathtt q_1,\mathtt q_2\in \mathcal Q_\mu \\ |\mathtt q_1-\mathtt q_2|\lesssim\mu }}\sum_{\substack{ \kappa_1,\kappa_2\in\mathcal C_\alpha \\ |\kappa_1-\kappa_2|\lesssim\alpha }} \| P_{\mathtt q_1}R_{\kappa_1}C^{\theta_1}_{ d}u_{\lambda,N_1}\|_{L^2_tL^2_x}^2\bigg)^\frac12 \\
& \lesssim \mu^{-2}\lambda^2\alpha \mu^\frac3q\alpha^\frac2q (\alpha N_2)^{\frac12-2\eta} \|w_{\mu,N}\|_{V^2_\theta} \|C^{\theta_2}_{\le d}v_{\lambda_2,N_2}\|_{L^2_tL^q_x}\|C^{\theta_1}_d u_{\lambda_1,N_1}\|_{L^2_tL^2_x} \\
& \lesssim \mu^{-2}\lambda^2\alpha \mu^\frac3q\alpha^\frac2q (\alpha N_2)^{\frac12-2\eta} \lambda^{1-\frac3q}N_2^{\frac12+\eta}d^{-\frac12}\|w_{\mu,N}\|_{V^2_\theta} \|u_{\lambda_1,N_1}\|_{U^2_{\theta_1}}\|v_{\lambda_2,N_2}\|_{U^2_{\theta_2}},
\end{align*}
where we used the bound \eqref{bdd-high-mod} for the high-modulation regime applied to $C^{\theta_1}_d u_{\lambda,N_1}$. The summation with respect to the modulation $d\lesssim\mu$ gives the desired bound. If $N_1\ll N_2$, we can simply interchange the roles of $u_{\lambda,N_1}$ and $v_{\lambda,N_2}$ and follow the above argument. The estimate of $\mathcal I_2$ can be obtained in an identical manner to that of $\mathcal I_1$. We omit the details.
\subsubsection{Case 2: $\lambda_1 \lesssim \mu\approx\lambda_2$} The case $\lambda_2\lesssim\mu\approx\lambda_1$ follows readily by symmetry, so we focus on the case $\lambda_1\ll \lambda_2$. The high-output case is much easier than the low-output case (i.e., the case $\min\{\mu,\lambda_1,\lambda_2\}=\mu$), since the Fourier multiplier $|\nabla|^{-2}$ in the integrand is no longer a serious singularity; on the contrary, it provides strong decay. We only treat the estimate of $\mathcal I_1$ with $N_1\gg N_2$, since this is the most delicate interaction in the high-output regime.
We put $\beta = (\frac d{\lambda_1})^\frac12$ and use the orthogonal decompositions
\begin{align*}
\mathcal I_1 & \lesssim \sum_{\substack{\mathtt q,\mathtt q_2\in\mathcal Q_{\lambda_1} \\ |\mathtt q+\theta_2\mathtt q_2|\lesssim\lambda_1}}\sum_{\substack{\kappa,\kappa_1,\kappa_2\in\mathcal C_\beta \\ |\kappa_1+\kappa_2|,|\kappa+\theta_2\kappa_2|\lesssim\beta}}\int_{\mathbb R^{1+3}}P_\mathtt q R_\kappa C^\theta_{\le d}w_{\mu,N}\,|\nabla|^{-2}Q(R_{\kappa_1}C^{\theta_1}_du_{\lambda_1,N_1},P_{\mathtt q_2}R_{\kappa_2}C^{\theta_2}_{\le d}v_{\lambda_2,N_2})\,dtdx \\
& \lesssim \mu^{-2}\lambda_1\lambda_2\beta \sum_{\substack{\mathtt q,\mathtt q_2\in\mathcal Q_{\lambda_1} \\ |\mathtt q+\theta_2\mathtt q_2|\lesssim\lambda_1}}\sum_{\substack{\kappa,\kappa_1,\kappa_2\in\mathcal C_\beta \\ |\kappa_1+\kappa_2|,|\kappa+\theta_2\kappa_2|\lesssim\beta}}\int_{\mathbb R^{1+3}}P_\mathtt q R_\kappa C^\theta_{\le d}w_{\mu,N}\,R_{\kappa_1}C^{\theta_1}_du_{\lambda_1,N_1}P_{\mathtt q_2}R_{\kappa_2}C^{\theta_2}_{\le d}v_{\lambda_2,N_2}\,dtdx.
\end{align*}
Then we use the H\"older inequality and then the Cauchy-Schwarz inequality in $\kappa_1$ to get
\begin{align*}
\mathcal I_1 & \lesssim \mu^{-2}\lambda_1\lambda_2\beta \bigg( \sum_{\kappa_1}\|R_{\kappa_1} C^{\theta_1}_d u_{\lambda_1,N_1}\|_{L^2_tL^2_x}^2 \bigg)^\frac12 \\
& \qquad\times \bigg( \sum_{\kappa_1} \bigg( \sum_{\kappa,\kappa_2}\sum_{\mathtt q,\mathtt q_2}\|P_\mathtt qR_\kappa C^{\theta}_{\le d}w_{\mu,N}\|_{L^\infty_tL^2_x}\|P_{\mathtt q_2}R_{\kappa_2}C^{\theta_2}_{\le d}v_{\lambda_2,N_2}\|_{L^2_tL^\infty_x} \bigg)^2 \bigg)^\frac12.
\end{align*}
We use the Bernstein inequality for $v_{\lambda_2,N_2}$ and obtain
\begin{align*}
\mathcal I_1 & \lesssim \mu^{-2}\lambda_1\lambda_2\beta \lambda_1^\frac3q \beta^\frac2q \|C^\theta_du_{\lambda_1,N_1}\|_{L^2_tL^2_x} \sup_{\kappa_2}\|R_{\kappa_2}C^{\theta_2}_{\le d}v_{\lambda_2,N_2}\|_{L^2_tL^q_x} \\
& \qquad\qquad \times\bigg( \sum_{\kappa,\kappa_1,\kappa_2}\sum_{\mathtt q,\mathtt q_2} \|P_\mathtt qR_\kappa C^{\theta}_{\le d}w_{\mu,N}\|_{L^\infty_tL^2_x}^2\bigg)^\frac12.
\end{align*}
The remaining step is to apply the bound \eqref{bdd-high-mod} for the high-modulation regime to $C^{\theta_1}_d u_{\lambda_1,N_1}$, and then Lemma \ref{ang-con} followed by the Strichartz estimates \eqref{stri-ang} for $v_{\lambda_2,N_2}$:
\begin{align*}
\mathcal I_1 & \lesssim \mu^{-2}\lambda_1\lambda_2\beta \lambda_1^\frac3q \beta^\frac2q (\beta N_2)^{\frac12-2\eta}\lambda_2^{1-\frac3q}N_2^{\frac12+\eta}d^{-\frac12}\|w_{\mu,N}\|_{V^2_\theta}\|u_{\lambda_1,N_1}\|_{V^2_{\theta_1}}\|v_{\lambda_2,N_2}\|_{U^2_{\theta_2}}.
\end{align*}
The summation with respect to $d\lesssim\lambda_1$ yields
$$
\sum_{d\lesssim\lambda_1}\mathcal I_1 \lesssim \lambda_1^\frac12 \left(\frac{\lambda_1}{\lambda_2}\right)^\frac3q \left(\frac{\lambda_2}{\mu}\right)^{-2}N_2^{1-\eta}\|w_{\mu,N}\|_{V^2_\theta}\|u_{\lambda_1,N_1}\|_{U^2_{\theta_1}}\|v_{\lambda_2,N_2}\|_{U^2_{\theta_2}},
$$
where we used the continuous embedding $U^2\subset V^2$ for $u_{\lambda_1,N_1}$. (See \cite[Proposition 2.4]{haheko}.)
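The exponent count is as before: the powers of $\beta=(\frac d{\lambda_1})^{\frac12}$ combine to $\beta^{\frac32+\frac2q-2\eta}$, so the net power of $d$ is $\frac14+\frac1q-\eta>0$ and the sum is dominated by the endpoint $d\approx\lambda_1$, where $\beta\approx1$ and $d^{-\frac12}=\lambda_1^{-\frac12}$; since $\mu\approx\lambda_2$, the remaining factor $\mu^{-2}\lambda_1^{1+\frac3q}\lambda_2^{2-\frac3q}\lambda_1^{-\frac12}$ is comparable to $\lambda_1^{\frac12}\big(\frac{\lambda_1}{\lambda_2}\big)^{\frac3q}$, as displayed.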
This completes the proof of \eqref{main-bi-loc} in the low-modulation regime.
\subsection{High modulation}\label{sec:high-mod}
From now on we shall consider the high-modulation region: $d\gg\min\{\mu,\lambda_1,\lambda_2\}$. In this regime we only consider the low-output interaction, i.e., $\mu\ll\lambda_1\approx\lambda_2$; the Fourier multiplier $|\nabla|^{-2}$ yields good decay in the high-output interaction, which is hence much easier than the low-output case. As in Section \ref{sec:low-mod}, we put $\lambda_1=\lambda_2=\lambda$.
Note that the angle between the Fourier supports of $u_\lambda$ and $v_\lambda$ is less than $\dfrac\mu\lambda$. We put $\alpha = \dfrac\mu\lambda$. Using the orthogonal decompositions into smaller cubes of size $\mu$ and conic sectors of angle $\alpha$, we follow a similar approach to that of Section \ref{sec:low-mod}. For $d\gtrsim\lambda$, we have
\begin{align*}
\mathcal I_0 & \lesssim \sum_{\substack{ \mathtt q_1,\mathtt q_2\in \mathcal Q_\mu \\ |\mathtt q_1+\mathtt q_2|\lesssim\mu }}\sum_{\substack{ \kappa_1,\kappa_2\in\mathcal C_\alpha \\ |\kappa_1+\kappa_2|\lesssim\alpha }}\int_{\mathbb R^{3+1}}C^\theta_d w_{\mu,N}|\nabla|^{-2}Q(P_{\mathtt q_1}R_{\kappa_1}C^{\theta_1}_{\ll d}u_{\lambda,N_1},P_{\mathtt q_2}R_{\kappa_2}C^{\theta_2}_{\ll d}v_{\lambda,N_2})\,dtdx \\
& \lesssim d^{-\frac12} \mu^{-2}\alpha \lambda^2 \|w_{\mu,N}\|_{V^2_\theta} \bigg(\sum_{\substack{ \mathtt q_1,\mathtt q_2\in \mathcal Q_\mu \\ |\mathtt q_1+\mathtt q_2|\lesssim\mu }}\sum_{\substack{ \kappa_1,\kappa_2\in\mathcal C_\alpha \\ |\kappa_1+\kappa_2|\lesssim\alpha }}\|P_{\mathtt q_1}R_{\kappa_1}C^{\theta_1}_{\ll d}u_{\lambda,N_1}\|_{L^2_tL^\infty_x}^2\|P_{\mathtt q_2}R_{\kappa_2}C^{\theta_2}_{\ll d}v_{\lambda,N_2}\|_{L^\infty_tL^2_x}^2\bigg)^\frac12 \\
& \lesssim d^{-\frac12} \mu^{-2}\alpha \lambda^2 \mu^\frac3q\alpha^\frac2q \|w_{\mu,N}\|_{V^2_\theta}\sup_{\kappa_1\in\mathcal C_\alpha}\|R_{\kappa_1}C^{\theta_1}_{\ll d}u_{\lambda,N_1}\|_{L^2_tL^q_x} \\
& \qquad\qquad \times \bigg(\sum_{\substack{ \mathtt q_1,\mathtt q_2\in \mathcal Q_\mu \\ |\mathtt q_1+\mathtt q_2|\lesssim\mu }}\sum_{\substack{ \kappa_1,\kappa_2\in\mathcal C_\alpha \\ |\kappa_1+\kappa_2|\lesssim\alpha }}\|P_{\mathtt q_2}R_{\kappa_2}C^{\theta_2}_{\ll d}v_{\lambda,N_2}\|_{L^\infty_tL^2_x}^2\bigg)^\frac12 \\
& \lesssim d^{-\frac12} \mu^{-2}\alpha \lambda^2 \mu^\frac3q\alpha^\frac2q(\alpha N_1)^{\frac12-2\eta}\lambda^{1-\frac3q} N_1^{\frac12+\eta} \|w_{\mu,N}\|_{V^2_\theta} \|u_{\lambda,N_1}\|_{V^2_{\theta_1}}\|v_{\lambda,N_2}\|_{V^2_{\theta_2}} \\
& \lesssim \lambda^\frac12 \left(\frac\mu\lambda\right)^{-\frac12+\frac5q+\eta}\left(\frac\lambda d\right)^\frac12 N_1^{1-\eta} \|w_{\mu,N}\|_{V^2_\theta} \|u_{\lambda,N_1}\|_{V^2_{\theta_1}}\|v_{\lambda,N_2}\|_{V^2_{\theta_2}},
\end{align*}
which gives the required bound after the summation with respect to the modulation $d\gtrsim\lambda$. On the other hand,
if $\mu\ll d\ll\lambda$, we see that
\begin{align*}
\mathcal I_0 & \lesssim \sum_{\substack{ \mathtt q_1,\mathtt q_2\in \mathcal Q_\mu \\ |\mathtt q_1+\mathtt q_2|\lesssim\mu }}\sum_{\substack{ \kappa_1,\kappa_2\in\mathcal C_\alpha \\ |\kappa_1+\kappa_2|\lesssim\alpha }}\int_{\mathbb R^{3+1}}C^\theta_d w_{\mu,N}|\nabla|^{-2}Q(P_{\mathtt q_1}R_{\kappa_1}C^{\theta_1}_{\ll d}u_{\lambda,N_1},P_{\mathtt q_2}R_{\kappa_2}C^{\theta_2}_{\ll d}v_{\lambda,N_2})\,dtdx \\
& \lesssim d^{-\frac12} \mu^{-2}\alpha \lambda^2 \|w_{\mu,N}\|_{V^2_\theta} \bigg(\sum_{\substack{ \mathtt q_1,\mathtt q_2\in \mathcal Q_\mu \\ |\mathtt q_1+\mathtt q_2|\lesssim\mu }}\sum_{\substack{ \kappa_1,\kappa_2\in\mathcal C_\alpha \\ |\kappa_1+\kappa_2|\lesssim\alpha }}\|P_{\mathtt q_1}R_{\kappa_1}C^{\theta_1}_{\ll d}u_{\lambda,N_1}\|_{L^2_tL^\infty_x}^2\|P_{\mathtt q_2}R_{\kappa_2}C^{\theta_2}_{\ll d}v_{\lambda,N_2}\|_{L^\infty_tL^2_x}^2\bigg)^\frac12 \\
& \lesssim d^{-\frac12} \mu^{-2}\alpha \lambda^2 \mu^\frac3q\alpha^\frac2q \|w_{\mu,N}\|_{V^2_\theta}\sup_{\kappa_1\in\mathcal C_\alpha}\|R_{\kappa_1}C^{\theta_1}_{\ll d}u_{\lambda,N_1}\|_{L^2_tL^q_x} \\
& \qquad\qquad \times \bigg(\sum_{\substack{ \mathtt q_1,\mathtt q_2\in \mathcal Q_\mu \\ |\mathtt q_1+\mathtt q_2|\lesssim\mu }}\sum_{\substack{ \kappa_1,\kappa_2\in\mathcal C_\alpha \\ |\kappa_1+\kappa_2|\lesssim\alpha }}\|P_{\mathtt q_2}R_{\kappa_2}C^{\theta_2}_{\ll d}v_{\lambda,N_2}\|_{L^\infty_tL^2_x}^2\bigg)^\frac12 \\
& \lesssim d^{-\frac12} \mu^{-2}\alpha \lambda^2 \mu^\frac3q\alpha^\frac2q(\alpha N_1)^{\frac12-2\eta}\lambda^{1-\frac3q} N_1^{\frac12+\eta} \|w_{\mu,N}\|_{V^2_\theta} \|u_{\lambda,N_1}\|_{V^2_{\theta_1}}\|v_{\lambda,N_2}\|_{V^2_{\theta_2}} \\
& \lesssim \lambda^\frac12 \left(\frac\mu\lambda\right)^{-1+\frac5q+\eta}\left(\frac\mu d\right)^\frac12 N_1^{1-\eta} \|w_{\mu,N}\|_{V^2_\theta} \|u_{\lambda,N_1}\|_{V^2_{\theta_1}}\|v_{\lambda,N_2}\|_{V^2_{\theta_2}},
\end{align*}
and the summation with respect to the modulation $\mu\ll d\ll\lambda$ gives the desired estimate. The estimates of $\mathcal I_1$ and $\mathcal I_2$ follow in a similar way. We omit the details. This completes the proof of the frequency-localised trilinear estimates \eqref{main-bi-loc}.
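Let us finally check that the bounds obtained in this section are of the form \eqref{main-bi-loc}. With $q=\frac4{1-\eta}$ we have $\frac5q=\frac{5(1-\eta)}4$, so the exponents of $\frac\mu\lambda$ produced above are
\begin{align*}
-1+\frac5q-2\eta=\frac14-\frac{13}4\eta,\qquad -\frac12+\frac5q+\eta=\frac34-\frac\eta4,\qquad -1+\frac5q+\eta=\frac14-\frac\eta4,
\end{align*}
in the low-modulation, $d\gtrsim\lambda$ and $\mu\ll d\ll\lambda$ regimes, respectively; since $\frac\mu\lambda\lesssim1$, all of them are admissible for some fixed $\frac18<\delta\le\frac14$ once $\eta$ is chosen sufficiently small.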
\section{Trilinear estimates: Proof of Theorem \ref{gwp-dirac}}
This section is devoted to the proof of Theorem \ref{gwp-dirac}. As in the previous section, we define the Duhamel integral
$$
\mathfrak J^\theta[F] = \int_0^t e^{-\theta i(t-t')\langle\nabla\rangle_m}F(t')\,dt'.
$$
Then $\mathfrak J^\theta[F]$ solves the equation
$$
(-i\partial_t+\theta\langle\nabla\rangle_m)\mathfrak J^\theta[F] = F,
$$
with vanishing data at $t=0$. From now on we put $m=1$ for simplicity. It remains to prove the following trilinear estimate
\begin{align}
\|\mathfrak J^\theta [V_b*(\varphi^\dagger\phi)\psi]\|_{F^{0,1}_\theta} \lesssim \|\varphi\|_{F^{0,1}_{\theta_1}} \|\phi\|_{F^{0,1}_{\theta_2}}\|\psi\|_{F^{0,1}_{\theta_3}}
\end{align}
which implies the global well-posedness and scattering in the $U^2$-space, provided that the initial data satisfy a suitable smallness condition. The use of duality in $U^2-V^2$ gives
\begin{align*}
\|\mathfrak J^{\theta_4}[V_b*(\varphi^\dagger\phi)\psi]\|_{F^{0,1}_{\theta_4}}^2 & \lesssim \sum_{\lambda_4,N_4\ge1}(N_4)^2\|P_{\lambda_4}H_{N_4}\mathfrak J^{\theta_4}[V_b*(\varphi^\dagger\phi)\psi]\|_{U^2_{\theta_4}}^2 \\
& \lesssim \sum_{\lambda_4,N_4\ge1} (N_4)^2 \sup_{\|P_{\lambda_4}H_{N_4}\psi\|_{V^2_{\theta_4}}\le1}\left| \int_{\mathbb R^{1+3}} V_b*(\varphi^\dagger\phi)(P_{\lambda_4}H_{N_4}\psi)^\dagger\psi\,dtdx \right|^2.
\end{align*}
Then dyadic decompositions and the H\"older inequality yield
\begin{align*}
&\|\mathfrak J^{\theta_4}[V_b*(\varphi^\dagger\phi)\psi]\|_{F^{0,1}_{\theta_4}}^2 \\
& \lesssim \sum_{\lambda_j\ge1,j=0,1,\cdots,4}\sum_{N_j\ge1,j=0,1,\cdots,4} \\
&\qquad \sup_{\|P_{\lambda_4}H_{N_4}\psi\|_{V^2_{\theta_4}}\le1}\left| \int_{\mathbb R^{1+3}} \langle\nabla\rangle^{-2}_bP_{\lambda_0}H_{N_0}(\varphi_{\lambda_1,N_1}^\dagger\phi_{\lambda_2,N_2})P_{\lambda_0}H_{N_0}(\psi_{\lambda_4,N_4}^\dagger\psi_{\lambda_3,N_3})\,dtdx \right|^2 \\
& \lesssim \sum_{\lambda_j\ge1,j=0,1,\cdots,4}\sum_{N_j\ge1,j=0,1,\cdots,4} \langle\lambda_0\rangle^{-2} \\
&\qquad \sup_{\|P_{\lambda_4}H_{N_4}\psi\|_{V^2_{\theta_4}}\le1} \|P_{\lambda_0}H_{N_0}(\varphi_{\lambda_1,N_1}^\dagger\phi_{\lambda_2,N_2})\|_{L^2_tL^2_x}^2\|P_{\lambda_0}H_{N_0}(\psi_{\lambda_4,N_4}^\dagger\psi_{\lambda_3,N_3})\|_{L^2_tL^2_x}^2.
\end{align*}
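In the last step we used, besides the Cauchy-Schwarz inequality, that the convolution with $V_b$ acts as a Fourier multiplier of Yukawa type, which is what the notation $\langle\nabla\rangle_b^{-2}$ encodes: schematically,
\begin{align*}
\widehat{V_b*f}(\xi)=\widehat{V_b}(\xi)\widehat f(\xi),\qquad |\widehat{V_b}(\xi)|\lesssim (b^2+|\xi|^2)^{-1},
\end{align*}
so that the localisation of the output to frequency $\lambda_0$ produces the factor $\langle\lambda_0\rangle^{-2}$.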
Thus our main trilinear estimates follow from the following frequency-localised $L^2$-bilinear estimates:
\begin{lem}
Let $0<\eta\ll1$ be a small positive number. For some $\frac18\le \delta\le\frac14$, we have
\begin{align}\label{main-tri-loc}
\begin{aligned}
&\|P_{\lambda_0} (\varphi_{\lambda_1,N_1}^\dagger\phi_{\lambda_2,N_2})\|_{L^2_tL^2_x} \\
&\quad \lesssim \lambda_0 \left(\frac{\min\{\lambda_0,\lambda_1,\lambda_2\}}{\max\{\lambda_0,\lambda_1,\lambda_2\}}\right)^\delta (\min\{N_1,N_2\})^{1-\eta}\|\varphi_{\lambda_1,N_1}\|_{U^2_{\theta_1}}\|\phi_{\lambda_2,N_2}\|_{U^2_{\theta_2}}.
\end{aligned}
\end{align}
\end{lem}
To prove \eqref{main-tri-loc} we need to deal with the frequency interactions:
\begin{align*}
\lambda_0\ll \lambda_1\approx\lambda_2, \ \lambda_1 \ll \lambda_0\approx\lambda_2, \ \lambda_2 \ll \lambda_0\approx\lambda_1.
\end{align*}
Then it suffices to consider the bilinear estimates
$$
\|P_\mu (\varphi_{\lambda,N_1}^\dagger\phi_{\lambda,N_2})\|_{L^2_tL^2_x}, \ \|P_{\lambda}(\varphi^\dagger_{\mu,N_1}\phi_{\lambda,N_2})\|_{L^2_tL^2_x}
$$
for $\mu\ll\lambda$. We first consider the first bilinear form. As in the proof of Theorem \ref{gwp-wave}, we apply the orthogonal decomposition into cubes of size $\mu$ and conic sectors of angle $\alpha$ with $\alpha=\frac\mu\lambda$, and we use, in order, the H\"older inequality, the Bernstein inequality, and then Lemma \ref{ang-con} and the Strichartz estimates \eqref{stri-ang} for $\varphi_{\lambda,N_1}$:
\begin{align*}
\|P_\mu (\varphi^\dagger_{\lambda,N_1}\phi_{\lambda,N_2})\|_{L^2_tL^2_x} & \lesssim \bigg(\sum_{\substack{\mathtt q_1,\mathtt q_2\in\mathcal Q_\mu \\ |\mathtt q_1-\mathtt q_2|\lesssim\mu}}\sum_{\substack{\kappa_1,\kappa_2\in\mathcal C_\alpha \\ |\kappa_1-\kappa_2|\lesssim\alpha}}\| P_\mu (P_{\mathtt q_1}R_{\kappa_1}\varphi^\dagger_{\lambda,N_1}P_{\mathtt q_2}R_{\kappa_2}\phi_{\lambda,N_2})\|_{L^2_tL^2_x}^2 \bigg)^\frac12 \\
& \lesssim \bigg(\sum_{\substack{\mathtt q_1,\mathtt q_2\in\mathcal Q_\mu \\ |\mathtt q_1-\mathtt q_2|\lesssim\mu}}\sum_{\substack{\kappa_1,\kappa_2\in\mathcal C_\alpha \\ |\kappa_1-\kappa_2|\lesssim\alpha}}\| P_{\mathtt q_1}R_{\kappa_1}\varphi_{\lambda,N_1}\|^2_{L^2_tL^\infty_x}\|P_{\mathtt q_2}R_{\kappa_2}\phi_{\lambda,N_2}\|_{L^\infty_tL^2_x}^2 \bigg)^\frac12 \\
& \lesssim \mu^\frac3q \left(\frac\mu\lambda\right)^\frac2q\sup_{\kappa_1}\|R_{\kappa_1}\varphi_{\lambda,N_1}\|_{L^2_tL^q_x} \\
& \qquad\qquad\times \bigg(\sum_{\substack{\mathtt q_1,\mathtt q_2\in\mathcal Q_\mu \\ |\mathtt q_1-\mathtt q_2|\lesssim\mu}}\sum_{\substack{\kappa_1,\kappa_2\in\mathcal C_\alpha \\ |\kappa_1-\kappa_2|\lesssim\alpha}}\|P_{\mathtt q_2}R_{\kappa_2}\phi_{\lambda,N_2}\|_{L^\infty_tL^2_x}^2 \bigg)^\frac12 \\
& \lesssim \mu^\frac3q \left(\frac\mu\lambda\right)^\frac2q \left(\frac\mu\lambda N_1\right)^{\frac12-2\eta}\lambda^{1-\frac3q}N_1^{\frac12+\eta}\|\varphi_{\lambda,N_1}\|_{U^2_{\theta_1}}\|\phi_{\lambda,N_2}\|_{U^2_{\theta_2}} \\
& \lesssim \mu \left(\frac\mu\lambda\right)^{\frac5q-\frac12-\eta}N_1^{1-\eta} \|\varphi_{\lambda,N_1}\|_{U^2_{\theta_1}}\|\phi_{\lambda,N_2}\|_{U^2_{\theta_2}}.
\end{align*}
If $N_2\ll N_1$, then we interchange the roles of $\varphi$ and $\phi$.
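Note that, with $q=\frac4{1-\eta}$, the exponent of $\frac\mu\lambda$ in the last line equals
\begin{align*}
\frac5q-\frac12-\eta=\frac34-\frac94\eta,
\end{align*}
which for small $\eta$ exceeds any $\delta\le\frac14$; since $\frac\mu\lambda\lesssim1$, this bound is of the form \eqref{main-tri-loc} with $\lambda_0\approx\mu$.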
For the second bilinear form, we are only concerned with the case $N_1\gg N_2$. We make use of the $L^2$-duality and then orthogonal decompositions into cubes of size $\mu$ and angular sectors of size $c>0$, where $c$ is a small constant, and we have
\begin{align*}
\|P_{\lambda}(\varphi_{\mu,N_1}^\dagger\phi_{\lambda,N_2})\|_{L^2_tL^2_x} & \lesssim \sup_{\|\psi\|_{L^2_tL^2_x}\lesssim1}\sum_{\substack{\mathtt q,\mathtt q_2\in\mathcal Q_\mu \\ |\mathtt q+\theta_2\mathtt q_2|\lesssim\mu}}\sum_{\substack{\kappa,\kappa_1,\kappa_2\in\mathcal C_c \\ |\kappa_1-\kappa_2|,|\kappa+\theta_2\kappa_2|\lesssim c}}\\
& \qquad\qquad \int_{\mathbb R^{1+3}}P_{\mathtt q}R_\kappa\psi\, R_{\kappa_1}\varphi^\dagger_{\mu,N_1}P_{\mathtt q_2}R_{\kappa_2}\phi_{\lambda,N_2}\,dtdx \\
& \lesssim \sup_{\|\psi\|_{L^2_tL^2_x}\lesssim1}\bigg(\sum_{\substack{\mathtt q,\mathtt q_2\in\mathcal Q_\mu \\ |\mathtt q+\theta_2\mathtt q_2|\lesssim\mu}}\sum_{\substack{\kappa,\kappa_1,\kappa_2\in\mathcal C_c \\ |\kappa_1-\kappa_2|,|\kappa+\theta_2\kappa_2|\lesssim c}} \\
&\qquad\qquad \|P_{\mathtt q}R_\kappa\psi \|_{L^2_tL^2_x}^2\|R_{\kappa_1}\varphi_{\mu,N_1}P_{\mathtt q_2}R_{\kappa_2}\phi_{\lambda,N_2}\|_{L^2_tL^2_x}^2\bigg)^\frac12 \\
& \lesssim \sup_{\|\psi\|_{L^2_tL^2_x}\lesssim1}\bigg(\sum_{\substack{\mathtt q,\mathtt q_2\in\mathcal Q_\mu \\ |\mathtt q+\theta_2\mathtt q_2|\lesssim\mu}}\sum_{\substack{\kappa,\kappa_1,\kappa_2\in\mathcal C_c \\ |\kappa_1-\kappa_2|,|\kappa+\theta_2\kappa_2|\lesssim c}} \\
& \qquad\qquad\qquad\qquad \|P_{\mathtt q}R_\kappa\psi \|_{L^2_tL^2_x}^2\|R_{\kappa_1}\varphi_{\mu,N_1}\|_{L^\infty_tL^2_x}^2\|P_{\mathtt q_2}R_{\kappa_2}\phi_{\lambda,N_2}\|_{L^2_tL^\infty_x}^2\bigg)^\frac12 \\
& \lesssim \mu^\frac3q c^\frac2q\sup_{\kappa_2}\|R_{\kappa_2}\phi_{\lambda,N_2}\|_{L^2_tL^q_x} \|\varphi_{\mu,N_1}\|_{V^2_{\theta_1}} \\
& \lesssim \mu^\frac3q c^\frac2q(cN_2)^{\frac12-2\eta}\lambda^{1-\frac3q}N_2^{\frac12+\eta}\|\varphi_{\mu,N_1}\|_{V^2_{\theta_1}}\|\phi_{\lambda,N_2}\|_{U^2_{\theta_2}} \\
& \lesssim \lambda \left(\frac\mu\lambda\right)^\frac3q N_2^{1-\eta} \|\varphi_{\mu,N_1}\|_{U^2_{\theta_1}}\|\phi_{\lambda,N_2}\|_{U^2_{\theta_2}},
\end{align*}
where we used the continuous embedding $U^2\subset V^2$. This completes the proof of the main bilinear estimates \eqref{main-tri-loc}.
\section*{Acknowledgements}
Most of all, the author would like to express his gratitude to Dr.~Kiyeon Lee and Professor Yonggeun Cho for helpful discussions and generous criticism. This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (NRF-2020R1A2C4002615).
\subsection{Notations and definitions.}
Let $x\in \mathbb{R}^n$. We will denote by $ B_{\rho}(x)$ the ball in $\mathbb{R}^n$ centered at $x$ of radius $\rho$. We will write $x = (x_1, \dots ,x_n) $ as $x= (x^\prime, x_n)$, where $x^\prime = (x_1, \dots, x_{n-1})$. Accordingly, $B^\prime_{ \rho}(x^\prime)$ will denote the ball of center $x^\prime$ and radius $\rho$ in $\mathbb{R}^{n-1}$.
We will often make use of the following definition of regularity of a domain.
\begin{definition}
Let $\Omega \subset \mathbb{R}^n$ be a bounded domain. We say that $\Gamma \subset \partial \Omega$ is of class $C^{k, \alpha}$ with constants $\rho_0$, $M_0 >0$, where $k$ is a nonnegative integer and $\alpha \in [0,1]$, if for any $P \in \Gamma$ there exists a rigid transformation of coordinates in which $P = 0$ and
\begin{equation} \label{regolarita}
\Omega \cap B_{\rho_0}(0) = \{ (x^\prime, x_n) \in B_{\rho_0}(0) \, \, \mathrm{s.t. } \, \, x_n > \varphi (x^\prime)\},
\end{equation}
where $\varphi$ is a real valued function of class $C^{k, \alpha}(B^\prime_{\rho_0}(0))$ such that \begin{displaymath} \begin{split} \varphi(0)&=0, \\ \nabla\varphi(0)&=0, \text{ if } k \ge 1 \\ \| \varphi\|_{C^{k, \alpha}(B^\prime_{\rho_0}(0))} &\le M_0 \rho_0.
\end{split}
\end{displaymath}
\end{definition}
When $k=0$, $\alpha=1$ we will say that $\Gamma$ is {\it of Lipschitz class with constants $\rho_0$, $M_0$}.
\begin{remark} We normalize all norms in such a way that they are dimensionally equivalent to their argument and coincide with the usual norms when $\rho_0=1$. In this setup, the norm taken in the previous definition is intended as follows:
\begin{displaymath}
\| \varphi\|_{C^{k, \alpha}(B^\prime_{\rho_0}(0))} = \sum_{i=0}^{k} \rho_0^i \| D^i \varphi\|_{L^{\infty}(B^\prime_{\rho_0}(0))} + \rho_0^{k+\alpha} | D^k \varphi |_{\alpha,B^\prime_{\rho_0}(0) },
\end{displaymath}
where $| \cdot |$ represents the $\alpha$-H\"older seminorm
\begin{displaymath}
| D^k \varphi |_{\alpha,B^\prime_{\rho_0}(0) } = \sup_{x^\prime, y^\prime \in B^\prime_{\rho_0}(0), x^\prime \neq y^\prime } \frac{| D^k \varphi(x^\prime)-D^k \varphi(y^\prime)| }{|x^\prime -y^\prime|^\alpha},
\end{displaymath}
and $D^k \varphi=\{ D^\beta\varphi\}_{|\beta|= k}$ is the set of derivatives of order $k$.
Similarly we set
\begin{displaymath}
\normadue{u}{\Omega}^2 = \frac{1}{\rho_0^n} \int_\Omega u^2 \,
\end{displaymath}
\begin{displaymath}
\norma{u}{1}{\Omega}^2 = \frac{1}{\rho_0^n} \Big( \int_\Omega u^2 +\rho_0^2 \int_\Omega |\nabla u|^2 \Big).
\end{displaymath}
The same goes for the trace norms $\norma{u}{\frac{1}{2}}{\partial \Omega}$ and the dual norms $\norma{u}{-1}{\Omega}$, $\norma{u}{-\frac{1}{2}}{\partial \Omega}$ and so forth.
\end{remark}
We will sometimes use the following notation, for $h>0$:
\begin{displaymath}
\Omega_h = \{ x \in \Omega \, \, \mathrm{such \, \, that } \, \, d(x, \partial \Omega) > h \}.
\end{displaymath}
\subsection{A priori information.}
Here we present all the a priori hypotheses we will make throughout the paper. \\
(1) {\it A priori information on the domain.} \\
We assume $\Omega \subset \mathbb{R}^n$ to be a bounded domain, such that
\begin{equation} \label{apriori0}
\partial \Omega \text{ is connected, }
\end{equation}
and it has a sufficiently smooth boundary, i.e.,
\begin{equation} \label{apriori1}
\partial \Omega \text{ is of class } C^{2, \alpha} \text{ of constants } \rho_0, \, \, M_0, \end{equation} where $\alpha \in (0,1]$ is a real number, $M_0 > 0$, and $\rho_0 >0 $ is what we shall treat as our dimensional parameter. In what follows $\nu$ is the outer normal vector field to $\partial \Omega$. We also require that
\begin{equation} \label{apriori2} |\Omega| \le M_1 \rho_0^n, \end{equation} where $M_1 > 0$. \\
In our setup, we choose a special open and connected portion $\Gamma \subset \partial \Omega$ as being the accessible part of the boundary, where, ideally, all measurements are taken. We assume that there exists a point $P_0 \in \Gamma$ such that
\begin{equation} \label{apriori2G}
\partial \Omega \cap B_{\rho_0}(P_0) \subset \Gamma.
\end{equation}
(2) { \it A priori information about the obstacles.} \\
We consider $D \subset \Omega$, which represents the obstacle we want to detect from the boundary measurements, on which we require that
\begin{equation} \label{apriori2bis}
\Omega \setminus \overline{D} \text{ is connected, }
\end{equation}
\begin{equation} \label{apriori2ter}
\partial D \text{ is connected. }
\end{equation}
We require the same regularity on $D$ as we did for $\Omega$, that is,
\begin{equation} \label{apriori3} \partial D \text{ is of class } C^{2, \alpha} \text{ with constants } \rho_0 , \, M_0. \end{equation}
In addition, we suppose that the obstacle is ``well contained'' in $\Omega$, meaning
\begin{remark}
We point out that, in principle, assumptions (\ref{apriori1}), (\ref{apriori3}) and (\ref{apriori4}) could hold for different values of $\rho_0$. If that were the case, it would be sufficient to redefine $\rho_0$ as the minimum among the three constants; then (\ref{apriori1}), (\ref{apriori3}) and (\ref{apriori4}) would still be true with the same $\rho_0$, while we would need to assume a different value of the constant $M_1$ in (\ref{apriori2}) accordingly. As a simple example, if $\Omega = B_1(0)$ and $D=B_{1/2}(0)$, then (\ref{apriori1}) is true for every $\rho_0 <1$, while (\ref{apriori3}) and (\ref{apriori4}) are true for all $\rho_0 <1/2$, so $\rho_0$ would be assumed to be less than $1/2$.
\end{remark}
(3) { \it A priori information about the boundary data.} \\
For the Dirichlet-type data $g$ we assign on the accessible portion of the boundary $\Gamma$, we assume that
\begin{equation} \begin{split} \label{apriori5}
g \in \accan{\frac{3}{2}}{\partial \Omega}, \, \; \; g \not \equiv 0, \\ \mathrm{supp} \,g \subset \subset \Gamma.
\end{split} \end{equation}
As it is required in order to ensure the existence of a solution, we also require
\begin{equation} \label{aprioriexist} \int_{\partial \Omega} g \, \mathrm{d} s =0.
\end{equation}
We also ask that, for a given constant $F>0$, we have
\begin{equation} \label{apriori7}
\frac{\norma{g}{\frac{1}{2}}{\Gamma} }{\normadue{g}{\Gamma} } \le F.
\end{equation}
Under the above conditions on $g$, one can prove that there exists a constant $c>0$, only depending on $M_0$, such that the following equivalence relation holds: \begin{equation} \label{equivalence}
\norma{g}{\frac{1}{2}}{\Gamma} \le \norma{g}{\frac{1}{2}}{\partial \Omega} \le c \norma{g}{\frac{1}{2}}{\Gamma}.
\end{equation}
\subsection{The main result.}
Let $\Omega \subset \mathbb{R}^n$ and $\Gamma \subset \partial \Omega$ satisfy (\ref{apriori1})-(\ref{apriori2G}). Let $D_i \subset \Omega$, for $i=1,2$, satisfy (\ref{apriori2bis})-(\ref{apriori4}), and set $\Omega_i= \Omega \setminus \overline{D_i}$. We may state the main result as follows.
\begin{theorem}[Stability] \label{principale} Let $g \in \accan{\frac{3}{2}}{\Gamma}$ be the assigned boundary data, satisfying (\ref{apriori5})-(\ref{apriori7}). Let $u_i \in \accauno{\Omega_i}$ solve (\ref{NSE}) for $D=D_i$. If, for $\epsilon > 0 $, we have
\begin{equation} \label{HpPiccolo}
\rho_0 \norma{\sigma(u_1, p_1)\cdot \nu -\sigma(u_2,p_2) \cdot \nu }{-\frac{1}{2}}{\Gamma} \le \epsilon, \end{equation}
then
\begin{equation}\label{stimstab}
d_{\mathcal{H}} (\partial D_1, \partial D_2) \le \rho_0 \omega \Bigg( \frac{\epsilon}{\norma{g}{\frac{1}{2}}{\Gamma}}\Bigg),
\end{equation}
where $\omega : (0, +\infty) \to \mathbb{R}^+$ is an increasing function satisfying, for all $0<t<\frac{1}{e}$:
\begin{equation}
\omega(t) \le C (\log | \log t |)^{-\beta }.
\end{equation}
The constants $C>0$ and $0<\beta<1$ only depend on $n$, $M_0$, $M_1$ and $F$.
\end{theorem}
\subsection{The Helmholtz-Weyl decomposition.}
We find it convenient to recall a classical result which will come in handy later on. A basic tool in the study of the Stokes equations (\ref{NSE}) is the Helmholtz-Weyl decomposition of the space $\elledue{\Omega}$ into two
orthogonal subspaces:
\begin{equation}\label{HW} \elledue{\Omega} = H \oplus H^{\perp}, \end{equation}
where
\[ H =\{u \in \elledue{\Omega} \hspace{0.25em} : \dive u = 0, \hspace{0.25em}
u|_{\partial \Omega} = 0\} \]
and
\[ H^{\perp} =\{u \in \elledue{\Omega} \hspace{0.25em} : \exists \hspace{0.25em} p \in \accan{1}{\Omega} \,: \; u = \nabla p \hspace{0.25em} \}. \]
This decomposition is used, for example, to prove the existence of a solution of the Stokes system (among many others, see \cite{LadyK}).
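For instance, the projection onto $H^{\perp}$ may be computed by solving a Neumann problem: given $u \in \elledue{\Omega}$, let $p \in \accan{1}{\Omega}$ be a weak solution of
\begin{displaymath}
\triangle p = \dive u \quad \mathrm{in} \quad \Omega, \qquad \frac{\partial p}{\partial \nu} = u \cdot \nu \quad \mathrm{on} \quad \partial \Omega;
\end{displaymath}
then $u - \nabla p$ is divergence free with vanishing boundary trace (understood, for merely square-integrable fields, in the sense of the normal trace), so that $u = (u - \nabla p) + \nabla p$ realizes the decomposition (\ref{HW}).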
From this, and using a quite standard ``energy estimate'' reasoning, one can prove the following (see \cite{LadyK} or \cite{Temam}, among many others):
\begin{theorem}[Regularity for the direct Stokes problem.] \label{TeoRegGen}
Let $m \ge -1$ be an integer and let $E \subset \mathbb{R}^n$ be a bounded domain of class $C^r$, with $r= \max \{ m+2, 2\}$. Let us consider the following problem:
\begin{equation}
\label{NSEdiretto} \left\{ \begin{array}{rl}
\dive \sigma(u,p) & = f \hspace{2em} \mathrm{\tmop{in}} \hspace{1em}
E,\\
\dive u & = 0 \hspace{2em} \mathrm{\tmop{in}} \hspace{1em} E, \\
u & = g \hspace{2em} \mathrm{\tmop{on}} \hspace{1em} \partial E,\\
\end{array} \right.
\end{equation}
where $f \in \mathbf{H}^{m} (E)$ and $g \in \accan{m+\frac{3}{2}}{\partial E}$. Then there exists a weak solution $(u,p) \in \mathbf{H}^{m+2}(E) \times H^{m+1} (E)$ and a constant $c_0$, only depending on the regularity constants of $E$, such that
\begin{equation} \label{stimanormadiretto}
\| u \|_{\mathbf{H}^{m+2} (E)} + \rho_0 \| p-p_E \|_{H^{m+1}(E)} \le c_0 \big(\rho_0 \| f \|_{\mathbf{H}^{m} (E)} + \| g \|_{\mathbf{H}^{m+\frac{3}{2}} (\partial E)} \big),
\end{equation}
where $p_E$ denotes the average of $p$ over $E$, that is, $p_E = \frac{1}{|E|} \int_E p$.
\end{theorem}
Finally, we would like to recall the following version of the Poincar\'e inequality, dealing with functions that vanish on an open portion of the boundary:
\begin{theorem}[Poincar\'e inequality.]
Let $E\subset \mathbb{R}^n$ be a bounded domain with boundary of Lipschitz class with constants $\rho_0$, $M_0$ and satisfying (\ref{apriori2}). Then for every $ u \in \accan{1}{E}$ such that
\begin{displaymath}
u = 0 \, \, \text{on} \, \, \partial E \cap B_{\rho_0}(P),
\end{displaymath}
where $P$ is some point in $\partial E$, we have
\begin{equation} \label{pancarre}
\normadue{u}{E} \le C \rho_0 \normadue{\nabla u}{E},
\end{equation}
where $C$ is a positive constant only depending on $M_0$ and $M_1$.
\end{theorem}
\section{Proof of Theorem \ref{stabilitycauchy}. }
As already premised, in order to prove Theorem \ref{stabilitycauchy}, we will need to perform an extension argument on the solution to (\ref{NSE}) we wish to estimate. This has been done for solutions to scalar elliptic equations with sufficiently smooth coefficients (\cite{Isa}). Here, however, we are dealing with a system: extending $u$ implies finding a suitable extension for the pressure $p$ as well; moreover, both extensions should preserve some regularity they inherit from the original functions.
Following the notations given for Theorem \ref{stabilitycauchy} we define $$Q(P_0) = B^\prime_{\rho_{00}} (0) \times \Big[-\frac{M_0\rho_0^2}{\sqrt{1+M_0^2}}, \frac{M_0\rho_0^2}{\sqrt{1+M_0^2}}\Big].$$
We have:
\begin{equation}\label{gagrafico}
\Gamma_0 = \partial E \cap Q(P_0).
\end{equation}
We then call $E^- = Q(P_0) \setminus E$ and $\til{E} = E \cup E^- \cup \Gamma_0$.
\begin{lemma}[Extension] \label{teoextensionNSE}
Suppose the hypotheses of Theorem \ref{stabilitycauchy} hold. Consider the domains $E^-$, $\til{E}$ as constructed above. Take, furthermore, $g \in \accan{\frac{5}{2}}{\partial E}$. Let $(u,p)$ be the solution to the following problem:
\begin{equation}
\label{NseHomDirExt} \left\{ \begin{array}{rl}
\dive \sigma(u,p) & = 0 \hspace{2em} \mathrm{\tmop{in}} \hspace{1em}
E,\\
\dive u & = 0 \hspace{2em} \mathrm{\tmop{in}} \hspace{1em} E,\\
u & = g \hspace{2em} \mathrm{\tmop{on}} \hspace{1em} \Gamma,\\
\sigma (u, p) \cdot \nu & = \psi \hspace{2em} \mathrm{\tmop{on}}
\hspace{1em} \Gamma,\\ \end{array} \right.
\end{equation}
Then there exist functions $\tilde{u} \in \accan{1}{\til{E}}$, $\til{p} \in L^2(\til{E})$ and a functional $\Phi \in \accan{-1}{\til{E}}$ such that $\tilde{u} = u$, $\tilde{p} = p$ in $E$ and $(\til{u}, \til{p})$ solve the following:
\begin{equation} \begin{split} \label{sistilde}
\triangle \til{u} - \nabla \til{p} &= \Phi \, \, \text{ in } \, \, \til{E}, \\ \dive \til{u}&=0 \, \, \text{ in } \, \, \til{E}.
\end{split}
\end{equation}
If
\[ \norma{g}{\frac{1}{2}}{\Gamma}+ \rho_0\norma{\psi}{-\frac{1}{2} }{\Gamma} = \eta , \]
then we have
\begin{equation} \label{stimaPhi}
\norma{\Phi}{-1}{\til{E}} \le C\frac{\eta}{\rho_0}.
\end{equation}
where $C>0$ only depends on $\alpha$ and $M_0$.
\end{lemma}
\begin{proof}
From the assumptions we made on the boundary data and the domain, it follows that $(u, p) \in \accan{3}{E} \times L^2(E) $.
We can find (see \cite{MITREA} or \cite{BedFix}) a function $u^- \in \accan{3}{E^-}$ such that
\begin{equation} \label{propumeno} \begin{split} \dive u^-=0 \quad \mathrm{in}\quad E^-,\qquad u^-=g \quad\mathrm{on} \quad \Gamma,\\ \norma{u^-}{3}{E^-} \le C \norma{g}{\frac{1}{2}}{\Gamma},
\end{split} \end{equation}
with $C$ only depending on $|E|$.
We now call
\begin{displaymath} F^-= \triangle u^-,\end{displaymath}
by our assumptions we have $ F^- \in \accan{1}{E^-}$.
Let $p^- \in H^1(E^-)$ be the weak solution to the following Dirichlet problem:\begin{equation}\label{pmeno} \left\{ \begin{array}{rl}
\triangle p^- - \dive F^- &=0 \hspace{2em} \mathrm{\tmop{in}} \hspace{1em} E^-,\\
p^- & = 0 \hspace{1em}
\hspace{0.50em} \mathrm{\tmop{on}} \hspace{1em}
\partial E^-.\\
\end{array} \right.
\end{equation}
We now define \begin{equation}\label{effestesa} X^-= F^- -\nabla p^-. \end{equation} This field is divergence free by construction,
and its norm is controlled by \begin{equation} \label{stimaX} \|X^- \|_{\elledue{E^-}} \le C \norma{g}{\frac{1}{2}}{\Gamma} \end{equation}
We thus extend $(u,p)$ as follows:
\begin{displaymath} \til{u}= \left\{ \begin{array}{rl} & u \quad \text{ in } \; \; E, \\ & u^- \quad \text{ in } \; \; E^-,\end{array} \right.\end{displaymath} \begin{displaymath}\til{p}= \left\{ \begin{array}{rl} & p \quad \text{ in } \; E, \\ & p^- \quad \text{ in } \; E^-. \end{array} \right. \end{displaymath}
We now investigate the properties of the extension $(\til{u},\til{p})$ thus built. Take any $v \in \accano{1}{\til{E}}$; we have
\begin{equation}
\label{NSEEXT} \begin{split} &\int_{\til{E}} (\nabla \til{u} +(\nabla \til{u})^T - \til{p} \ide ) \cdot \nabla v = \\ =& \int_{E} (\nabla u +(\nabla u )^T - p \ide ) \cdot \nabla v + \int_{E^-} (\nabla u^- +(\nabla u^-)^T - p^- \ide ) \cdot \nabla v. \end{split}\end{equation}
About the first term, using (\ref{NSE}) and the divergence theorem we obtain
\begin{equation}
\label{phiuno} \int_{E} (\nabla u +(\nabla u)^T - p \ide ) \cdot \nabla v = \int_{\Gamma} \psi \cdot v. \end{equation}
Define $ \Phi_1(v)= \int_{\Gamma} \psi \cdot v $ for all $v \in \accano{1}{\til{E}}$.
Using the decomposition made in (\ref{effestesa}) on the second term, we have
\begin{equation}\label{phiduetre} \begin{split} & \int_{E^-} (\nabla u^- +(\nabla u^- )^T - p^- \ide ) \cdot \nabla v = \\ =& \int_{\Gamma}(\nabla u^- +(\nabla u^- )^T - p^- \ide ) \cdot \nu \, v -\int_{E^-} \dive \big( \nabla u^- +(\nabla u^- )^T - p^-\ide \big) \cdot v= \\=& \int_{\Gamma}(\nabla u^- +(\nabla u^- )^T ) \cdot \nu \, v -\int_{E^-} (\triangle u^- -\nabla p^- ) \cdot v= \\ =&\int_{\Gamma}(\nabla u^- +(\nabla u^- )^T ) \cdot \nu \, v -\int_{E^-} X^- \cdot v = \Phi_2(v)+\Phi_3(v), \end{split}\end{equation}
where we define for all $v \in \accano{1}{\til{E}}$ the functionals
\begin{displaymath}
\begin{split}
\Phi_2(v)&=\int_{\Gamma}(\nabla u^- +(\nabla u^- )^T ) \cdot \nu \, v, \\
\Phi_3(v)&=-\int_{E^-} X^- \cdot v
\end{split}
\end{displaymath}
We can estimate each of the linear functionals $\Phi_1$, $\Phi_2$ and $\Phi_3$ easily, for we have (by (\ref{phiuno}) and the trace theorem):
\begin{equation} \label{stimaphi1} \big| \Phi_1(v) \big| \le \norma{\psi}{-\frac{1}{2}}{\Gamma} \norma{v}{\frac{1}{2}}{\Gamma} \le C \rho_0\norma{\psi}{-\frac{1}{2}}{\Gamma} \norma{v}{1}{E^-}, \end{equation}
moreover (using (\ref{phiduetre}) and (\ref{propumeno}) )
\begin{equation} \label{stimaphi2} \big| \Phi_2(v) \big| \le {\| \nabla u \|_{{\bf L}^2(\Gamma)}} {\|v \|_{{\bf L}^2(\Gamma)}} \le C \norma{g}{\frac{1}{2}}{\Gamma} \norma{v}{1}{E^-}, \end{equation}
and, at last, by (\ref{stimaX}),
\begin{equation} \label{stimaphi3} \big| \Phi_3(v) \big| \le \|X^- \|_{\elledue{E^-}} \|v \|_{\elledue{E^-}} \le C \norma{g}{\frac{1}{2}}{\Gamma} \norma{v}{1}{E^-}. \end{equation}
Then, defining $\Phi(v)=\Phi_1(v) + \Phi_2(v) + \Phi_3(v)$ for all $v \in \accano{1}{\til{E}}$, putting together (\ref{phiuno}), (\ref{phiduetre}), (\ref{stimaphi1}), (\ref{stimaphi2}) and (\ref{stimaphi3}), we have (\ref{stimaPhi}).
\end{proof}
\begin{proof}[Proof of Theorem \ref{stabilitycauchy}. ]
Consider the domain $\til{E}$ built at the beginning of this section, and take $\til{u}$ the extension of $u$ built according to Lemma \ref{teoextensionNSE}. By linearity, we may write $\til{u}= u_0+w$ where $(w,q)$ solves
\begin{equation} \label{NSEPARTIC}
\dive \sigma (w, q) = \Phi \hspace{2em} \mathrm{\tmop{in}}
\hspace{1em} \til{E}, \end{equation}
and $w \in \accano{1}{\til{E}}$, whereas $(u_0, p_0)$ solves
\begin{equation} \label{NSEHOM}
\left\{ \begin{array}{rl}
\dive \sigma (u_0, p_0) &= 0 \hspace{2em} \mathrm{\tmop{in}} \hspace{1em} \til{E}, \\
u_0 & = 0 \hspace{2em} \mathrm{\tmop{on}} \hspace{1em} \Gamma,\\
\sigma (u_0, p_0) \cdot \nu & = \psi \hspace{2em} \mathrm{\tmop{on}}
\hspace{1em} \Gamma.
\end{array} \right.
\end{equation}
Using well known results about interior regularity of solutions to strongly elliptic equations, we have
\begin{equation}
\| u_0 \|_{{\bf L}^\infty( B_{\frac{t}{2}} (x))} \le C t^{-\frac{n}{2}} \normadue{u_0}{B_{t}(x)}.
\end{equation}
It is then sufficient to estimate $\normadue{u}{B(x)}$ for a ``large enough'' ball near the boundary.
Since (see the proof of Proposition \ref{teoPOS}) $\triangle^2 u_0=0$, we may apply Theorem \ref{teotresfere} to $u_0$. Calling $r_1= \frac{\rho_{00}}{8}$, $r_2= \frac{3 \rho_{00}}{8}$ and $r_3= \rho_{00}$, we have (understanding that all balls are centered at $P^*$)
\begin{equation} \label{3sfereu0}
\normadue{u_0}{B_{r_2}} \le C \normadue{u_0}{B_{r_1}}^{\tau} \normadue{u_0}{B_{r_3}}^{1-\tau}.
\end{equation}
Let us call $\eta=\rho_0\norma{\psi}{-\frac{1}{2}}{\Gamma}$.
By the triangle inequality, (\ref{propumeno}) and (\ref{stimau}) we have that
\begin{equation} \label{trin1}
\normadue{u_0}{B_{r}} \le \normadue{\til{u}}{B_{r}}+\normadue{w}{B_{r}} \le \normadue{\til{u}}{B_{r}} + C \eta,
\end{equation}
for $r=r_1,r_3$; furthermore, we have
\begin{equation} \label{trin2}
\normadue{\til{u}}{B_{r_2}} \le \normadue{u_0}{B_{r_2}}+\normadue{w}{B_{r_2}} \le \normadue{u_0}{B_{r_2}} + C \eta.
\end{equation}
Putting together (\ref{3sfereu0}), (\ref{trin1}), (\ref{trin2}), and recalling (\ref{stimau}) and (\ref{stimanormadiretto}) we get
\begin{equation} \begin{split} \label{3sfere2}
& \normadue{u}{B_{r_2}} \le \normadue{\til{u}}{B_{r_2} \cap E} \le \\ \le & C \eta + C (\normadue{\til{u}}{B_{r_1}}+ C \eta)^{\tau} (\normadue{\til{u}}{B_{r_3} \cap E} + C \eta )^{1-\tau} \le \\ \le & C \big( \eta + \eta^\tau (\eta + \normadue{u}{E} )^{1-\tau} \big) \le C \eta^\tau \normadue{u}{E}^{1-\tau}, \end{split} \end{equation}
where in the last step we may assume $\eta \le \normadue{u}{E}$, since otherwise the claimed estimate is trivial.
\end{proof}
\section{Introduction.}
In this paper we deal with an inverse problem associated to the Stokes system. We consider a bounded domain $\Omega \subset \mathbb{R}^n$, with $n=2,3$, whose boundary $\partial \Omega$ is sufficiently smooth. We want to detect an object $D$ immersed in this container by collecting measurements of the velocity of the fluid motion and of the boundary forces, but we only have access to a portion $\Gamma$ of the boundary $\partial \Omega$.
The fluid obeys the Stokes system in $\omegad$:
\begin{equation}
\label{NSE} \left\{ \begin{array}{rl}
\dive\sigma(u,p) &= 0 \hspace{2em} \mathrm{\tmop{in}} \hspace{1em}
\omegad,\\
\dive u & = 0 \hspace{2em} \mathrm{\tmop{in}} \hspace{1em} \omegad,\\
u & = g \hspace{2em} \mathrm{\tmop{on}} \hspace{1em} \Gamma,\\
u & = 0 \hspace{2em} \mathrm{\tmop{on}} \hspace{1em} \partial D.
\end{array} \right.
\end{equation}
Here, \begin{displaymath} \sigma (u, p) = \mu ( \nabla u + \nabla u ^T ) - p \ide \end{displaymath} is the \textit{stress tensor}, where $\ide$ denotes the $n \times n$ identity matrix, and $\mu$ is the viscosity function. The last request in (\ref{NSE}) is the so called ``no-slip condition''. We will always assume constant viscosity, $\mu(x)=1$ for all $x \in \omegad$. We observe that if $(u,p) \in \accauno{\omegad} \times L^2(\omegad)$ solves (\ref{NSE}), then it also satisfies \begin{displaymath}
\triangle u -\nabla p=0.
\end{displaymath}
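Indeed, since $\mu \equiv 1$ and $\dive u = 0$, componentwise we have
\begin{displaymath}
\dive \sigma(u,p) = \dive \big( \nabla u + \nabla u^T \big) - \nabla p = \triangle u + \nabla (\dive u) - \nabla p = \triangle u - \nabla p,
\end{displaymath}
where we used that $\dive (\nabla u^T) = \nabla (\dive u)$.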
Call $\nu$ the outer normal vector field to $\partial \Omega$.
The ideal experiment we perform is to assign $g \in \accan{\frac{3}{2}}{\Gamma}$ and measure on $\Gamma$ the normal component of the stress tensor it induces, \begin{equation}\label{psi}\sigma (u, p) \cdot \nu = \psi, \end{equation}
and to try to recover $D$ from a single pair of Cauchy data $(g, \psi)$ known on the accessible part of the boundary $\Gamma$. Under the hypothesis that $\partial \Omega$ is of Lipschitz class, uniqueness for this inverse problem has been shown to hold (see \cite{ConcOrtega2}) by means of unique continuation techniques. For a different inverse problem, regarding the uniqueness of the viscosity function $\mu$, an analogous uniqueness result has been shown to hold under some regularity assumptions (see \cite{HXW}). \\
The stability issue, however, remains largely an open question. There are some partial ``directional stability'' type results, given in \cite{ConcOrtega} and \cite{ConcOrtega2}. This type of result, however, does not guarantee an a priori uniform stability estimate for the distance between two domains that yield boundary measurements close to each other. In the general case, even if we add some a priori information on the regularity of the unknown domain, we can only obtain a weak rate of stability. This does not come unexpected since, even for a much simpler system of the same kind, the dependence of $D$ on the Cauchy data is at most of logarithmic type. See, for example, \cite{ABRV} for a similar problem on electric conductivity, or \cite{MRC}, \cite{MR} for an inverse problem regarding elasticity.
The purpose of this paper is thus to prove a log-log type stability estimate for the Hausdorff distance between the boundaries of the inclusions, assuming they have $C^{2,\alpha}$ regularity. Such estimates have been established for various kinds of elliptic equations, for example in \cite{ABRV}, \cite{AlRon} for the electric conductivity equation, and in \cite{MRC} and \cite{MR} for the elasticity system and the detection of cavities or rigid inclusions. For the latter case, the optimal rate of convergence is known to be of log type, as several counterexamples (see \cite{Aless1} and \cite{DiCriRo}) show.
The main tool used to prove stability here and in the aforementioned papers (\cite{ABRV}, \cite{MRC}, \cite{MR}) is essentially a quantitative estimate of continuation from boundary data, in the interior and at the boundary, in the form of a three spheres inequality, see Theorem \ref{teotresfere}, and its main consequences. However, while in \cite{ABRV} the estimates are of log type for a scalar equation, here, as in \cite{MRC} and \cite{MR}, only an estimate of log-log type could be obtained for a system of equations. The reason for this is that, at the present time, no doubling inequalities at the boundary are available for systems, while they are known to hold in the scalar case. \\
The basic steps of the present paper closely follow \cite{MRC}, \cite{MR}, and are the following: \begin{enumerate} \item {\it An estimate of propagation of smallness from the interior}. The proof of this estimate relies essentially on the three spheres inequality for solutions of the bilaplacian system. Since solutions of both the Lam\'e system and the Stokes system can be represented as solutions of such equations (at least locally and in the weak sense, see \cite{GAES} for a derivation of this for the elasticity system), we expect the same type of result to hold in both cases.
\item {\it A stability estimate of continuation from the Cauchy data}. This result also relies heavily on the three spheres inequality, but in order to obtain a useful estimate of continuation near the boundary, we need to extend a given solution of the Stokes equations a little outside the domain, so that the extended solution solves a similar system of equations. Once the solution has been properly extended, we may apply the stability estimates from the interior to the extended solution and treat them as estimates near the boundary for the original solution. \item{\it An extension lemma for solutions to the Stokes equations}. This step requires finding appropriate conditions on the velocity field $u$ as well as on the pressure $p$ at the same time, in order for the boundary conditions to make sense. In Section 5 we build such an extension. We point out that, if we were to study the inverse problem in which we assign the normal component $\psi$ of the stress tensor and measure the velocity $g$ induced on the accessible part of the boundary, the construction we mentioned would fail to work.
\end{enumerate}
The paper is structured as follows. In Section 2, we state the a priori hypotheses we will need throughout the paper, and state the main result, Theorem \ref{principale}. In Section 3 we state the estimates of continuation from the interior we need, Propositions \ref{teoPOS} and \ref{teoPOSC}, and Propositions \ref{teostabest} and \ref{teostabestimpr}, which deal, in turn, with the stability estimates of continuation from Cauchy data and with a better version of the latter under some additional regularity hypotheses, and we use them for the proof of Theorem \ref{principale}.
In Section 4, we prove Propositions \ref{teoPOS} and \ref{teoPOSC} using the three spheres inequality, Theorem \ref{teotresfere}. Section 5 is devoted to the proof of Proposition \ref{teostabest}, which will use an extension argument, Lemma \ref{teoextensionNSE}, which will in turn be proven in Section 6.
\section{Proof of Theorem \ref{principale}.}
The proof of Theorem \ref{principale} relies on the following sequence of propositions.
\begin{proposition}[Lipschitz propagation of smallness]
\label{teoPOS}
Let $E$ be a bounded Lipschitz domain with constants $\rho_0$, $M_0$, satisfying (\ref{apriori2}).
Let $u$ be a solution to the following problem:
\begin{equation}
\label{NSEPOS} \left\{ \begin{array}{rl}
\dive\sigma(u,p) &= 0 \hspace{2em} \mathrm{\tmop{in}} \hspace{1em}
E,\\
\dive u & = 0 \hspace{2em} \mathrm{\tmop{in}} \hspace{1em} E,\\
u & = g \hspace{2em} \mathrm{\tmop{on}} \hspace{1em} \partial E,\\
\end{array} \right.
\end{equation}
where $g$ satisfies
\begin{equation} \label{apriori5POS}
g \in \accan{\frac{3}{2}}{\partial E}, \, \; \; g \not \equiv 0,
\end{equation}
\begin{equation} \label{aprioriexistPOS} \int_{\partial E} g \, \mathrm{d} s =0,
\end{equation}
\begin{equation} \label{apriori7POS}
\frac{\norma{g}{\frac{1}{2}}{\partial E} }{\normadue{g}{\partial E} } \le F,
\end{equation}
for a given constant $F>0$. Also suppose that there exists a point $P \in \partial E$ such that
\begin{equation}
g = 0 \;\; \text{on} \; \; \partial E \cap B_{\rho_0}(P).
\end{equation} Then there exists a constant $s>1$, depending only on $n$ and $M_0$ such that, for every $\rho >0$ and for every $\bar{x} \in E_{s\rho}$, we have
\begin{equation} \label{POS} \int_{B_{\rho}(\bar{x})} \! |\nabla u|^2 dx \ge C_\rho \int_{E} \! |\nabla u|^2 dx . \end{equation}
Here $C_\rho>0$ is a constant depending only on $n$, $M_0$, $M_1$, $F$, $\rho_0$ and $\rho$. The dependence of $C_\rho$ on $\rho$ and $\rho_0$ can be traced explicitly as
\begin{equation} \label{crho} C_\rho = \frac{C}{\exp \Big[ A \big( \frac{\rho_0}{\rho}\big) ^B \Big] } \end{equation} where $A$, $B$, $C>0$ only depend on $n$, $M_0$, $M_1$ and $F$.
\end{proposition}
\begin{proposition}[Lipschitz propagation of smallness up to boundary data] \label{teoPOSC}
Under the hypotheses of Theorem \ref{principale}, for all $\rho>0$, if $\bar{x} \in (\Omega_i)_{(s+1)\rho}$, we have for $i=1,2$:
\begin{equation} \label{POScauchy}
\frac{1}{\rho_0^{n-2}} \int_{B_{\rho}(\bar{x})} \! |\nabla u_i|^2 dx \ge C_\rho \norma{g}{\frac{1}{2}}{\Gamma}^2,
\end{equation}
where $C_\rho$ is as in (\ref{crho}) (with possibly a different value of the term $C$), and $s$ is given by Proposition \ref{teoPOS}.
\end{proposition}
\begin{proposition}[Stability estimate of continuation from Cauchy data]
\label{teostabest} Under the hypotheses of Theorem \ref{principale} we have
\begin{equation}\label{stabsti1} \frac{1}{\rho_0^{n-2}} \int_{D_2\setminus D_1} |\nabla u_1|^2 \le C \norma{g}{\frac{1}{2}}{\Gamma}^2 \omega\Bigg( \frac{\epsilon}{\norma{g}{\frac{1}{2}}{\Gamma}} \Bigg) \end{equation}
\begin{equation}\label{stabsti2} \frac{1}{\rho_0^{n-2}} \int_{D_1\setminus D_2} |\nabla u_2|^2 \le C \norma{g}{\frac{1}{2}}{\Gamma}^2 \omega\Bigg( \frac{\epsilon}{\norma{g}{\frac{1}{2}}{\Gamma}} \Bigg)\end{equation}
where $\omega$ is an increasing continuous function, defined on $\mathbb{R}^+$ and satisfying
\begin{equation}\label{andomega} \omega(t) \le C \big( \log |\log t|\big)^{-c} \end{equation}
for all $t < e^{-1}$, where $C$ only depends on $n$, $M_0$, $M_1$, $F$, and $c>0$ only depends on $n$.
\end{proposition}
\begin{proposition}[Improved stability estimate of continuation] \label{teostabestimpr}
Let the hypotheses of Theorem \ref{principale} hold. Let $G$ be the connected component of $\Omega_1 \cap \Omega_2$ containing $\Gamma$, and assume that $\partial G$ is of Lipschitz class with constants $\tilde{\rho}_0$ and $\tilde{M}_0$, where $\tilde{M}_0>0$ and $0<\tilde{\rho}_0<\rho_0$. Then (\ref{stabsti1}) and (\ref{stabsti2}) both hold with $\omega$ given by
\begin{equation}\label{omegabetter}
\omega(t)= C |\log t|^{-\gamma},
\end{equation}
defined for $t<1$, where $\gamma >0$ and $C>0$ only depend on $M_0$, $\tilde{M_0}$, $M_1$ and $\frac{\rho_0}{\tilde{\rho}_0}$.
\end{proposition}
\begin{proposition} \label{teoreggra} Let $\Omega_1$ and $\Omega_2$ be two bounded domains satisfying (\ref{apriori1}). Then there exist two positive numbers $d_0$, $\tilde{\rho}_0$, with $\tilde{\rho}_0 \le \rho_0$, whose ratios $\frac{\rho_0}{\tilde{\rho}_0}$, $\frac{d_0}{\rho_0}$ only depend on $n$, $M_0$
and $\alpha$, such that, if
\begin{equation} \label{relgr1}
d_{\mathcal{H}} (\overline{\Omega_1}, \overline{\Omega_2}) \le d_0,
\end{equation}
then there exists $\tilde{M}_0>0$ only depending on $n$, $M_0$ and $\alpha$ such that every connected component of $\Omega_1 \cap \Omega_2$ has boundary of Lipschitz class with constants $\tilde{\rho}_0$, $\tilde{M}_0$.
\end{proposition}
We postpone the proofs of Propositions \ref{teoPOS} and \ref{teoPOSC} to Section 4, while Propositions \ref{teostabest} and \ref{teostabestimpr} will be proven in Section 5. The proof of Proposition \ref{teoreggra} is purely geometrical and can be found in \cite{ABRV}.
\begin{proof}[Proof of Theorem \ref{principale}.]
Let us call
\begin{equation} \label{distanza} d= d_\mathcal{H}(\partial D_1, \partial D_2). \end{equation}
Let $\eta$ be the quantity on the right hand side of (\ref{stabsti1}) and (\ref{stabsti2}), so that
\begin{equation} \label{eta} \begin{split}
\int_{D_2 \setminus D_1} |\nabla u_1|^2 \le \eta, \\
\int_{D_1 \setminus D_2} |\nabla u_2|^2 \le \eta. \\
\end{split}\end{equation}
We can assume without loss of generality that there exists a point $x_1 \in \partial D_1$ such that $\mathrm{dist}(x_1, \partial D_2)=d$. That being the case, we distinguish two possible situations: \\ (i) $B_d(x_1) \subset D_2$, \\ (ii) $B_d(x_1) \cap D_2 =\emptyset$.\\
In case (i), by the regularity assumptions on $\partial D_1$, we find a point $x_2 \in D_2 \setminus D_1$ such that $B_{td}(x_2) \subset D_2 \setminus D_1$, where $t$ is small enough (for example, $t=\frac{1}{1+\sqrt{1+M_0^2}}$ suffices). Using (\ref{POScauchy}), with $\rho = \frac{t d}{s}$ we have
\begin{equation} \label{stimapos} \int_{B_\rho (x_2) } |\nabla u_1|^2 dx \ge \frac{C\rho_0^{n-2}}{\exp \Big[A \big(\frac{s\rho_0}{t d }\big)^B\Big]} \norma{g}{\frac{1}{2}}{\Gamma}^2.
\end{equation}
By Proposition \ref{teostabest}, we have:
\begin{equation} \label{quellaconomega}
\omega\Bigg( \frac{\epsilon}{ \norma{g}{\frac{1}{2}}{\Gamma}} \Bigg) \ge \frac{C}{\exp \Big[A \big(\frac{s\rho_0}{t d }\big)^B\Big]} , \end{equation}
and solving for $d$ we obtain an estimate of log-log-log type stability:
\begin{equation}\label{logloglog}
d \le C \rho_0 \Bigg\{ \log \Bigg[ \log \Bigg|\log\frac{\epsilon}{\norma{g}{\frac{1}{2}}{\Gamma}} \Bigg| \Bigg] \Bigg\}^{-\frac{1}{B}},
\end{equation}
provided $\epsilon < e^{-e} \norma{g}{\frac{1}{2}}{\Gamma}$: this is not restrictive since, for larger values of $\epsilon$, the thesis is trivial. If we call $d_0$ the right hand side of (\ref{logloglog}), we have that there exists $\epsilon_0$ only depending on $n$, $M_0$, $M_1$ and $F$ such that, if $\epsilon \le \epsilon_0$ then $d\le d_0$. Proposition \ref{teoreggra} then applies, so that $G$ satisfies the hypotheses of Proposition \ref{teostabestimpr}. This means that we may choose $\omega$ of the form (\ref{omegabetter}) in (\ref{quellaconomega}), obtaining (\ref{stabsti1}).
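We note explicitly how (\ref{logloglog}) was obtained from (\ref{quellaconomega}): taking logarithms in (\ref{quellaconomega}) and using the bound (\ref{andomega}) on $\omega$, one finds, for $\epsilon$ small enough,
\begin{displaymath}
A \Big( \frac{s \rho_0}{t d} \Big)^B \ge \log \frac{C}{\omega\big( \epsilon / \norma{g}{\frac{1}{2}}{\Gamma} \big)} \ge \frac{c}{2} \log \log \Bigg| \log \frac{\epsilon}{\norma{g}{\frac{1}{2}}{\Gamma}} \Bigg|,
\end{displaymath}
and solving for $d$ yields (\ref{logloglog}).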
Case (ii) can be treated analogously, upon substituting $u_1$ with $u_2$.
\end{proof}
\section{Proof of Proposition \ref{teoPOS}.}
The main idea of the proof of Proposition \ref{teoPOS} is a repeated application of a three-spheres type inequality. Inequalities of this kind play a crucial role in almost all stability estimates from Cauchy data, thus they have been adapted to a variety of elliptic PDEs: first in the context of scalar elliptic equations (see \cite{ABRV}), then in the determination of cavities or inclusions in elastic bodies (\cite{MR}, \cite{MRC}) and, more generally, for scalar elliptic equations (\cite{ARRV}) as well as systems (\cite{LNW}) with suitably smooth coefficients. We recall in particular the following estimate, which is a special case of a result of Nagayasu, Lin and Wang (\cite{LNW}), dealing with systems of differential inequalities of the form:
\begin{equation} \label{ellgeneral} |\triangle^l u^i | \le K_0 \sum_{|\alpha| \le \big[ \frac{3l}{2} \big] } | D^\alpha u | \, \quad i=1,\dots , n. \end{equation}
Then the following holds (see \cite{LNW}):
\begin{theorem}[Three spheres inequality.] \label{teotresfere} Let $E \subset \mathbb{R}^n$ be a bounded domain with Lipschitz boundary with constants $\rho_0$, $M_0$. Let $B_R(x)$ be a ball contained in $E$, and let $u \in \accan{2l}{E}$ be a solution to (\ref{ellgeneral}). Then there exists a real number $\vartheta^* \in (0, e^{-1/2})$, depending only on $n$, $l$ and $K_0$, such that, for all $0<r_1 <r_2 <\vartheta^* r_3$ with $r_3 \le R$, we have:
\begin{equation} \label{tresfere} \int_{B_{r_2}} \! | u|^2 dx \le C \Big(\int_{B_{r_1} } \! | u|^2 dx \Big)^\delta \Big(\int_{B_{r_3}} \! | u|^2 dx \Big)^{1-\delta} \end{equation}
where $\delta \in (0,1)$ and $C>0$ are constants depending only on $n$, $l$, $K_0$, $\frac{r_1}{r_3}$ and $\frac{r_2}{r_3}$, and the balls $B_{r_i}$ are centered at $x$.
\end{theorem}
First, we show that Proposition \ref{teoPOSC} follows from Proposition \ref{teoPOS}:
\begin{proof}[Proof of Proposition \ref{teoPOSC}.]
From Proposition \ref{teoPOS} we know that \begin{displaymath} \int_{B_{\rho}(x)} \! |\nabla u_i|^2 dx \ge C_\rho \int_{\Omega \setminus \overline{D_i}} \! |\nabla u_i|^2 dx, \end{displaymath}
where $C_\rho$ is given in (\ref{crho}).
We have, using the Poincar\'e inequality (\ref{pancarre}) and the trace theorem,
\begin{equation} \label{altofrequenza} \begin{split} \int_{\Omega\setminus \overline{D_i}} |\nabla u_i|^2 dx \ge C \rho_0^{n-2} \norma{u_i}{1}{\Omega \setminus \overline{D_i}}^2 \ge C \rho_0^{n-2} \norma{g}{\frac{1}{2}}{\partial \Omega}^2. \end{split}\end{equation}
Applying the above estimate to (\ref{POS}) and using (\ref{equivalence}) will prove our statement.
\end{proof}
Next, we introduce a lemma we shall need later on:
\begin{lemma} \label{42}
Let the hypotheses of Proposition \ref{teoPOS} be satisfied. Then
\begin{equation}
\normadue{u}{E} \ge \frac{C}{F^2} \rho_0 \normadue{\nabla u}{E}
\end{equation}
where $C>0$ only depends on $n$, $M_0$ and $M_1$.
\end{lemma}
The proof is obtained in \cite{MRC}, with minor modifications. We report it here for the sake of completeness.
\begin{proof}
Assume $\rho_0=1$, otherwise the thesis follows by scaling. The following trace inequality holds (see \cite[Theorem 1.5.1.10]{21}):
\begin{equation} \label{trace1}
\normadue{u}{\partial E}^2 \le C (\normadue{\nabla u}{E} \normadue{u}{E} + \normadue{u}{E}^2),
\end{equation}
where $C$ only depends on $M_0$ and $M_1$. Using the Poincar\'e inequality (\ref{pancarre}), we have
\begin{equation}
\frac{\normadue{\nabla u}{E} }{ \normadue{u}{E} } \le C \frac{\normadue{\nabla u}{E}^2}{\normadue{u}{\partial E}^2}.
\end{equation}
This, together with (\ref{stimanormadiretto}), immediately gives the thesis.
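For the reader's convenience, we detail the displayed inequality: by (\ref{pancarre}), the second summand in (\ref{trace1}) is dominated by the first, so that
\begin{displaymath}
\normadue{u}{\partial E}^2 \le C \, \normadue{\nabla u}{E} \, \normadue{u}{E}, \qquad \text{hence} \qquad \frac{\normadue{\nabla u}{E}}{\normadue{u}{E}} = \frac{\normadue{\nabla u}{E}^2}{\normadue{\nabla u}{E} \, \normadue{u}{E}} \le C \, \frac{\normadue{\nabla u}{E}^2}{\normadue{u}{\partial E}^2}.
\end{displaymath}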
\end{proof}
A proof of Proposition \ref{teoPOS} has already been obtained in \cite{MRC} in the context of the linearized elasticity equations; we give a sketch of it here, with the necessary adaptations.
\begin{proof}[Proof of Proposition \ref{teoPOS}.]
We outline the main steps taken in the proof. First, we show that the three spheres inequality (\ref{tresfere}) applies to $\nabla u$. Then, the goal is to estimate $\normadue{\nabla u}{E}$ by covering the set $E$ with a sequence of cubes $Q_i$ with center $q_i$ of ``relatively small'' size. Each of these cubes is contained in a sphere $S_i$, thus we estimate the norm of $\nabla u$ in every sphere of center $q_i$, by connecting $q_i$ with $x$ with a continuous arc, and apply an iteration of the three spheres inequality to estimate $\normadue{\nabla u}{S_i}$ in terms of $\normadue{\nabla u}{B_\rho(x)}$. However, the estimates deteriorate exponentially as we increase the number of spheres (or equivalently, if the radius $\rho$ is comparable with the distance of $x$ from the boundary), giving an exponentially worse estimate of the constant $C_\rho$. To solve this problem, the idea is to distinguish two areas within $E_{s \rho}$, which we shall call $A_1$, $A_2$. We consider $A_1$ as the set of points $y \in E_{s \rho}$ such that $\mathrm{dist}(y, \partial E)$ is sufficiently large, whereas $A_2$ is given as the complement in $E_{s \rho}$ of $A_1$. Then, whenever we need to compare the norm of $\nabla u$ on two balls whose centers lie in $A_2$, we reduce the number of spheres by iterating the three spheres inequality over a sequence of balls with increasing radius, exploiting the Lipschitz character of $\partial E$ by building a cone to which all the balls are internally tangent. Once we have reached a sufficiently large distance from the boundary, we are able to pick a chain of larger balls, on which we can iterate the three spheres inequality again without deteriorating the estimate too much. This line of reasoning allows us to estimate the norm of $\nabla u$ on any sphere contained in $E_{s \rho}$, and thus the whole $\normadue{\nabla u}{E}$. \\
{\bf Step 1.}
{ \it If $u \in \accauno{E}$ solves (\ref{NSEPOS}) then the three spheres inequality (\ref{tresfere}) applies to $ \nabla u$.}
\begin{proof}[Proof of Step 1.] We show that $u$ can be written as a solution of a system of the form (\ref{ellgeneral}). By Theorem \ref{TeoRegGen}, we have $u \in \mathbf{H}^2(E)$, so that we may take the Laplacian of the second equation in (\ref{NSE}):
\begin{displaymath}
\triangle \dive u =0. \end{displaymath}
Commuting the differential operators, and recalling the first equation in (\ref{NSE}),
\begin{displaymath}
\triangle{p}=0
\end{displaymath}
thus $p$ is harmonic, which means that, if we take the Laplacian of the first equation in (\ref{NSE}), we get
\begin{displaymath}
\triangle^2 u=0,
\end{displaymath}
so that $\nabla u$ is also biharmonic, hence the thesis.
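More precisely, each component of $\nabla u$ satisfies a system of the form (\ref{ellgeneral}) with $l=2$: commuting derivatives,
\begin{displaymath}
|\triangle^2 (\partial_j u^i)| = |\partial_j (\triangle^2 u^i)| = 0 \le K_0 \sum_{|\alpha| \le 3} |D^\alpha (\nabla u)|, \qquad i,j=1, \dots, n,
\end{displaymath}
so that Theorem \ref{teotresfere} applies to $\nabla u$ with constants depending only on $n$.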
\end{proof}
In what follows, we will always suppose $\rho_0=1$: the general case is treated by a rescaling argument on the biharmonic equation.
We closely follow the geometric construction given in \cite{MRC}. In the aforementioned work the aim was to estimate $\| \hat{\nabla} u\|$ by applying the three spheres inequality to $\hat{\nabla} u$ (the symmetrized gradient of $u$); in order to relate it to the boundary data, this step had to be combined with Korn and Caccioppoli type inequalities. Here the estimates are obtained for $\|\nabla u \|$. \\
From now on we will denote, for $z \in \mathbb{R}^n$, $\xi \in \mathbb{R}^n$ such that $|\xi|=1$, and $\vartheta >0$,
\begin{equation} \label{cono}
C(z, \xi, \vartheta)= \Big\{ x \in \mathbb{R}^n \text{ s.t. } \frac{(x-z) \cdot \xi}{|x-z|} > \cos \vartheta \Big\}
\end{equation}
the cone of vertex $z$, direction $\xi$ and width $2 \vartheta$. \\ Exploiting the Lipschitz character of $\partial E$, we can find $\vartheta_0 >0$ depending only on $M_0$, $\vartheta_1>0$, $\chi >1$ and $s>1$ depending only on $M_0$ and $n$, such that the following holds (we refer to \cite{MRC} for the explicit expressions of the constants $\vartheta_0$, $\vartheta_1$, $\chi$, $s$, and for all the detailed geometric constructions).\\
{\bf Step 2.}
{ \it Choose $\vartheta^*$ according to Theorem \ref{teotresfere}. There exists $\overline{\rho}>0$, only depending on $M_0$, $M_1$ and $F$, such that: \\
If $ 0<\rho \le \bar{\rho}$, and $x \in E$ is such that $ s \rho < \mathrm{dist} (x, \partial E) \le \frac{\vartheta^*}{4}$, then there exists $\hat{x} \in E$ satisfying the following conditions:
\begin{enumerate}
\item[(i)] $B_{\frac{5 \chi \rho}{\vartheta^*}} (x) \subset C(\hat{x}, e_n , \vartheta_0) \cap B_{\frac{\vartheta^* }{8}}
(\hat{x}) \subset E$, where $e_n=\frac{x-\hat{x}}{|x-\hat{x}|}$,
\item[(ii)] Let $x_2 = x+ \rho(\chi+1)e_n$. Then the balls $B_\rho (x)$ and $B_{\chi \rho} (x_2)$ are internally tangent to the cone $C(\hat{x},e_n, \vartheta_1)$.
\end{enumerate}}
The idea is now to repeat iteratively the construction made once in Step 2. We define the following sequence of points and radii:
\begin{displaymath} \begin{split}
\rho_1 &= \rho, \; \; \; \rho_k = \chi \rho_{k-1}, \; \; \text{ for } k \ge 2, \\
x_1 &= x, \; \; \; x_k=x_{k-1}+ (\rho_{k-1} + \rho_k) e_n , \qquad \text{ for } k \ge 2. \end{split}
\end{displaymath}
We claim the following geometrical facts (the proof of which can be found again in \cite{MRC}, except the first, which is \cite[Proposition 5.5]{ARRV}): \\
{\it There exist $0<h_0<1/4$ only depending on $M_0$, $\bar{\rho} >0$ only depending on $M_0$, $M_1$ and $F$, an integer $k(\rho)$ depending also on $M_0$ and $n$, such that, for all $h \le h_0$, $0<\rho \le \bar{\rho}$ and for all integers $1<k \le k(\rho)-1$ we have: \begin{enumerate}
\item \label{fatto0} $E_h$ is connected,
\item \label{fatto1} $B_{\rho_k}(x_k)$ is internally tangent to $C(\hat{x}, e_n, \vartheta_1) $,
\item \label{fatto2} $B_{\frac{5 \chi \rho_k}{\vartheta^*}}(x_k) $ is internally tangent to $C(\hat{x}, e_n, \vartheta_0) $,
\item The following inclusion holds: \begin{equation} \label{fatto3} B_{\frac{5 \rho_k}{\vartheta^*}}(x_k) \subset B_{\frac{\vartheta^*}{8}}(\hat{x}), \end{equation}
\item $k(\rho)$ can be bounded from above as follows: \begin{equation} \label{432} k(\rho) \le \log_\chi \frac{\vartheta^* h_0}{5 \rho} +1. \end{equation}
\end{enumerate}
}
Call $\rho_{k(\rho)}= \chi^{k(\rho)-1} \rho$; from (\ref{432}) we have that
\begin{equation} \label{433}
\rho_{k(\rho)} \le \frac{\vartheta^* h_0}{5}.
\end{equation}
In what follows, in order to ease the notation, norms will always be understood as $\mathbf{L}^2$ norms, so that $\|\cdot \|_U$ will stand for $\normadue{\cdot}{U}$. \\
{\bf Step 3.} {\it For all $0<\rho \le \bar{\rho}$ and for all $x \in E$ such that $s \rho \le \mathrm{dist}(x, \partial E) \le \frac{\vartheta^*}{4}$, the following hold:
\begin{equation} \label{434}
\frac{\nor{\nabla u}{B_{\rho_{k(\rho)}} (x_{k(\rho)}) }}{\nor{\nabla u}{E}}
\le C \Bigg( \frac{\nor{\nabla u}{B_{\rho} (x)}}{\nor{\nabla u}{E}} \Bigg)^{\delta_\chi^{k(\rho)-1}},
\end{equation}
\begin{equation} \label{435}
\frac{\nor{\nabla u}{B_{\rho} (x)} }{\nor{\nabla u}{E}}
\le C \Bigg( \frac{\nor{\nabla u}{B_{\rho_{k(\rho)}} (x_{k(\rho)})}}{\nor{\nabla u}{E}} \Bigg)^{\delta^{k(\rho)-1}},
\end{equation}
where $C>0$ and $0<\delta_\chi<\delta<1$ only depend on $M_0$.
}
\begin{proof}[Proof of Step 3.]
We apply to $\nabla u$ the three-spheres inequality, with balls of center $x_j$ and radii $r_1^{j}=\rho_j$, $r_2^{j}=3\chi \rho_j$, $r_3^{j}=4 \chi\rho_j$, for all $j=1, \dots, k(\rho)-1$. Since
$B_{r_1^{j+1}}(x_{j+1}) \subset B_{r_2^j}(x_j)$,
by the three spheres inequality, there exists $C$ and $\delta_\chi$ only depending on $M_0$, such that:
\begin{equation} \label{437} \nor{\nabla u}{B_{\rho_{j+1}}(x_{j+1})} \le C \Big(\nor{\nabla u}{B_{\rho_j}(x_j) }\Big)^{\delta_\chi} \Big(\nor{\nabla u}{B_{4\chi\rho_j}(x_j)}\Big) ^{1-\delta_\chi}. \end{equation}
This, in turn, leads to:
\begin{equation} \label{438} \frac{\nor{ \nabla u}{B_{\rho_{j+1}}(x_{j+1})} }{\nor{\nabla u}{E}} \le C \Bigg(\frac{\nor{ \nabla u}{B_{\rho_j}(x_j)} }{\nor{\nabla u}{E}} \Bigg)^{\delta_\chi},\end{equation}
for all $j=1, \dots, k(\rho)-1$.
Now call
\begin{displaymath}
m_k = \frac{\nor{ \nabla u}{B_{\rho_{k}}(x_{k})} }{\nor{\nabla u}{E}},
\end{displaymath}
so that (\ref{438}) reads
\begin{equation} \label{stepdue}
m_{k+1} \le C m_k^{\delta_\chi} ,
\end{equation}
which, inductively, leads to \begin{equation} \label{steptre}
m_{k(\rho)} \le \tilde{C}\, m_1^{\delta_\chi^{k(\rho)-1}},
\end{equation}
where $\tilde{C} = C^{1+\delta_\chi+ \dots+ \delta_\chi^{k(\rho)-2}}$. Since $ 0<\delta_\chi <1$, we have $1+\delta_\chi+ \dots+ \delta_\chi^{k(\rho)-2} \le \frac{1}{1-\delta_\chi}$, and since we may take $C>1$,
\begin{equation} \label{stepquattro}
\tilde{C} \le C^{\frac{1}{1-\delta_\chi}}.
\end{equation}
Similarly, we obtain (\ref{435}): we find a $0<\delta<1$ such that the three spheres inequality applies to the balls $B_{\rho_j}(x_j)$, $B_{3\rho_j}(x_j)$, $B_{4\rho_j}(x_j)$ for $j=2,\dots, k(\rho)$; observing that $B_{\rho_{j-1}}(x_{j-1}) \subset B_{3\rho_j}(x_j)$, the line of reasoning followed above applies identically.
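The latter inclusion is checked by comparing radii: since $|x_{j-1}-x_j| = \rho_{j-1}+\rho_j$ by construction,
\begin{displaymath}
|x_{j-1}-x_j| + \rho_{j-1} = 2 \rho_{j-1} + \rho_j = \Big( \frac{2}{\chi} + 1 \Big) \rho_j \le 3 \rho_j,
\end{displaymath}
because $\chi > 1$.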
\end{proof}
{\bf Step 4.}
\\{\it For all $0<\rho \le \overline{\rho}$ and for all $\bar{x}$, $y \in E_{s\rho}$, we have \begin{equation}\label{453}
\frac{\nor{\nabla u}{B_{\rho}(y)}}{\nor{\nabla u}{E}}
\le C \Bigg( \frac{\nor{\nabla u}{B_\rho (\bar{x})}}{\nor{\nabla u}{E}} \Bigg)^{ \delta_\chi^{A+B\log \frac{1}{\rho}}} .
\end{equation} }
\begin{proof} We distinguish two subcases:
\begin{enumerate}
\item[\it (i).] $\bar{x}$ is such that $\mathrm{dist} (\bar{x}, \partial E) \le \frac{\vartheta^*}{4}$,
\item[\it (ii).] $\bar{x}$ is such that $\mathrm{dist}(\bar{x}, \partial E) > \frac{\vartheta^*}{4}$.
\end{enumerate}
{\it Proof of Case (i).}
Let us consider $\delta$, $\delta_\chi$ as introduced in Step 3. Take any point $y \in E$ such that $s \rho < \mathrm{dist}(y, \partial E) \le \frac{\vartheta^*}{4}$.
By construction, the set $E_{\frac{5\rho_{k(\rho)}}{\vartheta^*}}$ is connected, thus there exists a continuous path $\gamma : [0,1] \to E_{\frac{5\rho_{k(\rho)}}{\vartheta^*}}$ joining $\bar{x}_{k(\rho)}$ to $y_{k(\rho)}$. We define an ordered sequence of times $t_j$, and a corresponding sequence of points $x_j= \gamma(t_j)$, for $j=1, \dots, L$, in the following way: $t_1=0$, $t_L =1$, and
\begin{displaymath}
t_{j+1}= \mathrm{max} \{t\in (0,1] \text{ such that } |\gamma(t)- x_j| = 2 \rho_{k(\rho)} \} \; \text{, if } |x_j-y_{k(\rho)}| > 2 \rho_{k(\rho)},
\end{displaymath}
otherwise, let $j=L$ and the process is stopped. Now, all the balls $B_{\rho_{k(\rho)}}(x_j)$ are pairwise disjoint, the distance between centers satisfies $| x_{j+1}-x_j | = 2 \rho_{k(\rho)}$ for all $j=1, \dots, L-1$ and, for the last point, $|x_L - y_{k(\rho)}| \le 2 \rho_{k(\rho)}$. The number of points, using (\ref{apriori2}), is at most
\begin{equation} \label{sferealmassimo} L \le \frac{M_1}{\omega_n \rho_{k(\rho)}^n}. \end{equation}
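Indeed, the balls $B_{\rho_{k(\rho)}}(x_j)$ are pairwise disjoint and contained in $E$, so that, comparing volumes and using the bound on the measure of $E$ given by (\ref{apriori2}) (recall that $\rho_0=1$),
\begin{displaymath}
L \, \omega_n \, \rho_{k(\rho)}^n \le |E| \le M_1 .
\end{displaymath}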
Iterating the three spheres inequality over this chain of balls, we obtain
\begin{equation} \label{442}
\frac{\nor{\nabla u}{B_{\rho_{k(\rho)}}(y_{k(\rho)})}}{\nor{\nabla u}{E}} \le C \Bigg( \frac{\nor{\nabla u}{B_{\rho_{k(\rho)}}(\bar{x}_{k(\rho)}) }}{\nor{\nabla u}{E}} \Bigg)^{\delta^L}
\end{equation}
On the other hand, by the previous step we have, applying (\ref{434}) and (\ref{435}) for $x=\bar{x}$ and $x=y$ respectively,
\begin{equation} \label{443}
\frac{\nor{\nabla u}{B_{\rho_{k(\rho)}} (\bar{x}_{k(\rho)}) }}{\nor{\nabla u}{E}}
\le C \Bigg( \frac{\nor{\nabla u}{B_{\rho} (\bar{x})}}{\nor{\nabla u}{E}} \Bigg)^{\delta_\chi^{k(\rho)-1}},
\end{equation}
\begin{equation} \label{444}
\frac{\nor{\nabla u}{B_{\rho}(y)} }{\nor{\nabla u}{E}}
\le C \Bigg( \frac{\nor{\nabla u}{B_{\rho_{k(\rho)}} (y_{k(\rho)})}}{\nor{\nabla u}{E}} \Bigg)^{\delta^{k(\rho)-1}},
\end{equation}
where $C$, as before, only depends on $n$ and $M_0$. Combining (\ref{442}), (\ref{443}) and (\ref{444}) (the exponents compose as $\delta^{k(\rho)-1} \cdot \delta^{L} \cdot \delta_\chi^{k(\rho)-1} = \delta_\chi^{k(\rho)-1} \delta^{k(\rho)+L-1}$), we have
\begin{equation} \label{445}
\frac{\nor{\nabla u}{B_{\rho}(y)} }{\nor{\nabla u}{E}}
\le C \Bigg( \frac{\nor{\nabla u}{B_{\rho} (\bar{x})}}{\nor{\nabla u}{E}} \Bigg)^{\delta_\chi^{k(\rho)-1} \delta^{k(\rho)+L-1}},
\end{equation}
for every $y \in E_{s\rho}$ satisfying $\mathrm{dist} (y, \partial E) \le \frac{\vartheta^*}{4}$. Now consider $y \in E$ such that $\mathrm{dist} (y, \partial E) > \frac{\vartheta^*}{4}$.
Call \begin{equation} \label{446}
\tilde{r}= \vartheta^* \rho_{k(\rho)}.
\end{equation}
By construction, using (\ref{433}) and (\ref{fatto3}), we have
\begin{equation}
\mathrm{dist}(\bar{x}_{k(\rho)}, \partial E) \ge \frac{5 \rho_{k(\rho)}}{\vartheta^*} > \frac{5}{\vartheta^*} \tilde{r} ,
\end{equation}
\begin{equation}
\mathrm{dist}(y, \partial E) \ge \frac{5 \rho_{k(\rho)}}{\vartheta^*} > \frac{5}{\vartheta^*} \tilde{r},
\end{equation}
and again $E_{\frac{5}{\vartheta^*} \tilde{r}}$ is connected, since $\tilde{r}< \rho_{k(\rho)}$. We are then allowed to join $\bar{x}_{k(\rho)}$ to $y$ with a continuous arc, and repeat the argument used before over a chain of at most $\tilde{L}$ balls with centers $x_j \in E_{\frac{5}{\vartheta^*} \tilde{r}}$ and radii $\tilde{r}$, $3\tilde{r}$, $4\tilde{r}$, where \begin{equation} \label{sferealmassimotilde} \tilde{L} \le \frac{M_1}{\omega_n \tilde{r}^n}. \end{equation}
Up to possibly shrinking $\overline{\rho}$, we may suppose $\rho \le \tilde{r}$; iterating the three spheres inequality as we did before, we get
\begin{equation}
\label{451}
\frac{\nor{\nabla u}{B_{\tilde{r}}(y)} }{\nor{\nabla u}{E}}
\le C \Bigg( \frac{\nor{\nabla u}{B_{\tilde{r}} (\bar{x}_{k(\rho)})}}{\nor{\nabla u}{E}} \Bigg)^{ \delta^{\tilde{L}}},
\end{equation}
which, in turn, by (\ref{443}) and since $\rho \le \tilde{r} < \rho_{k(\rho)}$, becomes
\begin{equation} \label{452}
\frac{\nor{\nabla u}{B_{\rho}(y)}}{\nor{\nabla u}{E}}
\le C \Bigg( \frac{\nor{\nabla u}{B_\rho (\bar{x})}}{\nor{\nabla u}{E}} \Bigg)^{ \delta_\chi^{k(\rho)-1}\delta^{\tilde{L}}},
\end{equation}
with $C$ depending only on $M_0$ and $n$. The estimate (\ref{452}) holds for all $y \in E$ such that $\mathrm{dist} (y, \partial E) > \frac{\vartheta^*}{4}$.
We now put (\ref{432}), (\ref{452}), (\ref{445}), (\ref{sferealmassimo}) and (\ref{sferealmassimotilde}) together; observing also that $\delta_\chi \le \delta$ and, trivially, $\frac{\nor{\nabla u}{B_{\rho}(y)}}{\nor{\nabla u}{E}} \le 1$, we obtain precisely (\ref{453}), for $\rho \le \overline{\rho}$, where $C>1$ and $B>0$ only depend on $M_0$, while $A>0$ only depends on $M_0$ and $M_1$.\\
{ \it Proof of Case (ii).}
We use the same constants $\delta$ and $\delta_\chi$ introduced in Step 3. Take $\rho \le \bar{\rho}$; then $B_{s\rho}(\bar{x}) \subset B_{\frac{\vartheta^*}{16}}(\bar{x})$, and for any point $\tilde{x}$ such that $|\bar{x} - \tilde{x}| = s \rho$, we have $B_{\frac{\vartheta^*}{8}}(\tilde{x}) \subset E$. Following the construction made in Steps 2 and 3, we choose a point $\bar{x}_{k(\rho)} \in E_{\frac{5}{\vartheta^*}\rho_{k(\rho)}}$ such that
\begin{equation}
\frac{\nor{\nabla u}{B_{\rho_{k(\rho)}}(\bar{x}_{k(\rho)})}}{\nor{\nabla u}{E}}
\le C \Bigg( \frac{\nor{\nabla u}{B_\rho (\bar{x})}}{\nor{\nabla u}{E}} \Bigg)^{ \delta_\chi^{k(\rho)-1}}, \end{equation}
with $C>1$ only depending on $n$, $M_0$.
If $y \in E$ is such that $s\rho <\mathrm{dist} (y, \partial E) \le \frac{\vartheta^*}{4}$, then, by the same reasoning as in Case (i), we obtain
\begin{equation}\label{459}
\frac{\nor{\nabla u}{B_{\rho}(y)}}{\nor{\nabla u}{E}}
\le C \Bigg( \frac{\nor{\nabla u}{B_\rho (\bar{x})}}{\nor{\nabla u}{E}} \Bigg)^{ \delta_\chi^{k(\rho)-1} \delta^{k(\rho)+L-1}}, \end{equation}
with $C>1$ again depending only on $M_0$.
If, on the other hand, $y \in E$ is such that $\mathrm{dist}(y, \partial E) \ge \frac{\vartheta^*}{4}$, taking $\tilde{r}$ as in (\ref{446}) and using the same argument as in Case (i), we obtain
\begin{equation}\label{460}
\frac{\nor{\nabla u}{B_{\rho}(y)}}{\nor{\nabla u}{E}}
\le C \Bigg( \frac{\nor{\nabla u}{B_\rho(\bar{x})}}{\nor{\nabla u}{E}} \Bigg)
^{\delta_\chi^{k(\rho)-1} \delta^{\tilde{L}}},
\end{equation}
where again $C>1$ only depends on $M_0$.
From (\ref{459}),(\ref{460}), (\ref{sferealmassimo}),(\ref{sferealmassimotilde}) and (\ref{432}), and recalling that, again, $\delta_\chi \le \delta$, and $\frac{\nor{\nabla u}{B_{\rho}(y)}}{\nor{\nabla u}{E}} \le 1$, we obtain
\begin{equation}
\frac{\nor{\nabla u}{B_{\rho}(y)}}{\nor{\nabla u}{E}}
\le C \Bigg( \frac{\nor{\nabla u}{B_\rho (\bar{x})}}{\nor{\nabla u}{E}} \Bigg)^{ \delta_\chi^{A+B\log\frac{1}{\rho}} },
\end{equation}
where $C>1$ and $B>0$ only depend on $M_0$, while $A>0$ only depends on $M_0$, $M_1$.
\end{proof}
{\bf Step 5.} {\it For every $\rho \le \bar{\rho}$ and for every $\bar{x} \in E_{s\rho}$ the thesis (\ref{POS}) holds. }
\begin{proof}[Proof of Step 5]
Suppose at first that $\bar{x} \in E_{s\rho}$ satisfies $\mathrm{dist} (\bar{x}, \partial E) \le \frac{\vartheta^*}{4}$.
We cover $E_{(s+1)\rho}$ with a sequence of non-overlapping cubes of side $l= \frac{2 \rho}{\sqrt{n}}$, so that every cube is contained in a ball of radius $\rho$ and center in $E_{s \rho}$. The number of cubes is bounded by
\begin{displaymath}
N \le \frac{|E|\,n^{\frac{n}{2}}}{(2\rho)^n} \le \frac{M_1 n^{\frac{n}{2}}}{(2 \rho)^n}.
\end{displaymath}
If we then sum over the $N$ cubes, using (\ref{453}) on each of the corresponding balls, we can write:
\begin{equation} \label{stepcinque}
\frac{\nor{\nabla u}{E_{(s+1) \rho}}}{\nor{\nabla u}{E}} \le C \rho^{-\frac{n}{2}} \Biggr( \frac{\nor{\nabla u}{B_\rho(\bar{x})} }{\nor{\nabla u}{E}} \Biggr)^{\delta_\chi^{A+B\log \frac{1}{\rho}}} .
\end{equation}
Here $C$ depends only on $M_0$.
Now, we need to estimate the left hand side in (\ref{stepcinque}). In order to do so, we start by writing
\begin{equation} \label{unomeno} \frac{\nor{\nabla u}{E_{(s+1) \rho}}^2}{\nor{\nabla u}{E}^2} =1-\frac{\nor{\nabla u}{E \setminus E_{(s+1) \rho}}^2}{\nor{\nabla u}{E}^2}.
\end{equation}
By Lemma \ref{42} and the H\"older inequality,
\begin{equation} \label{buttatali}
\nor{\nabla u}{E \setminus E_{(s+1)\rho}}^2 \le C F^2 \nor{u}{E \setminus E _{(s+1)\rho}}^2 \le C F^2 |E \setminus E _{(s+1)\rho}| ^{\frac{1}{n}} \| u\|^2_{\mathbf{L}^{\frac{2n}{n-1}}(E \setminus E _{(s+1)\rho})}.
\end{equation}
On the other hand, by the Sobolev and the Poincar\'e inequalities:
\begin{equation} \label{buttatali2}
\| u\|_{\mathbf{L}^{\frac{2n}{n-1}}(E)} \le C \norma{ u}{1}{E} \le C \nor{\nabla u} {E}.
\end{equation}
It can be proven (see \cite[Lemma 5.7]{ARRV}) that
\begin{equation} \label{buttatali3}
|E \setminus E_{(s+1)\rho }| \le C \rho,
\end{equation}
where $C$ depends on $M_0$, $M_1$ and $n$. We thus obtain that
\begin{equation} \label{storysofar}
\frac{\nor{\nabla u}{E \setminus E_{(s+1)\rho}}^2}{\nor{\nabla u} {E}^2} \le C F^2 |E \setminus E_{(s+1)\rho}|^{\frac{1}{n}}.
\end{equation}
Therefore, combining
(\ref{storysofar}) and (\ref{buttatali3}), we have that, possibly shrinking $\bar{\rho}$, for $\rho \le \bar{\rho}$,
\begin{equation} \label{unmezzo}
\frac{\nor{\nabla u}{E _{(s+1) \rho}}^2}{\nor{\nabla u}{E}^2} \ge \frac{1}{2},
\end{equation}
which, inserted into (\ref{stepcinque}), yields
\begin{equation*}
\int_{B_\rho(\bar{x})} |\nabla u|^2 \ge C \rho^{n\delta_\chi^{-A-B\log\frac{1}{\rho}}} \int_E |\nabla u|^2.
\end{equation*}
Since for all $0<t\le 1$ we have $|\log t| \le \frac{1}{t}$, it is immediate to verify that (\ref{POS}) holds.
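In fact, writing the exponent explicitly,
\begin{displaymath}
\rho^{\, n \delta_\chi^{-A-B\log\frac{1}{\rho}}} = \exp \Big[ -n \, \delta_\chi^{-A} \Big( \frac{1}{\rho} \Big)^{B |\log \delta_\chi|} \log \frac{1}{\rho} \Big] \ge \exp \Big[ -n \, \delta_\chi^{-A} \Big( \frac{1}{\rho} \Big)^{B |\log \delta_\chi| + 1} \Big],
\end{displaymath}
which is of the form (\ref{crho}), with $n \delta_\chi^{-A}$ and $B |\log \delta_\chi| + 1$ in place of $A$ and $B$.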
Now take $\bar{x} \in E_{s\rho}$ such that $\mathrm{dist}(\bar{x}, \partial E) > \frac{\vartheta^*}{4}$. Then $B_{s\rho}(\bar{x}) \subset B_{\frac{\vartheta^*}{16}}(\bar{x})$, and for any point $\tilde{x}$ such that $|\bar{x} - \tilde{x}| = s \rho$, we have $B_{\frac{\vartheta^*}{8}}(\tilde{x}) \subset E$. Following the construction made in Steps 2 and 3, we choose a point $\bar{x}_{k(\rho)} \in E_{\frac{5}{\vartheta^*}\rho_{k(\rho)}}$ such that
\begin{equation}
\frac{\nor{\nabla u}{B_{\rho_{k(\rho)}}(\bar{x}_{k(\rho)})}}{\nor{\nabla u}{E}}
\le C \Bigg( \frac{\nor{\nabla u}{B_\rho (\bar{x})}}{\nor{\nabla u}{E}} \Bigg)^{ \delta_\chi^{k(\rho)-1}}, \end{equation}
with $C>1$ only depending on $n$, $M_0$. \\
If $y \in E$ is such that $s\rho <\mathrm{dist} (y, \partial E) \le \frac{\vartheta^*}{4}$, then, by the same reasoning as in Step 4, we obtain
\begin{equation}\label{4591}
\frac{\nor{\nabla u}{B_{\rho}(y)}}{\nor{\nabla u}{E}}
\le C \Bigg( \frac{\nor{\nabla u}{B_\rho (\bar{x})}}{\nor{\nabla u}{E}} \Bigg)^{ \delta_\chi^{k(\rho)-1} \delta^{k(\rho)+L-1}}, \end{equation}
with $C>1$ again depending only on $n$ and $M_0$.
If, on the other hand, $y \in E$ is such that $\mathrm{dist}(y, \partial E) \ge \frac{\vartheta^*}{4}$, taking $\tilde{r}$ as in (\ref{446}), using the same argument as in Step 4, we obtain
\begin{equation}\label{4601}
\frac{\nor{\nabla u}{B_{\rho}(y)}}{\nor{\nabla u}{E}}
\le C \Bigg( \frac{\nor{\nabla u}{B_\rho(\bar{x})}}{\nor{\nabla u}{E}} \Bigg)
^{\delta_\chi^{k(\rho)-1} \delta^{\tilde{L}}},
\end{equation}
where again $C>1$ only depends on $n$ and $M_0$.
From (\ref{4591}),(\ref{4601}), (\ref{sferealmassimo}),(\ref{sferealmassimotilde}) and (\ref{432}), and recalling that, again, $\delta_\chi \le \delta$, and $\frac{\nor{\nabla u}{B_{\rho}(y)}}{\nor{\nabla u}{E}} \le 1$, we obtain
\begin{equation}
\frac{\nor{\nabla u}{B_{\rho}(y)}}{\nor{\nabla u}{E}}
\le C \Bigg( \frac{\nor{\nabla u}{B_\rho (\bar{x})}}{\nor{\nabla u}{E}} \Bigg)^{ \delta_\chi^{A+B\log\frac{1}{\rho}} },
\end{equation}
where $C>1$ and $B>0$ only depend on $n$ and $M_0$, while $A>0$ only depends on $n$, $M_0$, $M_1$.
The thesis then follows from the same cube covering argument used in the first part of this step.
\end{proof}
{\bf Conclusion.} So far, we have proven (\ref{POS}) for every $\rho \le \bar{\rho}$ and for every $\bar{x} \in E_{s \rho}$, where $\bar{\rho}$ only depends on $M_0$, $M_1$ and $F$. If $\rho > \bar{\rho}$ and $\bar{x} \in E_{s \rho} \subset E_{s \bar{\rho}}$, then, using what we have shown so far,
\begin{equation} \label{462}
\nor{\nabla u}{B_\rho (\bar{x})} \ge \nor{\nabla u}{B_{\bar{\rho}}(\bar{x})} \ge \tilde{C} \nor{\nabla u}{E},
\end{equation}
where $\tilde{C}$ again only depends on $n$, $M_0$, $M_1$ and $F$. On the other hand, by the regularity hypotheses on $E$, it is easy to show that
\begin{equation} \label{463}
\rho \le \frac{\mathrm{diam}(E)}{2s} \le \frac{C^*}{2s}
\end{equation}
thus the thesis
\begin{displaymath}
\int_{B_\rho (\bar{x})} |\nabla u|^2 \ge \frac{C}{\exp \Big[ A \Big(\frac{1}{\rho}\Big)^B\Big] } \int_E |\nabla u|^2 \end{displaymath}
is trivial, if we set \begin{displaymath}
C = \tilde{C} \exp\Big[ A \Big( \frac{2s}{C^*}\Big)^B \Big].
\end{displaymath}
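Indeed, with this choice of $C$, (\ref{463}) gives
\begin{displaymath}
\frac{C}{\exp \Big[ A \Big( \frac{1}{\rho} \Big)^B \Big]} = \tilde{C} \, \exp \Big[ A \Big( \frac{2s}{C^*} \Big)^B - A \Big( \frac{1}{\rho} \Big)^B \Big] \le \tilde{C},
\end{displaymath}
so that the thesis follows from (\ref{462}).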
\end{proof}
\section{Stability of continuation from Cauchy data.}
Throughout this section, we shall again distinguish two domains $\Omega_i= \Omega \setminus \overline{D_i}$ for $i=1,2$, where the $D_i$ are two subsets of $\Omega$ satisfying (\ref{apriori2bis}) to (\ref{apriori4}).
We start by setting up some notation. In the following, we shall call
\begin{displaymath} U^i_\rho =\{x \in \overline{\Omega_i} \; \text{ s.t. } \; \mathrm{dist}(x,\partial \Omega) \le \rho \}. \end{displaymath}
The following are well-known interior regularity results for the bilaplacian (see, for example, \cite{Miranda}, \cite{GilTru}):
\begin{lemma}[Interior regularity of solutions] \label{teoschauder} Let $u_i$ be the weak solution to (\ref{NSE}) in $\Omega_i$. Then for all $0<\alpha<1$ we have that $u_i \in C^{1,\alpha}(\overline{\Omega_i \setminus U^i_{\frac{\rho_0}{8}}})$ and
\begin{equation} \label{schauder1} \|u_i \|_{C^{1,\alpha}(\overline{ \Omega_i \setminus U^i_{\frac{\rho_0}{8}}})} \le C
\norma{g}{\frac{1}{2}}{\Gamma} \end{equation}
\begin{equation} \label{schauder2} \|u_1-u_2 \|_{C^{1,\alpha}( \overline{\Omega_1 \cap \Omega_2})} \le C
\norma{g}{\frac{1}{2}}{\Gamma} \end{equation} where $C>0$ only depends on $\alpha$, $M_0$. \end{lemma}
\begin{proof}
Using standard energy estimates, as in Theorem \ref{TeoRegGen}, it follows that
\begin{equation} \label{stimau} \norma{u_i}{1}{\Omega_i} \le C
\norma{g}{\frac{1}{2}}{\partial \Omega}. \end{equation}
On the other hand, using interior regularity estimates for biharmonic functions, we have
\begin{equation} \label{intreg}
\|u_i \|_{C^{1,\alpha} (\overline{\Omega_i \setminus U^i_{\frac{\rho_0}{8}}})} \le C \|u_i \|_{\mathbf{L}^{\infty} (\overline{\Omega_i \setminus U^i_{\frac{\rho_0}{16}}})} \le C\,
\normadue{u_i}{\Omega_i},
\end{equation}
where $C>0$ only depends on $\alpha$ and $M_0$. Combining (\ref{stimau}), (\ref{intreg}), and recalling (\ref{equivalence}), immediately leads to (\ref{schauder1}).
As for (\ref{schauder2}), we observe that $u_1-u_2=0$ on $\Gamma$ (actually, on $\partial \Omega$); therefore, the $C^{1,\alpha}$ norm of $u_1-u_2$ in $U_{\frac{\rho_0}{2}}^1 \cap U_{\frac{\rho_0}{2}}^2$ can be estimated in the same fashion; using (\ref{schauder1}) in the remaining part, we get (\ref{schauder2}).
\end{proof}
We will also need the following lemma, proved in \cite{ABRV}:
\begin{lemma}[Regularized domains] \label{regularized}
Let $\Omega$ be a domain satisfying (\ref{apriori1}) and (\ref{apriori2}), and let $D_i$, for $i=1,2$, be two connected open subsets of $\Omega$ satisfying (\ref{apriori3}), (\ref{apriori4}). Then there exists a family of regularized domains $D_i^h \subset \Omega$, for $0 < h < a \rho_0$, with $C^1$ boundary of constants $\til{\rho_0}$, $\til{M_0}$ and such that
\begin{equation} \label{643} D_i \subset D_i^{h_1} \subset D_i^{h_2} \; \text{ if } 0<h_1 \le h_2; \end{equation}
\begin{equation} \label{644} \gamma_0 h \le \mathrm{dist}(x, \partial D_i) \le \gamma_1 h \; \text{ for all } x \in \partial D_i^h; \end{equation}
\begin{equation} \label{645} \mathrm{meas}(D_i^h\setminus D_i)\le \gamma_2 M_1 \rho_0^2 h; \end{equation}
\begin{equation} \label{646} \mathrm{meas}_{n-1}(\partial D_i^h)\le \gamma_3 M_1 \rho_0^2; \end{equation}
and for every $x \in \partial D_i^h$ there exists $y \in \partial D_i$ such that
\begin{equation} \label{647} |y-x|= \mathrm{dist}(x, \partial D_i), \; \; |\nu(x) - \nu(y)|\le \gamma_4 \frac{h^\alpha}{\rho_0^\alpha}; \end{equation}
where by $\nu(x)$ we mean the outer unit normal to $\partial D_i^h$, $\nu(y)$ is the outer unit normal to $\partial D_i$, and the constants $a$, $\gamma_j$, $j=0 \dots 4$ and the ratios
$\frac{\til{M}_0}{M_0}$, $\frac{\til{\rho}_0}{\rho_0}$ only depend on $M_0$ and $\alpha$.
\end{lemma}
We shall also need a stability estimate for the Cauchy problem associated with the Stokes system with homogeneous Cauchy data. The proof of the following result, which will be given in the next section, relies on an extension argument. Let us consider a bounded domain $E\subset \mathbb{R}^n$ satisfying hypotheses (\ref{apriori1}) and (\ref{apriori2}), and let $\Gamma \subset \partial E$ be a connected open portion of the boundary of class $C^{2, \alpha}$ with constants $\rho_0$, $M_0$. Let $P_0 \in \Gamma$ be such that (\ref{apriori2G}) holds. Up to a suitable change of coordinates, we may assume that $P_0 = 0$ and
\begin{equation}
E \cap B_{\rho_0}(0) = \{ (x^\prime, x_n) \in B_{\rho_0}(0) \, \text{ s.t. } \, x_n>\varphi(x^\prime) \},
\end{equation}
where $\varphi$ is a $C^{2,\alpha}(B^\prime_{\rho_0}(0))$ function satisfying
\begin{displaymath}
\begin{split}
\varphi(0)&=0, \\
|\nabla \varphi (0)|&=0, \\
\|\varphi \|_{C^{2,\alpha} (B^\prime_{\rho_0}(0))}& \le M_0 \rho_0.
\end{split}
\end{displaymath}
Define
\begin{equation} \begin{split} \label{rho00}
\rho_{00} & = \frac{\rho_0}{\sqrt{1+M_0^2}}, \\ \Gamma_0 & = \{ (x^\prime, x_n) \in \Gamma \, \, \mathrm{s.t.} \, \, |x^\prime|\le \rho_{00}, \, \, x_n = \varphi(x^\prime) \}.
\end{split} \end{equation}
\begin{theorem} \label{stabilitycauchy}
Under the above hypotheses, let $(u,p)$ be a solution to the problem:
\begin{equation}
\label{NseHomDir} \left\{ \begin{array}{rl}
\dive \sigma(u,p) & = 0 \qquad \mathrm{in} \;\; E,\\
\dive u & = 0 \qquad \mathrm{in} \;\; E,\\
u & = 0 \qquad \mathrm{on} \;\; \Gamma,\\
\sigma (u, p) \cdot \nu & = \psi \qquad \mathrm{on} \;\; \Gamma, \end{array} \right.
\end{equation}
where $\psi \in \accan{-\frac{1}{2}}{\Gamma}$. Let $P^* = P_0 + \frac{\rho_{00}}{4} \nu$, where $\nu$ is the outer unit normal to $\partial E$ at $P_0$. Then we have
\begin{equation} \label{NseHomDirEqn}
\| u \|_{{\bf L}^\infty(E \cap B_{\frac{3 \rho_{00}}{8}} (P^*))} \leq \frac{C}{\rho_0^{\frac{n}{2}}} \normadue{u}{E}^{1-\tau} (\rho_0 \norma{\psi}{-\frac{1}{2}}{\Gamma})^\tau,
\end{equation}
where $C>0$ and $\tau$ only depend on $\alpha$ and $M_0$.
\end{theorem}
\begin{proof}[Proof of Proposition \ref{teostabest}]
Let $\theta= \mathrm{min} \{a, \frac{7}{8 \gamma_1} \frac{\rho_{0}}{2\gamma_0 (1+M_0^2)} \}$ where $a$, $\gamma_0$, $\gamma_1$ are the constants depending only on $M_0$ and $\alpha$ introduced in Lemma \ref{regularized}, then let $\overline{\rho}= \theta \rho_0$ and fix $\rho \le \overline{\rho}$.
We introduce the regularized domains $D_1^\rho$, $D_2^\rho$ according to Lemma \ref{regularized}. Let $G$ be the connected component of $\Omega\setminus(\overline{D_1 \cup D_2})$ which contains $\partial \Omega$, and $G^\rho$ be the connected component of $\overline{\Omega}\setminus(D_1^\rho \cup D_2^\rho)$ which contains $\partial \Omega$.
We have that \begin{equation*}
D_2 \setminus \overline{D_1} \subset \Omega_1 \setminus \overline{G} \subset \big( (D_1^\rho \setminus \overline{D_1} ) \setminus\overline{G}\big) \cup \big( (\Omega \setminus G^\rho)\setminus D_1^\rho \big)
\end{equation*}
and
\begin{equation*}
\partial \big( (\Omega \setminus G^\rho)\setminus D_1^\rho \big) = \Gamma_1^\rho \cup \Gamma_2^\rho,
\end{equation*}
where $\Gamma_2^\rho= \partial D_2^\rho \cap \partial G^\rho$ and $\Gamma_1^\rho \subset \partial D_1^\rho$. It is thus clear that
\begin{equation} \label{652} \int_{D_2 \setminus \overline{D_1 }} |\nabla u_1|^2 \le \int_{\Omega_1 \setminus \overline{G}} |\nabla u_1|^2 \le \int_{(D_1^\rho \setminus \overline{D_1} )\setminus\overline{G}} |\nabla u_1|^2 +\int_{(\Omega \setminus G^\rho)\setminus D_1^\rho} |\nabla u_1|^2. \end{equation}
The first summand is easily estimated: using (\ref{schauder1}) and (\ref{645}) we have
\begin{equation} \label{6.53} \int_{(D_1^\rho \setminus \overline{D_1} )\setminus\overline{G}} |\nabla u_1|^2 \le C \rho_0^{n-2} \norma{g}{\frac{1}{2}}{\Gamma}^2 \frac{\rho}{\rho_0} \end{equation}
where $C$ only depends on $M_0$, $M_1$ and $\alpha$.
We call $\Omega(\rho)= (\Omega \setminus G^\rho)\setminus D_1^\rho$. The second term in (\ref{652}), using the divergence theorem twice, becomes:
\begin{equation} \label{sommandi} \begin{split} & \int_{\Omega(\rho)} |\nabla u_1|^2 = \int_{\partial\Omega(\rho)} (\nabla u_1 \cdot \nu) u_1 - \int_{\Omega(\rho)} \triangle u_1 \cdot u_1 = \\& \int_{\partial\Omega(\rho)} (\nabla u_1 \cdot \nu) u_1 - \int_{\Omega(\rho)} \nabla p_1 \cdot u_1 = \int_{\partial\Omega(\rho)} (\nabla u_1 \cdot \nu) u_1 - \int_{\partial \Omega(\rho)} p_1 (u_1\cdot \nu) = \\ & \int_{\Gamma_1^\rho}(\nabla u_1 \cdot \nu) u_1 + \int_{\Gamma_2^\rho}(\nabla u_1 \cdot \nu) u_1 - \int_{\Gamma_1^\rho} p_1 (u_1 \cdot \nu) -\int_{\Gamma_2^\rho} p_1 (u_1 \cdot \nu) . \end{split} \end{equation}
As for the first and third terms, if $x \in \Gamma_1^\rho$, using Lemma \ref{regularized} we find $y \in \partial D_1$ such that $|y-x|= \mathrm{dist}(x, \partial D_1) \le \gamma_1 \rho$; since $u_1(y)=0$, by Lemma \ref{teoschauder} we have
\begin{equation} \label{pezzobuono} |u_1(x)|= |u_1(x)-u_1(y)|\le C \frac{\rho}{\rho_0} \norma{g}{\frac{1}{2}}{\Gamma} . \end{equation}
On the other hand, if $x \in \Gamma_2^\rho$, there exists $y \in \partial D_2$ such that $|y-x| = \mathrm{dist}(x, \partial D_2) \le \gamma_1 \rho$. Again, since $u_2(y)=0$, we have
\begin{equation} \label{pezzocattivo} \begin{split} & |u_1(x)| \le |u_1(x)-u_1(y)|+|u_1(y)-u_2(y) | \\
& \le C \big( \frac{\rho}{\rho_0} \norma{g}{\frac{1}{2}}{\Gamma} + \max_{\partial G^\rho \setminus \partial \Omega} |w| \big) , \end{split}\end{equation}
where $w=u_1-u_2$. Combining (\ref{pezzobuono}), (\ref{pezzocattivo}) and (\ref{sommandi}) and recalling (\ref{schauder1}) and (\ref{646}) we have:
\begin{equation} \label{sommandi2}
\int_{D_2\setminus D_1} |\nabla u_1|^2 \le C\rho_0^{n-2} \Big( \norma{g}{\frac{1}{2}}{\Gamma}^2 \frac{\rho}{\rho_0} + \norma{g}{\frac{1}{2}}{\Gamma} \max_{\partial G^\rho \setminus \partial \Omega} |w| \Big)
\end{equation}
We now need to estimate $\max_{\partial G^\rho \setminus \partial \Omega} |w| $. We may apply (\ref{tresfere}) to $w$, since it is biharmonic. Let $ x \in \partial G^\rho \setminus \partial \Omega$ and \begin{equation} \label{rhostar} \rho^*=\frac{\rho_0}{16(1+M_0^2)}, \end{equation}
\begin{equation}\label{zetazero}
x_0= P_0 - \frac{\rho_{00}}{16}\nu,
\end{equation}
where $\nu$ is the outer normal to $\partial \Omega$ at the point $P_0$. By construction $x_0 \in \overline{\til{\Omega}_{\frac{\rho^*}{2}}}$.
There exists an arc $\gamma: [0,1] \to G^\rho \setminus \overline{\til{\Omega}_{\frac{\rho^*}{2}}} $ such that $\gamma(0)=x_0$, $\gamma(1)=x$ and $\gamma([0,1])\subset G^\rho \setminus \overline{\til{\Omega}_{\frac{\rho^*}{2}}}$. Let us define a sequence of times $t_i$ and corresponding points $x_i = \gamma(t_i)$, for $i=0, \dots, S$, as follows: $t_0=0$, and
\begin{displaymath}
t_{i+1}= \mathrm{max} \{t\in (0,1] \text{ such that } |\gamma(t)- x_i| = \frac{\gamma_0 \rho \vartheta^*}{2} \} \; \text{, if } |x_i-x| >\frac{\gamma_0 \rho \vartheta^*}{2},
\end{displaymath}
otherwise, let $i=S$ and the process is stopped. Here $\vartheta^*$ is the constant given in Theorem \ref{teotresfere}. All the balls $B_{\frac{\gamma_0 \rho \vartheta^*}{4}}(x_i)$ are pairwise disjoint, the distance between centers satisfies $| x_{i+1}-x_i | =\frac{\gamma_0 \rho \vartheta^*}{2}$ for all $i=0, \dots, S-1$ and, for the last point, $|x_S - x| \le \frac{\gamma_0 \rho \vartheta^*}{2}$. The number of spheres is bounded by
\begin{displaymath} S\le C \Big( \frac{\rho_0}{\rho} \Big)^n \end{displaymath} where $C$ only depends on $\alpha$, $M_0$ and $M_1$. For every $\rho \le \overline{\rho}$, we have that, letting
\begin{displaymath} \rho_1 = \frac{\gamma_0 \rho \vartheta^*}{4},\; \rho_2= \frac{3 \gamma_0 \rho \vartheta^*}{4}, \; \rho_3={\gamma_0 \rho \vartheta^*}
\end{displaymath}
an iteration of the three spheres inequality on a chain of spheres leads to
\begin{equation} \label{iteratresfere} \int_{B_{\rho_2} (x)} \! | w|^2 dx \le C \Big(\int_G \! | w|^2 dx \Big)^{1-\delta^S} \Big(\int_{B_{\rho_1}(x_0)} \! | w |^2 dx \Big)^{\delta^S} \end{equation}
where $0<\delta<1$ and $C>0$ only depend on $M_0$ and $\alpha$. From our choice of $\bar{\rho}$ and $\vartheta^*$, it follows that $B_{\frac{\gamma_0 \rho \vartheta^*}{4}}(x_0) \subset B_{\rho^*}(x_0) \subset G \cap B_{\frac{3 \rho_{00} }{8}}(P^*)$, where we follow the notation of Theorem \ref{stabilitycauchy}. We can therefore apply Theorem \ref{stabilitycauchy}. Let us call
\begin{equation} \label{epsilontilde}
\tilde{\epsilon} = \frac{ \epsilon}{ \norma{g}{\frac{1}{2}}{\Gamma} }.
\end{equation}
Using (\ref{NseHomDirEqn}), (\ref{stimau}) and (\ref{HpPiccolo}) on (\ref{iteratresfere}) we then have:
\begin{equation} \label{pallina}
\int_{B_{\rho_2}(x)} \! | w|^2 dx \le C \rho_0^{n-2} \norma{g}{\frac{1}{2}}{\Gamma}^2 \tilde{\epsilon}^{2 \tau \delta^S}.
\end{equation}
The following interpolation inequality holds for all functions $v$ defined on the ball $B_t(x) \subset \mathbb{R}^n$:
\begin{equation} \label{interpolation}
\|v \|_{\mathbf{L}^\infty (B_t(x))} \le C \Big( \Big( \int_{B_t(x)} | v|^2 \Big)^{\frac{1}{n+2}} \|\nabla v\|^{\frac{n}{n+2}}_{\mathbf{L}^\infty (B_t(x))} + \frac{1}{t^{n/2}} \Big( \int_{B_t(x)} | v|^2 \Big)^{\frac{1}{2}} \Big).
\end{equation}
We apply it to $w$ in $B_{\rho_2}(x)$; using (\ref{pallina}) and (\ref{schauder1}), we obtain
\begin{equation} \label{stimaw}
\| w \|_{\mathbf{L}^\infty (B_{\rho_2}(x))} \le C \Big( \frac{\rho_0}{\rho} \Big)^{\frac{n}{2}} \norma{g}{\frac{1}{2}}{\Gamma} \tilde{\epsilon}^{\gamma \delta^S},
\end{equation}
where $\gamma=\frac{2\tau}{n+2}$.
Finally, from (\ref{stimaw}) and (\ref{sommandi2}) we get:
\begin{equation} \label{sommandi3}
\int_{D_2\setminus D_1} |\nabla u_1|^2 \le C \rho_0^{n-2} \norma{g}{\frac{1}{2}}{\Gamma}^2 \Big( \frac{\rho}{\rho_0} + \Big( \frac{\rho_0}{\rho} \Big)^{\frac{n}{2}} \tilde{\epsilon}^{\gamma \delta^S} \Big)
\end{equation}
Now call $$\til{\mu}=\exp \Big( -\frac{1}{\gamma} \exp \Big(\frac{2S |\log \delta|}{\theta^n}\Big)\Big) $$ and $\overline{\mu}= \min \{ \til{\mu}, \exp(-\gamma^{-2}) \}.$
Choose $\rho$ depending upon $\tilde{\epsilon}$ of the form
\begin{displaymath}
\rho(\tilde{\epsilon}) = \rho_0 \Bigg( \frac{2S |\log \delta|}{\log |\log \tilde{\epsilon}^\gamma|} \Bigg)^{\frac{1}{n}}.
\end{displaymath}
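Note that, with these definitions, $\log |\log \til{\mu}^{\gamma}| = \frac{2S |\log \delta|}{\theta^n}$, so that
\begin{displaymath}
\rho(\til{\mu}) = \rho_0 \Bigg( \frac{2S |\log \delta|}{2S |\log \delta| / \theta^n} \Bigg)^{\frac{1}{n}} = \theta \rho_0 = \overline{\rho}.
\end{displaymath}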
Moreover, $\rho(\tilde{\epsilon})$ is increasing where defined, so that $\rho(\overline{\mu}) \le \rho(\til{\mu}) = \overline{\rho}$; hence, for every $\til{\epsilon} \le \overline{\mu}$, we may apply (\ref{sommandi3}), together with (\ref{652}), with $\rho=\rho(\til{\epsilon})$, to obtain
\begin{equation}
\label{quasifinito}
\int_{D_2 \setminus D_1} |\nabla u_1|^2 \le C \rho_0^{n-2} \norma{g}{\frac{1}{2}}{\Gamma}^2 \big( \log |\log \til{\epsilon}^{\gamma}| \big)^{-\frac{1}{n}},
\end{equation}
and since $\til{\epsilon} \le \exp(-\gamma^{-2})$ it is elementary to prove that \begin{displaymath}
\log |\log {\til{\epsilon}^\gamma}| \ge \frac{1}{2} \log | \log \til{\epsilon}|,
\end{displaymath}
so that (\ref{quasifinito}) finally reads
\begin{displaymath}
\int_{D_2 \setminus D_1} |\nabla u_1|^2 \le C \rho_0^{n-2} \norma{g}{\frac{1}{2}}{\Gamma}^2 \,\omega(\til{\epsilon}),
\end{displaymath}
with $\omega(t) = \big( \log |\log t| \big)^{-\frac{1}{n}}$ defined for all $0<t<e^{-1}$, and $C$ only depending on $M_0$, $M_1$ and $\alpha$.
\end{proof}
\begin{proof}[Proof of Proposition \ref{teostabestimpr}]
We will prove the thesis for $u_1$, the case of $u_2$ being completely analogous.
First of all, we observe that
\begin{equation} \label{sommandiB}
\int_{D_2 \setminus D_1} |\nabla u_1|^2 \le \int_{\Omega_1 \setminus G} |\nabla u_1|^2 =\int_{\partial (\Omega_1 \setminus G)} (\nabla u_1 \cdot \nu) u_1 - \int_{\partial (\Omega_1 \setminus G)} p_1 ( u_1 \cdot \nu)
\end{equation}
and that
\begin{equation*}
\partial (\Omega_1 \setminus G) \subset \partial D_1 \cup (\partial D_2 \cap \partial G)
\end{equation*}
Recalling the no-slip condition and applying to (\ref{sommandiB}) computations similar to those in (\ref{652}) and (\ref{6.53}), we have
\begin{equation*} \begin{split}
& \int_{D_2 \setminus D_1} |\nabla u_1|^2 \le \int_{\partial D_2 \cap \partial G} (\nabla u_1 \cdot \nu) w - \int_{\partial D_2 \cap \partial G} p_1 ( w \cdot \nu) \le \\ \le & C \rho_0^{n-2}\norma{g}{\frac{1}{2}}{\Gamma} \max_{\partial D_2 \cap \partial G} |w|,
\end{split} \end{equation*}
where again $w= u_1 - u_2$ and $C$ only depends on $\alpha$, $M_0$ and $M_1$.
Take a point $z \in \partial G$. By the regularity assumptions on $\partial G$, we find a direction $\xi \in \mathbb{R}^n$, with $|\xi|=1$, such that the cone (recalling the notation used in the proof of Proposition \ref{teoPOS}) $C(z, \xi, \vartheta_0) \cap B_{\tilde{\rho}_0} (z) \subset G$, where $\vartheta_0 =\arctan \frac{1}{\tilde{M}_0}$. Again (\cite[Proposition 5.5]{ARRV}), $G_\rho$ is connected for $\rho \le \frac{\tilde{\rho}_0 h_0 }{3}$, with $h_0$ only depending on $\tilde{M}_0$. Now set
\begin{equation*}\begin{split}
\lambda_1 &= \min \Big\{ \frac{\tilde{\rho}_0}{1+\sin \vartheta_0}, \frac{\tilde{\rho}_0}{3\sin \vartheta_0}, \frac{\tilde{\rho}_0}{16(1+M_0^2)\sin \vartheta_0} \Big\}, \\
\vartheta_1 & = \arcsin\Big(\frac{\sin \vartheta_0}{4} \Big), \\
w_1 &=z+ \lambda_1 \xi, \\
\rho_1 &= \vartheta^* h_0 \lambda_1 \sin \vartheta_1.
\end{split}\end{equation*}
where $0<\vartheta^*\le 1$ was introduced in Theorem \ref{teotresfere}.
By construction, $B_{\rho_1}(w_1) \subset C(z, \xi, \vartheta_1) \cap B_{\tilde{\rho}_0}(z)$ and $B_{\frac{4 \rho_1}{\vartheta^*}}(w_1) \subset C(z, \xi, \vartheta_0) \cap B_{\tilde{\rho}_0}(z) \subset G$. Furthermore $\frac{4 \rho_1}{\vartheta^*} \le \rho^*$, hence $B_{\frac{4 \rho_1}{\vartheta^*}}(x_0) \subset G$, where $\rho^*$ and $x_0$ were defined by (\ref{rhostar}) and (\ref{zetazero}) respectively, during the previous proof. Therefore, $w_1$, $x_0 \in \overline{G_{\frac{4\rho_1}{\vartheta^*}}}$, which is connected by construction.
Iterating the three spheres inequality (mimicking the construction made in the previous proof), we obtain
\begin{equation} \label{iteratresferei} \int_{B_{\rho_1} (w_1)} \! | w|^2 dx \le C \Big(\int_G \! | w|^2 dx \Big)^{1-\delta^S} \Big(\int_{B_{\rho_1 }(x_0)} \! | w |^2 dx \Big)^{\delta^S} \end{equation}
where $0<\delta<1$ and $C \ge 1$ depend only on $n$, and $S \le \frac{M_1 \rho_0^n}{\omega_n \rho_1^n}$.
Again, since $B_{\rho^*}(x_0) \subset G \cap B_{\frac{3\rho_{00}}{8}}(P^*)$, we apply Theorem \ref{stabilitycauchy}, which leads to
\begin{equation}
\int_{B_{\rho_1}(w_1)} |w|^2 \le C \rho_0^n \norma{g}{\frac{1}{2}}{\Gamma}^2 \tilde{\epsilon}^{2\beta},
\end{equation}
where $0<\beta<1$ and $C \ge 1$ only depend on $\alpha$, $M_0$, and $\frac{\tilde{\rho}_0}{\rho_0}$ and $\tilde{\epsilon}$ was defined in (\ref{epsilontilde}).
So far the estimate holds only on a ball centered at $w_1$; we need to approach $z \in \partial G$ using a sequence of balls, all contained in $C(z, \xi, \vartheta_1)$, with suitably shrinking radii. Take
\begin{equation*}
\chi = \frac{1-\sin \vartheta_1}{1+\sin\vartheta_1}
\end{equation*}
and define, for $k \ge 2$,
\begin{equation*} \begin{split}
\lambda_k&=\chi \lambda_{k-1}, \\
\rho_k&= \chi \rho_{k-1}, \\
w_k &= z + \lambda_k \xi. \\
\end{split}\end{equation*}
With these choices, $\lambda_k= \chi^{k-1} \lambda_1$, $\rho_k=\chi^{k-1} \rho_1$ and $B_{\rho_{k+1}}(w_{k+1}) \subset B_{3\rho_k}(w_k)$, $B_{\frac{4}{\vartheta^*}\rho_k}(w_k) \subset C(z, \xi, \vartheta_0) \cap B_{\tilde{\rho}_0}(z) \subset G$.
Set
\begin{displaymath}
d(k)= |w_k-z|-\rho_k.
\end{displaymath}
Then we also have
\begin{displaymath}
d(k)= \chi^{k-1}d(1),
\end{displaymath}
with
\begin{displaymath}
d(1)= \lambda_1(1-\vartheta^* h_0 \sin \vartheta_1).
\end{displaymath}
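Indeed, since $|w_k - z| = \lambda_k$ and both $\lambda_k$ and $\rho_k$ are multiplied by $\chi$ at each step,
\begin{displaymath}
d(k) = \lambda_k - \rho_k = \chi^{k-1} (\lambda_1 - \rho_1) = \chi^{k-1} d(1).
\end{displaymath}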
Now take any $\rho \le d(1)$ and let $k(\rho)$ be the smallest integer such that $d(k) \le \rho$; explicitly,
\begin{equation} \label{chirho}
\frac{\big|\log \frac{\rho}{d(1)}\big|}{|\log \chi|} \le k(\rho)-1 \le \frac{\big| \log \frac{\rho} {d(1)}\big|}{|\log \chi|}+1.
\end{equation}
We iterate the three spheres inequality over the chain of balls centered at $w_j$ with radii $\rho_j$, $3 \rho_j$, $4\rho_j$, for $j=1, \dots, k(\rho)-1$, which yields
\begin{equation} \label{iteratresferetre}
\int_{B_{\rho_{k(\rho)}}(w_{k(\rho)})} |w|^2 \le C \norma{g}{\frac{1}{2}}{\Gamma}^2 \rho^n \tilde{\epsilon}^{2 \beta \delta^{k(\rho)-1}},
\end{equation}
with $C$ only depending on $\alpha$, $M_0$ and $\frac{\tilde{\rho}_0}{\rho_0}$.
Using the interpolation inequality (\ref{interpolation}) and (\ref{schauder2}) we obtain
\begin{equation}\label{543}
\|w \|_{\mathbf{L}^\infty (B_{\rho_{k(\rho)}}(w_{k(\rho)}))} \le C \norma{g}{\frac{1}{2}}{\Gamma} \frac{\tilde{\epsilon}^{\beta_1 \delta^{k(\rho)-1}}}{\chi^{\frac{n}{2}(k(\rho)-1)}},
\end{equation}
where $\beta_1=\frac{2 \beta}{n+2}$ depends only on $\alpha$, $M_0$, $M_1$ and $\frac{\tilde{\rho}_0}{\rho_0}$.
From (\ref{543}) and (\ref{schauder2}) we obtain
\begin{equation} \label{544}
|w(z) | \le C \norma{g}{\frac{1}{2}}{\Gamma} \Bigg( \frac{\rho}{\rho_0} +\frac{\tilde{\epsilon}^{\beta_1 \delta^{k(\rho)-1}}}{\chi^{\frac{n}{2}(k(\rho)-1)}} \Bigg).
\end{equation}
Finally, call
\begin{displaymath}
\rho(\tilde{\epsilon})= d(1) |\log \tilde{\epsilon}^{\beta_1}|^{-B},
\end{displaymath}
with
\begin{displaymath}
B= \frac{|\log \chi|}{2 |\log \delta|},
\end{displaymath}
and let $\tilde{\mu} = \exp(-\beta_1^{-1})$. We have that $\rho(\tilde{\epsilon})$ is monotone increasing in the interval $0<\tilde{\epsilon} < \tilde{\mu}$, and $\rho(\tilde{\mu})=d(1)$, so $\rho(\tilde{\epsilon}) \le d(1)$ there. Putting $\rho=\rho(\tilde{\epsilon})$ into (\ref{544}) we obtain
\begin{equation}
\int_{D_2 \setminus D_1} |\nabla u_1|^2 \le C \rho_0^{n-2}\norma{g}{\frac{1}{2}}{\Gamma}^2 |\log \tilde{\epsilon}|^{-B},
\end{equation}
where $C$ only depends on $\alpha$, $M_0$ and $\frac{\til{\rho}_0}{\rho_0}$.
\end{proof}
\label{intro}
During an eruptive event comprising a solar flare and a coronal mass ejection (CME), energy is believed to be converted into the heating of plasma and the kinetic energy of particles and the CME itself through the process of magnetic reconnection. The standard reconnection models (\citealt{park57,swee58,pets64}) state that newly connected field lines expel plasma from the reconnection site due to the Lorentz force. The pressure gradient across the diffusion region then forces new plasma inwards, along with the field lines frozen to it, where they change connectivity and dissipate energy. \citet{lin00} stated that these inflows are concurrent with the eruption of a CME, which remains connected to the magnetic neutral point by an extended current sheet. Initially the CME rises slowly until no neighboring equilibrium state is available. After reaching this point, the CME begins to rise at an increasing rate. Energy release and particle acceleration continue due to sustained reconnection as the CME accelerates.
To date, there has been little observational evidence for the predicted inflows associated with reconnection. The most cited example is that of \cite{yoko01}, who found inflow velocities of 1.0--4.7~km~s$^{-1}$ by tracing the movement of threadlike patterns above the limb in a series of {\it SOHO}/EIT 195\AA~images (\ion{Fe}{12}; \citealt{dela95}). Evidence for sustained energy release during CME acceleration has been reported in a recent study of two fast ($>$1000~km~s$^{-1}$) halo CMEs by \cite{temm08,temm10}, who found a strong correlation between the CME acceleration and flare hard X-ray (HXR) time profiles. The bremsstrahlung hard X-rays are signatures of thick-target collisions between the accelerated electrons and the ambient chromospheric material. The authors interpret this correlation as strong evidence for a feedback relationship occurring between CME acceleration and the energy released by magnetic reconnection in the current sheet formed behind the CME. In cases where the current sheet becomes sufficiently long relative to its width, it is possible for multiple X-points to form due to the tearing mode instability, which can result in the formation of plasmoids \citep{furt63}. Both upward- \citep{ko03,lin05} and downward-directed \citep{shee02} plasmoids associated with CME eruption have been commonly observed in white light coronagraph images, in agreement with MHD simulations (e.g., \citealt{forb83}). Further evidence for plasmoid motions has been presented through radio observations of drifting pulsating structures \citep[DPS;][]{klie00,karl04,rile07,bart08b}.
\begin{figure}[!t]
\begin{center}
\includegraphics[height=8.5cm,angle=90]{f1.eps}
\caption{The CME on 2007 January 25 as seen by the COR1 coronagraph (blue) at 06:53:24~UT as well as the associated EUVI field of view (green). The expanded box shows the coronal loops that form part of active region NOAA 10940 with 40 and 80\% contours of the 5-10~keV emission seen by RHESSI at the same time (solid line).}
\label{euvi_cor1}
\end{center}
\end{figure}
Observational X-ray evidence for the formation of a current sheet has been presented by \cite{sui03} using data from the Ramaty High Energy Solar Spectroscopic Imager (RHESSI; \citealt{lin02}). The authors were able to show that an above-the-looptop coronal X-ray source (or plasmoid) increased in altitude as a lower-lying X-ray loop decreased in altitude during the initial stages of a solar flare. They concluded that magnetic reconnection occurred between the two sources as the current sheet formed. This interpretation was strengthened by evidence that the mean photon energy decreased with distance in both directions away from the reconnection site (see also \citealt{liu08}). The authors attribute the downward-moving looptop to the collapse of the X-point to a relaxed magnetic loop during the reconfiguration of the magnetic field. The same conclusions were reached by \cite{sui04} and \cite{vero06}, who observed similar motions of rising plasmoids concurrent with shrinking looptop sources in other events imaged with RHESSI.
A recent numerical simulation by \cite{bart08a} shows that by invoking variable reconnection rates along the current sheet, {\it downward} propagating plasmoids should also be visible in X-rays below $\sim$2~R$_{\odot}$ (see also \citealt{rile07}). The condition for this scenario is met when the reconnection rate above the plasmoid is greater than that below, resulting in a net downward tension in the newly connected magnetic field lines. Furthermore, this model shows that the interaction of such a plasmoid with the underlying loop system can result in a substantial increase in dissipated energy, more so than during the initial ejection of the rising plasmoid or the coalescence of plasmoid pairs. To date, there has been only one report of such an interaction, by \cite{kolo07} using Yohkoh/SXT data. They found an increase in X-ray and decimetric radio flux and an increase in temperature at the interaction site.
\begin{figure}[!t]
\begin{center}
\includegraphics[height=8.5cm,angle=90]{f2.eps}
\caption{Lightcurves of the flare in the 3--6, 6--12, and 12--25~keV energy bands as observed by RHESSI, as well as the GOES 1--8~\AA~lightcurve. The horizontal bars at the top of the plot denote RHESSI's attenuator state (A0, A1), nighttime (N) and SAA passes (S).}
\label{hsi_goes_ltc}
\end{center}
\end{figure}
\begin{figure*}[]
\begin{center}
\includegraphics[height=\textwidth,angle=90]{f3.eps}
\caption{RHESSI images in the 5-10 keV energy band formed over 60s integrations during the onset of the flare, although only alternate images are shown here. Contours mark the 40\% and 80\% levels. The plasmoid (source A) and looptop (source B) sources are labeled. The grey patterns around the sources are the CLEAN residuals and reflect the background noise level of the images.}
\label{multi_hsi_plot}
\end{center}
\end{figure*}
\begin{figure}[!b]
\begin{center}
\includegraphics[height=8.5cm]{f4.eps}
\caption{The two sources observed by RHESSI imaged over 2~keV wide energy bins (3--5, 5--7, 7--9~keV) for a single time interval.}
\label{hsi_ht_vs_en}
\end{center}
\end{figure}
In this paper, observational evidence is presented for increased hard X-ray and radio emission during the coalescence of a downward-moving coronal source with a looptop kernel at the onset of a flare observed with RHESSI. Coordinated observations from the Sun-Earth Connection Coronal and Heliospheric Investigation (SECCHI; \citealt{howa08}) suite of instruments onboard the Solar TErrestrial RElations Observatory (STEREO; \citealt{kais08}) show that this interaction was concurrent with the acceleration phase of the associated CME. Using wavelet-enhanced images from the EUV Imager (EUVI), evidence is also presented for inflowing magnetic field lines that persisted for several hours prior to reconnection. Section~\ref{obs} describes the event as viewed by RHESSI and STEREO and the techniques used to determine the motion of the coronal X-ray sources and the CME. Section~\ref{conc} discusses the findings in the context of numerical simulations, and summarizes the conclusions.
\section{OBSERVATIONS AND ANALYSIS}
\label{obs}
The event presented here occurred on 2007 January 25 in active region NOAA 10940, which was just behind the east limb at the time as seen from the Earth. Several CMEs from the same active region were observed around this period shortly after the launch of STEREO, and have been the focus of many studies \citep{attr07,luga08,luga09,grig09}. As STEREO-Behind was only 0.2$^{\circ}$ from the Sun-Earth line at this time, no corrections were required to align the images with those from RHESSI. Figure~\ref{euvi_cor1} shows the CME as it passed through the COR1 field of view at 06:53:24~UT along with the associated active region as seen by EUVI. Also overlaid on the inset EUVI image are the 5--10~keV source contours observed with RHESSI at the same time. Figure~\ref{hsi_goes_ltc} shows the X-ray lightcurves in the 3--6, 6--12, and 12--25~keV energy bands from RHESSI, along with the 1--8~\AA~lightcurve from GOES. The GOES C6.3 class flare began at 06:33:00~UT and peaked at 07:15:00~UT. Emission in the 3--6 and 6--12~keV energy bands observed by RHESSI began to increase $\sim$5 minutes earlier. At the time of this event, there was another active region on the western limb that was the focus of instruments not capable of observing the full solar disk, such as TRACE and those onboard Hinode. Data for the event presented here were, therefore, only obtainable from those instruments capable of observing the entire solar disk. This included radio data from the Learmonth Solar Observatory in Western Australia at eight discrete frequencies (245, 410, 610, 1415, 2695, 4995, 8800, and 15400~MHz).
\subsection{Coronal X-Ray Source Motions}
\label{x_ray_sources}
RHESSI images were formed using CLEAN \citep{hurf02} in the 5--10~keV energy band over 60~s integrations using detectors 4, 6, 8, and 9. Detectors \#2 and \#7 were omitted from this analysis due to their reduced sensitivity to low-energy photons. The calibration for detector \#5 was poorly known at this time and was, therefore, also excluded. Detectors \#1 and \#3 also introduced noise in the images in this case by over-resolving the sources due to their higher spatial resolution and were therefore also omitted. The 5--10~keV range was chosen to give the best signal-to-noise ratio below the instrumental background Ge line at $\sim$11~keV during the onset of the flare when the count rate was low. The earliest images revealed a single, high-altitude coronal source (source A; Figure~\ref{multi_hsi_plot}$a$--$f$). At 06:37~UT a second source (source B; Figure~\ref{multi_hsi_plot}$e$--$h$) appeared, at an apparently lower altitude than the initial source. Source B was observed to lie above the post-flare loop arcade that later rose from behind the limb in EUVI images (see Figures 3 and 4 in \citealt{grig09}), and was therefore assumed to be a looptop kernel associated with newly formed loops that lay above those emitting in EUV. Source A, on the other hand, resembled an above-the-looptop source or plasmoid. From the bottom row of Figure~\ref{multi_hsi_plot} it can be seen that these two sources merged together between 06:37:00--06:41:00~UT. After 06:41:00~UT the amalgamated source displayed a cusp-like structure extending to the southeast until RHESSI went into the Earth's shadow at 07:09~UT.
Figure~\ref{hsi_ht_vs_en} shows the structure of the two sources in finer energy bins at a time when each of the sources could be clearly resolved (06:37~UT). Source A displayed an energy gradient in terms of height, with higher energy emission emanating from higher altitudes. Source B on the other hand showed no discernible displacement in terms of energy. This is in contrast to what was observed in the event presented by \cite{kolo07}, who found a thermal stratification in the looptop source, but no clear displacement for the higher altitude source. Similarly, \cite{sui03} found that higher energy emission emanated from higher altitudes for their looptop source, as expected. However, the reverse was found to be true for the associated rising plasmoid, with mean photon energy decreasing with height, consistent with the idea that reconnection occurred in between the two sources. Similar conclusions were reached by \cite{liu08}, who stated that higher energy emission should be observed closer to the reconnection site. With this in mind, the case presented in Figure~\ref{hsi_ht_vs_en} suggests that in forming the plasmoid, the reconnection rate above the source must have been greater than that below. This would not only explain the reverse energy stratification as a function of height, but also the downward motion due to the net tension exerted by the magnetic field, as surmised by \cite{bart08a}.
In order to track the motion of each source, the coordinates of the peak emission were identified and used to determine their height above the solar limb. The peak emission, rather than the centroid, was chosen to exclude the possibility of interpreting the relative change in intensity of the two sources as a motion. It was found that source A had an initial height of 45~Mm at 06:29~UT and decreased in altitude during the subsequent 12 minutes (Figure~\ref{hsi_ltc_ht_radio}$c$). A linear least-squares fit to these data points yielded a mean downward velocity of 12~km~s$^{-1}$, similar to the value of 16~km~s$^{-1}$ found by \citet{kolo07} for their downward-moving plasmoid. Source B was observed to rise continuously throughout the event, which is characteristic of a post-flare arcade. Its mean velocity was found to be $\sim$5~km~s$^{-1}$. After 06:41:00~UT the individual sources could no longer be resolved; the time interval over which the two sources merged was therefore estimated to be from 06:37 to 06:41~UT.
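For reference, the velocity estimate amounts to a linear least-squares fit to the height--time measurements. A minimal sketch of the procedure in Python is given below; the sample values are hypothetical and merely mimic the measured downward trend, since the actual data are the peak positions extracted from the RHESSI images.
\begin{verbatim}
import numpy as np

# Hypothetical height-time samples (t in s from 06:29 UT, h in Mm),
# chosen only to mimic the observed downward trend of source A.
t = np.array([0., 120., 240., 360., 480., 600., 720.])
h = np.array([45.0, 43.4, 42.1, 40.7, 39.5, 37.9, 36.6])

# Linear least-squares fit h(t) = v*t + h0; the slope v is the
# mean velocity in Mm/s (negative for downward motion).
v, h0 = np.polyfit(t, h, 1)
print("mean velocity: %.1f km/s" % (v * 1e3))  # Mm/s -> km/s
\end{verbatim}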
\subsection{Evidence for Enhanced Nonthermal Emission}
\label{xray_spec}
According to \cite{bart08b}, a plasmoid-looptop interaction as described in Section~\ref{x_ray_sources} should have a distinct observational signature. The authors predict that the resulting increase in energy dissipation should manifest itself as enhanced chromospheric or HXR emission. In the event presented by \cite{kolo07}, the authors observed a concurrent increase in both HXRs (14--23~keV) and radio emission (1--2~GHz), both indicators of nonthermal electrons. The authors also detected an increase in temperature at the interaction site in the corona during the merging. Figure~\ref{hsi_ltc_ht_radio}$a$ shows the RHESSI lightcurves (in raw counts) in 3~keV wide energy bins (3--6, 6--9, 9--12, 12--15, and 15--18~keV) over the flare onset using the front segment of detector \#1 only. Between 06:38 and 06:44~UT (shown by the two vertical dotted lines in Figure~\ref{hsi_ltc_ht_radio}) there is a pronounced enhancement in the higher energy bands (12--15 and 15--18~keV, in particular). A similar enhancement is also visible in the 245~MHz channel of the Learmonth radio data (Figure~\ref{hsi_ltc_ht_radio}$b$). The increase in emission (from 06:38--06:41~UT) corresponds to the approximate time over which the two X-ray sources were observed to merge from Figure~\ref{multi_hsi_plot}$e$--$g$. From 06:41--06:44~UT (after the plasmoid source was no longer visible) HXR and radio emissions both appeared to decrease briefly. This episode of increased nonthermal emission is therefore believed to be a result of a secondary phase of particle acceleration due to magnetic reconnection within the current sheet formed between the two merging sources. Unfortunately, no radio spectrograph data were available at the time of this event to search for evidence of drifting pulsating structures.
A RHESSI spectrum taken around the time of the merging (06:41:00~UT; Figure~\ref{spec_fits}) also shows that emission from 9--20~keV is predominantly nonthermal, consistent with the idea that enhancements in both the HXR and radio lightcurves are evidence for an increase in the number of accelerated particles. This spectrum was also generated using only detector \#1 to remain consistent with the lightcurve shown in Figure~\ref{hsi_ltc_ht_radio}. This increased nonthermal emission is consistent with the simulations of \cite{bart08b} but is clearly coronal in nature, rather than chromospheric as predicted. Chromospheric rebrightening cannot be ruled out, however; it may be difficult to detect both coronal plasmoids and footpoint emission simultaneously during on-disk events due to RHESSI's limited dynamic range.
\begin{figure}[!t]
\begin{center}
\includegraphics[width=8.5cm]{f5.eps}
\caption{$a$) RHESSI lightcurves in the 3--6, 6--9, 9--12, 12--15, and 15--18~keV energy bands from the front segment of detector \#1 only. The horizontal bars marked A0 and A1 at the top of the plot denote the attenuator state. $b$) Emission in the 245, 410 and 610~MHz channels from the Learmonth radio telescope. The fluxes in the 410 and 610~MHz channels have been scaled by factors of 2 and 2.5 for clarity, respectively. $c$) Height-time plots of the two 5--10~keV sources as observed by RHESSI. The plasmoid source is denoted by crosses while the looptop source is given by diamonds, both with error bars. The solid line represents a least-squares fit to the downward-moving source. The two vertical dotted lines mark the approximate time of enhanced HXR and radio emission during which the two RHESSI sources appeared to merge.}
\label{hsi_ltc_ht_radio}
\end{center}
\end{figure}
\subsection{CME Kinematics}
\label{cme_acc}
One limitation of many previous studies of CMEs is the absence of data below $\sim$3~R$_{\odot}$, where most of the CME acceleration takes place. This is due in part to the loss of the C1 coronagraph on SOHO/LASCO in 1998. With the launch of STEREO in 2006, this gap has been filled with the SECCHI suite of instruments. EUVI captures full-disk observations out to 1.7~R$_{\odot}$, while the COR1 and COR2 coronagraphs cover 1.4--4~R$_{\odot}$ and 2--15~R$_{\odot}$, respectively.
\begin{figure}[!t]
\begin{center}
\includegraphics[width=8.5cm]{f6.eps}
\caption{RHESSI photon spectra with the associated residuals using the front segment of detector \#1 integrated over 06:41:40--06:42:00~UT during the merging phase. The dotted line represents the best fit to the thermal distribution while the dashed line represents the thick-target component. The solid line shows the sum of the two components and the dot-dashed line marks the background.}
\label{spec_fits}
\end{center}
\end{figure}
The data used in this study are exclusively from the STEREO-Behind (STEREO-B) spacecraft and were prepped using the standard {\sc secchi\_prep} routine inside SSWIDL. For EUVI, this entails standard corrections for de-bias, vignetting, and exposure time normalization, along with rotating the images with a cubic convolution interpolation to place solar north at the top of the image. For COR1, this involved the extra step of individually background-subtracting each polarization state before combining using a Mueller matrix to form polarized brightness images. For COR2, total brightness images were created and then studied as base difference images. Both COR1 and COR2 images were further enhanced using a wavelet technique \citep{byrn09}.
\begin{figure*}[!ht]
\begin{center}
\includegraphics[height=\textwidth,angle=90]{f7.eps}
\caption{Six select EUVI images in the 195~\AA~passband covering the time range 04:56--06:51~UT. Panels $a$--$c$ show the initial gradual rise of the CME front ({\it triangles}). A structure believed to be the southern leg of the CME is also noted. Overplotted on panels $e$ and $f$ are contours of the concurrent 5--10~keV emission observed by RHESSI, the plasmoid (source A) and looptop (source B), respectively. Note that the leg of the CME is no longer visible in these panels.}
\label{euvi_cme_front}
\end{center}
\end{figure*}
The CME front was first detected in the EUVI 195~\AA~passband at 04:56~UT at a height of $\sim$150~Mm above the eastern limb and gradually increased in height over the subsequent $\sim$1.5 hours (see Figures~\ref{euvi_cme_front}$a$--$c$). After 06:30~UT, when the CME became visible in COR1 images (as shown in Figure~\ref{euvi_cor1}), it began to expand more rapidly. At the same time, a structure believed to be one leg of the CME (Figures~\ref{euvi_cme_front}$a$--$d$) was observed to sweep northwards to the site of the RHESSI 5--10~keV emission as noted in Figure~\ref{euvi_cme_front}$e$. This motion is interpreted as evidence for the predicted inflows associated with reconnection and will be discussed further in Section~\ref{inflows}.
The maximum height of the CME front above the solar limb was measured in each frame to create a height-time profile. The uncertainty assigned to the height of the CME front was taken to be five pixels, corresponding to uncertainties of 5, 50, and 200~Mm for EUVI, COR1, and COR2, respectively. From these, velocity and acceleration profiles along with their associated uncertainties were numerically derived using a three-point Lagrangian interpolation (see Figures~\ref{hsi_cme_ht}$a$--$c$) similar to that used by \cite{gall03}. This technique is not as sensitive to the uncertainties in the height-time measurements as a standard two-point numerical differentiation and can give an accurate representation of the acceleration profile. However, because the data are smoothed in this way, the magnitudes of the derived values can only be taken as upper limits, at best.
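For reference, a minimal sketch of three-point Lagrangian differentiation for unevenly spaced samples is given below (in Python; analogous to the standard IDL {\sc deriv} routine, with a simplified treatment of the endpoints). It differentiates the local quadratic Lagrange interpolant at each interior point.
\begin{verbatim}
import numpy as np

def deriv3(x, f):
    """Three-point Lagrangian derivative of f(x) on an uneven grid."""
    d = np.empty_like(f)
    x0, x1, x2 = x[:-2], x[1:-1], x[2:]
    f0, f1, f2 = f[:-2], f[1:-1], f[2:]
    # Derivative of the quadratic interpolant, evaluated at x1.
    d[1:-1] = (f0 * (x1 - x2) / ((x0 - x1) * (x0 - x2))
               + f1 * (2.0 * x1 - x0 - x2) / ((x1 - x0) * (x1 - x2))
               + f2 * (x1 - x0) / ((x2 - x0) * (x2 - x1)))
    d[0], d[-1] = d[1], d[-2]  # simplified one-sided endpoints
    return d

# v = deriv3(t, height); a = deriv3(t, v)   # velocity, acceleration
\end{verbatim}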
Figures~\ref{hsi_cme_ht}$a$--$c$ show that the CME front rose gradually for 1.5 hours with a mean velocity of $<$100~km~s$^{-1}$ before beginning to accelerate at $\sim$06:15~UT, when it was at a height of 250~Mm (1.35~R$_{\odot}$). The acceleration profile peaks some 20 minutes later when the CME was at a height of 400~Mm above the limb (1.57~R$_{\odot}$). Subsequently it continued to increase in height and velocity but at a decreasing rate. It attained its maximum velocity of 1400~km~s$^{-1}$ at a height of 7000~Mm ($\sim$11~R$_{\odot}$) at $\sim$08:00~UT, after which it began to decelerate. Figures~\ref{hsi_cme_ht}$d$ and \ref{hsi_cme_ht}$e$ show the height-time plot and lightcurves of the associated X-ray emission, respectively. It can be seen that the downward motion of the plasmoid observed by RHESSI occurred during the acceleration phase of the CME. This lends further support to the idea that the CME front and the plasmoid were connected by a mutual current sheet and that the primary episode of reconnection both accelerated the CME and generated the magnetic tension in the field lines necessary to drive the plasmoid downwards. However, it is also possible that the CME acceleration was driven by external forces (e.g. kink instability in the flux rope) which led to filamentation of the current sheet and subsequent reconnection and plasmoid motion.
\subsection{Reconnection Inflows}
\label{inflows}
During the initial gradual rise of the CME front, a linear structure believed to be the southern `leg' of the CME can be seen in panels $a$--$c$ of Figure~\ref{euvi_cme_front}. At 06:26~UT (Figure~\ref{euvi_cme_front}$d$) this structure was observed to sweep northwards towards the location of the RHESSI emission visible in the subsequent panel. It was no longer observed at its original location. Unfortunately, the northern leg was not visible, presumably obscured by the multitude of bright loops of the active region.
To track the motion of this structure, a vertical one-pixel slice at Solar X = -1010$\arcsec$ was taken through a series of wavelet-enhanced EUVI images and stacked together in sequence to form a time series. The left-hand panel of Figure~\ref{euvi_time_slice} shows one such image taken at 06:51:47~UT with a solid vertical line denoting the pixel column used in the time series. The dotted line indicates the position of the CME leg some 3 hours earlier and the arrow denotes its inferred direction of motion. The solid contours mark the concurrent 5--10~keV emission observed by RHESSI (source B), which appeared elongated with a cusp to the southeast. A long narrow structure extending from the looptop to the southeast was also apparent in EUVI images. Such features are also often attributed to the reconnection process (e.g. \citealt{mcke99}). The emission associated with the plasmoid when it was first observed by RHESSI at 06:29~UT is also overlaid (source A; dashed contours) and is also located along the narrow structure in the EUVI image.
The right-hand panel of Figure~\ref{euvi_time_slice} shows the time series of the one-pixel wide column through the series of wavelet-enhanced EUVI images. A feature was observed to propagate northwards from Solar Y $\approx-$195$\arcsec$ at 03:30~UT to Solar Y $\approx-$175$\arcsec$ at $\sim$06:50~UT, which was close to the site of the emission observed by RHESSI at that time. This time period also corresponds to the gradual rise phase of the CME front (noted in Figure~\ref{hsi_cme_ht}$a$). This feature is interpreted as evidence for the inflowing magnetic field lines associated with the slow reconnection prior to the main eruption. From this time series, an inflow velocity of 1.5~km~s$^{-1}$ was inferred, comparable to the 1.0--4.7~km~s$^{-1}$ values found by \cite{yoko01} using a similar method. Knowledge of the inflow velocity during a flare can provide information on the rate of reconnection and hence the energy release rate. The reconnection rate, $M_A$, is defined as the ratio of the inflow speed to the local Alfv\'{e}n speed. Taking a typical coronal Alfv\'{e}n speed of 1000~km~s$^{-1}$, the inflow velocity measured here results in a value of $M_A$ = 0.001. This is also consistent with the lower end of the range of values for $M_A$ found by \citet{yoko01}. The brighter feature in the figure originating at Solar Y $\approx$ -200$\arcsec$ and moving south is likely to be one of the active region loops being displaced as the CME erupts.
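The time-slice construction itself is simple: the same pixel column is extracted from every (co-aligned) frame and the columns are stacked in time order, so that features moving in Solar Y appear as inclined tracks whose slopes give plane-of-sky velocities. A minimal sketch, assuming the frames are already prepped 2D arrays, is:
\begin{verbatim}
import numpy as np

def time_slice(frames, col):
    """Stack pixel column `col` from each 2D frame into a
    (solar Y) x (time) array."""
    return np.column_stack([img[:, col] for img in frames])

# slice_img = time_slice(euvi_frames, col_at_solar_x_m1010)
# The slope of a track in slice_img (converted from arcsec per s
# to km per s) gives the feature's velocity along the slit.
\end{verbatim}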
\section{DISCUSSION AND CONCLUSIONS}
\label{conc}
Rare observations are presented of a downward-propagating X-ray plasmoid appearing to merge with a looptop kernel during an eruptive event seen above the solar limb; the first case observed with RHESSI and perhaps only the second ever. Although the majority of above-the-looptop sources observed (in both white light and X-rays) tend to rise due to weaker magnetic field and decreasing density above the flare loops, in certain instances conditions can also be right for downward-moving plasmoids to form. Enhanced HXR emission detected with RHESSI and radio emission observed by the Learmonth radio telescope suggest that this merging resulted in a secondary episode of particle acceleration (see Figure~\ref{hsi_ltc_ht_radio}). Images of the plasmoid formed over finer energy bins (as shown in Figure~\ref{hsi_ht_vs_en}) show that higher energy emission was observed at higher altitudes. This is consistent with the idea that the reconnection rate above the source was greater than that below, unlike rising plasmoids previously observed with RHESSI which show mean photon energy decreasing with height (e.g. \citealt{sui03}). Complementary observations from STEREO show that the plasmoid-looptop merging was concurrent with the period of the most rapid acceleration of the associated CME (Figures~\ref{hsi_cme_ht}$c$ and $d$). These observations are in agreement with a recent numerical simulation that predicts an increase in liberated energy during the merging of a plasmoid with a looptop source \citep{bart08a}. The formation of plasmoids is attributed to the tearing-mode instability during current sheet formation in the wake of an erupting CME \citep{furt63,lin00}.
\begin{figure}[!t]
\begin{center}
\includegraphics[width=8.5cm]{f8.eps}
\caption{Summary of the kinematics of both the CME observed with STEREO and the coronal X-ray sources observed with RHESSI. {\it a}) Height-time plot of the CME front from EUVI (diamonds), COR1 (triangles), and COR2 (squares). {\it b}) and {\it c}) The associated velocity and acceleration profiles, respectively. {\it d}) Height-time plot of the 5--10~keV sources as observed by RHESSI. The downward-moving coronal source is shown as crosses with error bars. The solid line denotes a least-squares fit to the data points and has been extended beyond the data points for clarity. The rising looptop source is represented by diamonds, also with error bars. {\it e}) Observing summary profiles for RHESSI in the 3--6, 6--12 and 12--25~keV energy bands. Horizontal bars marked N and S denote RHESSI nighttimes and SAA passes, respectively.}
\label{hsi_cme_ht}
\end{center}
\end{figure}
\begin{figure}[!t]
\begin{center}
\includegraphics[height=8.5cm,angle=90]{f9.eps}
\caption{{\it Left}: A wavelet enhanced EUVI image taken at 06:51:47~UT. The solid contours overlaid mark the position of the HXR looptop source at the time of the image. The dashed contour marks the position of the plasmoid at 06:29~UT. The dotted line shows the location of the CME leg at 04:46~UT and the arrow points in the direction of its motion. The solid vertical line denotes the pixel column used for the time-slice plot on the right. {\it Right}: A temporal evolution of a vertical slice through a series of EUVI images. The dotted line marks the time of the image in the left-hand panel. The structure believed to be the inflowing CME leg is identified between the two parallel lines.}
\label{euvi_time_slice}
\end{center}
\end{figure}
\cite{bart07,bart08a} have shown theoretically that the deceleration of the plasmoid as it collides with the looptop source can lead to significant episodes of energy release. During this deceleration, antiparallel magnetic field lines begin to pile up between the two sources and a secondary current sheet is formed. This in turn leads to a secondary episode of magnetic reconnection that is driven by the magnetic tension of the field lines that govern the plasmoid motion. The authors also claim that the merging process triggers the excitation of large amplitude waves which can carry with them some of the stored magnetic energy. Although it is not possible to detect any acceleration or deceleration from the RHESSI images presented, a mean downward velocity of 12~km~s$^{-1}$ was calculated. This value is commensurate with the previous observation of \cite{kolo07}, who measured 16~km~s$^{-1}$ during a similar event observed with Yohkoh. However, both these observed values are considerably lower than the value predicted by \cite{bart08a} of $\sim$40\% of the local Alfv\'{e}n speed (i.e. $\sim$400~km~s$^{-1}$). Similar values of $\sim$200~km~s$^{-1}$ were predicted by \cite{rile07} for downward-moving white-light plasmoids. The low velocity measured here may be attributed to the low value of the reconnection rate as estimated from the inflows observed with EUVI (assuming that these field lines converged above the plasmoid). The value of $M_A \approx$ 0.001 is an order of magnitude lower than that used in the numerical simulation. As the amount of tension exerted on the plasmoid is sensitive to the net reconnection rate, this would result in a lower tension force and therefore a lower downward velocity. This in turn may also affect the amount of energy liberated in the subsequent collision with the looptop. It is possible that the model of \cite{bart08a} may overestimate the velocity (and subsequent dissipated energy) given that the simulation is two-dimensional and does not take into account 3D structures, such as a twisted flux rope. Similarly, the plasmoid detected with RHESSI is observed for more than 10 minutes before merging with the looptop source, whereas the simulations which yielded higher velocities predict that the source should exist for only $\sim$1 minute before merging. While the simulations of \cite{bart08a} predict a rebrightening of the loop footpoints in HXRs and/or chromospheric emission, both the analysis presented here and that of \citet{kolo07} show a distinct increase in coronal emission. A recent analysis by Miklenic et al. (2010; submitted) appears to refute the idea that plasmoid-looptop interactions could be responsible for chromospheric rebrightenings. These observations provide further evidence that the particle acceleration process occurs in the corona rather than at the footpoints as recently suggested by \cite{flet08}, although acceleration at the footpoints as proposed by \cite{brow09} cannot be ruled out.
While plasmoid-looptop interactions are rarely observed, it is possible that they occur more often but are difficult to observe due to the brighter emission from the flare itself and RHESSI's limited dynamic range. A newly developed technique of deep integrations using RHESSI visibility-based X-ray imaging \citep{sain09} may help to identify faint X-ray sources in the corona during eruptive limb events. By comparing other similar events it may be possible to determine how great an effect the CME acceleration (and magnetic reconnection rate, if possible) has upon the resulting HXR and radio flux. It would therefore be useful to find events observed at a time when RHESSI's detector calibration was better known, in order to perform the more rigorous spectral analysis that was not possible for this event. This could reveal more detailed information on the energetics of the resulting accelerated particles.
\acknowledgements
ROM would like to thank Gordon Holman and Jack Ireland for their very helpful and insightful discussions, and Kim Tolbert for modifications made to the RHESSI software. We also thank the anonymous referee for their constructive comments which improved the quality of this paper. RTJMA is funded by a Marie Curie Intra-European Fellowship. CAY acknowledges support from NASA Heliophysics Guest Investigator grant NNG08EL33C.
\bibliographystyle{apj}
The study of reaction processes in catalytic reaction networks is
generally important to understand the dynamics and fluctuations in
biochemical systems and their functionality. Obviously,
understanding the generic features of equilibrium characteristics
and relaxation to equilibrium is the first step toward gaining
such an understanding. Indeed, such reaction systems often exhibit
anomalously slow relaxation to equilibrium due to kinetic
constraints such as diffusion-influenced (diffusion-limited)
reactions\cite{DLR} and the formation of transient Turing
patterns\cite{AK2}. In this paper, we consider a novel mechanism
to realize such slow relaxation in catalytic reaction networks,
where the discreteness in molecule numbers, which may reach zero,
induces a drastic slowing down.
Most intra-cellular reactions progress with the aid of catalysts
(proteins), while these catalysts themselves have to be synthesized as a
result of such catalytic reactions. Indeed, reaction dynamics in
catalytic networks have been extensively investigated. In most
such studies, a limiting case with a strong non-equilibrium
condition was assumed by adopting a unidirectional reaction
process (i.e., by neglecting backward reactions). To understand
the basic properties of biochemical reactions, however, it is
important to study both equilibrium and non-equilibrium
characteristics by including forward and backward reactions that
satisfy the detailed balance condition. Such a study is not only
important for statistical thermodynamics but also provides some
insight into the regulation of synthesis and degradation reactions
for homeostasis in cells.
Recently, we discovered a slow relaxation process to equilibrium,
which generally appears in such catalytic reaction networks, and
proposed
``chemical-net glass'' as a novel class of nonequilibrium phenomena.
There, relaxation in the vicinity of equilibrium is
exponential, whereas far from it, much slower logarithmic
relaxation with some bottlenecks appears due to kinetic
constraints in catalytic relationships\cite{AK3}. In that study,
we adopted continuous rate equations and assumed that the molecule
number is sufficiently large.
In biochemical reaction processes, however, some chemical species
can play an important role at extremely low concentrations of even
only a few molecules per cell\cite{cell2,cell3,cell4}. In such
systems, fluctuations and discreteness in the molecule number are
important. Indeed, recent studies by using a stochastic simulation
of catalytic reaction networks have demonstrated that the
smallness in the molecule number induces a drastic change with
regard to statistical and spatiotemporal behaviors of molecule
abundances from those obtained by the rate equation, i.e., at the
limit of large molecule
numbers\cite{togashi1,ookubo,AK1,AK11,mif1,mif2,Solomon,togashi3,marion,zhdanov,Dau,Kaneko-Adv,Furusawa,Furusawa2}.
In these studies, the strong nonequilibrium condition is assumed
by taking a unidirectional reaction.
Now, it is important to study the relaxation process to
equilibrium by considering the smallness in the molecule number.
Does the discreteness in molecule number influence the equilibrium
and relaxation behaviors? Is the relaxation process slowed down
by the smallness in the molecule number? To address this question,
we have carried out several simulations of the relaxation dynamics
of random catalytic reaction networks by using stochastic
simulations. Numerical results from several networks\cite{SAK1,SAK2}
suggest that the relaxation time is prolonged drastically when the
number of molecules is small. The increase relative to the continuum
limit is expressed by the factor $\exp(\beta \delta E)$, where
$\delta E$ is the additional energy required to pass through the
bottleneck due to the discreteness in molecule number and $\beta$
is the inverse temperature.
In this paper, we analyze such slowing down of a reaction process
to equilibrium that is induced by the smallness in molecule
numbers. Instead of taking complex reaction networks, we choose
simple networks or network motifs to estimate the relaxation time
analytically. In fact, complex networks are often constructed by
combining a variety of simple network motifs with simple branch
or loop structures. We focus on the relaxation dynamics of
reversible catalytic reaction systems with such simple network
motifs as a first step toward understanding the general
relaxation properties in complex catalytic reaction networks.
In section II, we introduce two network motifs, where the
synthesis of a product from resource molecules (and its reverse
reaction) is catalyzed by one of the other products. Here, we note
that some specific network motifs may exhibit incomplete
equilibration when the molecule number decreases, and the average
chemical concentration in the steady state deviates from the
equilibrium concentration derived by the continuous rate
equations.
In section III, we show relaxation characteristics from the
stochastic simulations. The relaxation of the fluctuation around
the steady state slows down as the molecule number is decreased
below a critical value. The increase in the relaxation time is represented by a scaling
function of $h = N \exp(-\beta V)$, where $N$ is the
molecule number and $V$, the energy gap between a product and a
resource. In section IV, we present an analytic estimate for this
relaxation suppression due to the smallness in molecule number by
using a suitable approximation for the Master equation. In section
V, we present a summary and discuss the generality of our results.
\section{II. Models}
\begin{figure}
\begin{center}
\includegraphics[width=6.0cm]{AKS_E_FIG1.ps}
\end{center}
\caption{Illustration of (a) cascade system and (b) loop system.
Solid arrows indicate reaction paths (their width indicates the
transition tendency) and dashed arrows indicate catalysis.}
\end{figure}
Here, we consider reversible catalytic reaction systems with two
simple network structures, the cascade system and the loop system, as
shown in Fig. 1, which may function as network motifs for complex
reaction networks. These systems consist of $2S$ chemical species,
which are Product $P_i$ and Resource $R_i$ with $i = 1, 2, ...,
S$. Here, each product chemical can catalyze at most one of the
other Resource-Product reactions, while each reaction is
catalyzed by at most one product. (Instead, we can interpret that
there exist $S$ chemical species with excited and non-excited
states, and chemicals in an excited state can catalyze an
excitation reaction of one of the other molecules.)
If every reaction is catalyzed by one of the products, we can renumber
$P_i$ and $R_i$ so that, for $i = 1, 2, ..., S-1$, the reactions are written
as
$P_i + P_{i+1} \rightleftharpoons^{k_{P_i,R_i}}_{k_{R_i,P_i}} R_i + P_{i+1}$,
\\[1ex]
where $P_S + P_{1} \rightleftharpoons^{k_{P_S,R_S}}_{k_{R_S,P_S}}
R_S + P_{1}$, which leads to the loop system (b). When there
exists a reaction that is not catalyzed, the cascade system in
Fig.~1(a) is obtained, where
$P_S \rightleftharpoons^{k_{P_S,R_S}}_{k_{R_S,P_S}} R_S$.
(Neglecting cases in which some pair of resource and product is
totally disconnected from others, the loop and cascade systems are
the only possibilities).
The rates of forward ($k_{P_i,R_i}$) and backward ($k_{R_i,P_i}$)
reactions are set so that they satisfy the detailed balance
condition. We assume that the energy of the chemical $P_i$ is
larger than that of $R_i$, and we set $k_{P_i,R_i} = 1$ and
$k_{R_i,P_i} = \exp(-\beta V_i)$, where $V_{i}$ is the energy gap
between $P_i$ and $R_i$ and $\beta$ is the inverse temperature. We
define $p_i$ and $r_i$ as the number of molecules of the chemical
species $P_i$ and $R_i$, respectively. We fix the total number of
molecules as $SN$, and $p_i + r_i = N$ holds for each $i$. The
state of the system is represented by a set of numbers $(p_1, p_2,
... , p_S)$.
In both systems, it is noted that for $N \to \infty$ (i.e.,
the continuous limit), $<p_i> \to p_i^{eq} = \frac{N e^{-\beta
V_i}}{1+e^{-\beta V_i}}$ and $<r_i> \to r_i^{eq} =
\frac{N}{1+e^{-\beta V_i}}$ hold in the equilibrium distribution,
which is reached as $t \to \infty$.
For finite $N$, however, there is a difference between the
distribution of the cascade and the loop systems. In the cascade
system, the averages of the equilibrium chemical abundances are
identical to those in the continuum limit for any $N$ and $\beta$, and are given by
$<p_{i}> = \frac{Ne^{-\beta V_i}}{1+e^{-\beta V_i}}$. This is because all the states
$(p_1, p_2, ... , p_S)$ ($0 \leq p_i\leq N$) are connected by
reactions, and the above equilibrium distribution is the only
stationary solution of the Master equation.
On the other hand, in the loop system, there is a deviation in the
steady chemical concentration from the continuum limit, which
becomes more prominent as $N$ becomes smaller. This is because the
state $(p_1, p_2, ..., p_S) = (0, 0, ..., 0)$ cannot be reached
from other states, nor can it move to any other
state; it is effectively isolated. Hence, the steady distribution from initial conditions
without $(p_1, p_2, ..., p_S) = (0, 0, ..., 0)$ deviates from the
continuum limit. For example, for $N=1$ and $V_i=V$, the distribution from
the initial condition without $(p_1, p_2, ..., p_S) = (0, 0, ...,
0)$ is given by $<p_{i}> = \frac{e^{-\beta V}(1+e^{-\beta
V})^{S-1}}{(1+e^{-\beta V})^S - 1}$. Note that $<p_{i}>$ tends to
$1/S$ with an increase in $\beta$.
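For completeness, this expression follows from the detailed-balance (Boltzmann) distribution restricted to the $2^S-1$ accessible states, each carrying the weight $e^{-\beta V \sum_j p_j}$:
\[
Z = \sum_{\{p_j\} \neq (0,...,0)} e^{-\beta V \sum_j p_j} = (1+e^{-\beta V})^{S} - 1,
\qquad
<p_{i}> = \frac{1}{Z}\, e^{-\beta V}\left(1+e^{-\beta V}\right)^{S-1}.
\]
For $\beta \to \infty$, both the numerator and $Z$ are dominated by the $S$ single-product states of weight $e^{-\beta V}$, which yields the limit $1/S$.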
\section{III. Simulation results}
In this section, we present the results of stochastic simulations
and show the dependence of the relaxation process on the number of
molecules $N$ and the inverse temperature $\beta$. For simplicity,
we consider $V_i$ to be uniform for all species ($= V$); however,
this assumption can be relaxed.
Numerical simulations are carried out by iterating the following
stochastic processes. (i) We randomly pick a pair of molecules,
say, molecules 1 and 2. (ii) Molecule 1 is transformed with probability
given by its reaction rate (if it is a P, it is transformed to the
corresponding R, and vice versa) if molecule 2 can catalyze the reaction
of molecule 1. In the cascade case, there is a reaction that progresses
without a catalyst; in this case, if molecule 1 belongs to the species
that reacts without a catalyst, it is transformed with its reaction rate
independently of molecule 2. Here, a unit time is defined as the time span
in which the above processes are repeated
$SN$ times, so that in each unit time every molecule is picked up once on
average to check whether a transformation occurs.
In the following, we focus on the behavior of the system after a
sufficiently long time from the initial state, in which the molecule
numbers $p_i$ and $r_i$ are set randomly from $[0,N]$ under
the constraints $p_i + r_i = N$ and $(p_1, p_2, ..., p_S) \neq (0,
0, ..., 0)$.
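A compact implementation of this scheme may clarify the update rule; the following Python sketch (not the actual code used to produce the figures) follows the description above, with reaction $i$ catalyzed by $P_{i+1}$ and, in the cascade case, the last reaction left uncatalyzed.
\begin{verbatim}
import numpy as np

def simulate(S=3, N=16, beta=3.0, V=1.0, units=10**5, loop=True, seed=0):
    """Sketch of the Monte Carlo scheme; p[i] is the number of P_i
    molecules (r[i] = N - p[i]). Returns p sampled once per unit time."""
    rng = np.random.default_rng(seed)
    p = rng.integers(0, N + 1, size=S)
    while loop and not p.any():            # exclude (0,...,0) for the loop
        p = rng.integers(0, N + 1, size=S)
    eps, traj = np.exp(-beta * V), []
    for _ in range(units):
        for _ in range(S * N):             # one unit time = S*N pair picks
            i = rng.integers(S)            # species of molecule 1
            is_P = rng.random() < p[i] / N # molecule 1 is a P (else an R)
            if i == S - 1 and not loop:    # uncatalyzed cascade reaction
                ok = True
            else:                          # molecule 2 must be a P_{i+1}
                ok = rng.random() < p[(i + 1) % S] / (S * N - 1)
            if ok:
                if is_P:                   # P_i -> R_i with rate 1
                    p[i] -= 1
                elif rng.random() < eps:   # R_i -> P_i with rate e^{-beta V}
                    p[i] += 1
        traj.append(p.copy())
    return np.array(traj)
\end{verbatim}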
\begin{figure}
\begin{center}
\includegraphics[width=7.0cm]{AKS_E_FIG2abc.ps}
\includegraphics[width=7.0cm]{AKS_E_FIG2def.ps}
\end{center}
\caption{$C(t)$ of cascade systems with (a) $S = 2$, (b) $S = 3$,
and (c) $S=4$, and loop systems with (d) $S = 2$ and (e) $S = 3$
for several N with $\beta = 3$. (f) $C(\infty)$ as a function of
$h$ in loop systems for several $\beta$ and $S$. $C_{ODE}$
indicates the auto-correlations given by Eq. (4) in (a)-(c) and
Eq. (3) in (d) and (e). $C^{*} = \exp(-e^{-\beta V}t)$ in (d), and
$C^{*} = \exp(-\frac{e^{-2\beta V}}{2}t)$ in (b) and (e) with
$\beta = 3$ ($V = 1$).}
\end{figure}
Figures 2(a)--2(e) show the auto-correlation functions of the
deviation from the equilibrium concentration of the cascade system
((a)-(c)) and the loop system ((d) and (e)) for some $S$ and N
with $\beta = 3$, defined by $C(t) = c(t)/c(0)$ and $c(t) =
<\sum_i [(p_i(t) - p_i^{eq})(p_i(0) - p_i^{eq}) +
(r_i(t)-r_i^{eq})(r_i(0) - r_i^{eq})]>$. As already discussed,
$C(\infty) \to 0$ in the cascade system, whereas $C(\infty) > 0$
for small $N$ in the loop system. The value of $C(\infty)$ starts to deviate from zero when $h=N
e^{-\beta V}$ becomes less than 1. Hence, we have plotted
$C(\infty)$ of the loop system as a function of $h$ in Fig.2(f)
for $\beta = 1$ and $3$. As shown, $C(\infty) > 0$ holds for $h <
1$ independently of $\beta$. On the other hand, in both systems,
the relaxation to the final state with $C(t) = const.$ for small
$N$ is drastically slowed down as compared to that for large $N$
when $S > 2$, whereas the relaxation for small $N$ is faster when
$S = 2$.
\begin{figure}
\begin{center}
\includegraphics[width=8.0cm]{AKS_E_FIG3.ps}
\end{center}
\caption{$\tau$ as a function of $N$ in (a) cascade system and (b)
loop system, and $\tau^{C}(S)$ and $\tau^{L}(S)$
for $\beta=3$ and $S= 2, 3, 4$. $\rho$ as a function of $h$ in (c)
cascade system and (d) loop system with $S= 2, 3, 4$ for several
values of $\beta$.}
\end{figure}
To observe the dependence of the relaxation time on $N$, we
measured the integrated relaxation time defined as $\tau =
\int_0^{\infty} \frac{C(t)-C(\infty)}{1-C(\infty)}dt$. Figure 3(a)
and (b) show $\tau$ as a function of $N$ for $\beta=3$ with $S= 2,
3, 4$ for the (a) cascade system and (b) loop system. For $S \geq
3$, the relaxation time $\tau$ increases by several orders of
magnitude with a decrease in $N$ in both systems. On the other
hand, $\tau$ for $S = 2$ does not exhibit any drastic change with
the decrease in $N$ in both systems.
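For reference, both $C(t)$ and $\tau$ can be estimated from a sampled trajectory as in the following sketch (assuming $p$ is the (time $\times$ species) array returned by the simulation sketch above; since $r_i = N - p_i$, the resource terms simply double the product terms):
\begin{verbatim}
import numpy as np

def relaxation_time(p, N, beta, V, tmax):
    """Normalized autocorrelation C(t) and integrated time tau."""
    p_eq = N * np.exp(-beta * V) / (1.0 + np.exp(-beta * V))
    dp = p - p_eq                       # deviations; dr = -dp
    T = len(dp)
    c = np.array([2.0 * np.mean(dp[:T - t] * dp[t:])
                  for t in range(tmax)])     # average over time origins
    C = c / c[0]
    C_inf = C[-tmax // 10:].mean()           # plateau estimate (tmax large)
    tau = np.trapz((C - C_inf) / (1.0 - C_inf))  # unit-time spacing
    return C, tau
\end{verbatim}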
This prolongation of $\tau$ for $S>2$ becomes more prominent as
$\beta$ is increased. From the data for several $\beta$, $\tau$ is suggested to
increase in proportion to $\exp(\beta V)$. Combining $N$ and
$\beta$ dependencies, we introduce a parameter $h=N \exp(-\beta
V)$. The discreteness effect is dominant when $h=N \exp(-\beta V)$
is less than unity. Figure 3(c) and 3(d) show $\rho = \tau /
\tau_{N \to \infty}$ as a function of $h$ for the (c) cascade
system and (d) loop system for several values of $\beta$ and $S=
2, 3, 4$. For $S>2$, the deviation of $\rho$ from the continuum
limit ($\rho=1$) becomes prominent when $h$ is below unity in both
systems. The increase in $\rho$ appears to become steeper with an
increase in $S$. On the other hand, $\rho$ for $S = 2$ does not
exhibit a drastic increase with a decrease in $h$.
\section{IV. Origin of slow relaxations and crossover}
\subsection{A. Relaxation processes for $N \to \infty$ and $N=1$}
Now, we analytically estimate the enhancement in relaxation time
and explain why it is expressed in terms of $h = N\exp(-\beta V)$.
For this purpose, we derive an estimate from a Master equation
analysis for small $N$ and compare it with that from the continuum
limit $N \to \infty$.
In the continuum limit, the reaction dynamics are represented by
the following rate equation:
\begin{equation}
\dot{x_{i}} = x_{c}[ e^{-\beta V} (\frac{1}{S} - x_{i}) - x_{i}]
\end{equation}
with $x_{i} = p_i / SN$. Here, $x_c = x_{i+1}$ for $i \neq S$ in both systems, while $x_c = 1$ for $i=S$ in the cascade
system and $x_c = x_1$ for $i=S$ in the loop system. In both
systems, $x_{i} \to x_i^{eq} = \frac{e^{-\beta V}}{S(1+e^{-\beta
V})}$ holds for $t \to \infty$. When the deviation from
equilibrium $\delta x_i=x_{i} - x_i^{eq}$ is small, its evolution
for the loop systems obeys the following linearized equation
\begin{equation}
\dot{\delta x_{i}} = -\frac{e^{-\beta V}}{S} \delta x_{i}.
\end{equation}
For the cascade system, this equation is also valid for the
elements $i \neq S$, whereas $\dot{\delta x_{S}} = -(1+e^{-\beta V})\delta x_{S} \approx -\delta x_{S}$ for $e^{-\beta V} \ll 1$.
Then, the auto-correlation function of a small fluctuation of
$p_i$ around $p_i^{eq}$ is obtained as
\begin{equation}
C(t) = \exp(-\frac{e^{-\beta V}}{S}t)
\end{equation}
for the loop system, and
\begin{equation}
C(t) = \frac{1}{S}\exp(-t) + \frac{S-1}{S}\exp(-\frac{e^{-\beta V}}{S}t)
\end{equation}
for the cascade system. Indeed, these agree quite well with the
simulation results for a sufficiently large $N$ (e.g., $N= 1024$
in Fig. 2). Thus, the characteristic time of the relaxation is
estimated as $\tau^{L}(S) \sim Se^{\beta V}$ for the loop system
and $\tau^{C}(S) \sim \frac{1}{S}+(S-1)e^{\beta V}$ for the
cascade system, which are consistent with the simulation results
shown in Fig. 3.
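Explicitly, with $C(\infty) = 0$, the integrated relaxation time $\tau = \int_0^{\infty} C(t)\,dt$ follows from Eqs. (3) and (4) as
\[
\tau^{L}(S) = \int_0^{\infty} e^{-e^{-\beta V}t/S}\,dt = S e^{\beta V},
\qquad
\tau^{C}(S) = \frac{1}{S}\int_0^{\infty} e^{-t}\,dt + \frac{S-1}{S}\int_0^{\infty} e^{-e^{-\beta V}t/S}\,dt = \frac{1}{S} + (S-1)e^{\beta V}.
\]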
As the other extreme limit, consider the case with $N=1$. In this
case, the relaxation dynamics are dominated by a completely
different process induced by the absence of catalysts, whose numbers
can often reach zero. In such cases, states are trapped at some
local energy minimum that appears due to the deficiency of
catalysts. Then, the hopping processes among them play an
important role in the relaxation dynamics, as shown below. In the
following, we focus on the cases with $S = 2$ and $S = 3$ to clarify
that such an effect is induced by discreteness in the molecule
number. Note that, as shown in the last section, the behavior for
$S \ge 3$ is distinct from that for $S=2$; in the former case, the
relaxation time is enhanced by the decrease in $N$, in contrast to
the latter case.
\begin{figure}
\begin{center}
\includegraphics[width=7.0cm]{AKS_E_FIG4.ps}
\end{center}
\caption{Illustration of the transition diagrams of (a) the loop
system with $S=2$, (b) the loop system with $S=3$, (c) the cascade system
with $S=2$, and (d) the cascade system with $S=3$, where arrows
indicate possible transitions and the values next to them specify
the transition ratios.}
\end{figure}
First, we study the loop system. When $S = 2$, the system realizes
3 states from the initial conditions---$(p_1, p_2) = (1,0)$,
$(0,1)$, and $(1,1)$---as shown in Fig. 4(a).
Then, we estimate the time of the transition between $(1,0)$ and $(0,1)$.
First, the transition rate from the state $(1,0)$ to $(1,1)$ is estimated
as follows: for this transition, a pair of molecules from the product of
the first species and the resource of the second species has to be chosen.
This probability is given by $\frac{1}{2}\frac{1}{2-1}$, while
the reaction rate is given by $e^{-\beta V}$. Hence the rate is given by
$2 \cdot \frac{1}{2}\frac{1}{2-1}e^{-\beta V}= e^{-\beta V}$. Thus, the
characteristic time of the correlation of each $p_i$ is given by
$\sim e^{\beta V}$, which is consistent with the results shown in
Fig. 2(d).
On the other hand, for $S = 3$, the system realizes 7
states---$(p_1, p_2, p_3) = (1,0,0)$, $(0,1,0)$, $(0,0,1)$
,$(1,1,0)$, $(1,0,1)$, $(0,1,1)$, and $(1,1,1)$---as shown in Fig.
4(b). The characteristic time of the correlation of each $p_i$ is
given by the transition time among the three branches including
lowest-energy states, $(1,0,1)$ - $(1,0,0)$,
$(1,1,0)$ - $(0,1,0)$, and $(0,1,1)$ - $(0,0,1)$.
Here, in order to hop from one branch to another, the system must go
through the highest-energy state $(1,1,1)$, due to the
restriction by the catalytic relation. Now, we define the probability that
the states in the branch $(1,0,1)$ - $(1,0,0)$ are realized as $Q_{1,0,0}$.
Then, the probability to realize the state $(1,0,1)$ is given by
$\frac{e^{-\beta V}}{1+e^{-\beta V}}Q_{1,0,0}$.
Here, the transition rate from $(1,0,1)$ to $(1,1,1)$ is given by
$\frac{e^{-\beta V}}{2}$. Then, the probability current out of
the branch $(1,0,1)$ - $(1,0,0)$ is estimated as
$\sim \frac{e^{-\beta V}}{2}\frac{e^{-\beta V}}{1+e^{-\beta V}}Q_{1,0,0} \sim \frac{1}{2}e^{-2\beta V}Q_{1,0,0}$ (for $e^{-\beta V} \ll 1$). Because of the symmetry among
the catalytic reactions, the
probability currents from the other branches, obtained in the same
way, take the same form. Thus, the escape rate from each branch is
estimated by $\sim \frac{1}{2}e^{-2\beta V}$, and the characteristic time
of the correlation of each $p_i$ is estimated as $\sim 2e^{2\beta V}$.
Because the relaxation time in the continuum limit is proportional to
$\exp(\beta V)$, the amplification ratio $\rho$ grows in proportion to $\exp(\beta V)$,
which is consistent with the results shown in Fig. 2(e). Thus, the
enhancement of the relaxation time over the continuum limit is explained.
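This estimate can be checked directly, since the Master equation on the 7 accessible states is small enough to diagonalize numerically. A sketch in Python (using the unit-time normalization of Sec. III, so that each allowed flip carries the pair-pick factor $1/2$ derived above) is:
\begin{verbatim}
import numpy as np
from itertools import product

beta, V = 3.0, 1.0
eps = np.exp(-beta * V)

# The 7 accessible states (p1,p2,p3) for N=1; (0,0,0) is isolated.
states = [s for s in product((0, 1), repeat=3) if any(s)]
idx = {s: k for k, s in enumerate(states)}

G = np.zeros((7, 7))                       # Master-equation generator
for s in states:
    for i in range(3):
        if s[(i + 1) % 3]:                 # catalyst P_{i+1} present
            # t is never (0,0,0): relaxing the last product would
            # itself require another product as catalyst.
            t = list(s); t[i] ^= 1; t = tuple(t)
            # pair-pick factor 1/2 per unit time, times the rate
            G[idx[s], idx[t]] = 0.5 * (eps if t[i] else 1.0)
G -= np.diag(G.sum(axis=1))

lam = np.sort(np.linalg.eigvals(G).real)   # lam[-1] ~ 0 (stationary)
print("slowest decay rate: %.3e" % -lam[-2],
      " estimate e^{-2 beta V}/2: %.3e" % (0.5 * eps**2))
\end{verbatim}
For $\beta = 3$ and $V = 1$, the slowest nonzero decay rate obtained this way is close to the estimate $e^{-2\beta V}/2$.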
Essentially the same argument is also valid for the cascade
systems. When $S = 2$, the system can realize transitions among 4
states---$(1,0) - (1,1) - (0,1) - (0,0)$---as shown in Fig. 4(c).
Here, $(1,0)$ is a metastable state (the product $P_1$ cannot relax
without the catalyst $P_2$) and $(0,0)$ is the
lowest-energy state. The relaxation is characterized by the escape
rate from a metastable state, which is given by $\sim e^{-\beta
V}$. Thus, the characteristic time of the correlation of each
$p_i$ is given by $\sim e^{\beta V}$.
On the other hand, for $S = 3$, the system realizes 8
states--$(p_1, p_2, p_3) = (0,0,0)$, $(1,0,0)$, $(0,1,0)$,
$(0,0,1)$ ,$(1,1,0)$, $(1,0,1)$, $(0,1,1)$, and $(1,1,1)$---as
shown in Fig. 4(d). The slowest characteristic time of the relaxation
is given by the transition time from the branch $(1,0,1)$ - $(1,0,0)$,
since the system must go through the highest-energy state $(1,1,1)$,
which is a limiting process for this case. Then,
in a manner similar to the loop system with $S=3$, the
characteristic time is obtained as $\sim 2e^{2\beta V}$. This
gives the characteristic time of the slowest motions of the
system. This estimation fits well with the numerical result shown
in Fig. 2(b).
\subsection{B. $N$, $\beta$ dependencies of $C(\infty)$ and relaxation time}
Next, we extend the argument of the last subsection to analyze the
$N$ and $\beta$ dependencies of $C(\infty)$ and the relaxation
time in greater detail. In particular, we explain why $h =
N\exp(-\beta V) \sim 1$ gives a critical value and how the
amplification of relaxation time depends on $h$ for $h<1$. Because
of the simplicity due to the symmetry in the catalytic
relationship, we only study loop systems; however, the argument
presented below can be extended to cascade systems.
Figure 5(a) shows the transition diagram of the loop system with
$S = 2$, where each circle indicates each state $(p_1,p_2)$ and
the arrows indicate possible transitions. Generally, for any
values of $S$, the transition rate from a state $(p_1,p_2,...,
p_i=n, p_{i+1}, ...,p_S)$ to a state $(p_1,p_2,..., p_i=n+1,
p_{i+1}, ...,p_S)$ per unit time is estimated as follows. For this
transition, a pair of molecules from the resource of the $i$th
species ($R_i$) and the product of the $(i+1)$th species
($P_{i+1}$) has to be chosen. This probability is given by
$\frac{N-p_i}{SN}\frac{p_{i+1}}{SN-1}$, and the reaction rate is
given by $e^{-\beta V}$. Hence, the transition rate per unit time
is given by $W^i_{n \to n+1} = \frac{(N-n)p_{i+1}}{SN-1}e^{-\beta
V}$. Similarly, the transition rate in the opposite direction is
given as $W^i_{n+1 \to n} = \frac{(n+1)p_{i+1}}{SN-1}$. If the
molecule number is so large or $\beta$ is so small that $h =
Ne^{-\beta V} >> 1$, $W^i_{n \to n+1} > W^i_{n+1 \to n}$ holds for
small $n$ and $W^i_{n \to n+1} < W^i_{n+1 \to n}$ holds for large
$n$. Then, the dominant states of the system are located in an
intermediate region in the phase space $[0,N]$. For example, the
blue region in Fig. 5(a) indicates such dominant states for $S=2$.
Now, we define the probability that $p_i=n$ as $Q_n^i$, and the joint
probability to realize $p_i=n$ and $p_{i+1}=m$ as $Q_{n,m}^i$. Here,
$Q_n^i = \sum_{m=0}^NQ_{n,m}^i$, and we assume the mean-field factorization $Q_{n,m}^i = Q_n^iQ_m^{i+1}$. Then, the time evolution of $Q_{n,m}^i$ follows
\begin{equation}
\dot{Q}_{n,m}^i = \frac{m}{SN-1}[ (N-(n-1))e^{-\beta V}Q_{n-1,m}^i+ (n+1)Q_{n+1,m}^i -nQ_{n,m}^i-(N-n)e^{-\beta V}Q_{n,m}^i ].
\end{equation}
Then, we obtain
\begin{equation}
\dot{Q_n^i} = \frac{<p_{i+1}>}{SN-1}[ (N-(n-1))e^{-\beta V}Q_{n-1}^i+ (n+1)Q_{n+1}^i -nQ_n^i-(N-n)e^{-\beta V}Q_n^i ],
\end{equation}
where $<p_i> = \sum_{n=0}^NnQ_n^i$ ($<p_{i+1}> =
\sum_{m=0}^NmQ_m^{i+1}$). Using this equation, we obtain the time
evolution of $<p_i>$ as
\begin{equation}
\dot{<p_i>} = \frac{<p_{i+1}>}{SN-1}[-<p_i>+(N-<p_i>)e^{-\beta V}]
\end{equation}
This implies that $x_{i} = <p_i> / SN$ obeys equation (1) for a
sufficiently large value of $N$.
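For completeness, Eq. (7) follows from Eq. (6) by multiplying by $n$, summing over $n$, and shifting the summation indices, whereby the coefficient of $Q_m^i$ reduces to $(N-m)e^{-\beta V}-m$:
\[
\sum_n n\,\dot{Q}_n^i = \frac{<p_{i+1}>}{SN-1}\sum_m \left[ (m+1)(N-m)e^{-\beta V} + m(m-1) - m^2 - m(N-m)e^{-\beta V}\right]Q_m^i
= \frac{<p_{i+1}>}{SN-1}\left[(N-<p_i>)e^{-\beta V} - <p_i>\right].
\]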
On the other hand, if $N$ is so small or $\beta$ is so large that
$h \ll 1$, $W^i_{n \to n+1} \ll W^i_{n+1 \to n}$ holds for all $i$
and $n$. Thus, $p_i$ for all $i$ tend to decrease to $0$. Then,
there exist $SN$ metastable states---$(n,0,0,...,0)$,
$(0,n,0,...,0)$, ... , $(0,0,...,0,n,0,..,0)$, ... , and
$(0,0,0,...,n)$ ($1 \leq n \leq N$). Among them, the following
$S$ states, $(1,0,0,...0)$, $(0,1,0,...0)$, ..., and
$(0,0,0,...1)$, have the lowest energy. For example, in the cases
with $S=2$, the states $(0, p)$ and $(q, 0)$ ($p, q \ne 0$) are
metastable states and $(1,0)$ and $(0,1)$ are the lowest-energy
states.
It should be noted that the lowest-energy states are the dominant
states for $h \ll 1$. The probability of realizing each of these
lowest-energy states tends to $1/S$ with an increase in $\beta$.
Thus, with the increase in $\beta$, $<p_i>$ approaches $1/S$ for
small $N$, which indicates $C(\infty) =const. > 0$ for small $N$
and large $\beta$.
Moreover, for $h \ll 1$, the transitions among the lowest-energy states
contribute dominantly to the relaxation process. Then, we estimate
the characteristic time of the fluctuations of the system for $h
\ll 1$ by considering the transition processes from one lowest-energy
state, such as $(0, 0,...,0, p_j =1, 0, ...,0)$, to another lowest-energy
state, such as $(0, 0,...,0, p_j = 0, 0, p_{j'} = 1, 0, ...,0)$.
In the following, we consider only the cases with
$S=2$ and $S=3$, and we focus on the dynamics of $p_j$
under the constraint that each $p_{j}$ takes only the values $0$ or $1$, because $h \ll 1$.
\begin{figure}
\begin{center}
\includegraphics[width=7.0cm]{AKS_E_FIG5a.ps}
\\[2ex]
\includegraphics[width=8.0cm]{AKS_E_FIG5bc.ps}
\end{center}
\caption{
(a) Illustration of the transition diagrams for $S=2$, and
effective transition diagrams for (b) $S=2$ and (c) $S=3$, where
bold arrows indicate the focused transitions in the text.
}
\end{figure}
First, consider the case with $S=2$. Figure 5(b) shows a detailed
transition diagram around the region where $p_i$ ($i = 1, 2$) are
only $0$ or $1$. The escape rates from $(1,0)$ and $(0,1)$ are given by
$\sim \frac{N}{2N-1}e^{-\beta V}$. Thus, the
characteristic time of the correlation of each $p_i$ is given by
\begin{equation}
\tau^{L}_d(2) \sim \frac{2N-1}{N} e^{\beta V},
\end{equation}
which is consistent with the results shown in Fig. 6(a).
\begin{figure}
\begin{center}
\includegraphics[width=8.0cm]{AKS_E_FIG6.ps}
\end{center}
\caption{Relaxation time $\tau$ obtained from simulations (points)
and its approximate analytical expression $\tau^{L}_d(S)$ (curves)
estimated in the text. Plotted as a function of $N$ for the loop
systems with (a) $S = 2$ and (b) $S = 3$ with $\beta= 2, 3, 4$.
The analytical expression agrees with the simulation data both for small $N$,
and for large
$N$, where $\tau$ approaches a constant value expected from the rate
equation. The crossover occurs at around
$h=Ne^{-\beta V}\sim1$ ($N \sim \exp(2)$, $\exp(3)$, and $\exp(4)$ for
$\beta = 2$, $3$, and $4$).}
\end{figure}
Next, we study the case with $S=3$. The transition diagram of the
states $(p_1,p_2,p_3)$ is shown in Fig.~5(c) when $p_i$ ($i = 1, 2, 3$)
take only $0$ or $1$. As in the $N=1$ case, the characteristic time
of the transition among the three branches including lowest-energy states,
$(1,0,1)$ - $(1,0,0)$, $(1,1,0)$ - $(0,1,0)$, and $(0,1,1)$ - $(0,0,1)$
through the state $(1,1,1)$ is considered. In a manner similar to
the $N=1$ case, the transition rate from each branch is estimated as
$\sim \frac{Ne^{-\beta V}}{3N-1}\frac{Ne^{-\beta V}}{1+Ne^{-\beta V}} = \frac{N^2e^{-2\beta V}}{(3N-1)(1+Ne^{-\beta V})}$. Thus, the relaxation time of the
fluctuation of $p_1$ is estimated, including its $N$ dependence, as
\begin{equation}
\tau^{L}_{d}(3) \sim \frac{(3N-1)(1+Ne^{-\beta V})}{N^2}e^{2\beta V}.
\end{equation}
Considering the $e^{\beta V}$ dependence of $\tau_{N\rightarrow
\infty}$, the above estimate is consistent with Fig. 6(b).
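It is instructive to note the limiting forms of Eq. (9). For $N \gg 1$,
\[
\tau^{L}_{d}(3) \approx \frac{3(1+Ne^{-\beta V})}{N}\,e^{2\beta V} = 3e^{\beta V}\left(1+\frac{1}{h}\right),
\]
with $h = Ne^{-\beta V}$, so that the continuum value $3e^{\beta V}$ is recovered for $h \gg 1$, while the relaxation time is amplified by the factor $\rho \approx 1 + 1/h$ for small $h$; for $N=1$, Eq. (9) reduces to $2(1+e^{-\beta V})e^{2\beta V} \approx 2e^{2\beta V}$, the value obtained above. This makes explicit why the crossover occurs at $h \sim 1$.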
For $S$ larger than $3$, the transition diagram becomes rather
complicated. However, a similar analysis should be possible to
estimate the prolongation in the relaxation time.
\section{V. Summary and discussions}
In the present paper, the slowing down of the relaxation in
reversible catalytic reaction networks induced by the smallness of
molecule number is investigated as a general property of catalytic
reaction networks. This prolongation of relaxation is a result of
bottlenecks in reactions; these appear due to the deficiency of
the catalyst required for a reaction. The number of molecules can
be so small that the number of catalysts becomes zero. In this
case, a pair of a substrate and the corresponding catalyst
molecule species can hardly exist simultaneously. Such a
constraint makes it difficult to realize a specific configuration
necessary for the relaxation. The probability for realization is
given by $\exp(-\beta E_{bottle})$, with $E_{bottle}$ as the
corresponding energy barrier to realize such rare conditions, or
the sum of such energy barriers. This bottleneck energy is
generally different from the energy gap in the continuum limit
that is obtained from the rate equation (ordinary differential
equation). Thus, the relaxation time at a small molecule number
deviates from the continuum case by the factor $\exp(\beta \delta
E)$ with an appropriate effective energy difference, $\delta E$.
By considering the models of simple catalytic reaction networks
consisting of resource chemicals of $S$ species and the
corresponding products, we have demonstrated this deviation of
relaxation time from both direct simulations and analysis by using
Master equation. From the numerical and analytic estimates,
$E_{bottle}=2V$ and $\delta E=V$ for $S=3$, where $V$ is the
energy gap between the resource and the product chemicals. For
$S>2$, in general, the prolongation of the relaxation time becomes
prominent when $h=N\exp(-\beta V)$ is less than unity, and its
amplification ratio from the continuum limit is represented as a
function of $S$ and $h$. Note that the cascade system in the $N=1$
case is equivalent to the ``Asymmetrically Constrained Ising Chain''
(ACIC), Hierarchically constrained Ising model, or East model,
which are studied as simple abstract models for glassy
states\cite{aici_1, aici_2, aici_3}. Following the interpretation
therein, the increase in relaxation time at $h < 1$ as a result of
the decrease in $N$ or temperature may be regarded as a type of
glass transition. According to the recent studies on ACIC, the
correlation time of the motion of $p_1$ (not the relaxation time
of the total system) is estimated as $\tau_1 \sim (1+2e^{\beta
V})^k$ where the integer $k$ obeys $2^{(k-1)} < S \le 2^k$
\cite{aici_2, aici_3}. In cases with $S=2, 3, 4$, this fact is
consistent with our estimate of the relaxation time of the cascade
system with $N=1$. The estimation of $\delta E$ as a function of
$S$ and $h$ for general cases both for cascade and loop systems is
an important issue that should be studied in the future.
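For readers wishing to reproduce the direct simulations qualitatively, the following Gillespie-type sketch implements one plausible reading of the loop system (reversible conversions $R_i \rightleftharpoons P_i$ catalyzed by $P_{i+1 \bmod S}$, with forward rate $e^{-\beta V}$ and unit backward rate per substrate-catalyst pair); the precise rate normalization and update rules of the model studied above are defined earlier in the paper, so this should be taken as a schematic sketch rather than the exact model:
\begin{verbatim}
import numpy as np

def gillespie_loop(S=3, N=8, beta=3.0, V=1.0, t_max=1e4, seed=0):
    # P[i] = number of product molecules of species i; resources are N - P[i].
    # R_i -> P_i at rate exp(-beta V)*(N - P_i)*P_{i+1}; reverse at P_i*P_{i+1}.
    rng = np.random.default_rng(seed)
    P, t, kf = np.ones(S, dtype=int), 0.0, np.exp(-beta * V)
    times, traj = [0.0], [P.copy()]
    while t < t_max:
        cat = P[(np.arange(S) + 1) % S]          # catalyst copy numbers
        a = np.concatenate([kf * (N - P) * cat,  # forward propensities
                            1.0 * P * cat])      # backward propensities
        a_tot = a.sum()
        if a_tot == 0.0:                         # only possible if started at P = 0
            break
        t += rng.exponential(1.0 / a_tot)
        r = rng.choice(2 * S, p=a / a_tot)
        P[r % S] += 1 if r < S else -1
        times.append(t); traj.append(P.copy())
    return np.array(times), np.array(traj)

# The relaxation time can then be extracted from the decay of the
# autocorrelation of P_1(t) along the trajectory.
times, traj = gillespie_loop()
print(traj.mean(axis=0))
\end{verbatim}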
In addition to the slow down in relaxation, the equilibrium
distribution deviates in a network called a loop system, where all
the reactions are catalyzed by one of the products. The constraint
that the numbers of a certain pair of chemical species cannot
simultaneously be zero leads to the deviation of the average
distribution of molecule numbers from the continuum limit. Again,
this deviation becomes prominent when $h$ is less than unity.
Although we have adopted simple network motifs to analyze the
relaxation, the prolongation of relaxation time is quite general
in catalytic reaction networks. Catalytic bottlenecks often appear
as the number of molecules is decreased in a large variety of
reaction networks in which catalysts are synthesized
within\cite{SAK1,SAK2}. The present study can provide a basis for the
general case with complex networks, as the motifs here are
sufficiently small and can exist within such complex networks.
Biochemical reactions generally progress in the presence of
catalysts that are themselves synthesized as products of such
reactions. These reactions form a network of a variety of chemical
species. Here, the molecule number of each species is generally
not very large. Hence, the slow relaxation process and deviation
from equilibrium discussed in this study may underlie
intracellular reaction processes. Moreover, the present network
motifs are so simple that they are expected to exist in
biochemical networks. We also note that the resource and product
in our model can be interpreted as non-excited and excited states
of enzymatic molecules. Indeed, many molecules are known to
exhibit catalytic activity only when they are in an excited state,
which can help other chemicals to switch to an excited state. In
fact, such networks with mutual excitation are known in
signal-transduction networks\cite{sig1,sig2,sig3}, where the
present slow relaxation mechanism may be relevant to sustain the
excitability of a specific enzyme type over a long time span. It
is important to pursue the relevance of the present mechanism in
cell-biological problems by considering more realistic models in
the future.
We also note that not only the discreteness in the molecule number
but also the negative correlation between a substrate and the
corresponding catalyst within a reaction network or in a spatial
concentration pattern suppresses the relaxation
process\cite{AK2,AK3,SAK1,SAK2}. The present mechanism due to
discreteness may work synergistically with the earlier mechanism to
further suppress the relaxation to equilibrium. The construction
of reaction networks to achieve slower relaxation together with
the network analysis will be an important issue in the future.
The authors would like to thank Shinji Sano for informing us of his finding
on the prolongation of relaxation in reaction networks due to the
discreteness in molecule numbers, which triggered the present study.
A. A. was supported in part by a Grant-in-Aid for Young Scientists
(B) (Grant No. 19740260).
Permutation tests can be useful as distribution-free tests and also have exact size (as opposed to the asymptotic validity of most conventional tests).
However the use of permutation tests in regression problems has been limited because valid permutation tests obtain only if the observations are exchangeable under the null hypothesis. A vector $Y$ has an exchangeable distribution if $PY$ has the same distribution as $Y$, for any permutation matrix $P$.
If we consider a test statistic $T(Y)$, a permutation test is obtained, if $Y$ is exchangeable, by conditioning on the order statistics $Y_{(o)}=\{Y_{(1)},\ldots,Y_{(n)}\}$ \cite{kalb78}. The assumption of exchangeability, although a little less stringent than the assumption of identically independently distributed (i.i.d.) observations, is still quite restrictive, and does not hold for instance in regression problems.
There have been many applications of permutation tests; a particularly interesting permutation test was proposed by Mantel \cite{mantel67}. Permutation tests are often based on score tests. For some theory about permutation tests see \cite{com03} and for score tests see \cite{com97} and \cite{dry}.
In this paper we propose a new approach, called expected permutation p-value (Eppv), based on permuting an unobserved exchangeable variable.
Section 2 presents permutation versions of score tests in generalized linear models. In section 3 some theory about p-values, permutation and conditioning is developed and the Eppv is presented. This approach is then applied to the logistic regression model in section 4. Section 5 presents a short simulation. An illustration with real data is given in section 6, which concludes.
\section{Permutation score tests}
Consider a sample of independent random variables $Y_i$, $i=1,\ldots,n$, and assume a generalized linear model; the contribution of observation $i$ to the likelihood is:
$$
f(Y_i;\theta_i,\eta)= \exp \left\{\eta^{-1}\left[\theta_i Y_i-b(\theta_i)\right]+c(Y_i,\eta)\right\}
$$
with $E(Y_i)=b'(\theta_i)=\mu _i$ and $\theta_i=Z^i\beta$ where $Z^i=(z_1^i, \ldots, z^i_p)$
is a row vector of explanatory variables (considered here as deterministic) and $\beta$ is a $p\times 1$ vector of regression coefficients; here $\eta$ denotes the dispersion parameter. Then the score equation obtained by equating to zero the derivative of the loglikelihood $L$ relatively to $\beta$ is
$Z^T\hat R=0$,
where $Z$ is the $n\times p$ matrix of explanatory variables $z_j^i$ , and $\hat R=(\hat R_1,\ldots,\hat R_n)^T$ is the vector of residuals $\hat R_i=Y_i-\mu_i(\hat \beta)$. Thus the estimated residuals are orthogonal to the space of explanatory variables.
If we consider an explanatory variable indexed by $p+1$, the model becomes $\theta_i=Z^i\beta+z^i_{p+1} \beta_{p+1}$. Let us denote the parameters $\gamma=(\eta, \beta, \beta_{p+1})$. The score statistic for testing $H_0$: ``$\beta_{p+1}=0$'' has the linear form:
\begin{equation}S(Y)={\partial L \over \partial \ \beta_{p+1}}(\beta_{p+1}=0)=z_{p+1}^T\hat R,\label{scorestat}\end{equation}
where $z_{p+1}^T=(z^1_{p+1},\ldots,z^n_{p+1})$ is the vector of values for explanatory variable $p+1$ and $\hat R$ is the vector of residuals in the model not including variable $p+1$.
A test for $H_0$: ``$\beta_{p+1}=0$'' may be based on the asymptotic distribution of $n^{-1/2}S(Y)$. Let us call $\phi(Y)$ the critical function of the test ($\phi(Y)=1$: $H_0$ rejected, $\phi(Y)=0$: $H_0$ not rejected); except in simple cases it is not possible to construct exact tests, that is with ${\rm E}_{\gamma} [\phi(Y)]=\alpha$, $\gamma \in \omega$, where $\omega$ is the subset of the parameter space corresponding to $H_0$. For small sample sizes the difference between the nominal and true Type I error rates may be large. In regression models it is tempting to try to construct tests based on permutation
of the residuals in the score statistics \cite{schmoyer94}. Fisher's exact test can be shown to be a permutation of the residuals in a score test, in a case where the observations are exchangeable under the null hypothesis. However, generally as soon as there is one explanatory variable under the null hypothesis, neither $Y$ nor $\hat R$ are exchangeable; hence, permutation tests cannot be constructed \cite{com03}.
\section{Some theory about p-values, permutation and conditioning}
\subsection{p-values}
Consider a test $\phi(Y)$ based on a statistic $T(Y)$.
We examine the case where the decision to reject $H_0$ is taken if $T(Y) \ge c_{\alpha}$,
$c_{\alpha}$ being chosen such that ${\rm E}_{\gamma} [\phi(Y)]=\alpha$.
A definition of the p-value which allows one to consider it as a random variable (and hence to study its properties) is
$$pv[T(Y)]={\rm E}_{\gamma}[ I_{T(Y^*)\ge T(Y)}|\sigma(Y)]$$
where $Y^*$ is a random variable independent from $Y$ but with the same distribution and $\sigma (Y)$ is the sigma-algebra generated by $Y$. See \cite{will} for properties of the conditional expectation.
We can construct a size $\alpha$ test by rejecting $H_0$
if $pv[T(Y)] \le \alpha$, that is: $\phi(Y)=I_{pv[T(Y)] \le \alpha}$.
\subsection{Conditional p-values}
We may define a p-value conditional on ${\cal C}$, where ${\cal C}\subset \sigma(Y,Y^*)$ as:
$$pv_{\cal C}[T(Y)]={\rm E}_{\gamma}[ I_{T(Y^*)\ge T(Y)}|\sigma(Y)\vee {\cal C}].$$
Conditional tests can be constructed as $\phi(Y)=I_{pv_{{\cal C}}[T(Y)] \le \alpha}$.
We have ${\rm E}_{\gamma} [\phi(Y)|{\cal C}]=\alpha$; it follows that we also have ${\rm E}_{\gamma} [\phi(Y)]=\alpha$. That is, marginally the test has size $\alpha$, but the critical regions (and the power) depend on ${\cal C}$. The conditional approach has been advocated for two different situations \cite{lehmann86}.
The first arises if we have a sufficient statistic $C$ for the family of measures ${\cal P}^Y=\{P_{\gamma}, {\gamma}\in \omega\}$, where $\omega=H\cap K$, the frontier between the sets representing the null (H) and the alternative (K) hypotheses. If ${\cal C}$ is the sigma-algebra generated by $C$, then $pv_{\cal C}[T(Y)]$ no longer depends on $\gamma$, so that we obtain a similar test, ${\rm E}_{\gamma} [\phi(Y)]=\alpha$, $\gamma \in \omega$. Such a test is said to have the Neyman structure relatively to $C$.
As an example consider the case where we observe variables $Y_i$, $i=1,\ldots,n$ which are i.i.d. under the family of measures ${\cal P}^Y=\{P_{\gamma}, {\gamma}\in \omega\}$.
Then the order statistic $Y_{(o)}=\{Y_{(1)},\ldots, Y_{(n)}\}$ is sufficient for $\gamma$ and if we take ${\cal C}=\sigma(\{Y^*_{(o)}=Y_{(o)}\})$ we obtain a permutation test,
that is we have ${\rm E}[\phi(Y)|Y_{(o)}]=\alpha$. Due to the discrete character of the conditional distribution of $T(Y)$, it is not possible to achieve ${\rm E}[\phi(Y)|Y_{(o)}]=\alpha$ for all $\alpha$, except by resorting to randomisation; we will neglect this problem in the sequel.
The second situation arises in the presence of ancillary statistics $Z$: here the motivation is to perform the test adapted to the situation fixed by the particular realization of $Z$. We may also consider S-ancillary statistics whose distribution depends on an unknown parameter $\xi$, while the distribution of $Y$ given $Z$ does not depend on $\xi$. While the unconditional p-value depends on both $\gamma$ and $\xi$, the p-value conditional on $Z$ does not depend on $\xi$.
As an example consider the case of a regression model where explanatory variables $Z^i$ are associated to response variables $Y_i$: the regression model specifies the conditional distribution of $Y_i$ given $Z^i$ and depends on $\gamma$, while the marginal distribution depends on $\xi$ only. It is natural to consider tests which are conditional on $Z$; in our formalism, for a test statistic $T(Y,Z)$ we then compute the conditional p-value $pv_{\cal C}[T(Y,Z)]$ with ${\cal C}=\sigma(\{Z^*=Z\})$.
The two situations have in common the fact that there is a reduction of the number of parameters on which the p-value depends. In the particular case where there is a sufficient statistic for $\gamma$, the p-value does not depend on any parameter. However in complex problems this may not be achieved without losing too much power. One possibility is to replace $pv_{\cal C}[T(Y);\gamma]$ by $pv_{\cal C}[T(Y);\hat \gamma]$, where $\hat \gamma$ is an estimator of $\gamma$. We would like to have a procedure such that
$|pv_{\cal C}[T(Y);\hat \gamma]-pv_{\cal C}[T(Y);\gamma]|$ is as small as possible. Choosing large ${\cal C}$ may help to reduce the variance of this random variable. Another way is to apply a minimax argument. If it is known that $\gamma$ belongs to a compact set $\Gamma$, then we may base a test on $\max_{\gamma\in \Gamma} pv_{\cal C}[T(Y);\gamma]$. This leads to a test of size lower or equal to $\alpha$.
\subsection{The expected conditional p-value}
Consider the case where $Y=g(\varepsilon)$, where $g(.)$ is a non-decreasing function; if $g$ is not one-to-one we have $\sigma(Y)\subset \sigma(\varepsilon)$. If we have a statistic $T(Y)$, this defines a statistic $S(\varepsilon)=T(g(\varepsilon))$. We may consider the p-value
$pv_{\cal C}[S(\varepsilon)]={\rm E}_{\gamma}[ I_{S(\varepsilon^*)\ge S(\varepsilon)}|\sigma(\varepsilon)\vee {\cal C}]$, where ${\cal C} \subset \sigma(\varepsilon^*,\varepsilon)$.
Since in general this is not $\sigma(Y)$-measurable, we may consider its expectation
$Epv_{\cal C}[S(\varepsilon)]={\rm E}_{\gamma}[pv_{\cal C}[S(\varepsilon)]|\sigma(Y)]$.
A size-$\alpha$ test can be constructed using this expected conditional p-value as usual.
This approach can in particular be connected with the
Cox-Snell family which represents $Y$ as
$Y=g(\varepsilon)$,
where $\varepsilon$ is exchangeable. Such a representation
was proposed by Cox and Snell \cite{cox68} to define residuals.
If $\varepsilon$ were observed a permutation test could be constructed by conditioning on the order statistic of $\varepsilon$.
It is appealing thus to use an expected conditional p-value choosing
${\cal C}=\sigma(\varepsilon^*_{(o)}=\varepsilon_{(o)})$. Such a p-value will be called expected permutation p-value (Eppv).
Numerically this method is easy to implement:
draw at random $\varepsilon^*$ from the distribution of $\varepsilon$ conditional on $Y$; compute the permutation p-value; take the mean of the p-values for a sufficient number of drawings. However the distribution of $\varepsilon$ conditional on $Y$ may depend on parameters that may have to be estimated (see sections 3.2 and 4).
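In schematic form (a minimal sketch; the function names and the finite-sample $+1$ convention for the Monte Carlo permutation p-value are ours), the procedure reads:
\begin{verbatim}
import numpy as np

def eppv(y, stat, sample_eps_given_y, n_outer=200, n_perm=500, seed=0):
    # Expected permutation p-value: average, over draws of eps ~ eps | Y,
    # the permutation p-value of S(eps) obtained by permuting eps.
    rng = np.random.default_rng(seed)
    pv = np.empty(n_outer)
    for b in range(n_outer):
        eps = sample_eps_given_y(y, rng)          # one draw from eps | Y
        s_obs = stat(eps)
        s_perm = np.array([stat(rng.permutation(eps))
                           for _ in range(n_perm)])
        pv[b] = (1 + np.sum(s_perm >= s_obs)) / (1 + n_perm)
    return pv.mean()
\end{verbatim}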
\section{Applications of the Eppv approach to the logistic model}
A logistic regression model is specified by:
$\Pr (Y_i=1)=\pi_i$; logit$(\pi_i)=z^i\beta$.
It can be depicted in terms of latent i.i.d. variables $\varepsilon_i$ having a uniform distribution on [0,1]:
$$Y_i=I_{\varepsilon_i \le \pi_i}$$
A score test for $H_0$: ``$\beta_{p+1}=0$'' is
$T(Y)=S(\varepsilon)=z^T_{p+1}(I_{\varepsilon\le \pi}-\pi)$
with obvious vectorial notation. For a permutation test only the first part $z^T_{p+1}I_{\varepsilon\le \pi}$ is needed. However, because $\sum_i I_{\varepsilon_i\le \pi_i}$ is not constant under permutation of $\varepsilon$, the test is not invariant for a change of origin of $z$:
there is a need to center one of the two vectors involved in this scalar product, a concept also related to that of
``clean'' form as in \cite{com03}. Thus the proposed statistic is $T(Y)=S(\varepsilon)=\sum_iz^i_{p+1}(I_{\varepsilon_i\le \pi_i}-n^{-1}\sum_j I_{\varepsilon_j\le \pi_j})=\sum_i (z_{p+1}^i-\bar z_{p+1})I_{\varepsilon_i\le \pi_i}$ (where $\bar z_{p+1}$ is the mean of $z_{p+1}^i$), which is invariant.
For computing the Eppv we draw $\varepsilon$ from its conditional distribution which is
\begin{itemize}
\item $\varepsilon_i \sim U[0, \pi_i]$ if $Y_i=1$;
\item $\varepsilon_i \sim U[ \pi_i,1]$ if $Y_i=0$.
\end{itemize}
If the $\pi_i$ are known, an exact permutation test follows.
In practice one may replace $\pi_i$ by an estimator $\hat \pi_i$, the maximum likelihood estimator of $\pi_i$ under $H_0$, leading to an approximate test. It is conjectured that the type I error probability is $\alpha+ O_p(n^{-1/2})$, as when using the asymptotic distribution of the standardized score statistic. However for small sample sizes the Eppv approach may have better performance because of the non-standard conditioning.
Another possibility is to apply the minimax approach. Consider the case $p=1$ in which it is known that $\beta_1\in [a,b]$. One can find $\max_{\beta_1\in [a,b]} Eppv(\beta_1)$ and this leads to a test with type I error probability lower or equal to $\alpha$. In practice the maximum can be found numerically.
It is interesting to note that when there is no explanatory variable under the null hypothesis, the Eppv test reduces to Fisher's exact test; this happens because for all $i$, $\hat \pi_i=\bar Y$ so that permuting $\varepsilon$ is identical to permuting $Y$.
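A minimal implementation of this test (a sketch; we use a statsmodels GLM fit as one possible way of obtaining $\hat\pi_i$ under $H_0$, and the function names are ours) plugs directly into the generic routine of section 3:
\begin{verbatim}
import numpy as np
import statsmodels.api as sm

def logistic_eppv_pieces(y, Z, z_new):
    # hat(pi)_i under H0, i.e., from the model without variable p+1
    fit = sm.GLM(y, sm.add_constant(Z),
                 family=sm.families.Binomial()).fit()
    pi_hat = np.asarray(fit.fittedvalues)
    zc = z_new - z_new.mean()               # centered ("clean") covariate

    def sample_eps_given_y(y, rng):
        # eps_i ~ U[0, pi_i] if Y_i = 1, and eps_i ~ U[pi_i, 1] if Y_i = 0
        u = rng.uniform(size=y.size)
        return np.where(y == 1, u * pi_hat, pi_hat + u * (1.0 - pi_hat))

    def stat(eps):
        # S(eps) = sum_i (z_i - zbar) 1{eps_i <= hat(pi)_i}
        return zc @ (eps <= pi_hat).astype(float)

    return stat, sample_eps_given_y

# usage: stat, sampler = logistic_eppv_pieces(y, Z, z_new)
#        p = eppv(y, stat, sampler)
\end{verbatim}
Note that the permutation acts on $\varepsilon$ while $\hat\pi_i$ and $z^i_{p+1}$ stay attached to the units, in line with the construction above.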
\section{Simulation study}
We have simulated a logistic regression model given by:
$$ {\rm logit}(\pi_i)=\beta_0+\beta_1 z^i _1+ \beta_2 z^i_2$$
with $\beta_0=0$; $\beta_1=1$;
$z_1^i=w_1^i-1$; $z_2^i=(w_2^i-1)(z_1^i)^d$,
where $w_1^i$ and $w_2^i$ are independent with exponential distributions.
Three values were tried: $d=0$, for which $z_1^i$ and $z_2^i$ are independent, and $d=1$ and $d=-1$, which produce two different cases of non-linear dependency between $z_1^i$ and $z_2^i$.
Samples of sizes 30 and 15 were generated from this model.
The problem was: testing $H_0:$ ``$\beta_2=0$'' at size $\alpha=0.05$.
The empirical sizes (for $\beta_2=0$) and powers (for $\beta_2=1$ for $n=30$ and $\beta_2=2$ for $n=15$) of
the likelihood ratio (LR) test, the Wald test, a score test based on permutation of residuals (PR) and the Eppv test have been
estimated by simulation using 10000 replicates.
We have also tried a Bootstrap test: among several possibilities we have chosen the one which seemed the most natural that is a non-parametric bootstrap of the Wald test; the guidelines given in \cite{hal}, that is resampling $(\beta_2^*-\hat \beta_2)/\sigma ^*_{\beta_2}$ (where $\beta_2^*$ is the maximum likelihood estimate of $\beta_2$ for a resample and $\sigma ^*_{\beta_2}$ is the estimated standard deviation of $\beta_2^*$), have been applied; this time-consuming test (using 499 resamples) has been studied on only 1000 replicates.
For simplicity, for all the tests, only marginal probabilities were estimated, that is we regenerated
the $z_1^i$ and $z_2^i$ at each replicate.
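For completeness, one replicate of this design can be generated as follows (a short sketch; the random-seed handling is ours):
\begin{verbatim}
import numpy as np

def one_replicate(n=30, beta2=0.0, d=0, rng=None):
    # logit(pi) = z1 + beta2*z2, with z1 = w1 - 1, z2 = (w2 - 1)*z1**d
    rng = rng or np.random.default_rng()
    w1 = rng.exponential(size=n)
    w2 = rng.exponential(size=n)
    z1 = w1 - 1.0
    z2 = (w2 - 1.0) * z1**d                 # d in {0, 1, -1}
    pi = 1.0 / (1.0 + np.exp(-(z1 + beta2 * z2)))
    y = (rng.uniform(size=n) < pi).astype(int)
    return y, z1, z2
\end{verbatim}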
The results appear in Table 1 (with $\beta_2$ simply denoted $\beta$).
It is clear that the Wald test tends to be conservative while the LR test tends to be anti-conservative. These behaviours are more marked for $n=15$ than for $n=30$. The tests based on permutation better respect the size of the tests, with a tendency to be conservative for $d=1$; the Eppv test has a better stability than permutation of residuals. The bootstrap Wald test is not really practical for $n=15$ because many configurations generated by resampling are too particular and lead to failure of convergence of the algorithm; so the results of this test are not displayed in Table 1. For $n=30$ it is strongly anti-conservative: the estimated type I error risks are 0.088, 0.097, 0.14 for $d=0$, $1$ and $-1$ respectively.
The power of the Eppv test is always higher than that of the Wald test and of the test based on permutation of residuals; it is sometimes lower than that of the likelihood ratio test but the latter is not very reliable in the situations considered. In conclusion when working with small samples and when we can suspect a dependency between the factor studied and the other explanatory variables, the Eppv test seems the most reliable among the tests considered here.
\setlength{\parindent}{5mm}
\begin{table}
\caption{Simulation results based on 10000 replicates of a logistic regression model
comparing the Wald test, the likelihood ratio test (LR), the test based on permutation of residuals (PR) and the Eppv test; the theoretical size of the tests is 0.05.}
\vspace{10mm}
\begin {center}
\begin{tabular}{cc|cccc|}
& & Wald & LR & PR & Eppv \\ \hline
${\bf n=30}$ & & & & & \\
&$d=0$ & 0.044 & 0.063 & 0.051& 0.052\\
$\beta=0$ &$d=1$ & 0.025 &0.069 &0.015 & 0.025\\
(Type I error)&$d=-1$ &0.020 &0.080 &0.062 &0.046 \\ \hline
&$d=0$ &0.45 &0.53 &0.47 &0.48\\
$\beta=1$ &$d=1$& 0.17 &0.31 &0.14 &0.17 \\
(Power)&$d=-1$ &0.81 &0.91 & 0.85&0.88 \\ \hline
${\bf n=15}$ & & & & & \\
&$d=0$ &0.020 &0.072 &0.049 &0.049\\
$\beta=0$ &$d=1$ &0.009 &0.094 & 0.018&0.020 \\
(Type I error) &$d=-1$ &0.007 &0.094 &0.066 &0.041\\ \hline
&$d=0$ &0.22 &0.52 &0.57 & 0.58\\
$\beta=2$ &$d=1$ &0.10 &0.41 &0.19 &0.22\\
(Power) &$d=-1$ &0.16 &0.65 &0.75 &0.79\\ \hline
\end{tabular}
\end{center}
\end{table}
\section{Illustration on real data}
Even in a large study very small numbers may occur in some categories of
the sample which are of interest. The small problem treated here for illustration
is taken from a real study on the effect of wine consumption on the risk of developing
dementia \cite{orgo}. In this study, 2273 non-demented subjects were followed up for three years.
Subjects were classified according to their wine consumption as: no drinkers,
mild drinkers, and moderate or heavy drinkers. During the follow-up, 99 cases of dementia developed. Potentially important confounding factors were age, gender and educational level (here coded as a binary variable: no primary diploma vs primary diploma or above).
Globally it appeared from a logistic regression analysis that moderate wine consumption was a protective factor against dementia. However if we try to analyze the data separately
by gender (which is legitimate because both the course of dementia and drinking habits are different among genders) very small numbers occur. In particular, there were 28 dementia cases among
811 non-drinking women and 0 cases among 44 moderate or heavy drinking women. With such figures,
a logistic regression with wine consumption as an explanatory variable fails to converge, so that it is not possible to use a Wald test, and a likelihood ratio test is probably not very reliable. For the one-sided alternative, Fisher's exact test gave a p-value equal to 0.21; when adjusting on age and educational level, we obtained p-values equal to 0.18 and 0.13
with the PR and Eppv tests respectively; on the basis of these data, taking into account possible confounding factors, the hypothesis that consumption of wine has no effect on risk of dementia among women cannot be rejected.
In conclusion, the Eppv approach extends permutation-test ideas to
complex problems. The bootstrap was also in part motivated by such an extension but, unlike the bootstrap, the Eppv approach keeps the idea of conditioning on the order statistic of an exchangeable vector.
Significant modifications of weak nuclear rates due to the influence of atomic electrons have been predicted and their importance emphasized. J.N.Bahcall~\cite{Bah} developed a general
discussion and treatment of the correlations and rates of bound beta decay processes for arbitrary electronic configurations and their effects on
the behavior of hot stellar plasmas.
Recently, an experiment at GSI has reported~\cite{Otsh} the measurement of the ratio of bound state to continuum state total beta decay rates for the case of bare $1/2^{+}$ ${}^{207}$Tl decaying into the ground state of $1/2^{-}$ ${}^{207}$Pb. It is a nuclear first forbidden transition and the ratio is in excellent agreement with the theory~\cite{Taka},~\cite{Fab}.\\
New facilities allow electron capture (EC) and $\beta^{+}$ studies on highly ionized atoms, combining novel experimental tools which use high energy accelerators, in-flight separators and heavy-ion storage rings.
An experiment performed recently at GSI~\cite{1Lit} has studied the radioactive ionic decays of $1^{+}$ ${}^{140}$Pr (Z=59) with 0,1,2 K-orbital electrons into $0^{+}$ ${}^{140}$Ce (Z=58). The Lorentz factor $\gamma$ is 1.43 and the results reported in~\cite{1Lit} are EC and $\beta^{+}$ rates in the rest frame of ions. In this new kind of experiments the atomic contribution is well known, permitting cleaner
measurements of the weak nuclear parameters. EC experiments on neutral atoms were reviewed in ref.~\cite{Bam}.\\
The hyperfine structure due to the coupling of the nuclear spin $I$ and the angular momentum $1/2$ of the K-electron, giving a total angular momentum $F=I\pm 1/2$ ($F_{\pm}$), is fundamental in order to understand the EC results.
The ground state (gs) of ${}^{140}$Pr${}^{58+}$ has $F=1/2$, as follows from its positive magnetic moment $\mu$~\cite{Sha}, whereas the weak decay of the excited $F=3/2$ state is forbidden in the allowed approximation. As the relaxation time for the upper hyperfine state is much shorter than the cooling time, the ions are dominantly stored with $F=1/2$ in the experiment considered in~\cite{1Lit}.\\
The importance and origin of the influence of the hyperfine structure on the decay rates was clearly showed by Folan and Tsifrinovich~\cite{Fol} in their seminal analysis of EC decays of H-like atoms. Among other results, they showed that, in a spin $1/2$ mirror transition ${}^{31}$S$\rightarrow $ ${}^{31}$P, the rate for the $F=0$ states is 340 times larger than the rate for states with $F=1$. They also pointed out the possibility of observing this kind of effects with trapping techniques of cold ions.\\
More recently, a theoretical analysis of general EC hyperfine rates has been given in~\cite{Pat}, where the ratio of the decay rates for helium-like and hydrogen-like ions is calculated explicitly as a function of the nucleus spin $I$ in the allowed approximation, and suitable candidates for an experimental confirmation are discussed.
A more refined analysis of K-shell EC and $\beta^{+}$ weak decay rates of ${}^{140}$Pr${}^{58+}$ and ${}^{140}$Pr${}^{57+}$, which takes into account the nuclear size and uses relativistic electron wave functions, has been presented in~\cite{1Yva,2Yva}. The ratios of the decay rates computed with these refinements agree with experimental results with an accuracy better than 3 percent. \\
Experiments at GSI have also measured the time dependence of the electron capture rate of H-like ${}^{140}$Pr${}^{58+}$, ${}^{142}$Pm${}^{60+}$ and ${}^{122}$I${}^{52+}$ ions~\cite{2Lit,Win,Kie}. They have found an A-dependent modulation in the capture rate $dN_{EC}(t)/dt$ that has been interpreted as due to neutrino oscillations~\cite{3Yva}, although this interpretation is still controversial, see for instance~\cite{4Yva,Giu}.\\
In a different line of developments, EC experiments have been proposed,~\cite{Sat1,Ber1}, as sources of monoenergetic neutrino beams, aiming at the determination of the neutrino mixing angle $\theta _{13}$, and the CP violation parameter $\delta _{CP}$. The experiments would use extremely high energy heavy ions ($\gamma $ $\approx$ 10$^{2}$-10$^{3}$)~\cite{Zuc} that capture one electron and produce a line neutrino spectrum which is rotationally symmetric in the CM frame. Neutrino mixing would change the nature of the neutrinos detected at a distance $L$.
The fluxes of $\nu_{\mu}$ are predicted by well known methods, see for instance~\cite{Ber2} and references therein. Currently, a systematic study of EC in rare-Earth nuclei, relevant for neutrino beams, is being carried out~\cite{Alg}.\\
In this note a general analysis of allowed K-electronic capture, still lacking in the literature as far as we know, is presented and an application to the measurement of neutrino mixing parameters using polarized ions is outlined.
The allowed transitions from the mother nucleus $i$ into the daughter $f$ give the selection rules $I \rightarrow I'=I-1$ ($F_{-}$), $I'=I$ ($F_\pm$) and $I'=I+1$ ($F_+$). The cases $I'= I \pm 1$ are pure Gamow-Teller, whereas for $I'= I$ both Fermi and Gamow-Teller contribute. In all cases, the initial and final parities are the same ($\pi _{i}= \pi _{f }$). In what follows, we consider separately the EC decays of H-like and He-like ions.
\section*{H-like EC}
Let us consider the transition amplitude for a K-EC process from an initial hyperfine $|FM >$ state to a final state where the nucleus spin and $z$-component are $I',m'$ respectively and the left neutrino has momentum $\vec q _{\nu}$= $E_{\nu} \vec n_{\nu}$. This amplitude will be denoted as $T_{FM\rightarrow m'\nu}$.\\
The general structure of gs H-like transition amplitudes $T_{FM\rightarrow m'\nu}$ in terms of reduced angular momentum Wigner-Eckart amplitudes $\overline{T}$ is
\begin{eqnarray}
T_{FM\rightarrow m'\nu} &=& a_{+} (\vec n _{\nu})\delta _{m',M'-1/2}
\left[\sqrt{\frac{I'+M'+1/2}{2I'+1}}<F'_{+}M'|T|FM> \right. \nonumber \\
&& -\left.\sqrt{\frac{I'-M'+1/2}{2I'+1}}<F'_{-}M'|T|FM>\right] \nonumber \\
&&+ a_{-}(\vec n _{\nu})\delta _{m',M'+1/2}
\left[\sqrt{\frac{I'-M'+1/2}{2I'+1}}<F'_{+}M'|T|FM> \right. \nonumber \\
&&+ \left.\sqrt{\frac{I'+M'+1/2}{2I'+1}}<F'_{-}M'|T|FM>\right] .
\end{eqnarray}
where the fundamental property
\begin{equation}
<F'M'|T|FM>=\delta _{F',F}\delta _{M,M'} \overline{T}_{F\rightarrow F'}
\end{equation}
incorporates angular momentum conservation and rotational invariance at the outset and
the left-handed neutrino spin wave function is given by
\begin{eqnarray}
|\nu_{L}>&=& a_{+}(\vec n _{\nu})^{*}|\uparrow> + a_{-}(\vec n _{\nu})^{*} |\downarrow> \nonumber \\
&=& -\sin (\theta /2) e^{-i\phi} |\uparrow> + \cos (\theta /2) |\downarrow> .
\end{eqnarray}
The reduced $\overline{T}$ are obtained by calculating suitable weak interaction $S$-matrix elements at first order, using the standard $\Delta S=0$ piece of the Hamiltonian with coupling constant $G_{F}V_{ud}/\sqrt{2}$ and renormalized neutron beta decay axial coupling $g_{A}$~\cite{Ams}. The matrix elements factorize into a lepton factor that can be computed explicitly and the nuclear matrix elements of the Vector $V^{\mu}$ and Axial $A^{\mu}$ weak hadronic currents. In the allowed approximation these nuclear matrix elements depend only on two characteristic constants $\mathcal{M}_{\sigma,F}$ (see~\cite{des} and the Appendix), which one should obtain by using a good nuclear model.\\
The results for the reduced $\overline{T}$, with the weak coupling constants and the value of the electron wave function omitted, are collected in the Appendix.
The cases $I'=I\pm 1$ are pure Gamow-Teller, whereas in the case $I'=I$ both Gamow-Teller and Fermi contribute so that the hyperfine rates depend on $\mathcal{M}_{F}^{2}$, $\mathcal{M}_{\sigma }^{2}$ and the interference term $\mathcal{M}_{F}\mathcal{M}_{\sigma}$. As expected, the interference disappears when the initial ion spin is not polarized~\cite{Bam,des}. The $I = 1/2$ results in~\cite{Fol} are easily recovered.\\
For fixed $M$, $|T _{FM\rightarrow m'\nu}|^{2}$ depends on both the final neutrino direction and the polarization of the final nucleus. Upon summing over the unobserved polarization of the daughter nucleus
the angular distribution exhibits a characteristic linear dependence on $\cos\theta$.
For instance, in the case $I'=I-1$ (the remaining cases are collected in the Appendix) we find
\begin{equation}
\sum _{m' }|T _{F_{- }M\rightarrow m'\nu}|^{2} =\left(
\frac{1}{2}- \frac{M}{2I-1}\cos\theta \right)|\overline{T}_{F_-}|^{2} .
\end{equation}
The capture rate is $M$-independent when one sums over both final nuclear polarization and neutrino momentum (rotational invariance)
\begin{equation}
\int d\Omega _{\nu}\sum_{m'} |T _{FM\rightarrow m'\nu} |^{2}= 2 \pi |\overline{T}|^{2}
\end{equation}
and the hyperfine rate $W$ is
\begin{equation}
\label{eq:W}
W=\frac{(G_{F}V_{ud})^{2}}{\pi}(g_{A}\mathcal{M}_{\sigma})^{2}|\varphi_{0}|^{2}Q_{\nu}^{2}\frac{2I+1}{2I}
\end{equation}
where $Q_{\nu}$ is the neutrino energy. The two-body phase space gives rise to the $Q_{\nu}^{2}$ factor.\\
In the K-EC $1^{+}\rightarrow 0^{+}$ decay for ${}^{140}$Pr${}^{58+}$, or in a $0^{+}\rightarrow 1^{+}$ transition~\cite{Ber1}, the use of an effective interaction Hamiltonian density in terms of effective relativistic nuclear spin 1, 0 fields $H_{\mu},\phi$
\begin{equation}
g\overline{\nu}(1+\gamma _{5})\gamma ^{\mu}eH_{\mu}\phi +h.c.
\end{equation}
provides a quick way to get our results. The amplitude from initial $I=1,I_{z}=m, S_{z}=s$ to $m'=0, \nu$ is
\begin{equation}
T _{ms\rightarrow 0\nu}\propto (\chi_{L}^{+}\vec \sigma \chi _{s})\cdot\vec \epsilon _{m}
\end{equation}
where $\vec \epsilon _{m}, m=\pm1,0$, are the spin-$1$ states of the mother nucleus and $\chi_{s,L}$ are
the Pauli spin functions of the captured electron and the final neutrino, respectively.
Elaborating, the hyperfine amplitudes turn out to be \\
\begin{equation}
T_{F=1/2,\lambda} \propto -\sqrt{3} \chi _{L}^{+}\chi _{\lambda},\qquad T _{F=3/2,\lambda}=0
\end{equation}
in agreement~\cite{F1} with the general results in the Appendix.
\section*{He-like EC}
For ions with two K electrons in the ground state, $S=0$, the capture process
changes the initial $|(ee)_{S=0}I m >$ state into the state $|I'm'\nu e_{s'}>$, with a daughter nucleus, one neutrino and one bound spectator electron. The transition amplitude is
\begin{equation}
T^{He}_{(ee)_{gs}Im \rightarrow I 'm'\nu s'}= (T^{H} _{(e\uparrow)_{gs}Im \rightarrow I 'm'\nu}) \delta _{s'\downarrow}-(T^{H} _{(e\downarrow)_{gs}Im \rightarrow I 'm'\nu}) \delta _{s'\uparrow}
\end{equation}
and the transition probability after summing over final electron
spin, $W_{(ee)_{gs} m \rightarrow m'\nu}$, is thus given by
\begin{equation}
W_{(ee)_{gs} m \rightarrow m'\nu}=\left|(T^{H} _{(e\uparrow)Im \rightarrow I 'm'\nu})\right|^{2}+\left|(T^{H} _{(e\downarrow)Im \rightarrow I 'm'\nu})\right|^{2}
\end{equation}
In terms of the hyperfine amplitudes
\begin{eqnarray}
(2I+1)&&W_{(ee)_{gs}m \rightarrow m'\nu}= \nonumber \\
&&
(I+m+1)\left|T^{H} _{F_{+},M=m+1/2\rightarrow I 'm'\nu}\right|^{2}+
(I-m)\left|T^{H} _{F-,M=m+1/2\rightarrow I 'm'\nu}\right|^{2} \nonumber \\
&&-2\sqrt{(I+1/2)^{2}-M^{2}} \mathrm{Re}[T^{H}_{F+,M=m+1/2\rightarrow I 'm'\nu}(T^{H}_{F-,M=m+1/2\rightarrow I 'm'\nu})^{*}] \nonumber \\
&&+(I-m+1)\left|T^{H} _{F_{+},M=m-1/2\rightarrow I 'm'\nu}\right|^{2}
+(I+m)\left|T^{H} _{F-,M=m-1/2\rightarrow I 'm'\nu}\right|^{2} \nonumber \\
&&+ 2\sqrt{(I+1/2)^{2}-M^{2}}\mathrm{Re}(T^{H}_{F+,M=m-1/2\rightarrow I 'm'\nu}(T^{H}_{F-,M=m-1/2\rightarrow I 'm'\nu})^{*})
\end{eqnarray}
Upon summing and integrating over final $m', \nu$ the interference term disappears and the RHS of (11) becomes
\begin{eqnarray}
&&(I+m+1)\sum_{m' \nu}\left|T^{H}_{F_{+},M=m+1/2\rightarrow I 'm'\nu}\right|^{2}+
(I-m+1)\sum_{m' \nu}\left|T^{H} _{F_{+},M=m-1/2\rightarrow I 'm'\nu}\right|^{2} \nonumber\\
&& +(I-m)\sum _{m' \nu}\left|T^{H}_{F_{-},M=m+1/2\rightarrow I 'm'\nu}\right|^{2}
+(I+m)\sum _{m' \nu}\left|T^{H}_{F_{-},M=m-1/2\rightarrow I 'm'\nu}\right|^{2} .
\end{eqnarray}
\\
Therefore the total H, He-like $W$ rates are related by
\begin{equation}
W^{He}= 2\frac{I+1}{2I+1}W ^{H}_{(+)}+2\frac{I}{2I+1}W ^{H}_{(-)}
\end{equation}
This relation was first obtained in~\cite{Pat}. For $^{140}$Pr it gives good agreement with experiment~\cite{1Lit} once the important corrections due to relativistic Coulomb effects are taken into account~\cite{1Yva,2Yva}.\\
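As a simple consistency check of this relation, note that for the $1^{+}\rightarrow 0^{+}$ transition of ${}^{140}$Pr only the $F=1/2$ branch contributes ($W^{H}_{(+)}=0$, as follows from $T_{F=3/2,\lambda}=0$ above), so that
\begin{equation}
W^{He}= \frac{2I}{2I+1}W ^{H}_{(-)}\Big|_{I=1}=\frac{2}{3}W ^{H}_{(-)} ,
\end{equation}
i.e., in the allowed approximation the H-like $F=1/2$ rate exceeds the He-like rate by a factor $3/2$, close to the measured enhancement~\cite{1Lit} and refined by the relativistic corrections of~\cite{1Yva,2Yva}.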
In the case $1^{+}\rightarrow 0^{+}$ one obtains
\begin{equation}
3W_{(ee)_{gs} m \rightarrow 0\nu}=
(1-m)\left|T^{H}_{1/2,M=m+1/2\rightarrow 0\nu}\right|^{2}+
(1+m)\left|T^{H}_{1/2,M=m-1/2\rightarrow 0\nu}\right|^{2}
\end{equation}
and therefore
\begin{equation}
W_{(ee)_{gs} m \rightarrow 0\nu}\propto (1-m\cos\theta) ,\quad m=1, 0, -1 .
\end{equation}
As required by rotational invariance, the magnetic $m$-number dependence vanishes after integrating over the neutrino directions. This dependence disagrees~\cite{F2} with the results of~\cite{1Yva,2Yva}.
In the general case $I\rightarrow I-1$ the neutrino angular dependence is given by
\begin{equation}
\frac{I-m \cos\theta }{I},\qquad m =I, I-1,..., -I .
\end{equation}
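Note that for an unpolarized ensemble the $m$-dependence averages out, since $\frac{1}{2I+1}\sum_{m=-I}^{I}\frac{I-m\cos\theta}{I}=1$, so the isotropic emission required in the absence of a preferred direction is recovered.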
\section*{EC and Neutrino Parameters}
EC could be useful in order to fix the values of the yet unknown neutrino mixing parameters
$\theta_{13}, \delta _{CP}$~\cite{Ber1,Ber2}.\\
Monoenergetic -- in the EC rest frame -- pure electronic neutrinos are detected as $\nu _{\mu}$ in a long baseline neutrino experiment. Accelerated stored high-$\gamma$ ions have been proposed as suitable neutrino sources (Zucchelli~\cite{Zuc}).\\
J. Sato~\cite{Sat1} and M. Rolinec and J. Sato~\cite{Sat2} have investigated the physical potential of neutrinos coming from the EC process e$^{-}$ ${}^{110}$Sn$\rightarrow$ ${}^{110}$In$^{*}$ $\nu _{e}$, $Q = 267$ keV, which is an allowed $0^{+}\rightarrow 1^{+}$ nuclear transition.
H-like ions would be assembled by using two equal-$\gamma$ parallel beams of bare nuclei and electrons that would be captured in flight. As $Q$ is small the lifetime should be large
(recall Eq.(\ref{eq:W}); experimentally $\tau = 4.11$ h) and therefore the emitted neutrino flux would be low. The proposed Setups have baseline $L = 250$, $600$ km and $\gamma = 900-2000$.\\
J. Bernabeu and collaborators have independently proposed~\cite{Ber1} EC neutrino factories and thoroughly studied the phenomenology and viability of EC neutrino experiments aiming to measure the CP violating phase $\delta _{CP}$. Recently~\cite{Ber2} they have proposed a hybrid beta decay and EC Setup using ${}^{156}$Yb ions that decay
38\% via EC with a $\nu$-energy of 3.46 MeV and 52\% via $\beta^{+}$ with an end-point neutrino energy of $2.44$ MeV. The daughter nucleus ${}^{156}$Tm$^{*}$ is an excited $1^{+}$ giant Gamow-Teller resonance state so that the halflife, $t_{1/2}=26.1$ seconds, is short enough to allow EC in the decay ring.\\
One should note that the use of polarized H-like ions would produce a neutrino flux dependence inside the detector that could be useful in order to disentangle the values of neutrino mixing parameters. In the case of a 0$^{+}\rightarrow$ 1$^{+}$ nuclear transition with capture of one K-electron
the neutrino angular distribution from ions with polarization vector $\vec P$ = $< \vec \sigma >$ is
\begin{equation}
W(\vec P,\vec n _{\nu} ) =\frac{1}{2}\left(1+\frac{1}{3}\vec P\cdot\vec n _{\nu}\right),\qquad |\vec P |= p
\end{equation}
With $\vec P$ in the ion beam direction, the $\nu$-distributions in the ion rest frame (rf) show a characteristic parity-violating linear $\cos \theta _{rf}$ dependence
\begin{equation}
W(\theta _{rf}, p ) =\frac{1}{2}( 1 \pm \frac{p }{3}\cos \theta _{rf})
\end{equation}
This angular modulation is observable in the Rolinec-Sato proposal, Setup II, with a large water Cherenkov detector of radius $R = 100$ m, base $L = 250$ km. With this geometry $\tan \theta _{max} = R/L$ and, for a neutrino emitted at right angle $\theta _{rf} = \frac{\pi}{2}$ to be detected, a boost
\begin{equation}
\underline{\gamma}= \frac{1}{\sin \theta _{max}} = 2500
\end{equation}
is required. Rolinec and Sato take $\gamma = 2000$ and therefore $\cos (\theta _{rf })_{max} = 0.22$.
The flux of $\nu_{\mu}$ at points inside the detector would then also depend, in a known way, on the ion polarization. The requirements to achieve such a performance, i.e., very high $\gamma$, a large number of isotope decays per year, very low beam divergences for the stored isotopes~\cite{Sat2}, together with the use of very high-$\gamma$ polarized electron beams, are extremely demanding.
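A short numerical sketch (parameter values taken from the Setup II geometry quoted above; the standard relativistic aberration formula is assumed) reproduces both numbers:
\begin{verbatim}
import numpy as np

R, L = 100.0, 250e3              # detector radius and baseline (m)
theta_max = np.arctan(R / L)     # lab-frame acceptance half-angle
print(1.0 / np.sin(theta_max))   # boost needed for theta_rf = pi/2: ~2500

gamma = 2000.0
beta = np.sqrt(1.0 - 1.0 / gamma**2)
# aberration: rest-frame angle mapped onto the edge of the acceptance,
# cos(theta_rf) = (cos(theta_lab) - beta) / (1 - beta*cos(theta_lab))
cos_rf = (np.cos(theta_max) - beta) / (1.0 - beta * np.cos(theta_max))
print(cos_rf)                    # ~0.22: only cos(theta_rf) >= 0.22 is seen
\end{verbatim}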
\title{Optical Excitations with Electron Beams: Challenges and Opportunities
}
\author{F.~Javier~Garc\'{\i}a~de~Abajo}
\email{[email protected]}
\affiliation{ICFO-Institut de Ciencies Fotoniques, The Barcelona Institute of Science and Technology, 08860 Castelldefels (Barcelona), Spain}
\affiliation{ICREA-Instituci\'o Catalana de Recerca i Estudis Avan\c{c}ats, Passeig Llu\'{\i}s Companys 23, 08010 Barcelona, Spain}
\author{Valerio~Di~Giulio}
\affiliation{ICFO-Institut de Ciencies Fotoniques, The Barcelona Institute of Science and Technology, 08860 Castelldefels (Barcelona), Spain}
\begin{abstract}
Free electron beams such as those employed in electron microscopes have evolved into powerful tools to investigate photonic nanostructures with an unrivaled combination of spatial and spectral precision through the analysis of electron energy losses and cathodoluminescence light emission. In combination with ultrafast optics, the emerging field of ultrafast electron microscopy utilizes synchronized femtosecond electron and light pulses that are aimed at the sampled structures, holding the promise to bring simultaneous sub-{\AA}--sub-fs--sub-meV space-time-energy resolution to the study of material and optical-field dynamics. In addition, these advances enable the manipulation of the wave function of individual free electrons in unprecedented ways, opening sound prospects to probe and control quantum excitations at the nanoscale. Here, we provide an overview of photonics research based on free electrons, supplemented by original theoretical insights, and discussion of several stimulating challenges and opportunities. In particular, we show that the excitation probability by a single electron is independent of its wave function, apart from a classical average over the transverse beam density profile, whereas the probability for two or more modulated electrons depends on their relative spatial arrangement, thus reflecting the quantum nature of their interactions. We derive first-principles analytical expressions that embody these results and have general validity for arbitrarily shaped electrons and any type of electron-sample interaction. We conclude with some perspectives on various exciting directions that include disruptive approaches to non-invasive spectroscopy and microscopy, the possibility of sampling the nonlinear optical response at the nanoscale, the manipulation of the density matrices associated with free electrons and optical sample modes, and appealing applications in optical modulation of electron beams, all of which could potentially revolutionize the use of free electrons in photonics.
\end{abstract}
\maketitle
\tableofcontents
\section{Introduction}
The last two decades have witnessed spectacular progress in our ability to control light down to deep-subwavelength scales thanks to advances in nanofabrication using bottom-up approaches (colloid chemistry \cite{BCN05} and surface science \cite{CRJ10}) and top-down techniques (electron-beam \cite{DFB12} (e-beam) and focused-ion-beam \cite{NLO09} lithographies), as well as combinations of these two types of methods \cite{paper335,DJD20}. In parallel, substantial improvements in optics have enabled the acquisition of spectrally resolved images through scanning near-field optical microscopy \cite{HTK02,BKO10,WFM14} (SNOM) and super-resolution far-field optics \cite{BPS06,YZ19}, in which the diffraction limit is circumvented either by relying on nanoscale scatterers ({\it e.g.}, metallic tips \cite{HTK02,BKO10,WFM14}) or by targeting special kinds of samples ({\it e.g.}, periodic gratings \cite{YZ19} or fluorophore-hosting cells \cite{BPS06}). However, light-based imaging is far from reaching the atomic level of spatial resolution that is required to investigate the photonic properties of vanguard material structures.
\begin{figure*}
\centering{\includegraphics[width=1.0\textwidth]{Fig1}}
\caption{Probing nanoscale optical excitations. We show examples of mode dispersion relations (a,d,g), spatial mode distributions (b,e,h), and spectrally narrow plasmons (c,f,i) probed through EELS (a-c), CL (d-f), and PINEM (g-i).
(a) Plasmon dispersion measured in a self-standing aluminum film through angle- and energy-resolved transmitted electrons. Adapted from ref\ \citenum{PSV1975}.
(b) Plasmon standing waves in long silver nanowires ($1.22\,\mu$m and $2.07\,\mu$m long in the top and bottom images, respectively) mapped by using 80\,keV TEM electrons and having energies (in eV) as indicated by labels. Adapted from ref\ \citenum{RB13}.
(c) Spectral features associated with high-quality-factor plasmon standing waves in a long copper nanowire ($15.2\,\mu$m length, 121\,nm diameter) extending from the mid- to the near-infrared, as resolved through high-resolution EELS. Adapted from ref\ \citenum{paperarxiv5}.
(d) Trivial and topological photonic crystal bands observed through 30\,keV SEM-based angle-resolved CL from two arrays of silicon pillars (200\,nm high, 88\,nm wide) deposited on a 10\,nm thick Si$_3$N$_4$ membrane and arranged on a hexagonal superlattice ($455$\,nm period) of either shrunken (138 hexagon side length) or expanded (168 side length) hexamers (see labels) formed by six pillars per lattice site. Adapted from ref\ \citenum{PSN19}.
(e) Polarization-resolved CL intensity (lower maps) and emission Stokes parameters (center-right maps) produced by 80\,keV electrons in a TEM as a function of e-beam position over a silicon sphere (250\,nm diameter, see upper-right SEM image), as obtained by filtering $1.8\pm0.1\,$eV photons emitted with an angle of $45^\circ$ relative to the electron velocity. Adapted from ref\ \citenum{paperxx3}.
(f) Plasmon standing waves confined to circular grooves of different radii (see labels) carved into a single gold crystal (see upper-right SEM image) and mapped through CL, with the azimuthal number $m$ defining the number of periods along the circumference, as shown in the lower-right inset. Adapted from ref\ \citenum{paper137}.
(g,h) Dispersion relation (g) and near-field maps (h) of TM and TE modes in a 2D 200\,nm thick Si$_3$N$_4$ photonic crystal formed by a hexagonal hole array of 600\,nm period, mapped through PINEM using 80\,keV electrons. Adapted from ref\ \citenum{WDS20}.
(i) Silver nanowire plasmon standing wave spectrally resolved with 20\,meV accuracy (right) through the depletion observed in the zero-loss peak (ZLP) (left) as the frequency of the PINEM laser is scanned over the mode resonance. Adapted from ref\ \citenum{paper306}.}
\label{Fig1}
\end{figure*}
\begin{figure*}
\centering{\includegraphics[width=1.00\textwidth]{Fig2}}
\caption{Electron-beam vibrational spectromicroscopy.
(a) Spectral features of phonon polaritons in LiF recorded through energy losses and gains experienced by 25\,keV electrons transmitted through a thin foil, with the gains originating in thermally populated modes at room temperature $T\approx300\,$K and the loss-to-gain peak ratio approximately given by $1+1/n_T(\omega)=\ee^{\hbar\omega/\kB T}$ ($\sim7$ at $\hbar\omega=50\,$meV). Adapted from ref\ \citenum{BGS1966}.
(b,c) Nanoscale e-beam thermometry based on high-resolution EELS of a MgO cube (b), whereby the sample temperature is determined upon examination of the loss-to-gain intensity ratio (c). Adapted from ref\ \citenum{LB18}.
(d) Atomic resolution in the mapping of vibrational spectra, here used to image the localization of the phonon density of states produced by a Si defect in monolayer graphene. Adapted from ref\ \citenum{HRK20}.
(e) Strong coupling between hBN photon polaritons and silver nanowire plasmons observed through high-resolution EELS by iterative e-beam drilling to shrink the wire length and scan one of its plasmon resonances over the phononic spectral region. Adapted from ref\ \citenum{paper342}.
(f) Phonon dispersion in graphite and hBN obtained by high-resolution angle-resolved EELS. Adapted from ref\ \citenum{SSB19}.}
\label{Fig2}
\end{figure*}
Spatial resolution down to the atomic scale can be achieved by using electrons as either probes or drivers of the sampled optical excitations. In particular, inelastically scattered beam electrons carry information on the excited states of the specimen, which can be revealed by performing electron energy-loss spectroscopy (EELS) \cite{E96,E03,EB05,B06}, as extensively demonstrated in the spectral and spatial mapping of optical modes covering a broad frequency range, stretching from the ultraviolet to the far infrared \cite{paper149,KS14,paper338,KUB09,KLD14,LTH17,LB18,HNY18,HHP19,HKR19,paper342,HRK20,YLG21,paper359}. Several examples of application are reviewed in Figures\ \ref{Fig1}a-c and \ref{Fig2}. In this field, benefiting from recent advances in instrumentation \cite{BDK02,KLD14,KDH19}, state-of-the-art transmission electron microscopes (TEMs) operated at $\sim30-300$\,kV acceleration voltages can currently deliver spectrally filtered images with combined sub-{\AA} and few-meV space-energy resolution \cite{KLD14,LTH17,LB18,HNY18,HHP19,HKR19,paper342,HRK20,YLG21,paper359} (see Figures\ \ref{Fig1}c and \ref{Fig2}d,e). Indeed, the reduction in the width of the electron zero-loss peak (ZLP) below $\sim10\,$meV and the ensuing high spectral resolution in EELS enable the exploration of optical modes down to the mid-infrared, including phonons in graphene \cite{HRK20} and silicon carbide \cite{YLG21} along with their modification due to atomic-scale defects (Figure\ \ref{Fig2}d), phonons and phonon polaritons in graphite \cite{SSB19} and hexagonal boron nitride \cite{paper342,SSB19} (hBN) (Figure\ \ref{Fig2}e,f), and low-energy plasmons in long silver \cite{RB13} (Figure\ \ref{Fig1}b) and copper \cite{paperarxiv5} (Figure\ \ref{Fig1}c) nanowires. In addition, under parallel e-beam illumination, the inelastic electron signal can be resolved in energy and deflection angle to provide dispersion diagrams of surface modes in planar structures \cite{BGI1966,PSV1975,CS1975,CS1975_2,SSB19} (see Figures\ \ref{Fig1}a and \ref{Fig2}f). A vibrant field of e-beam vibrational spectromicroscopy has emerged in this context (see Figure\ \ref{Fig2}), with achievements such as the determination of the sample temperature distribution with nanometer precision thanks to the analysis of energy gains produced in the electrons by absorption of thermally populated modes \cite{LTH17,ILT18,LB18,paperarxiv5} (Figure\ \ref{Fig2}b,c), thus adding high spatial resolution to previous demonstrations of this approach \cite{BGS1966} (Figure\ \ref{Fig2}a).
A limiting factor in TEMs is imposed by the requirement of electron-transparent specimens with a total thickness of $\lesssim100\,$nm. At the cost of reducing spatial resolution, low-energy ($\sim50-500\,$eV) electron microscopy (LEEM) allows studying thicker samples by recording surface-reflected electrons \cite{R95}. This approach enables the acquisition of dispersion diagrams in planar surfaces by resolving the electron deflections associated with in-plane momentum transfers \cite{NHH01}, even in challenging systems such as monoatomic rows of gold atoms arranged on a vicinal silicon surface, which were neatly shown to support 1D plasmons through LEEM \cite{NYI06}. Likewise, using intermediate e-beam energies ($\sim1-50\,$keV), secondary electron microscopes (SEMs) offer the possibility of studying optical modes also in thick samples through the cathodoluminescence (CL) photon emission associated with the radiative decay of some of the created excitations \cite{paper149}, as extensively demonstrated in the characterization of localized \cite{paper035,paper116,paper137,paper167,WLC18} and propagating \cite{BJK06,VVP06,YS08} surface plasmons (see an example in Figure\ \ref{Fig1}f), as well as optical modes in dielectric cavities \cite{SCR12,paper341,paperxx3} (see Figure\ \ref{Fig1}e) and topological 2D photonic crystals \cite{PSN19} (see Figure\ \ref{Fig1}d), with spatial resolution in the few-nm range \cite{SMS19}. Some of these and other related studies were performed in TEMs \cite{YST96,paper035,KZ17,paper251,paper341,paperxx3}, where a direct comparison between CL and EELS was found to reveal similarities of the resulting spectra and those associated with optical elastic scattering and extinction, respectively \cite{paper251}. Combined with time-resolved detection, CL permits determining the lifetime and autocorrelation of sample excitations created by the probing electrons \cite{MSC05,TK13,MTC15,BMT16,MCW18,SMC20}, while the analysis of the angular distribution of the light emission provides direct information on mode symmetries \cite{paper116,SCR12,SAG20,paper341,paperxx3}. Nevertheless, EELS has the unique advantage of being able to detect dark optical excitations that do not couple to propagating radiation ({\it e.g.}, dark plasmons) but can still interact with the evanescent field of the passing electron probe \cite{KBK09,paper121,SDH12,BRF14}. In this respect, the presence of a substrate can affect the modes sampled in a nanostructure, for example by changing their optical selection rules, therefore modifying the radiation characteristics that are observed through CL \cite{SAG20,paper341}. Additionally, by collecting spectra for different orientations of the sample relative to the e-beam, both EELS \cite{NPL13} and CL \cite{ABC15} have been used to produce tomographic reconstructions of plasmonic near fields.
\begin{figure*}
\centering{\includegraphics[width=0.70\textwidth]{Fig3}}
\caption{Microscopies at the frontier of space-time-energy resolution. (a) We organize different microscopy techniques according to their spatial (vertical axis), spectral (horizontal axis), and temporal (color scale) resolutions. The latter is limited to the sub-ns regime when relying on fast electronics \cite{MSC05} (green and blue), while it reaches the fs domain with optical pulses (yellow) and the attosecond range with X-ray pulses (red), but also with ultrashort electron pulses. In particular, measurement of CL driven by temporally compressed e-beams could potentially provide simultaneous sub-{\AA}--attosecond--sub-meV resolution (see main text). (b) Schematic illustration of an ultimate ultrafast electron microscope, encompassing (1) a photocathode tip that acts as an electron source driven by photoemission upon laser pulse irradiation; (2) an electron-modulation block based on PINEM-like interaction and subsequent free-space propagation that generates attosecond electron pulses; (3) a sample stage accessed by synchronized electron and laser pulses; and (4) the acquisition of several types of signals that include angle-resolved EELS and CL. The three fs laser pulses illuminating the photocathode, the sample, and the PINEM intermediate element are synchronized with attosecond-controlled delays. Currently available TEM and SEM setups incorporate different partial combinations of these possibilities. (c) Schematic illustration of time-resolved PEEM, where photoelectrons are used to construct fs- and nm-resolved movies by scanning the time delay between pump and probe laser pulses. (d) Illustration of STML, which enables atomic resolution through the detection of luminescence produced by inelastically tunneling electrons (right) and could be acquired with sub-ps temporal precision through modulation of the tip gate voltage. Femtosecond resolution could be potentially achieved through measurement of the laser-assisted electron tunneling current using pump-probe optical pulses (left).}
\label{Fig3}
\end{figure*}
The emergence of ultrafast transmission electron microscopy (UTEM) has added femtosecond (fs) temporal resolution to the suite of appealing capabilities of e-beams \cite{GLW06,BPK08,BFZ09,ARM20}. In this field, fs laser pulses are split into a component that irradiates a photocathode to generate individual fs electron pulses and another component that illuminates the sample with a well-controlled delay relative to the time of arrival of each electron pulse \cite{GLW06,BPK08,BFZ09} (Figure\ \ref{Fig3}b). Slow (sub-ps) structural changes produced by optical pumping have been tracked in this way \cite{GLW06,BPK08}, while the optical-pump--electron-probe (OPEP) approach holds the additional potential to resolve ultrafast electron dynamics \cite{HBL16,paperarxiv1}. It should be noted that an alternative method in UTEM, consisting in blanking the e-beam with sub-ns precision, can be incorporated in high-end SEMs and TEMs without affecting the beam quality \cite{paper325}, although with smaller temporal precision than the photocathode-based technique.
The electron-sample interaction is generally weak at the high kinetic energies commonly employed in electron microscopes, and consequently, the probability for an electron to produce a valence excitation or give rise to the emission of one photon is typically small ($\lesssim10^{-4}$). Nevertheless, low-energy electrons such as those used in LEEMs (and also in SEMs operated below $\sim1\,$keV) can excite individual nanoscale confined modes with order-unity efficiency \cite{paper228}, although a yield $\ll1$ should be expected in general at higher electron energies. The OPEP approach thus addresses nonlinear processes triggered by optical pumping and sampled in a perturbative ({\it i.e.}, linear) fashion by the electron \cite{GLW06,BPK08,paperarxiv1}. Furthermore, UTEM setups can produce multiple photon exchanges with each beam electron even if the specimen responds linearly to the optical pulse. Indeed, while a net absorption or emission of photons by the electron is kinematically forbidden in free space \cite{paper311}, the presence of the sample introduces evanescent optical field components that break the energy-momentum mismatch, leading to a nonvanishing electron-photon interaction probability, which is amplified by stimulated processes in proportion to the large number of incident photons ($\propto$ laser intensity) contained in each optical pulse. This effect has been argued to enable high spectral resolution by performing electron energy-gain spectroscopy (EEGS) while scanning the pumping light frequency \cite{H99,paper114,H09,paper306}, so that energy resolution is inherited from the spectral width of the laser, whereas the atomic spatial resolution of TEM setups can be retained. A similar approach has been followed to push energy resolution down to the few-meV range by analyzing the depletion of the ZLP upon intense laser irradiation \cite{paper306} (see Figure\ \ref{Fig1}i). We reiterate that the potential degradation of beam quality and energy width introduced at the photocathode can be avoided by resorting instead to e-beam blanking in combination with synchronized nanosecond laser pulses \cite{paper325}.
\begin{figure*}
\centering{\includegraphics[width=1.00\textwidth]{Fig4}}
\caption{Optical modulation of free electrons.
(a) Energy comb of electron losses and gains produced by ultrafast interaction with evanescent light fields in the PINEM approach: experiment \cite{BFZ09} and theory \cite{paper151} comparison. Adapted from ref\ \citenum{paper151}.
(b) Laser-amplitude dependence of the electron energy comb produced by PINEM interaction, revealing quantum billiard dynamics among different electron energy channels separated by the photon energy $\hbar\omega$. Adapted from ref\ \citenum{FES15}.
(c,d) Tilt-angle dependence of the PINEM energy comb produced by using a planar film (c) and associated transfers of lateral linear momentum (d). Adapted from ref\ \citenum{paper311}.
(e) PINEM in the intermediate-coupling regime showing a $(n+1)/n$ loss-gain intensity ratio in the EELS spectra of silver nanoparticles with 100\,keV electrons under ns-laser illumination, superimposed on regular spontaneous EELS features, for beam positions as shown in the color-coordinated spots of the upper-left image, along with gain and loss energy-filtered images in the upper-middle and -right plots. Adapted from ref\ \citenum{paper325}.
(f) Intense-coupling regime resulting in a large number of PINEM energy sidebands under total-internal-reflection phase-matched illumination ({\it i.e.}, with the electron velocity matching the surface-projected light speed inside the glass). Adapted from ref\ \citenum{DNS20}.
(g) Transfer of angular momentum between light and electrons, as revealed in a configuration similar to (c) through a donut shape of the electron intensity in the Fourier plane after PINEM interaction. Adapted from ref\ \citenum{paper332}.
(h) Electron modulation into a train of attosecond pulses upon propagation from the PINEM interaction region over a sufficiently large distance to interlace different energy sideband components in an electron microscope. Adapted from ref\ \citenum{PRY17}.
(i,j) Single electron pulses produced by streaking a train of pulses following the scheme shown in panel (i) and experimental demonstration based on the observation of the time-resolved electron current in a table-top e-beam-line setup (j). Adapted from ref\ \citenum{MB20}.
}
\label{Fig4}
\end{figure*}
In this context, intense efforts have been devoted to studying nonlinear interactions from the electron viewpoint in UTEM setups, assisted by the linear response of the sample to optical laser pumping. As a manifestation of these interactions, multiple quanta can be exchanged between the light and electron pulses in what has been termed photon-induced near-field electron microscopy (PINEM) \cite{BFZ09,paper151,PLZ10,PZ12,KGK14,PLQ15,FES15,paper282,EFS16,KSE16,RB16,VFZ16,paper272,PRY17,KML17,FBR17,paper306,paper311,paper312,MB18,MB18_2,paper325,paper332,K19,PZG19,paper339,RML20,DNS20,KLS20,WDS20,RK20,MVG20,paper360,VMC20}. The longitudinal (along the e-beam direction) free-electron wave function is then multiplexed in a periodic energy comb formed by sidebands separated from the ZLP by multiples of the laser photon energy \cite{BFZ09,paper151,PLZ10,PLQ15,FES15,EFS16} and associated with discrete numbers of net photon exchanges (Figure\ \ref{Fig4}a,b,c), the probability of which can be expressed in terms of a single coupling parameter $\beta$ that encapsulates the electron interaction with the optical near field and depends on lateral position in the transverse e-beam plane (see below). Such transverse dependence can be engineered to imprint an on-demand phase pattern on the electron wave function, giving rise, for example, to discretized exchanges of lateral linear momentum \cite{paper272,paper311,FYS20} (see Figure\ \ref{Fig4}d and also ref\ \citenum{FYS20} for sharper features associated with momentum discretization) and orbital angular momentum \cite{paper332,paper312} (Figure\ \ref{Fig4}g) between the light and the electron. PINEM spectral features ({\it i.e.}, the noted energy comb) do not bear phase coherence relative to spontaneous excitations associated with EELS \cite{paper325}, as experimentally verified for relatively low laser intensities, which lead to stimulated (PINEM loss and gain peaks) and spontaneous (EELS, only loss) energy peaks in the observed spectra with comparable strengths (Figure\ \ref{Fig4}e). In this regime, single-loss and -gain peak intensities are proportional to $n+1$ and $n$, respectively, where $n$ is the population of the laser-excited sample mode to which the electron couples. In contrast, we have $n\gg1$ at high laser fluence, so gain and loss features form a symmetric spectrum with respect to the ZLP. As the intensity increases (Figure\ \ref{Fig4}a,b), multiple photon exchanges take place. These events were predicted \cite{paper151}, and subsequently confirmed in experiment \cite{FES15}, to give rise to sub-fs quantum billiard dynamics (Figure\ \ref{Fig4}b). Enhanced order-unity electron-photon coupling is achieved under phase-matching conditions when the electron travels at the same velocity as the optical mode to which it couples \cite{paper180,K19}. Under this condition, the number of PINEM energy sidebands is strongly enlarged \cite{KLS20,DNS20} (see Figure\ \ref{Fig4}f), eventually reducing the loss-gain spectral symmetry, presumably due to departures from phase-matching produced by electron recoil. Incidentally, inelastic ponderomotive interactions can also be a source of asymmetry, as we discuss below, and so are corrections due to electron recoil \cite{T20}.
The optical near-field dynamics in nanostructures has been explored through PINEM, as illustrated by the acquisition of fs-resolved movies of surface plasmons evolving in nanowires \cite{PLQ15} and buried interfaces \cite{paper282}, as well as in the characterization of optical dielectric cavities and the lifetime of the supported optical modes \cite{KLS20,WDS20} (see Figure\ \ref{Fig1}g,h). It should be noted that analogous plasmon movies can be obtained through optical pump-probing combined with photoemission electron microscopy (PEEM, Figure\ \ref{Fig3}c) performed on clean surfaces \cite{MYH20}, as demonstrated for propagating plane-wave \cite{KOP05,KPP07}, chiral \cite{SKM17,DJD20}, and topological \cite{DZG20} plasmons. Nevertheless, by employing different types of particles to pump and probe ({\it e.g.}, photons and electrons), PINEM-modulated e-beams can potentially enable access into the attosecond regime without compromising energy resolution, as we argue below.
Complementing the above advances, the generation of temporally compressed electron pulses has emerged as a fertile research area \cite{BZ07,SCI08,PRY17,MB18_2,KES17,KSH18,MB18,SMY19,RTN20} that holds potential to push time resolution toward the attosecond regime. An initial proposal relied on free-space electron-light interactions \cite{BZ07}. Indeed, electron energy combs can also be produced in free space through ponderomotive interaction with two suitably oriented light beams of different frequencies $\omega_1$ and $\omega_2$ as a result of stimulated Compton scattering, subject to the condition $\omega_1-\omega_2=(\kb_1-\kb_2)\cdot\vb$, where $\kb_1$ and $\kb_2$ denote the photon wave vectors and $\vb$ is the electron velocity. The resulting electron spectrum consists of periodically spaced energy sidebands separated from the ZLP by multiples of the photon energy difference $\hbar|\omega_1-\omega_2|$ \cite{KES17}. After a long propagation distance beyond the electron-photon interaction region, different energy components in the electron wave function, traveling at slightly different velocities, become interlaced and can give rise to a periodic train of compressed-probability-density pulses with a temporal period $2\pi/|\omega_1-\omega_2|$. For sufficiently intense light fields, these pulses were argued to reach sub-fs duration \cite{BZ07}, as neatly confirmed in free-space experiments \cite{KES17,KSH18}. In a separate development, compression down to sub-fs pulses was achieved for spatially ($\sim100\,\mu$m) and spectrally ($\sim30\,$keV) broad multi-electron beams accelerated to 60\,MeV \cite{SCI08} using an inverse-free-electron-laser approach that relied on the coupling to the optical near field induced in a grating by irradiation with sub-ps laser pulses. In a tour-de-force experiment, PINEM-based production of attosecond pulse trains (Figure\ \ref{Fig4}h) was eventually pioneered in an electron microscope \cite{PRY17} at the single-electron level, rendering it compatible with $<1\,$nm e-beam spots and quasimonochromatic incident electrons ($<0.6\,$eV spread), thus raising the control over the electron wave function to an unprecedented level, and simultaneously making temporally modulated electrons accessible for use in spatially resolved spectroscopy. A demonstration of attosecond compression followed soon after using a table-top e-beam line setup \cite{MB18_2}, along with the generation of single electron pulses by subsequent angular sorting based on optical streaking \cite{MB20} (Figure\ \ref{Fig4}i,j), which is promising for the synthesis of individual attosecond electron pulses, although its combination with sub-nm lateral e-beam focusing in a microscope remains a major challenge.
We organize the above-mentioned techniques in Figure\ \ref{Fig3}a according to their degree of space-time-energy resolution. Notably, electron-based methods offer better spatial resolution than all-optical approaches because of the shorter wavelength of such probes compared to photons. Incidentally, for the typical $30-300$\,keV e-beam energies, the electron wavelength lies in the $7-2$\,pm range, which sets an ultimate target for the achievable spatial resolution, currently limited by the numerical aperture of electron optics (NA$\sim10^{-2}$, leading to an e-beam focal size of $\sim0.5\,${\AA}). In contrast, far-field light optics and even SNOM offer a lower spatial resolution. We include for comparison laser-induced electron diffraction (LIED), which relies on photoemission from spatially oriented individual molecules produced by attosecond X-ray pulses, followed by electron acceleration driven by a synchronized infrared laser and subsequent elastic scattering back at the molecules; this technique grants access to the atomic structure of molecules with sub-{\AA}--attosecond precision \cite{WPL16} and it also provides indirect information on electronic potential-energy surfaces \cite{paper324}. Interestingly, time-resolved low-energy electron diffraction has also been employed to study structural dynamics in solid surfaces using photoemission e-beam sources analogous to UTEM \cite{VSH18}. In a radically different approach, scanning tunneling microscope luminescence \cite{KGM17} (STML, Figure\ \ref{Fig3}d) provides atomic spatial precision combined with optical spectral resolution in the determination of electronic defects in conducting surfaces \cite{LGD20,paper354}, which can in principle be combined with fast electronics to achieve sub-ns temporal resolution similar to CL \cite{MSC05}. Additionally, laser-driven tunneling in the STM configuration can provide fs resolution by measuring the electron current under optical pump-probe laser irradiation \cite{MPT02,DAZ11,KGM17} (Figure\ \ref{Fig3}c). In this article, we speculate that the combination of synchronized ultrafast laser and free-electron pulses with measurement of angle-resolved CL (Figure\ \ref{Fig3}b) holds the potential to reach the sought-after sub-{\AA}--attosecond--sub-meV simultaneous level of resolution in the study of optical excitations, while even higher accuracy is still possible from the point of view of the fundamental limits (see below). These ideas can be implemented in TEMs, SEMs, and LEEMs, with the last two of these types of instruments presenting the advantage of offering stronger electron interaction with nanoscale optical modes.
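As a quick numerical check of the quoted figures, the relativistic de Broglie wavelength $\lambda=h/p$ with $p=(1/c)\sqrt{E(E+2m_{\rm e}c^2)}$ for kinetic energy $E$ can be evaluated at typical beam energies. The following Python sketch (an illustration added here, not drawn from any cited work) reproduces the $7-2$\,pm range mentioned above:
\begin{verbatim}
import numpy as np

h = 6.62607015e-34       # Planck constant (J s)
c = 2.99792458e8         # speed of light (m/s)
me_c2 = 510.99895e3      # electron rest energy (eV)
qe = 1.602176634e-19     # elementary charge (C)

for E_keV in (30, 100, 300):
    E = E_keV * 1e3                              # kinetic energy (eV)
    p = np.sqrt(E * (E + 2 * me_c2)) * qe / c    # relativistic momentum (kg m/s)
    print(f"{E_keV:4d} keV -> {h / p * 1e12:.2f} pm")
# 30 keV -> 6.98 pm, 100 keV -> 3.70 pm, 300 keV -> 1.97 pm
\end{verbatim}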
\section{Fundamentals of Electron-Beam Spectroscopies}
Theoretical understanding of electron microscopy has benefited from a consolidated formalism for the analysis of EELS and CL spectra, as well as new emerging results in the field of UTEM. We present below a succinct summary of the key ingredients in these developments.
\subsection{Spontaneous Free-Electron Interaction with Sample Optical Modes} For the swift electron probes and low excitation energies under consideration, EELS and CL transition probabilities can be obtained by assimilating each beam electron to a point charge $-e$ moving with constant velocity vector $\vb=v\zz$ (nonrecoil approximation, see below) and interacting linearly with each sample mode. The electron thus acts as an external source of evanescent electromagnetic field, and in particular, the frequency decomposition of the electric field distribution as a function of position $\rb=(\Rb,z)$ (with $\Rb=(x,y)$) for an electron passing by $\rb=(\Rb_0,0)$ at time zero admits the expression \cite{paper149}
\begin{align}
\Eb^{\rm ext}(\rb,\omega)=\frac{2e\omega}{v^2\gamma}\,\ee^{{\rm i}\omega z/v}\,
\Fb(\Rb-\Rb_0,\omega),
\nonumber
\end{align}
where
\begin{align}
\Fb(\Rb,\omega)=\frac{{\rm i}}{\gamma}K_0\left(\frac{\omega R}{v\gamma}\right)\zz-K_1\left(\frac{\omega R}{v\gamma}\right) \RR
\label{Fb}
\end{align}
and $\gamma=1/\sqrt{1-v^2/c^2}$ is the relativistic Lorentz factor. The time-dependent field is obtained through the Fourier transform
\[\Eb^{\rm ext}(\rb,t)=\frac{1}{2\pi}\int_{-\infty}^\infty\,d\omega\,\Eb^{\rm ext}(\rb,\omega)\,\ee^{-{\rm i}\omega t}.\]
At large radial separations $R$, the two modified Bessel functions in $\Fb$ decay exponentially as $K_m(\zeta)\approx\ee^{-\zeta}\sqrt{\pi/2\zeta}$, whereas at short distances it is $K_1(\zeta)\approx1/\zeta$ that provides a dominant divergent contribution and explains the excellent spatial resolution of e-beams \cite{E07}. The induced field $\Eb^{\rm ind}$ acts back on the electron to produce a stopping force. By decomposing the resulting energy loss in frequency components, we can write the EELS probability as \cite{paper149}
\begin{align}
\Gamma_{\rm EELS}(\Rb_0,\omega)=\frac{e}{\pi\hbar\omega} \int_{-\infty}^\infty dz \,{\rm Re} \left\{\ee^{-{\rm i}\omega z/v}E_z^{\rm ind}(\Rb_0,z,\omega) \right\}.
\label{EELS}
\end{align}
This quantity is normalized in such a way that $\int_0^\infty\,d\omega\,\Gamma_{\rm EELS}(\Rb_0,\omega)$ is the total loss probability and $\int_0^\infty\,d\omega\,\hbar\omega\,\Gamma_{\rm EELS}(\Rb_0,\omega)$ is the average energy loss.
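The spatial behavior of the external field in eq\ \ref{Fb} is easy to explore numerically through the modified Bessel functions. The following Python sketch (illustrative only; the 100\,keV beam and 1.5\,eV excitation are assumed example parameters) exhibits the $1/\zeta$ behavior of the $K_1$ term at small impact parameters and the exponential decay beyond the length $v\gamma/\omega$:
\begin{verbatim}
import numpy as np
from scipy.special import kv   # modified Bessel functions K_m

hbar = 6.582119569e-16    # eV s
c = 2.99792458e8          # m/s
v, gamma = 0.5482 * c, 1.1957   # 100 keV electron
omega = 1.5 / hbar        # 1.5 eV excitation frequency (rad/s)

print(f"decay length v*gamma/omega = {v * gamma / omega * 1e9:.0f} nm")
for R in (1e-9, 10e-9, 100e-9, 1000e-9):   # impact parameters (m)
    zeta = omega * R / (v * gamma)
    Fz = kv(0, zeta) / gamma    # longitudinal component of eq (Fb)
    FR = kv(1, zeta)            # radial component, ~ 1/zeta at small R
    print(f"R = {R*1e9:7.0f} nm: |F_z| = {Fz:.3e}, |F_R| = {FR:.3e}")
\end{verbatim}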
It is convenient to express the EELS probability in terms of the $3\times3$ electromagnetic Green tensor $G(\rb,\rb',\omega)$, implicitly defined by the equation
\begin{align}
&\nabla \times \nabla \times G(\rb,\rb',\omega)-\frac{\omega^2}{c^2}\epsilon(\rb,\omega) G(\rb,\rb',\omega) \nonumber\\
&=-\frac{1}{c^2}\delta(\rb -\rb')
\label{Green}
\end{align}
for structures characterized by a local, frequency- and position-dependent permittivity $\epsilon(\rb,\omega)$ (and by an analogous relation for nonlocal media \cite{paper357}) and allowing us to obtain the induced field created by an external current $\jb^{\rm ext}(\rb,\omega)$ as
\[\Eb^{\rm ind}(\rb,\omega)=-4\pi{\rm i}\omega\int d^3\rb'\,G^{\rm ind}(\rb,\rb',\omega)\cdot\jb^{\rm ext}(\rb',\omega).\]
The classical current associated with the electron is $\jb^{\rm ext}(\rb,\omega)=-e\,\zz\,\delta(\Rb-\Rb_0)\,\ee^{{\rm i}\omega z/v}$, which upon insertion into the above expression, in combination with eq\ \ref{EELS}, yields
\begin{align}
\Gamma_{\rm EELS}(\Rb_0,\omega)=\frac{4e^2}{\hbar}&\int_{-\infty}^\infty dz\int_{-\infty}^\infty dz'\;\cos\left[\omega(z'-z)/v\right]\nonumber\\
&\times{\rm Im}\{-G_{zz}(\Rb_0,z,\Rb_0,z',\omega)\},
\label{EELSQM}
\end{align}
where we have replaced $G^{\rm ind}$ by $G$ because $G-G^{\rm ind}$ produces a vanishing contribution to the $z$ integrals as a consequence of the kinematical mismatch between electrons and photons in free space \cite{paper149}. We remark on the quantum nature of this result, which is revealed by the presence of $\hbar$, introduced through the lost energy $\hbar\omega$ in the denominator as a semiclassical prescription to convert the energy loss into a probability. This is also corroborated by a first-principles quantum-electrodynamics derivation of eq\ \ref{EELSQM}, which we offer in detail in the Appendix under the assumption that the sample is initially prepared at zero temperature.
An extension of this analysis to samples in thermal equilibrium at finite temperature $T$ allows us to relate the EELS probability to the zero-temperature result in eqs\ \ref{EELS} and \ref{EELSQM} as
\begin{align}
\Gamma_{\rm EELS}^T(\Rb_0,\omega)=\Gamma_{\rm EELS}(\Rb_0,|\omega|)\,\left[n_T(\omega)+1\right]\,{\rm sign}(\omega)
\label{EELST}
\end{align}
(with $\omega<0$ and $\omega>0$ indicating energy gain and loss, respectively), also derived in detail from first principles in the Appendix.
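Equation\ \ref{EELST} encodes detailed balance: with $n_T$ the Bose--Einstein distribution, the ratio of gain ($\omega<0$) to loss ($\omega>0$) probabilities at a given $|\omega|$ is $n_T(|\omega|)/[n_T(|\omega|)+1]=\ee^{-\hbar|\omega|/k_{\rm B}T}$, so thermally populated modes produce sizeable gain peaks only at low frequencies. A minimal Python sketch (illustrative, assuming $T=300$\,K):
\begin{verbatim}
import numpy as np

kT = 0.025852   # k_B * T at T = 300 K (eV)

for hw in (0.01, 0.1, 0.5):   # excitation energies (eV)
    nT = 1.0 / (np.exp(hw / kT) - 1.0)   # Bose-Einstein occupation
    # eq (EELST): loss scales as nT + 1, gain as nT
    print(f"hw = {hw:4.2f} eV: nT = {nT:.3e}, gain/loss = {nT/(nT+1):.3e}")
# 10 meV modes show strong thermal gain; 0.5 eV modes essentially none
\end{verbatim}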
The far-field components of the induced field give rise to CL, with an emission probability that can be obtained from the radiated energy ({\it i.e.}, the time- and angle-integrated far-field Poynting vector). The classical field produced by the external electron source is thus naturally divided into frequency components, so an emission probability (photons per incident electron) is obtained by dividing by $\hbar\omega$, again revealing the quantum nature of the emission, which is also reflected in how individual photon counts are recorded at the spectrometer in experiments. More precisely, using the external electron current and the Green tensor defined above, the electric field produced by the electron at a position $\rb_\infty$ far away from the sample can be written as
\begin{align}
\Eb^{\rm CL}(\rb_\infty,\omega)&=4\pi{\rm i} e\omega \int_{-\infty}^\infty dz'\,\ee^{{\rm i}\omega z'/v}\,G(\rb_\infty,\Rb_0,z',\omega)\cdot \zz \nonumber\\
&\xrightarrow[\omega r_\infty/c\to\infty]{} \frac{\ee^{{\rm i}\omega r_\infty/c}}{r_\infty}\,\fb_{\rr_\infty}^{\rm CL}(\Rb_0,\omega),
\label{CLf}
\end{align}
where $\fb_{\rr_\infty}^{\rm CL}(\Rb_0,\omega)$ is the far-field amplitude. From the aforementioned analysis of the Poynting vector, we find that the CL emission probability reduces to
\[\Gamma_{\rm CL}=\int d^2\Omega_{\rr_\infty}\int_0^\infty d\omega\,\frac{d\Gamma_{\rm CL}}{d\Omega_{\rr_\infty}d\omega},\]
where \cite{paper149}
\begin{align}
\frac{d\Gamma_{\rm CL}}{d\Omega_{\rr_\infty}d\omega}=\frac{c}{4\pi^2\hbar\omega}\left|\fb_{\rr_\infty}^{\rm CL}(\Rb_0,\omega)\right|^2
\label{anothereq}
\end{align}
is the angle- and frequency-resolved probability.
A large number of EELS and CL experiments have been successfully explained using eq\ \ref{EELS} and the approach outlined above for CL by describing the materials in terms of their frequency-dependent local dielectric functions and finding $\Eb^{\rm ind}$ through numerical electromagnetic solvers, including the boundary-element method \cite{paper014,paper040,HK05,HDK09,HT12,paper197} (BEM) (see open-access implementation in ref\ \citenum{HT12}), the discrete-dipole approximation \cite{GH10,MGY12} (DDA), multiple scattering approaches \cite{paper025,TMH16}, and finite difference methods \cite{MNH10,CML15,DCP12}. Analytical expressions for the EELS and CL probabilities are also available for simple geometries, such as homogeneous planar surfaces, anisotropic films, spheres, cylinders, and combinations of these elements (see ref\ \citenum{paper149} for a review of analytical results), recently supplemented by an analysis of CL from a sphere for penetrating electron trajectories \cite{paperxx3}. It is instructive to examine the simple model of a sample that responds through an induced electric dipole, which admits the closed-form expressions
\begin{widetext}
\begin{align}
\left[\begin{matrix}
\Gamma_{\rm EELS}(\omega) \\
\Gamma_{\rm CL}(\omega)
\end{matrix}\right]
&=\frac{4e^2\omega^2}{\pi\hbar v^4\gamma^2}\;\times\left[\begin{matrix}
{\rm Im}\{\Fb^*(\Rb_0,\omega)\cdot\bar{\bar{\alpha}}(\omega)\cdot\Fb(\Rb_0,\omega)\} \\
(2\omega^3/3c^3)\left|\bar{\bar{\alpha}}(\omega)\cdot\Fb(\Rb_0,\omega)\right|^2
\end{matrix}\right] \nonumber\\
&=\frac{4e^2\omega^2}{\pi\hbar v^4\gamma^2}\;\left[K_1^2(\omega R_0/v\gamma)+\frac{1}{\gamma^2}K_0^2(\omega R_0/v\gamma)\right] \times\left[\begin{matrix}
{\rm Im}\{\alpha(\omega)\} \\
(2\omega^3/3c^3)\left|\alpha(\omega)\right|^2
\end{matrix}\right]
\label{EELSCLdip}
\end{align}
\end{widetext}
for the EELS and CL probabilities, where $\bar{\bar{\alpha}}(\omega)$ is the frequency-dependent $3\times3$ polarizability tensor, and the last equation applies to isotropic particles with $\bar{\bar{\alpha}}=\alpha(\omega)\mathcal{I}_3$. We remark that these results are quantitatively accurate even for large particles ({\it e.g.}, dielectric spheres sustaining Mie modes), provided we focus on spectrally isolated electric dipole modes \cite{paper149}. The above-mentioned properties of the $K_m$ functions readily reveal that the interaction strength diverges in the $R_0\rightarrow0$ limit ({\it i.e.}, when the e-beam intersects the point dipole). However, the finite physical sizes of the particle and the e-beam width prevent this divergence in practice. (Incidentally, the divergence also disappears in a quantum-mechanical treatment of the electron, which relates small $R_0$'s to large momentum transfers, limited to a finite cutoff imposed by kinematics.) By virtue of the optical theorem \cite{V1981} ({\it i.e.}, ${\rm Im}\{-1/\alpha(\omega)\}\ge2\omega^2/3c^3$), we have $\Gamma_{\rm EELS}\ge\Gamma_{\rm CL}$, as expected from the fact that emission events constitute a subset of all energy losses. Additionally, both EELS and CL share the same spatial dependence for dipolar modes, contained in the function $\Fb(\Rb_0,\omega)$ (eq\ \ref{Fb}).
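The inequality $\Gamma_{\rm EELS}\ge\Gamma_{\rm CL}$ is straightforward to verify numerically from eq\ \ref{EELSCLdip}. The Python sketch below (an illustration with assumed parameters: a Lorentzian polarizability resonant at 1.5\,eV, supplemented by the radiative correction $1/\alpha=1/\alpha_B-2{\rm i}\omega^3/3c^3$ that enforces the optical theorem) compares the spectral shapes of the two probabilities up to their common prefactor:
\begin{verbatim}
import numpy as np

hbar = 6.582119569e-16   # eV s
c = 2.99792458e8         # m/s

w = np.linspace(0.5, 3.0, 500) / hbar    # spectral window (rad/s)
w0 = 1.5 / hbar                          # resonance frequency (assumed)
g = 0.05 / hbar                          # absorption width (assumed)
A = (100e-9)**3 * w0 * g                 # oscillator strength (assumed)

alpha_B = A / (w0**2 - w**2 - 1j * w * g)                 # bare Lorentzian (m^3)
alpha = 1.0 / (1.0 / alpha_B - 2j * w**3 / (3 * c**3))    # optical-theorem form

eels = np.imag(alpha)                            # ~ Gamma_EELS (common prefactor)
cl = (2 * w**3 / (3 * c**3)) * np.abs(alpha)**2  # ~ Gamma_CL (same prefactor)

assert np.all(eels >= cl)    # emission is a subset of all losses
print(f"max Gamma_CL / Gamma_EELS = {np.max(cl / eels):.2f}")
\end{verbatim}
The ratio approaches unity only when radiative damping dominates over absorption in the particle response.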
As we show below, the transition probabilities are independent of the electron wave function, but a dependence is obtained in the partial electron inelastic signal when a selection is made on the incident and transmitted (final) wave functions ($\psi_i$ and $\psi_f$). Assuming a factorization of these wave functions as $\psi_{i|f}(\rb)\propto\psi_{i|f\perp}(\Rb)\,\ee^{{\rm i} q_{i|f,z}z}/\sqrt{L}$, where $L$ is the quantization length along the beam direction, and integrating over longitudinal degrees of freedom (the $z$ coordinate), the state-selected transition probability depends on the transverse components as (see self-contained derivation in the Appendix)
\begin{widetext}
\begin{align}
\Gamma_{fi}(\omega)= \frac{4e^2}{\hbar} &\int d^2\Rb\int d^2\Rb' \;
\psi_{f\perp}(\Rb)\psi_{i\perp}^*(\Rb)\psi_{f\perp}^*(\Rb')\psi_{i\perp}(\Rb') \nonumber\\
&\times\int_{-\infty}^\infty dz\int_{-\infty}^\infty dz'\;\cos\left[\omega(z'-z)/v\right]\,{\rm Im}\left\{-G_{zz}(\rb,\rb',\omega)\right\}, \label{Gammafi}
\end{align}
\end{widetext}
where $G(\rb,\rb',\omega)$ is the electromagnetic Green tensor defined in eq\ \ref{Green}. Reassuringly, when summing $\Gamma_{fi}(\omega)$ over a complete basis set of plane waves for $\psi_{f\perp}(\Rb)$, we find $\sum_f\Gamma_{fi}(\omega)=\int d^2\Rb\,|\psi_{i\perp}(\Rb)|^2\,\Gamma_{\rm EELS}(\Rb,\omega)$, so we recover eq\ \ref{EELSQM} in the limit of a tightly focused incident beam ({\it i.e.}, $|\psi_{i\perp}(\Rb)|^2\approx\delta(\Rb-\Rb_0)$). Interestingly, the transition probability only depends on the product of transverse wave functions $\psi_{f\perp}(\Rb)\psi_{i\perp}^*(\Rb)$. The possibility of selecting sample excitations by shaping this product has been experimentally confirmed by preparing the incident electron wave function in symmetric and antisymmetric combinations that excite dipolar or quadrupolar plasmons in a sample when the electrons are transmitted with vanishing lateral wave vector \cite{GBL17} ({\it i.e.}, for uniform $\psi_{f\perp}$ with $\qb_{f\perp}=0$). Similarly, under parallel beam illumination (uniform $\psi_{i\perp}$ with $\qb_{i\perp}=0$), angle-resolved Fourier plane imaging provides maps of transition probabilities to final states $\psi_{f\perp}\propto\ee^{{\rm i}\qb_{f\perp}\cdot\Rb}$ of well-defined lateral momentum $\hbar\qb_{f\perp}$; actually, this approach is widely used to measure dispersion relations in planar films \cite{PSV1975,SSB19} (see Figures\ \ref{Fig1}a and \ref{Fig2}f), while a recent work tracks electron deflections produced by interaction with localized plasmons \cite{KGS18}. Analogously, the excitation of chiral sample modes by an incident electron plane wave produces vortices in the inelastically transmitted signal, an effect that has been proposed as a way to discriminate different enantiomers with nanoscale precision \cite{paper243}.
\subsection{Stimulated Free-Electron Interaction with Optical Fields} Under intense laser irradiation in UTEM setups, coupling to the optical near field in the sample region dominates the interaction with the electron. For typical conditions in electron microscopes, we can assume the electron to always consist of momentum components that are tightly focused around a central value $\qb_0$ parallel to the $z$ axis (nonrecoil approximation). This allows us to recast the Dirac equation into an effective Schr\"odinger equation \cite{PZ12},
\[(\partial_t+v\partial_z)\phi(\rb,t)=-\frac{{\rm i}}{\hbar}\,\hat{\mathcal{H}}_1(\rb,t)\,\phi(\rb,t),\]
where we separate a slowly-varying envelope $\phi$ from the fast oscillations associated with the central energy $E_0$ and wave vector $\qb_0$ in the electron wave function \[\psi(\rb,t)=\ee^{{\rm i} q_0z-{\rm i} E_0t/\hbar}\phi(\rb,t)\] and we adopt the minimal-coupling light-electron interaction Hamiltonian \cite{paperarxiv4}
\[\hat{\mathcal{H}}_1=\frac{ev}{c}A_z+\frac{e^2}{2m_{\rm e} c^2\gamma}\left(A_x^2+A_y^2+\frac{1}{\gamma^2}A_z^2\right),\]
written in terms of the optical vector potential $\Ab(\rb,t)$ in a gauge with vanishing scalar potential without loss of generality. The nonrecoil approximation also implies that the initial electron wave function can be written as \[\psi_i(\rb,t)=\ee^{{\rm i} q_0z-{\rm i} E_0t/\hbar}\phi_i(\rb-\vb t),\] where $\phi_i$ defines a smooth invariant profile depending only on the rest-frame coordinates $\rb-\vb t$. Assuming that this behavior is maintained within the interaction region, the full electron wave function admits the solution \cite{T17}
\begin{align}
\psi(\rb,t)=\psi_i(\rb,t)\;\exp\left[\frac{-{\rm i}}{\hbar}\int_{-\infty}^tdt'\,\hat{\mathcal{H}}_1(\rb-\vb t+\vb t',t')\right].
\nonumber
\end{align}
We focus for simplicity on monochromatic light of frequency $\omega$, for which the vector potential can be written as $\Ab(\rb,t)=(2c/\omega){\rm Im}\left\{\Eb(\rb)\ee^{-{\rm i}\omega t}\right\}$, where $\Eb(\rb)$ is the optical electric field amplitude contributed by both the external laser and the components scattered by the sample. We are interested in evaluating the electron wave function at a long time after interaction, such that $\psi_i$ vanishes in the sample region. In this limit, combining the above results, we find that the transmitted wave function reduces to
\begin{align}
\psi(\rb,t)=&\psi_i(\rb,t)\;\ee^{{\rm i}\varphi(\Rb)}\nonumber\\
&\times\mathcal{P}_0[\beta(\Rb),\omega,z-vt]\;\mathcal{P}_0[\beta'(\Rb),2\omega,z-vt],
\label{psiPINEM}
\end{align}
where
\begin{align}
\mathcal{P}_0(\beta,\omega,z)&=\exp\left(-\beta\ee^{{\rm i}\omega z/v}+\beta^*\ee^{-{\rm i}\omega z/v}\right) \nonumber\\
&=\sum_{l=-\infty}^\infty J_l(2|\beta|)\,\ee^{{\rm i} l\,{\rm arg}\{-\beta\}}\,\ee^{{\rm i} l\omega z/v}
\label{PPINEM}
\end{align}
describes the above-mentioned energy comb, associated with the absorption or emission of different numbers $l$ of photons of frequency $\omega$ by the electron, as ruled by the coupling coefficient
\begin{align}
\beta(\Rb)=\frac{e}{\hbar\omega}\int_{-\infty}^\infty dz\;E_z(\rb)\,\ee^{-{\rm i}\omega z/v},
\label{beta}
\end{align}
which is determined by the optical field component along the e-beam direction. The rightmost expression in eq\ \ref{PPINEM} is derived by applying the Jacobi-Anger expansion $\ee^{{\rm i} u\sin\theta}=\sum_lJ_l(u)\ee^{{\rm i} l\theta}$ (eq\ 9.1.41 of ref\ \citenum{AS1972}) with $u=2|\beta|$ and $\theta={\rm arg}\{-\beta\}+\omega z/v$. The two other factors accompanying the incident wave function in eq\ \ref{psiPINEM} are produced by the ponderomotive force ({\it i.e.}, the $A^2$ term in the coupling Hamiltonian $\hat{\mathcal{H}}_1$). Namely, a phase
\begin{align}
\varphi(\Rb)=\frac{-1}{\mathcal{M}\omega^2}\int_{-\infty}^\infty dz\;\left[|E_x(\rb)|^2+|E_y(\rb)|^2+\frac{1}{\gamma^2}|E_z(\rb)|^2\right],
\label{phase}
\end{align}
where $\mathcal{M}=m_{\rm e}\gamma v/c\alpha$ plays the role of an effective mass and $\alpha\approx1/137$ is the fine structure constant; and an extra energy comb of double frequency given by eq\ \ref{PPINEM} with $\omega$ substituted by $2\omega$ and $\beta$ by
\begin{align}
\beta'(\Rb)=&\frac{-{\rm i}}{2\mathcal{M}\omega^2} \label{betap}\\
&\times\int_{-\infty}^\infty dz\;\left[E_x^2(\rb)+E_y^2(\rb)+\frac{1}{\gamma^2}E_z^2(\rb)\right]\,\ee^{-2{\rm i}\omega z/v}.
\nonumber
\end{align}
We remark that the multiplicative factors in eq\ \ref{psiPINEM} depend on the transverse coordinates $\Rb=(x,y)$. In the absence of a scattering structure, $\beta$ and $\beta'$ vanish, yielding $\mathcal{P}_0=1$ as a result of the aforementioned electron-photon kinematic mismatch, although a spatially modulated ponderomotive phase $\varphi$ can still be produced, for example by interfering two counter-propagating lasers, giving rise to electron diffraction (the Kapitza-Dirac effect \cite{KD1933,FAB01,B07,TL19}). From an applied viewpoint, this phenomenon enables optical sculpting of e-beams in free space \cite{MJD10,SAC19,ACS20,paperarxiv4}.
The relative strength of $A^2$ interactions can be estimated from the ratio $|\beta'/\beta|\sim |\Eb|/E_{\rm thres}$ (see eqs\ \ref{beta} and \ref{betap}), where $E_{\rm thres}=2m_{\rm e}\gamma v\omega/e$ ($\approx5\times10^{12}\,$V/m for $\hbar\omega=1.5\,$eV and 100\,keV electrons) defines a threshold field amplitude that exceeds by $\sim4$ orders of magnitude the typical values used in PINEM experiments \cite{FES15,paper311}, although such fields should be reachable using few-cycle laser pulses in combination with nonabsorbing high-index dielectric structures.
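The quoted threshold is easy to verify; a minimal Python estimate (using the 100\,keV and $\hbar\omega=1.5$\,eV parameters quoted above) is:
\begin{verbatim}
import numpy as np

me = 9.1093837015e-31    # electron mass (kg)
qe = 1.602176634e-19     # elementary charge (C)
hbar = 1.054571817e-34   # J s
c = 2.99792458e8         # m/s

v, gamma = 0.5482 * c, 1.1957   # 100 keV electron
omega = 1.5 * qe / hbar          # 1.5 eV photon (rad/s)

print(f"E_thres = {2 * me * gamma * v * omega / qe:.1e} V/m")  # ~5e12 V/m
\end{verbatim}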
Neglecting $A^2$ corrections, the remaining PINEM factor trivially satisfies the relation $\mathcal{P}_0(\beta_1,\omega,z)\times\mathcal{P}_0(\beta_2,\omega,z)=\mathcal{P}_0(\beta_1+\beta_2,\omega,z)$ (see eq\ \ref{PPINEM}), so that the effect of two simultaneous or consecutive PINEM interactions with mutually coherent laser pulses at the same photon frequency is equivalent to a single one in which the coupling coefficient is the sum of the individual coupling coefficients, as neatly demonstrated in double-PINEM experiments \cite{EFS16}. Additionally, $\beta$ imprints a laterally dependent phase $l\,{\rm arg}\{-\beta(\Rb)\}$ on the wave function component associated with each inelastic electron sideband, where $l$ labels the net number of exchanged photons; this effect has been experimentally verified through the observation of transverse linear \cite{paper311,FYS20} and angular \cite{paper332} momentum transfers to the electron (Figure\ \ref{Fig4}d,g), and it has been predicted to produce electron diffraction by plasmon standing waves in analogy to the Kapitza-Dirac effect \cite{paper272}.
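Both the normalization of the PINEM comb and the additivity of coupling coefficients can be checked numerically from eq\ \ref{PPINEM}: the sideband amplitudes of the product $\mathcal{P}_0(\beta_1)\mathcal{P}_0(\beta_2)$ are the convolution of the individual amplitude sets and must coincide with those of $\mathcal{P}_0(\beta_1+\beta_2)$. A Python sketch (arbitrary illustrative values of $\beta_1$ and $\beta_2$):
\begin{verbatim}
import numpy as np
from scipy.special import jv   # Bessel functions J_l

def sideband_amps(beta, lmax):
    """Amplitudes J_l(2|beta|) exp(i l arg(-beta)), l = -lmax..lmax (eq PPINEM)."""
    l = np.arange(-lmax, lmax + 1)
    return jv(l, 2 * abs(beta)) * np.exp(1j * l * np.angle(-beta))

b1, b2 = 1.3 * np.exp(0.4j), 0.7 * np.exp(-1.1j)   # assumed couplings
a12 = np.convolve(sideband_amps(b1, 60), sideband_amps(b2, 60))
a = sideband_amps(b1 + b2, 120)

print(np.sum(np.abs(a)**2))        # ~1: the comb preserves normalization
print(np.max(np.abs(a12 - a)))     # ~1e-15: P0(b1) P0(b2) = P0(b1 + b2)
\end{verbatim}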
The Schr\"odinger equation mentioned at the beginning of this section neglects the effect of recoil, which can substantially affect the electron over long propagation distances $d$ beyond the PINEM interaction region. Incidentally, recoil can even manifest within the interaction region if it spans a relatively large path length. Neglecting again $A^2$ terms, the leading longitudinal recoil correction results in the addition of an $l$-dependent phase $-2\pi l^2d/z_T$ to each term of the sum in eq\ \ref{PPINEM}, where
\[z_T=\frac{4\pi m_{\rm e} v^3\gamma^3}{\hbar\omega^2}\]
is a Talbot distance ({\it e.g.}, $z_T\approx159\,$mm for $\hbar\omega=1.5\,$eV and 100\,keV electrons) that indeed increases with kinetic energy. More precisely, the electron wave function becomes $\psi(\rb,t)=\psi_i(\rb,t)\,\mathcal{P}_d[\beta(\Rb),\omega,z-vt]$, where
\begin{align}
\mathcal{P}_d(\beta,\omega,z)=\sum_{l=-\infty}^\infty J_l(2|\beta|)\,\ee^{{\rm i} l\,{\rm arg}\{-\beta\}+{\rm i} l\omega z/v-2\pi{\rm i} l^2d/z_T}.
\label{PPINEMd}
\end{align}
We remark that this result is valid if we neglect ponderomotive forces and assume the e-beam to be sufficiently well collimated as to dismiss lateral expansion during propagation along the distance $d$. We also assume that $\psi_i$ is sufficiently monoenergetic as to dismiss its drift along $d$. Different $l$ components move with different velocities, resulting in a temporal compression of the electron wave function \cite{SCI08} that has been demonstrated to reach the attosecond regime \cite{KML17,PRY17,MB18_2,MB18,SMY19,RTN20,MB20}.
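The compression mechanism is easy to visualize by summing the series in eq\ \ref{PPINEMd}. The Python sketch below (illustrative assumed parameters: 100\,keV electrons, $\hbar\omega=1.5$\,eV, real coupling $\beta=4$) first reproduces the quoted Talbot distance and then shows how the initially uniform probability density ($|\mathcal{P}_0|^2=1$) develops strong temporal bunching after propagation:
\begin{verbatim}
import numpy as np
from scipy.special import jv

hbar_J, hbar_eV = 1.054571817e-34, 6.582119569e-16
me, c = 9.1093837015e-31, 2.99792458e8
v, gamma = 0.5482 * c, 1.1957          # 100 keV electron
omega = 1.5 / hbar_eV                  # rad/s

zT = 4 * np.pi * me * v**3 * gamma**3 / (hbar_J * omega**2)
print(f"Talbot distance zT = {zT*1e3:.0f} mm")   # ~159 mm

beta, lmax = 4.0, 40
l = np.arange(-lmax, lmax + 1)
A = jv(l, 2 * beta) * (-1.0)**l        # amplitudes with arg(-beta) = pi

zz = np.linspace(0, 2 * np.pi * v / omega, 400)   # one optical period of z - vt
for d in (0.0, 0.01 * zT, 0.03 * zT):
    ph = np.exp(1j * omega / v * np.outer(zz, l) - 2j * np.pi * l**2 * d / zT)
    dens = np.abs(ph @ A)**2           # |P_d|^2 along the pulse train
    print(f"d = {d/zT:4.2f} zT: max/min density = {dens.max()/dens.min():.1f}")
\end{verbatim}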
The above results refer to coherent laser illumination, but additional possibilities are opened by using quantum light instead, and in particular, we have predicted that the electron spectra resulting from PINEM interaction with optical fields carry direct information on the light statistics \cite{paper339} ({\it e.g.}, the second-order autocorrelation function $g^{(2)}$). Additionally, temporal electron pulse compression can be accelerated using phase-squeezed light (see Figure\ \ref{Fig7}d below), while the electron density matrix acquires nontrivial characteristics with potential application in customizing its eventual interaction with a sample \cite{paper360}.
The extension of the above results to multicolor illumination opens additional possibilities, with the linear $A$ term in $\hat{\mathcal{H}}_1$ producing multiplicative PINEM factors (one per light frequency) that lead to asymmetric electron spectra \cite{PRY17}. Also, the ponderomotive-force $A^2$ term introduces frequency-sum and frequency-difference PINEM factors, which in free space, with lasers arranged under phase-matching propagation directions, can give rise to energy combs similar to PINEM through stimulated Compton scattering \cite{KSH18}; this effect, combined with free-space propagation, has been exploited to achieve attosecond electron compression without requiring material coupling structures \cite{KES17}.
\subsection{Relation between PINEM and CL} In CL, the electron acts as a source from which energy is extracted to produce light emission, whereas PINEM is just the opposite: an external light source exchanges energy with the electron. It is thus plausible that a relation can be established between the two types of processes if the sample exhibits a reciprocal response, so that the electromagnetic Green tensor satisfies the property $G_{aa'}(\rb,\rb',\omega)=G_{a'a}(\rb',\rb,\omega)$, where $a$ and $a'$ denote Cartesian components. To explore this idea, we start from the PINEM coupling coefficient defined in eq\ \ref{beta} and consider far-field illumination from a well-defined direction $\rr_\infty$, as produced by an external distant dipole $\pb^{\rm ext}\perp\rr_\infty$ at the laser source position $\rb_\infty$. Using the Green tensor to relate this dipole to the electric field as $\Eb(\rb)=-4\pi\omega^2\,G(\rb,\rb_\infty,\omega)\cdot\pb^{\rm ext}$, we find
\begin{align}
\beta(\Rb)=-\frac{4\pi e\omega}{\hbar}\sum_a p_a^{\rm ext} \int_{-\infty}^\infty dz\;G_{za}(\rb,\rb_\infty,\omega)\,\ee^{-{\rm i}\omega z/v}.
\nonumber
\end{align}
In the absence of a sample, the external laser field is obtained from the far-field limit of the free-space Green tensor, giving rise to an external plane wave of electric field $\Eb(\rb)=\Eb^{\rm ext}\ee^{{\rm i}\kb\cdot\rb}$ with wave vector $\kb=-\rr_{\infty}\omega/c$ and amplitude $\Eb^{\rm ext}=\left(\ee^{{\rm i}\omega r_\infty/c}/r_\infty\right)\,(\omega^2/c^2)\,\pb^{\rm ext}$, which allows us to recast the coupling coefficient into
\begin{align}
&\beta(\Rb)=\frac{{\rm i} c^2}{\hbar\omega^2}\sum_a E_a^{\rm ext} \label{betaCL1}\\
&\times\left[4\pi{\rm i} e\omega\frac{r_\infty}{\ee^{{\rm i}\omega r_\infty/c}}\int_{-\infty}^\infty dz\;G_{az}(\rb_\infty,\rb,\omega)\,\ee^{-{\rm i}\omega z/v}\right],
\nonumber
\end{align}
where we have used the noted reciprocity property. Now, we identify the expression inside square brackets as the CL far-field amplitude by comparison to eq\ \ref{CLf}. Finally, we find
\begin{align}
\beta(\Rb)=\frac{{\rm i} c^2}{\hbar\omega^2}\;\tilde{\fb}_{\rr_\infty}^{\rm CL}(\Rb,\omega)\cdot\Eb^{\rm ext},
\label{betaCL2}
\end{align}
where the tilde in $\tilde{\fb}_{\rr_\infty}^{\rm CL}(\Rb,\omega)$ indicates that it has to be calculated for an electron moving with opposite velocity ({\it i.e.}, $-\vb$ instead of $\vb$; {\it cf.} $\ee^{\pm{\rm i}\omega z/v}$ factors in eqs\ \ref{CLf} and \ref{betaCL1}). Equation\ \ref{betaCL2} establishes a direct relation between PINEM and CL: the coupling coefficient in the former, for far-field plane-wave illumination from a given direction $\rr_\infty$ ({\it i.e.}, light propagating toward $-\rr_\infty$), is readily obtained from the electric far-field amplitude of CL light emitted toward $\rr_\infty$, but with the electron velocity set to $-\vb$ instead of $\vb$. A recent study has partially verified this relation by exploring the spatial characteristics of EELS, CL, and PINEM for the same single gold nanostar \cite{paperarxiv7}. For completeness, we provide the expression
\[\beta(\Rb)=\frac{2{\rm i} e\omega}{\hbar v^2\gamma}\;\alpha(\omega)\,\Fb^*(\Rb,\omega)\cdot\Eb^{\rm ext}\]
obtained for an isotropic dipolar scatterer (see eqs\ \ref{Fb} and \ref{EELSCLdip}) under continuous-wave illumination conditions.
The high degree of control over the free-electron wave function embodied by the above developments opens exciting opportunities to explore new physics and applications. However, before presenting some perspectives on these possibilities, we discuss in more detail the role of the electron wave function in the interaction with optical sample modes.
\section{Quantum and Classical Effects Associated with the Free-Electron Wave Function}
Like for any elementary particle, the wave nature of free electrons manifests in interference phenomena, observed in double-slit experiments and in diffraction by periodic lattices, which are typical configurations used to image material structures and their excitation modes. Electron interference has been extensively exploited in TEMs to this end \cite{HS1972,B94,E96,E03,EB05,B06,MD09,SKK17}, as well as in photoelectron diffraction \cite{F10_2}, low-energy electron diffraction \cite{P1974}, and LIED \cite{WPL16}. Shaping and probing the electron wave function lies at the heart of these techniques, in which the electrons are scattered elastically, and consequently, no final sample excitations are produced. Likewise, interference associated with the creation of sample excitations by e-beams is also expected to show up, as demonstrated in so-called inelastic electron holography \cite{LF00,H08_2}.
It should be noted that electron beam spectroscopies involve the creation of excitations in the sample by one electron at a time when using typical beam currents $\lesssim1\,$nA ({\it i.e.}, $\lesssim6$ electrons per nanosecond). Such relatively low currents are employed to avoid Coulomb electron-electron repulsion and the resulting beam degradation and energy broadening, which are detrimental effects for spatially resolved EELS, although they can still be tolerated in diffraction experiments relying on electron bunches to retrieve structural information \cite{BHM20}, and also in EEGS based on depletion of the ZLP with few-meV energy resolution obtained by tuning the laser frequency \cite{paper306}. Understandably, the quantum character of individual electrons has been explored to pursue applications such as cavity-induced quantum entanglement \cite{K19,ZSF21}, qubit encoding \cite{RML20}, and single-photon generation \cite{paper180}.
Now, a recurrent question arises \cite{RH1977,paper149,GBL17,RKT19,PG19,GY20,paperarxiv3,paperarxiv6,paper360}: can the excitation efficiency be modulated by shaping the electron wave function? For single monoenergetic electrons, nonretarded theory was used to show that the excitation probability reduces to that produced by a classical point charge, averaged over the intensity of the transverse beam profile \cite{RH1977}. This result was later generalized to include retardation \cite{paper149}, and the predicted lack of dependence on the transverse electron wave function was experimentally corroborated for Smith-Purcell radiation emission \cite{RKT19}. Some dependence can however be observed in EELS by collecting scattered electrons only within a partial angular range, as neatly demonstrated by Ritchie and Howie \cite{RH1977} in the nonretarded limit and later generalized to include retardation \cite{paper149}. Specifically, for transmission along the center of the Fourier plane in an electron microscope, wave function shaping was experimentally demonstrated to actively select plasmon losses of dipolar or quadrupolar symmetry in metallic nanowires \cite{GBL17}.
The dependence on the longitudinal wave function is not as clear, and for example, a recent report \cite{GY20} based on a semiclassical description of the electric field generated by free electrons claims that the probability of exciting a sample initially prepared in the ground state could be enhanced for an individual electron distributed along a periodic density profile. However, this conclusion is inconsistent with a fully quantum-mechanical treatment of the electron-sample system (see detailed analysis below). Importantly, the same study claims that $N$ electrons arriving at random times produce an overall probability $\propto N^2$ when they are previously PINEM-modulated by the same laser, an effect that is indeed supported by a quantum description of the electrons, as we show below. In addition, a wave function dependence should be observed for interaction with samples prepared in a coherent superposition of ground and excited states that is phase-locked with respect to the electron wave function, as experimentally illustrated in double-PINEM experiments \cite{EFS16} (see below). While PINEM commonly relies on bosonic sample modes, an extension of this effect to two-level systems has also been discussed in recent theoretical works \cite{PG19,ZSF21}.
In this section, we elucidate the role of the electron wave function in the excitation of sample modes for any type of interaction with matter, photons, and polaritons. We derive analytical expressions from first principles for the excitation probability produced by single and multiple electrons with arbitrarily shaped wave functions, based on which we conclude that the excitation by single electrons with the specimen prepared in any stationary state ({\it e.g.}, the ground state) can be described fully classically with the electron treated as a point particle, regardless of its wave function, apart from a trivial average over the transverse beam profile. In contrast, multiple electrons give rise to correlations between their respective wave functions, which enter through the electron probability densities, whereas phase information is completely erased. More precisely, the few-electron case (see analysis for two electrons below) reveals a clear departure from the classical point-particle picture, while in the limit of many electrons $N$, a classical description prevails, leading to an excitation probability $\propto N^2$ if they are bunched with a small temporal width relative to the optical period of the sampled excitation \cite{UGK98} or if their probability density is optically modulated with a common coherent light field \cite{NS1954,SH1969,UGK98,FFK1971,SCI08,GY20}. Crucially, these results follow from the nonrecoil approximation ({\it i.e.}, the fact that the electron velocity can be considered to be constant during the interaction), which accurately applies under common conditions in electron microscopy (small beam-energy spread and low excitation energies compared with the average electron energy). Our hope is that the present discussion clarifies current misunderstandings on the role of the electron wave function in inelastic scattering and provides simple intuitive rules to tackle configurations of practical interest.
\subsection{Lack of Wave-Function Dependence for a Single Electron} We first consider a free electron propagating in vacuum and interacting with arbitrarily shaped material structures. Without loss of generality, the wave function of this combined electron-sample system can be decomposed as
\begin{align}
|\psi(t)\rangle=\sum_{\qb n}\alpha_{\qb n}(t)\ee^{-{\rm i}(\varepsilon_\qb+\omega_n)t}|\qb n\rangle
\label{defpsi}
\end{align}
using a complete basis set of combined material (and possibly radiation) states $|n\rangle$ of energy $\hbar\omega_n$ and electron plane-wave states $|\qb\rangle$ of well-defined momentum $\hbar\qb$ and energy $\hbar\varepsilon_\qb$. The elements of this basis set are eigenstates of the noninteracting Hamiltonian $\hat{\mathcal{H}}_0$, so they satisfy $\hat{\mathcal{H}}_0|\qb n\rangle=\hbar(\varepsilon_\qb+\omega_n)|\qb n\rangle$. This description is valid as long as no bound states of the electrons are involved. Under common conditions in electron microscopes, the states $|n\rangle$ describe excitations in the sample, including the emission of photons, but also undesired excitations in other parts of the microscope ({\it e.g.}, phonons in the electron source). For simplicity, we assume the electron to be prepared in a pure state $\sum_{\qb} \alpha^0_{\qb} |\qb \rangle$ and the sample in a stationary state $n=0$ prior to interaction ({\it i.e.}, $\alpha_{\qb n}(-\infty)=\delta_{n0}\alpha_\qb^0$, subject to the normalization condition $\sum_\qb|\alpha_\qb^0|^2=1$), with the understanding that the mentioned undesired excitations can later be accounted for by tracing over different incoherent realizations of the electron wave function in the beam.
By inserting eq\ \ref{defpsi} into the Schr\"odinger equation $(\hat{\mathcal{H}}_0+\hat{\mathcal{H}}_1)|\psi\rangle={\rm i}} \def\ee{{\rm e}\hbar\partial_t|\psi\rangle$, where the Hamiltonian $\hat{\mathcal{H}}_1$ describes electron-sample interactions, we find the equation of motion for $n\neq0$
\[{\rm i}\hbar\dot{\alpha}_{\qb n}=\sum_{\qb'n'}\ee^{{\rm i}(\varepsilon_\qb-\varepsilon_{\qb'}+\omega_n-\omega_{n'})t}\langle\qb n|\hat{\mathcal{H}}_1|\qb'n'\rangle\alpha_{\qb'n'}\]
for the expansion coefficients $\alpha_{\qb n}$. Now, the results presented in this section are a consequence of the following two assumptions, which are well justified for typical excitations probed in electron microscopy \cite{paper149}:
(i) {\it Weak Coupling.} The electron interaction with the sample is sufficiently weak as to neglect higher-order corrections to the excitation probability beyond the first order. This allows us to rewrite the equation of motion as ${\rm i}\hbar\dot{\alpha}_{\qb n}=\sum_{\qb'}\ee^{{\rm i}(\varepsilon_\qb-\varepsilon_{\qb'}+\omega_{n0})t}\langle\qb n|\hat{\mathcal{H}}_1|\qb'0\rangle\alpha^0_{\qb'}$ (with $\omega_{n0}=\omega_n-\omega_0$), which can be integrated in time to yield the solution
\begin{align}
\alpha_{\qb n}(\infty)=-\frac{2\pi{\rm i}}{\hbar}\sum_{\qb'}\delta(\varepsilon_\qb-\varepsilon_{\qb'}+\omega_{n0})\langle\qb n|\hat{\mathcal{H}}_1|\qb'0\rangle\alpha^0_{\qb'}
\label{alphainfinity}
\end{align}
for the wave function coefficients after interaction. We remark that $n=0$ can be the ground state or any excited state in the present derivation, as long as it is stationary.
(ii) {\it Nonrecoil Paraxial Approximation.} Electron beams feature small divergence angle ($\sim$ a few mrad) and low energy spread compared with the mean electron energy ({\it i.e.}, $\alpha_{\qb n}$ is negligible unless $|\qb-\qb_0|\ll q_0$, where $\hbar\qb_0$ is the central electron momentum). Additionally, we assume that the interaction with the sample produces wave vector components also satisfying $|\qb-\qb_0|\ll q_0$. This allows us to write the electron frequency difference as
\begin{align}
\varepsilon_\qb-\varepsilon_{\qb'}\approx\vb\cdot(\qb-\qb'),
\label{nonrecoil}
\end{align}
indicating that only momentum transfers parallel to the beam contribute to transfer energy to the sample \cite{paper149}. The nonrecoil approximation is generally applicable in the context of electron microscopy, unless the excitation energy is a sizeable fraction of the electron kinetic energy \cite{T20,WRM21}.
Putting these elements together and using the real-space representation of the electron states $\langle\rb|\qb\rangle=V^{-1/2}\,\ee^{{\rm i}\qb\cdot\rb}$ with quantization volume $V$ in eq\ \ref{alphainfinity}, we find that the probability that a single beam electron excites a sample mode $n$, obtained by tracing out the scattered electron degrees of freedom as $\Gamma_n^0=\sum_\qb|\alpha_{\qb n}(\infty)|^2$, reduces to (see Appendix)
\begin{align}
\Gamma_n^0=\int d^3\rb\;|\psi^0(\rb)|^2 \,|\tilde{\beta}_n(\Rb)|^2
\label{P0n}
\end{align}
where
\begin{align}
\psi^0(\rb)=V^{1/2}\int\frac{d^3\qb}{(2\pi)^3}\,\alpha^0_\qb\,\ee^{{\rm i}\qb\cdot\rb}
\label{psi0}
\end{align}
is the incident electron wave function,
\begin{align}
\tilde{\beta}_n(\Rb)=\frac{1}{\hbar v}\int_{-\infty}^\infty dz\;\ee^{-{\rm i}\omega_{n0}z/v}\langle0|\hat{\mathcal{H}}_1(\rb)|n\rangle
\label{betan}
\end{align}
is an electron-sample coupling coefficient that depends on the transverse coordinates $\Rb=(x,y)$, and we choose the beam direction along $\zz$. We note that this definition of $\tilde{\beta}_n$ coincides with previous studies in which $\hat{\mathcal{H}}_1$ describes electron-light PINEM interaction and $n$ refers to optical modes \cite{paper339,paper360}. Also, the PINEM coupling coefficient in eq\ \ref{beta} is obtained from eq\ \ref{betan} by multiplying it by the laser-driven amplitude associated with mode $n$ and summing over $n$.
We observe from eq\ \ref{P0n} that the excitation probability does not depend on the electron wave function profile along the beam direction $\zz$, because this enters just through an integral of the electron density along that direction. Additionally, the dependence on transverse directions $\Rb$ consists of a weighted average of the probability $|\tilde{\beta}_n(\Rb)|^2$ over the transverse profile of the beam intensity.
\subsection{Wave-Function Dependence in the Correlation Among Multiple Electrons} The above analysis can readily be extended to a beam bunch consisting of $N$ distinguishable electrons with incident wave functions $\psi^j(\rb)$ labeled by $j=0,\dots,N-1$. The probability of exciting a sample mode $n$ then reduces to (see detailed derivation in the Appendix)
\begin{align}
\Gamma_n^{\rm total}=\sum_j\int d^3\rb\;|\psi^j(\rb)|^2 \,|\tilde{\beta}_n(\Rb)|^2+\sum_{j\neq j'} Q_n^jQ_n^{j'*}, \label{PNn}
\end{align}
where
\begin{align}
&Q_n^j=\int d^2\Rb \; M_n^j(\Rb)\tilde{\beta}_n(\Rb), \label{Qj}\\
&M_n^j(\Rb)=\int_{-\infty}^\infty dz\;\ee^{{\rm i}\omega_{n0}z/v}\,|\psi^j(\rb)|^2. \label{Mnj}
\end{align}
The first term in eq\ \ref{PNn} corresponds to the sum of uncorrelated excitation probabilities produced by $N$ independent electrons, each of them expressed as a weighted average over the transverse electron density profile, just like for a single electron in eq\ \ref{P0n}. The second term accounts for two-electron correlations, in which the phase of the electron wave functions is also erased, but there is however a dependence on the electron probability densities through their Fourier transforms in eq\ \ref{Mnj}. Interestingly, the factor $|M_n^j(\Rb)|^2$ is in agreement with the result obtained for excitation with a classical charge distribution having the same profile as the electron probability density, which is well studied in the context of beam physics \cite{NS1954,SCI08}. Also, this factor has recently been identified as a measure of the degree of coherence of the electron in its interaction with mutually phase-locked external light \cite{paperarxiv3,paperarxiv6}. Obviously, these factors are bounded according to $|\int d^2\Rb\, M_n^j(\Rb)|\le1$, with the equal sign standing for any value of the excitation frequency $\omega_{n0}$ in the limit of point-particle electrons ({\it i.e.}, $|\psi^j(\rb)|^2=\delta(\rb-\rb_j)$), and also for a fixed $\omega_{n0}$ and its multiples if the electron probability density is periodically modulated as
\begin{align}
|\psi^j(\rb)|^2=|\psi^j_\perp(\Rb)|^2\,\sum_s b_{j,s}\,\delta\left(z-z_0-\frac{2\pi sv}{\omega_{n0}}\right)
\label{psicomb}
\end{align}
with arbitrary coefficients $b_{j,s}$ ({\it i.e.}, a train of temporally compressed pulses separated by a spatial period $2\pi v/\omega_{n0}$). Periodically modulated electrons with a limited degree of compression are currently feasible through strong PINEM interaction followed by free-space propagation.
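The behavior of the coherence factor in eq\ \ref{Mnj} for such density combs can be illustrated with Gaussian peaks of adjustable width. In the Python sketch below (illustrative only; the modulation period is matched to the excitation as in eq\ \ref{psicomb}), $|\int d^2\Rb\,M_n^j|\rightarrow1$ as the peaks are compressed:
\begin{verbatim}
import numpy as np

period = 1.0                   # peak spacing 2*pi*v/omega_n0 (arbitrary units)
w = 2 * np.pi / period         # omega_n0 / v in the same units
centers = np.arange(10) * period   # comb matched to the excitation (eq psicomb)

def M(sigma):
    """|int dz e^{i w z} rho(z)| for equal-weight normalized Gaussian peaks:
    each peak contributes e^{i w z_s} times e^{-(w sigma)^2 / 2}."""
    s = np.mean(np.exp(1j * w * centers))    # = 1 here (matched comb)
    return abs(s) * np.exp(-(w * sigma)**2 / 2)

for sigma in (0.01, 0.05, 0.2):   # peak width in units of the period
    print(f"sigma/period = {sigma:4.2f}: |M| = {M(sigma):.3f}")
# 0.998, 0.952, 0.454: strong compression is needed to approach |M| = 1
\end{verbatim}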
In the derivation of these results, we have assumed electrons prepared in pure states ({\it i.e.}, with well-defined wave functions). The extension to mixed electron states requires dealing with the joint electrons-sample density matrix elements $\rho_{\{\qb\} n,\{\qb'\}n'}(t)$ and calculating $\Gamma_n^{\rm total}=\sum_{\{\qb\}}\rho_{\{\qb\} n,\{\qb\}n}(\infty)$. Starting with $\rho_{\{\qb\} n,\{\qb'\}n'}(-\infty)=\delta_{n0}\delta_{n'0}\prod_j\rho^j_{\qb_j\qb'_j}$, where $\rho^j_{\qb_j\qb'_j}$ are the matrix elements of electron $j$ before interaction, and solving ${\rm i}\hbar(d\hat\rho/dt)=\big[\hat{\mathcal{H}},\hat\rho\big]$ to the lowest order contribution, we find exactly the same expressions as above, but replacing $\big|\psi^j(\rb)\big|^2$ by the probability densities $\big\langle\rb|\hat\rho^j|\rb\big\rangle=(1/V)\sum_{\qb\qb'}\rho^j_{\qb\qb'}\ee^{{\rm i}(\qb-\qb')\cdot\rb}$, based on which we can deal with electrons that have experienced decoherence before reaching the sample region.
An important point to consider is that bunched electrons are affected by Coulomb repulsion, which can increase the beam energy width and introduce undesired lateral deflections. For example, two 100\,keV electrons traversing a sample interaction region of length $L\sim10\,\mu$m with a relative longitudinal (transverse) separation distance of 1\,$\mu$m undergo a change in their energy (lateral deflection angle) of 14\,meV (0.1\,$\mu$rad). These values are still tolerable when probing visible and near-infrared optical excitations, but they increase linearly with $L$, becoming a limiting factor for propagation along the macroscopic beam column. We therefore anticipate that a strategy is needed to avoid them, such as introducing a large beam convergence angle ({\it i.e.}, large electron-electron distances except near the sampled region) or separating them by multiples of the optical period associated with the sampled excitation ({\it e.g.}, $4.1\,$fs for 1\,eV modes, corresponding to a longitudinal electron peak separation of 680\,nm at 100\,keV).
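The quoted numbers follow from elementary kinematics: the Coulomb force at the stated separation, integrated over the interaction length (longitudinal case) or over the transit time (transverse case). A Python sketch reproducing these estimates:
\begin{verbatim}
import numpy as np

qe = 1.602176634e-19     # C
eps0 = 8.8541878128e-12  # F/m
me = 9.1093837015e-31    # kg
c = 2.99792458e8         # m/s

v, gamma = 0.5482 * c, 1.1957   # 100 keV electrons
d, L = 1e-6, 10e-6               # separation and interaction length (m)

F = qe**2 / (4 * np.pi * eps0 * d**2)       # Coulomb force (N)
dE = F * L / qe                              # energy change over L (eV)
dtheta = F * (L / v) / (gamma * me * v)      # lateral deflection (rad)

print(f"energy shift ~ {dE*1e3:.0f} meV, deflection ~ {dtheta*1e6:.2f} urad")
# ~14 meV and ~0.1 urad, consistent with the estimates above
\end{verbatim}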
\begin{figure*}
\centering{\includegraphics[width=0.85\textwidth]{Fig5}}
\caption{Interference in single- and double-electron interactions with a localized excitation. (a) Sketch of an electron wavepacket interacting with a nanoparticle (top) and typical EELS spectrum (bottom) dominated by one resonance of frequency $\omega_{n0}$ and polarization $\pb$ normal to the electron velocity $\vb$. (b) Interaction with two electron wavepackets separated by a longitudinal distance $a$. If the wavepackets are part of a single-electron wave function, the EELS probability is independent of $a$ (one-electron solid curve). With two electrons, each of them in a different wavepacket, the EELS intensity per electron oscillates with $\omega_{n0}a/v$ and presents a maximum at $a=0$ (two-electron solid curve). For two electrons with their wave functions equally shared among the two wavepackets, the oscillations with $a$ exhibit less profound minima (two-electron dashed curve). (c) Interaction with two electron wavepackets in symmetrically arranged beams. We find similar results as in (b), but now the two-electron probability displays a minimum at $a=0$. We consider wavepackets of width $\Delta$ defined by $\omega_{n0}\Delta/v=0.5$ (see Appendix). The EELS intensity is normalized to the result for uncorrelated electrons.}
\label{Fig5}
\end{figure*}
\subsection{Bunched and Dilute Electron-Beam Limits} We first consider $N$ electrons sharing the same form of the wave function, but separated by their arrival times $t_j=z_j/v$ at the region of interaction with the sample (see below for an analysis of PINEM-modulated electrons, which belong to a different category), so we can write the incident wave functions as $\psi^j(\Rb,z)=\psi^0(\Rb,z-z_j)$, where $\psi^0$ is given by eq\ \ref{psi0}. Then, eq\ \ref{PNn} for the total excitation probability of mode $n$ reduces to
\begin{align}
&\Gamma_n^{\rm total}=N\Gamma_n^0+\left|Q_n^0\right|^2\sum_{j\neq j'}\ee^{{\rm i}\omega_{n0}(z_{j'}-z_j)/v}
\label{bunch}
\end{align}
with $Q_n^0=\int d^3\rb\,\ee^{{\rm i}\omega_{n0}z/v}\,|\psi^0(\rb)|^2\tilde{\beta}_n(\Rb)$ and $\Gamma_n^0$ given by eq\ \ref{P0n}. In addition, if the wave function displacements of all electrons satisfy $|z_j-z_{j'}|\ll v/\omega_{n0}$, neglecting linear terms in $N$, the sum in eq\ \ref{bunch} becomes $\approx N^2\left|Q_n^0\right|^2$, which can reach high values for large $N$, an effect known as superradiance when $n$ represents a radiative mode. We note that this effect does not require electrons confined within a small distance compared with the excitation length $v/\omega_{n0}$: superradiance is thus predicted to also take place for extended electron wave functions, provided all electrons share the same probability density, apart from some small longitudinal displacements compared with $v/\omega_{n0}$ (or also displacements by multiples of $v/\omega_{n0}$, see below); however, the magnitude of $Q_n^0$ will obviously decrease when each electron extends over several $v/\omega_{n0}$ spatial periods. Of course, if the electron density is further confined within a small region compared with $v/\omega_{n0}$ (or if it consists of a comb-like profile as given, for example, by eq\ \ref{psicomb}), we readily find $\Gamma_n^{\rm total}\approx N^2\Gamma_n^0$. Superradiance has been experimentally observed for bunched electrons over a wide range of frequencies \cite{SH1969,UGK98} and constitutes the basis for free-electron lasers \cite{AB04,EAA10,GIF19}.
In the opposite limit of randomly arriving electrons ({\it i.e.}, a dilute beam), with the displacements $z_j$ spanning a large spatial interval compared with $v/\omega_{n0}$ (even under perfect lateral alignment conditions), the sum in eq\ \ref{bunch} averages out, so we obtain $\Gamma_n^{\rm total}=N\Gamma_n^0$, and therefore, correlation effects are washed out.
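These two limits are easily visualized numerically. The following minimal Python sketch of eq\ \ref{bunch} assumes, for illustration, a tightly focused beam with $|Q_n^0|^2=\Gamma_n^0$ and measures the arrival positions $z_j$ in units of $v/\omega_{n0}$ (both choices are ours):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
N, Gamma0 = 50, 1.0
Q2 = Gamma0   # |Q_n^0|^2 for a tightly focused beam (assumption)

def gamma_total(z):
    # eq (bunch) with omega_n0/v = 1; the double sum over j != j'
    # equals |sum_j exp(i z_j)|^2 - N
    s = np.abs(np.sum(np.exp(1j * z)))**2
    return N * Gamma0 + Q2 * (s - N)

z_bunched = rng.normal(0.0, 0.01, N)   # spread << v/omega_n0
z_dilute = rng.uniform(0.0, 1e3, N)    # spread >> v/omega_n0
print(gamma_total(z_bunched))   # ~ N^2 = 2500 (superradiance)
print(gamma_total(z_dilute))    # ~ N = 50 (correlations washed out)
\end{verbatim}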
\subsection{Superradiance with PINEM-Modulated Electrons} When $N$ electrons are modulated through PINEM interaction using the same laser (and neglecting $A^2$ corrections), their probability densities take the form
\[|\psi^j(\rb)|^2=|\psi_i^j(\rb)|^2\;|\mathcal{P}_d(\beta,\omega,z)|^2,\]
where the modulation factor $\mathcal{P}_d(\beta,\omega,z)$, defined in eq\ \ref{PPINEMd}, is shared among all of them and the PINEM coupling coefficient $\beta$ is taken to be independent of lateral position. Assuming well-collimated e-beams, we consider the incident wave functions to be separable as $\psi_i^j(\rb)=\psi_\perp(\Rb)\psi_{i,\parallel}^j(z)$ ({\it i.e.}, sharing a common transverse component $\psi_\perp(\Rb)$ that is normalized as $\int d^2\Rb\,|\psi_\perp(\Rb)|^2=1$). Inserting these expressions into eqs\ \ref{PNn}-\ref{Mnj}, we find
\[\Gamma_n^{\rm total}=N\Gamma_n^0+|Q_n|^2\sum_{j\neq j'} M_n^jM_n^{j'*}\]
with
\[M_n^j=\int_{-\infty}^\infty dz\;\ee^{{\rm i}\omega_{n0}z/v}\,|\psi_{i,\parallel}^j(z)\,\mathcal{P}_d(\beta,\omega,z)|^2,\]
where
\begin{align}
\Gamma_n^0&=\int d^2\Rb\;|\psi_{\perp}(\Rb)|^2 \,|\tilde{\beta}_n(\Rb)|^2, \nonumber\\
Q_n&=\int d^2\Rb\;|\psi_{\perp}(\Rb)|^2\tilde{\beta}_n(\Rb)
\nonumber
\end{align}
are transverse averages of the electron-sample coupling coefficient $\tilde{\beta}_n$. In general, the envelopes $|\psi_{i,\parallel}^j(z)|^2$ of the incident electrons are smooth functions that extend over many optical periods ({\it i.e.}, a large length $L$ compared with $v/\omega_{n0}$) and vary negligibly over each of them, so we can approximate
\begin{align}
M_n^j\approx M_n\equiv\lim_{L\to\infty}\frac{1}{L}\int_{-L/2}^{L/2} dz\;\ee^{{\rm i}\omega_{n0}z/v}\,|\mathcal{P}_d(\beta,\omega,z)|^2. \nonumber
\end{align}
In this limit, $M_n$ is independent of the electron wave functions and arrival times; moreover, because $|\mathcal{P}_d(\beta,\omega,z)|^2$ is periodic in $z$ with the period of the PINEM laser, $M_n$ vanishes unless the sampled frequency $\omega_{n0}$ is a multiple of the PINEM laser frequency $\omega$. In particular, for $\omega_{n0}=m\omega$, where $m$ is an integer, using eq\ \ref{PPINEMd}, we find
\begin{align}
|M_n|&=\left|\sum_{l=-\infty}^\infty J_l(2|\beta|)\,J_{l+m}(2|\beta|)\,\ee^{4\pi{\rm i} mld/z_T}\right| \nonumber\\
&=\big|J_m\big[4|\beta|\sin(2\pi md/z_T)\big]\big|, \label{MMnum}
\end{align}
where the second line is in agreement with ref\ \citenum{ZSF21} and directly follows from the first one by applying Graf's addition theorem (eq\ (9.1.79) in ref\ \citenum{AS1972}). The total excitation probability then becomes
\begin{align}
\Gamma_n^{\rm total}=N\Gamma_n^0+N(N-1)\,|Q_n M_n|^2,
\label{N2pinem}
\end{align}
which contains an $N^2$ term ({\it i.e.}, superradiance). For tightly focused electrons, such that $|\psi_\perp(\Rb)|^2\approx\delta(\Rb-\Rb_0)$, we have $|Q_n|^2\approx\Gamma_n^0$, and consequently, eq\ \ref{N2pinem} reduces to $\Gamma_n^{\rm total}=\Gamma_n^0\;\left[N+N(N-1)\,|M_n|^2\right]$. This effect was predicted by Gover and Yariv \cite{GY20} by describing the electrons through their probability densities, treated as classical external charge distributions, and calculating the accumulated excitation effect, which is indeed independent of the arrival times of the electrons, provided they are contained within a small interval compared with the lifetime of the sampled mode $n$. Analogous cooperative multiple-electron effects were studied in the context of the Schwartz-Hora effect \cite{SH1969} by Favro {\it et al.}~\cite{FFK1971}, who pointed out that a modulated ``beam of electrons acts as a carrier of the frequency and phase information of the modulator and is able to probe the target with a resolution which is determined by the modulator''. The obtained $N^2$ term thus provides a potential way of enhancing the excitation probability to probe modes with weak coupling to the electron. Incidentally, by numerically evaluating eq\ \ref{MMnum}, PINEM modulation using monochromatic light can be shown to yield \cite{paperarxiv6} $|M_n|^2\le34\%$, so additional work is needed in order to push this value closer to the maximum limit of $100\%$ obtained for $\delta$-function pulse trains.
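As a numerical check on eq\ \ref{MMnum} and on the quoted $\sim34\%$ bound, the following Python sketch (assuming NumPy and SciPy are available; the values of $|\beta|$ and $d/z_T$ are arbitrary test inputs of ours) compares the truncated Bessel sum with the closed Graf form and bounds $|M_1|^2$ by the maximum of $|J_1|^2$:
\begin{verbatim}
import numpy as np
from scipy.special import jv

def M_sum(beta, m, d_zT, lmax=80):
    # first line of eq (MMnum), truncated at |l| <= lmax
    l = np.arange(-lmax, lmax + 1)
    return abs(np.sum(jv(l, 2*beta) * jv(l + m, 2*beta)
                      * np.exp(4j * np.pi * m * l * d_zT)))

def M_graf(beta, m, d_zT):
    # second line of eq (MMnum)
    return abs(jv(m, 4 * beta * np.sin(2 * np.pi * m * d_zT)))

print(M_sum(1.3, 1, 0.17), M_graf(1.3, 1, 0.17))  # equal values

# the maximum of |J_1(x)|^2 over its argument bounds |M_1|^2:
x = np.linspace(0, 10, 100001)
print(np.max(jv(1, x)**2))   # ~0.339, i.e., the ~34% bound
\end{verbatim}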
\subsection{Interaction with Localized Excitations} For illustration purposes, we consider a laterally focused Gaussian electron wavepacket with probability density $|\psi^0(\rb)|^2\approx\delta(\Rb-\bb)\,\ee^{-z^2/\Delta^2}/(\sqrt{\pi}\Delta)$ interacting with a localized excitation of frequency $\omega_{n0}$ and transition dipole $\pb$ oriented as shown in Figure\ \ref{Fig5}a. The EELS probability is then described by a coupling coefficient that depends on $\pb$ and the direction of $\Rb$ as \cite{paper339} $\tilde{\beta}_n(\Rb)\propto\pb\cdot\hat{\Rb}$. Using these expressions for a single electron arranged in the two-wavepacket configurations of Figure\ \ref{Fig5}b,c, we find from eq\ \ref{P0n} an excitation probability $\Gamma_n^0=|\tilde{\beta}_n(\bb)|^2\propto|\pb|^2$ that is independent of the longitudinal ({\it i.e.}, along the beam direction) wavepacket separation $a$. In contrast, for two electrons with each of them in a different wavepacket, we find from eqs\ \ref{PNn}-\ref{Mnj}
\begin{align}
\Gamma_n^{\rm total}/2\Gamma_n^0=1\pm S\cos(\varphi),
\label{Pnlocal1}
\end{align}
where $\varphi=\omega_{n0}a/v$, $S=\ee^{-\omega_{n0}^2\Delta^2/2v^2}$, and the $+$ and $-$ signs apply to the configurations of Figures\ \ref{Fig5}b and \ref{Fig5}c, respectively (see Appendix). Interestingly, for two electrons with their wave functions equally shared among the two wavepackets, we also observe oscillations with $a$ as
\begin{align}
\frac{\Gamma_n^{\rm total}}{2\Gamma_n^0}=1+S\cos^2(\varphi/2)
\label{Pnlocal2}
\end{align}
in the $a\gg\Delta$ limit for the configuration of Figure\ \ref{Fig5}b (and the same expression with cos replaced by sin for Figure\ \ref{Fig5}c), which corresponds to the situation considered in eq\ \ref{bunch} for $z_j$ independent of $j$ and two electrons sharing the same wave function. In general, for $N$ laterally focused electrons ({\it i.e.}, a generalization of Figure\ \ref{Fig5}b), each of them having a wave function that is periodically distributed among $L$ wavepackets with separation $a$, we have
\begin{align}
\frac{\Gamma_n^{\rm total}}{N\Gamma_n^0}=1+\frac{N-1}{L^2}\;S\;\frac{\sin^2(L\varphi/2)}{\sin^2(\varphi/2)}
\label{Gtotn}
\end{align}
(see Appendix), which presents a maximum excitation probability $\Gamma_n^{\rm total}=N\,\left[1+(N-1)S\right]\,\Gamma_n^0$ (for $\varphi\rightarrow0$ or a multiple of $2\pi$) independent of the number of periods $L$.
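The curves of Figure\ \ref{Fig5} follow directly from eqs\ \ref{Pnlocal1}-\ref{Gtotn}; as a minimal Python sketch with the wavepacket width used there ($\omega_{n0}\Delta/v=0.5$):
\begin{verbatim}
import numpy as np

S = np.exp(-0.5**2 / 2)    # S = exp(-omega_n0^2 Delta^2 / 2 v^2)
phi = np.linspace(0.0, 4*np.pi, 401)   # phi = omega_n0 a / v

two_e_separate = 1 + S * np.cos(phi)      # eq (Pnlocal1), '+' sign
two_e_shared = 1 + S * np.cos(phi/2)**2   # eq (Pnlocal2)
print(two_e_separate.max(), two_e_shared.max())  # both peak at 1 + S

def gtot(N, L, phi):
    # eq (Gtotn): N electrons, each spread over L wavepackets
    s = np.sin(phi / 2)
    with np.errstate(divide='ignore', invalid='ignore'):
        ratio = np.where(np.isclose(s, 0.0), float(L)**2,
                         np.sin(L * phi / 2)**2 / s**2)
    return 1 + (N - 1) / L**2 * S * ratio

# the phi -> 0 maximum, 1 + (N-1)S, is independent of L:
print(gtot(5, 2, np.array([0.0])), 1 + 4*S)
\end{verbatim}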
\begin{figure}
\centering{\includegraphics[width=0.4\textwidth]{Fig6}}
\caption{Interference in the interaction with delocalized modes. For the two-wavepacket beam configuration of Figure\ \ref{Fig5} and a sample that has lateral translational invariance, a single electron of split wave function emits in-plane polaritons and transition radiation with an intensity that is insensitive to the longitudinal and lateral wavepacket separations $a$ and $b$. This is in contrast to the emission intensity observed when each wavepacket is populated by one electron (two-electron solid curve) or when considering two electrons with each of them equally shared among the two wavepackets (two-electron dashed curve). We adopt the same beam parameters as in Figure\ \ref{Fig5} (see also Appendix).}
\label{Fig6}
\end{figure}
\subsection{Interference in the Emission of Photons and Polaritons} When the sample possesses lateral translational invariance, like in Figure\ \ref{Fig6}, the excited modes possess well-defined in-plane wave vectors $\kb_{n\parallel}$, so the coupling coefficients exhibit a simple spatial dependence, $\tilde{\beta}_n(\Rb)=\tilde{\beta}_n(0)\,\ee^{{\rm i}\kb_{n\parallel}\cdot\Rb}$. Proceeding in a similar way as above for Gaussian wavepackets, we find no dependence on the wave function for single electrons, whereas for two electrons we obtain the same results as in eqs\ \ref{Pnlocal1} and \ref{Pnlocal2} with $\varphi$ redefined as $\omega_{n0}a/v-\kb_{n\parallel}\cdot\bb$. The emission probability thus oscillates with both longitudinal and lateral wavepacket displacements, $a$ and $\bb$, respectively, as illustrated in Figure\ \ref{Fig6}.
Incidentally, if the e-beam is laterally focused within a small region compared to $2\pi/k_{n\parallel}$, polaritons emitted to the left and to the right can interfere in the far field ({\it i.e.}, the final state $n$ then comprises the detection system through which interference is measured by introducing an optical delay between the two directions of emission), while the interference is simply washed out as a result of lateral intensity averaging over the transverse beam profile if this extends over several polariton periods. This argument can be equivalently formulated in terms of the recoil produced on the electron due to lateral momentum transfer and the respective loss or preservation of {\it which way} information in those two scenarios, depending on whether such transfer is larger or smaller than the momentum spread of the incident electron \cite{KRA21}.
\subsection{Are Free Electrons Quantum or Classical Probes?} When examining a sample excitation of frequency $\omega_{n0}$ within a classical treatment of the electron as a point charge, the external source can be assimilated to a line charge with an $\ee^{{\rm i}\omega_{n0}z/v}$ phase profile. The excitation strength by such a classical charge distribution coincides with $|\tilde{\beta}_n(\Rb)|^2$ (see eq\ \ref{betan}), where $\Rb$ gives the transverse position of the line. Actually, summing over all final states to calculate the EELS probability $\sum_n|\tilde{\beta}_n|^2\delta(\omega-\omega_{n0})$, we obtain a compact expression in terms of the electromagnetic Green tensor of the sample \cite{paper357} (eq\ \ref{EELSQM}, see detailed derivation in the Appendix), which is widely used in practical simulations \cite{paper149}. Extrapolating this classical picture to the configuration of Figure\ \ref{Fig6}, we consider two point electrons with lateral and longitudinal relative displacements, which directly yield an emission probability as described by eq\ \ref{Pnlocal1}. However, the classical picture breaks down for electrons whose wave functions are separated into several wavepackets: for single electrons, no classical interference between the emission from different wavepackets is observed, as the excitation probability reduces to a simple average of the line-charge classical model over the transverse beam profile; likewise, for multiple electrons the excitation probability depends on the electron wave function in a way that cannot be directly anticipated from the classical picture ({\it cf.} solid and dashed curves in Figures\ \ref{Fig5} and \ref{Fig6}). The effect is also dramatic if the incident electrons are prepared in mutually entangled states, as discussed in a recent study \cite{KRA21_2}, while entangled electrons have also been proposed as a way to reduce beam damage in transmission electron microscopy \cite{OK14}.
The classical model provides an intuitive picture of interference in the CL emission from structured samples, such as in Smith-Purcell radiation \cite{SP1953} from periodic \cite{V1973b,HRS97}, quasiperiodic \cite{paper273}, and focusing \cite{RSR17} gratings. In our formalism, the coherent properties of the emitted radiation are captured by the $z$ integral in eq\ \ref{betan}, where the matrix element of the interaction Hamiltonian reduces to the electric field associated with the excited mode \cite{paper339}. In CL, the excited state $n$ refers to a click in a photon detector, and therefore, the sample must be understood as a complex system composed of the structure probed in the microscope, the optical setup, and the detector itself.
We remark that our results apply generally to any type of interaction Hamiltonian whose matrix elements $\langle n|\hat{\mathcal{H}}_1(\rb)|0\rangle$ are just a function of electron position $\rb$ (see eq\ \ref{betan}). This includes arbitrarily complex materials and their excitations, as well as the coupling to any external field. In particular, when describing the interaction with quantum electromagnetic fields through a linearized minimal-coupling Hamiltonian $\hat{\mathcal{H}}_1(\rb)\propto\hat\Ab(\rb)$, where $\hat\Ab(\rb)$ is the vector potential operator, the present formalism leads to the well-known EELS expression in eq\ \ref{EELSQM} (see derivation in the Appendix), which does account for coupling to radiation, and in particular, it can readily be used to explain the Smith-Purcell effect in nonabsorbing gratings \cite{paper149} ({\it i.e.}, when $\Gamma_{\rm CL}=\Gamma_{\rm EELS}$). This corroborates the generality of the present procedure based on treating the sample ({\it i.e.}, the universe excluding the e-beam) as a closed system, so its excitations are eigenstates of infinite lifetime. In a more traditional treatment of the sample as an open system, our results can directly be applied to excitations of long lifetime compared with the electron pulse durations. Additionally, coupling to continua of external modes can be incorporated through the Fano formalism \cite{F1961} to produce, for example, spectral CL emission profiles from the probabilities obtained for the excitation of confined electronic systems ({\it e.g.}, plasmonic nanoparticles).
We hope that this discussion provides some intuitive understanding of the role of the wave function in e-beam inelastic scattering, summarized in the statement that the excitation process by an individual swift electron (in EELS and CL) can be rigorously described by adopting the classical point-particle model, unless recoil becomes important ({\it e.g.}, for low-energy electrons or high-energy excitations). In contrast, the excitation by multiple electrons is affected by their quantum mechanical nature and depends on how their combined wave function is initially prepared. The predicted effects could be experimentally corroborated using few-electron pulses produced, for instance, by shaped laser pulses acting on photocathodes or {\it via} multiple ionization from ultracold atoms or molecules \cite{FFV17}. Besides its fundamental interest, the dependence of the excitation probability on the wave function for multiple electrons opens the possibility of realizing electron-electron pump-probe imaging with an ultimate time resolution that is fundamentally limited by approximately half of the electron period $\pi/vq_0$ ({\it e.g.}, $\sim10^{-20}\,$s for 100\,keV electrons).
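For reference, this bound follows from a few lines of arithmetic (a Python sketch with standard constants; 100\,keV kinematics as quoted above):
\begin{verbatim}
import numpy as np

c, me, e, hbar = 2.998e8, 9.109e-31, 1.602e-19, 1.055e-34
gamma = 1 + 100e3 * e / (me * c**2)
v = c * np.sqrt(1 - 1 / gamma**2)
q0 = gamma * me * v / hbar    # central electron wave vector (1/m)
print(np.pi / (v * q0))       # ~1.1e-20 s
\end{verbatim}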
\begin{figure*}
\centering{\includegraphics[width=1.00\textwidth]{Fig7}}
\caption{Future directions in photonics with electron beams.
(a) Combination of a fs laser pump synchronized with an attosecond electron pulse and detection of CL as an approach towards sub-{\AA}--attosecond--sub-meV resolution.
(b) Interferometric detection of a small sample object through EEGS measurements yielding the PINEM coupling coefficient $|\beta_{\rm ref}+\beta_{\rm sample}|^2\approx|\beta_{\rm ref}|^2+2{\rm Re}\{\beta_{\rm ref}^*\beta_{\rm sample}\}$, where the sample signal $\beta_{\rm sample}$ ($\ll1$) enters linearly and is amplified by an order-unity reference $\beta_{\rm ref}$. Alternatively, a similar scheme can be followed with the CL far-field intensity $I_{\rm CL}=|\fb_{\rm ref}+\fb_{\rm sample}|^2\approx|\fb_{\rm ref}|^2+2{\rm Re}\{\fb_{\rm ref}^*\cdot\fb_{\rm sample}\}$.
(c) Quantum electron microscopy for interaction-free imaging based on the quantum Zeno effect, whereby the presence of an object produces unity-order effects in the electron signal without the electron ever intersecting the sample materials. Adapted from ref\ \citenum{PY09}.
(d) Electron temporal compression after propagating a distance $z$ beyond the region of PINEM interaction (at time $t_p$) using classical and quantum light; the contour plots show the electron probability density as a function of propagation-distance-shifted time $\tau=t-t_p-z/v$ \cite{paper360}. Adapted from ref\ \citenum{paper360}.
(e) Sampling the nonlinear response of materials with nanoscale precision through the observation of harmonic-assisted asymmetry in the PINEM spectra. Adapted from ref\ \citenum{paper347}.
(f) Electron-beam-induced nonlinearities in small nanostructures, whereby low-energy electrons act equivalently to a high-fluence light pulse (left, for 25 eV electrons) and modify the EELS or CL spectra relative to the linear-interaction limit (right). Adapted from ref\ \citenum{paper350}.
}
\label{Fig7}
\end{figure*}
\section{Outlook and Perspectives}
We conclude this article with a succinct discussion of several promising directions for future research at the intersection of electron microscopy and photonics. The following is not an exhaustive list, but we hope that the reader can find in it some of the elements that are triggering a high degree of excitement in the nascent community gathered around this expanding field, including the promise of radical improvements in the way we visualize optical excitations with unprecedented space-time-energy resolution, as well as the opening of new directions in the study of fundamental phenomena.
\subsection{Towards Combined Sub-{\AA}--Attosecond--Sub-meV Resolution} PINEM-based UTEM is already in place to simultaneously combine nm--fs--sub-eV resolution inherited from focused e-beams, ultrafast optics, and EELS detection (see Figure\ \ref{Fig4} and references therein). The implementation of this technique in state-of-the-art aberration-corrected microscopes could push it further to the sub-{\AA} range, which, combined with fine tuning of the laser frequency, could lead to simultaneous sub-meV resolution {\it via} EEGS \cite{paper114,paper221}. Temporal resolution is then limited by the uncertainty principle $\sigma_E\sigma_t\ge\hbar/2\sim300\,{\rm meV}\times{\rm fs}$ relating the standard deviations of the electron pulse energy spread and time duration ($\sigma_E$ and $\sigma_t$, respectively) if the probe that is used to provide temporal resolution ({\it i.e.}, the compressed electron) is also energy-analyzed to resolve the excitation frequency through EELS. However, this limitation can be overcome if two different particles are employed to provide energy and time resolutions, respectively ({\it i.e.}, the uncertainty principle affects each of them individually, but not their crossed uncertainties). This possibility could be realized, for instance, by using single attosecond electron pulses to achieve time resolution with respect to a phase-locked optical pump, in combination with detection of the CL signal produced by the electron, as indicated by the red-colored CL blob in Figure\ \ref{Fig1}a; sub-meV spectral resolution could then be gained through optical spectroscopy (see Figure\ \ref{Fig7}a). Besides the technical challenge of combining fs-laser and attosecond-electron pulses \cite{MB20}, detection of CL emission can be difficult because it may be masked by light scattered from the laser, so it needs to be contrasted with the optical signal observed in separate measurements using only electrons or laser irradiation, or alternatively, laser scattering could be interferometrically removed at the light spectrometer.
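Incidentally, the $\sim300\,{\rm meV}\times{\rm fs}$ figure quoted above is simply $\hbar/2$ expressed in those units, as a one-line check confirms:
\begin{verbatim}
hbar = 1.054571817e-34             # J s
meV_fs = 1.602176634e-22 * 1e-15   # 1 meV x 1 fs in J s
print(hbar / 2 / meV_fs)           # ~329, i.e., ~300 meV x fs
\end{verbatim}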
\subsection{Non-Invasive Imaging: Interferometric and Quantum Electron Microscopies} Sample damage is a major source of concern in electron microscopy, particularly when investigating soft and biological materials. Besides cooling the sample to make it more resistant (cryogenic electron microscopy \cite{F02_2}), various strategies can be followed to combat this problem, essentially consisting of enhancing the signal contrast produced by the specimen while minimizing its interaction with the electrons. This is the principle underlying the proposed quantum electron microscope \cite{PY09} (see Figure\ \ref{Fig7}c), inspired by a previously explored form of interaction-free optical microscopy \cite{KWM99}, and consisting of initially placing the electron in a cyclic free path (upper potential well) that has a small probability amplitude $T$ of transferring into a second cyclic path (lower potential well) during a cycle time period $\tau_c$. The second path is taken to intersect the sample, and therefore, the quantum Zeno effect resolves the question of whether a given pixel contains material or is instead empty: when the lower path passes through a {\it filled} sample pixel, the electron wave function collapses, so the overall transfer into this path after a time $N\tau_c$ ({\it i.e.}, after $N$ roundtrips) reduces to $\sim N|T|^2$; in contrast, when the lower path passes through an {\it empty} sample pixel, the accumulated transfer of probability amplitude becomes $\sim NT$, and the transferred probability is instead $\sim|NT|^2$. Consequently, for large $N$ and small $|T|$, such that $|NT|^2\sim1$, detection of the electron in the upper path indicates that a filled pixel is being sampled, involving just a marginal probability $\sim N|T|^2$ of electron-sample collision; on the contrary, an empty sample pixel is revealed by a depletion $\sim|NT|^2$ in the electron probability associated with the upper path, equally avoiding sample damage because there is no material to collide with. An international consortium is currently undertaking the practical implementation of this challenging and appealing form of microscopy \cite{KHK16}. An extension of this idea to incorporate the detection of sample optical excitations and their spectral shapes would also be desirable in order to retrieve valuable information for photonics.
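The scalings invoked in this argument can be reproduced with a toy two-path amplitude model (a Python sketch; the per-cycle transfer amplitude $T=0.01$ and roundtrip number $N=100$ are illustrative choices of ours, selected so that $NT\sim1$):
\begin{verbatim}
import numpy as np

T, N = 0.01, 100   # weak per-cycle transfer; N chosen so N*T ~ 1
ct, st = np.sqrt(1 - T**2), T

def run(filled):
    u, l = 1.0 + 0j, 0.0 + 0j   # upper/lower path amplitudes
    p_hit = 0.0                 # accumulated collision probability
    for _ in range(N):
        u, l = ct*u - st*l, st*u + ct*l  # weak coupling of the paths
        if filled:              # filled pixel: measure the lower path
            p_hit += abs(l)**2
            l = 0.0             # collapse (quantum Zeno effect)
    return abs(u)**2, p_hit

print(run(filled=True))   # (~0.99, ~N|T|^2 = 0.01): filled pixel
print(run(filled=False))  # (~cos^2(NT) ~ 0.29, 0): empty pixel
\end{verbatim}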
Interferometry in the CL signal offers a practical approach to study the response of small scatterers by using the electron as a localized light source that is positioned with nanometer precision in the neighborhood of the object under study \cite{paper341,SAG20}. In a related development, CL light produced by an engineered metamaterial reference structure has been postulated as a source of ultrafast focused light pulses that could eventually be combined with the exciting electron in a pump-probe configuration \cite{T18,TMG19}. These studies inspire an alternative way of reducing sample damage (Figure\ \ref{Fig7}b, CL emission), also in analogy to infrared SNOM \cite{HTK02}: by making the electron traverse a reference structure ({\it e.g.}, a thin film), followed by interaction with the sample, the CL far-field amplitudes $\fb_{\rm ref}$ and $\fb_{\rm sample}$ produced by these events are coherently superimposed ({\it i.e.}, both of them maintain phase coherence, just like the emission emanating from the different grooves of a grating in the Smith-Purcell effect \cite{SP1953}), giving rise to a CL intensity $I_{\rm CL}=|\fb_{\rm ref}+\fb_{\rm sample}|^2\approx|\fb_{\rm ref}|^2+2{\rm Re}\{\fb_{\rm ref}^*\cdot\fb_{\rm sample}\}$, where the sample signal in the second term is amplified by a stronger reference signal ({\it i.e.}, we take $|\fb_{\rm ref}|\gg|\fb_{\rm sample}|$) that can be calibrated {\it a priori}. This strategy can provide a large sample signal compared with direct (unreferenced) CL detection ({\it i.e.}, $|2{\rm Re}\{\fb_{\rm ref}^*\cdot\fb_{\rm sample}\}|\gg|\fb_{\rm sample}|^2$), and thus, the electron dose needed to collect a given amount of information is reduced, or alternatively, there is some flexibility to aim the e-beam slightly farther from the specimen to reduce damage.
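The gain provided by the interferometric term is readily quantified; in the following Python sketch, the reference and sample amplitudes (1 and 0.01, with an arbitrary relative phase) are illustrative values of ours:
\begin{verbatim}
import numpy as np

f_ref = 1.0                    # calibrated reference amplitude
f_sam = 0.01 * np.exp(0.7j)    # weak sample amplitude (unknown phase)

direct = abs(f_sam)**2                        # unreferenced signal
cross = 2 * np.real(np.conj(f_ref) * f_sam)   # interferometric term
print(direct, cross, cross / direct)          # 1e-4, ~1.5e-2, ~153
\end{verbatim}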
In the context of UTEM, the demonstration of coherent double-PINEM interactions \cite{EFS16} opens a similar interferometric avenue to reduce sample damage by associating the two interactions with reference and sample structures (Figure\ \ref{Fig7}b). The PINEM spectrum responds to the overall coupling strength $|\beta_{\rm ref}+\beta_{\rm sample}|^2$ (see the discussion on the addition property of $\mathcal{P}_0(\beta,\omega,z)$ after eq\ \ref{PPINEM}), which contains an interference term $2{\rm Re}\{\beta_{\rm ref}^*\beta_{\rm sample}\}$ that can again amplify a weak PINEM signal from an illuminated sample by mixing it with a strong reference. This effect has also been studied in connection with the interaction between a free electron and a two-level atom \cite{PG19,ZSF21,RGM21}, where the inelastic electron signal is found to contain a component that scales linearly with the electron-atom coupling coefficient if the electron wave function is modulated and the atom is prepared in a coherent superposition of ground and excited states that is phase-locked with respect to the electron modulation (in contrast to a quadratic dependence on the coupling coefficient if the atom is prepared in the ground state). We remark on the necessity of precise timing ({\it i.e.}, small uncertainty compared with the optical period of the excitation) between the electron modulation and the amplitudes of ground and excited states in the two-level system. This condition could be met in the double-PINEM configuration, giving rise to an increase in sensing capabilities, so that a smaller number of beam electrons would be needed to characterize a given object ({\it e.g.}, a fragile biomolecule).
It should be noted that, despite their appeal from a conceptual viewpoint, individual two-level Fermionic systems present a practical challenge because the transition strength of these types of systems is typically small ({\it e.g.}, they generally contribute $\lesssim1$ electrons to the transition strength, as quantified through the f-sum rule \cite{PN1966,paper267}), and in addition, coupling to free electrons cannot be amplified through PINEM interaction beyond the level of one excitation, in contrast to bosonic systems ({\it e.g.}, linearly responding plasmonic and photonic cavities, which can be multiply populated). Nevertheless, there is strong interest in pushing e-beam spectroscopies to the single-molecule level, as recently realized by using high-resolution EELS for mid-infrared atomic vibrations \cite{HHP19,HKR19,HRK20,ZZW21} (see Figure\ \ref{Fig2}d), which are bosonic in nature and give rise to measurable spectral features facilitated by the increase in excitation strength with decreasing frequency. However, e-beam-based measurement of valence electronic excitations in individual molecules, which generally belong to the two-level category, remains unattained with atomic-scale spatial resolution. In this respect, enhancement of the molecular signal by coupling to a nanoparticle plasmon has been proposed to detect the resulting hybrid optical modes with the e-beam positioned at a large distance from the molecule to avoid damage \cite{KNA18}. The interferometric double-PINEM approach could provide another practical route to addressing this challenge. The $N^2$ excitation predicted for PINEM-modulated electrons \cite{GY20} (see eq\ \ref{N2pinem}) is also promising as a way to amplify specific probed excitation energies while still maintaining a low level of damage $\propto N$.
Interferometric CL and PINEM approaches should enable the determination of the phase associated with the emitted and induced optical near fields, respectively. In CL, this could be achieved without modifying the e-beam components of the microscope by introducing a tunable optical delay line in the light component emanating from the reference structure before mixing it with the sample component. In PINEM, the delay line could be incorporated in the laser field illuminating the reference structure. The quantities to be determined are the complex scattering amplitude (CL) and the near field (PINEM), which are actually two sides of the same coin, related through eq\ \ref{betaCL2}. If the reference signal is well characterized and in good correspondence with theory ({\it e.g.}, transition radiation from a thin film \cite{YAT01}), this procedure should enable the determination of the frequency-dependent optical phase. In addition, self-interference of the CL signal ({\it e.g.}, by mixing different emission directions through a bi-prism) could provide a simple method to measure the angular dependence of the far-field complex amplitude, while the interferometric detection discussed above can supply the missing information from the spectral dependence of the phase.
\subsection{Interference between E-Beam and External Light Excitations} Recent reports \cite{paperarxiv3,paperarxiv6} have revealed that CL emission can interfere with external light that is synchronized with the electron wave function. This effect has been found to be controlled by the same coherence factors $M_n^j$ that intervene in the interference among different beamed electrons (eq\ \ref{Mnj}). An extension of those results to general excitations in the specimen can be obtained by following the procedure used in the derivation of eq\ \ref{P0n}, but including the interaction with a weak ({\it i.e.}, acting linearly) classical field ({\it e.g.}, laser light) of finite temporal duration. The latter can be introduced through an additional time-dependent interaction Hamiltonian
\begin{align}
\hat{\mathcal{H}}_2(t)=\int \frac{{\rm d}\omega}{2\pi}\,\hat{\mathcal{H}}_2(\omega)\ee^{-{\rm i}\omega t}.
\nonumber
\end{align}
This expression automatically implies synchronization of the classical field and the beam electrons by selecting a common time origin. Expanding the wave function of the system as in eq\ \ref{defpsi}, we find the post-interaction coefficients given by eq\ \ref{alphainfinity}, but now supplemented by an additional term $(-{\rm i}/\hbar)\big\langle n\big|\hat{\mathcal{H}}_2(\omega_{n0})\big|0\big\rangle\alpha_\qb^0$. From here, proceeding in a way analogous to the derivation of eq\ \ref{P0n} in the Appendix, the excitation probability of a mode $n$ is found to be
\begin{align}
&\Gamma_n^0=
\int d^3\rb\;\big|\psi^0(\rb)\big|^2 \,\bigg|\tilde{\beta}_n(\Rb)\ee^{{\rm i}\omega_{n0}z/v}+\beta_n^{\rm field}\bigg|^2 \nonumber\\
&=\int d^3\rb\;\big|\psi^0(\rb)\big|^2 \,\big|\tilde{\beta}_n(\Rb)\big|^2+\big|\beta_n^{\rm field}\big|^2 +2\,{\rm Re}\bigg\{\beta_n^{{\rm field}*}\,Q_n^0\bigg\},
\nonumber
\end{align}
where
\begin{align}
\beta_n^{{\rm field}*}=\frac{1}{\hbar}\big\langle n\big|\hat{\mathcal{H}}_2(\omega_{n0})\big|0\big\rangle
\nonumber
\end{align}
is an excitation amplitude associated with the external classical field, whereas $Q_n^0$ is defined in eq\ \ref{Qj}. Finally, following the same approach as in the derivation of eqs\ \ref{PNn}-\ref{Mnj} in the Appendix, we find an extension of this result to e-beams consisting of multiple distinguishable electrons:
\begin{align}
\Gamma_n^{\rm total}
=&\sum_j\int d^3\rb\;\big|\psi^j(\rb)\big|^2 \,\big|\tilde{\beta}_n(\Rb)\big|^2+\sum_{j\neq j'}Q_n^jQ_n^{j'*} \nonumber\\
&+\big|\beta_n^{\rm field}\big|^2 +2\sum_j{\rm Re}\bigg\{\beta_n^{{\rm field}*}\,Q_n^j\bigg\},
\label{EEQ1}
\end{align}
where $j$ and $j'$ are electron labels. We thus confirm that the synchronized interactions of different electrons and external light with a sample are both governed by the coherence factors defined in eqs\ \ref{Qj} and \ref{Mnj}. When the excitation mode corresponds to an emitted photon, this equation produces the angle- and frequency-dependent far-field photon probability
\begin{widetext}
\begin{align}
&\frac{d\Gamma_{ \rm rad}}{d\Omega_{\rr_\infty}d\omega}=\frac{c}{4\pi^2\hbar\omega}\Bigg{\{} \sum_j\int d^2\Rb\, M_0^j(\Rb) |\fb_{\rr_\infty}^{\rm CL}(\Rb,\omega)|^2 \nonumber\\
&\quad+|\fb_{\rr_\infty}^{\rm scat}(\omega)|^2+2\sum_j\int d^2\Rb\, {\rm Re}\left\{M_{\omega/v}^j(\Rb)\; \fb_{\rr_\infty}^{{\rm CL}*}(\Rb,\omega)\cdot \fb_{\rr_\infty}^{\rm scat}(\omega)\right\} \nonumber\\
&+\sum_{j\neq j'}
\left[\int d^2\Rb\, M_{\omega/v}^j(\Rb)\fb_{\rr_\infty}^{{\rm CL}*}(\Rb,\omega)\right]
\left[\int d^2\Rb'\, M_{\omega/v}^{j'*}(\Rb')\fb_{\rr_\infty}^{\rm CL}(\Rb',\omega)\right]
\Bigg{\}}
\label{EEQ2}
\end{align}
\end{widetext}
which is derived in ref\ \cite{paperarxiv6} from an alternative quantum electrodynamics formalism and constitutes an extension of eq\ \ref{anothereq} to include the simultaneous interaction with multiple electrons and an external light field. Here, the excitation frequency is denoted $\omega=\omega_{n0}$, the coherence factors are renamed as $M_{\omega/v}^j(\Rb)\equiv M_n^j(\Rb)$ (see eq\ \ref{Mnj}), and the far-field amplitude component $\fb_{\rr_\infty}^{\rm scat}(\omega)$ refers to the scattered laser field arriving at the same photon detector as the CL emission, either after scattering at the sample or directly from the employed laser. We obtain eq\ \ref{EEQ2} from eq\ \ref{EEQ1} by multiplying by $\delta(\omega-\omega_{n0})$, making the transformations $\tilde\beta_n(\Rb)\rightarrow\sqrt{c/4\pi^2\hbar\omega}\;\fb_{\rr_\infty}^{{\rm CL}*}(\Rb,\omega)$ and $\beta_n^{\rm field}\rightarrow\sqrt{c/4\pi^2\hbar\omega}\;\fb_{\rr_\infty}^{{\rm scat}*}(\omega)$, and summing over modes $n$ that contribute to the emission direction $\rr_\infty$. Obviously, in order to observe the interference between CL and laser light, the latter has to be dimmed, so that both of them have commensurate amplitudes, as extensively discussed in ref\ \cite{paperarxiv6}. The coherence factor $M_{\omega/v}^j(\Rb)$ determines the ability of each electron $j$ to interfere with synchronized light. This factor is maximized ($\big|M_{\omega/v}^j(\Rb)\big|\rightarrow1$) in the point-particle limit (see discussion above). This analysis reveals that temporally compressed electrons act as partially coherent, localized sources of excitation ({\it e.g.}, CL emission), on equal footing with the external light, but with the faculty of acting with sub-nm spatial precision. Besides the prospects opened by these findings to control nanoscale optical excitations, this approach offers an alternative way of determining the absolute magnitude and phase of $\fb_{\rr_\infty}^{\rm CL}$ through the interference term in the above equation.
Incidentally, we remark again that the above expressions are directly applicable to electrons prepared in mixed states by substituting $\big|\psi^j(\rb)\big|^2$ by the electron probability density (see above).
\subsection{Manipulation of the Quantum Density Matrix Associated with Sample Modes} In addition to the aforementioned implementations of shaped electron beams for microscopy and imaging, the modulated electron wave function has been investigated as a means to manipulate the quantum state of confined optical excitations. This is relevant because of its potential to create states of light with nontrivial statistics, enabling exciting applications in quantum computing \cite{KMN07}, metrology \cite{GLM11}, and information \cite{WFP17}. An initially separable joint electron-sample state is generally brought to a complex entangled state after interaction, which, upon partial tracing and projection over the electron degrees of freedom, allows us to modify the sample density matrix. Obviously, a wider range of sample states could be accessed by controlling the incoming electron density matrix, for example, through PINEM interaction with nonclassical light \cite{paper360} (see below). For a general initial electron-photon (e-p) density matrix $\rho_{\rm e,p}^i$, the joint final state after interaction can be written as $\rho_{\rm e,p}^f=\hat{\mathcal{S}}\rho_{\rm e,p}^i\hat{\mathcal{S}}^\dagger$ in terms of the scattering operator $\hat{\mathcal{S}}$. If the electron is not measured, the resulting photonic density matrix is obtained through the partial trace over electron degrees of freedom, $\rho^{\rm no-meas}_{\rm p}={\rm Tr}_{\rm e}\{\rho^f_{\rm e,p}\}$. When the sample is initially prepared in its ground state, the diagonal elements of $\rho^{\rm no-meas}_{\rm p}$ define a Poissonian distribution, regardless of the incident electron wave function \cite{paper360}, while off-diagonal terms exhibit a pronounced dependence that can potentially be measured through optical interferometry \cite{paperarxiv3} and direct mixing of CL and laser light scattering \cite{paperarxiv6}. Incidentally, in the point-particle limit for the electron, the interaction is equivalent to excitation of the sample by a classical current, which is known to transform an initial coherent state ({\it e.g.}, the sample ground state) into another classical coherent state \cite{GL91} (the excited sample). In contrast, if the electron is measured ({\it i.e.}, only instances of the experiment with a given final electron state $|\qb\rangle$ are selected), the interaction-induced e-p entanglement leads to a wide set of optical density matrices $\rho^{\rm meas}_{\rm p}={\rm Tr}_{\rm e}\{|\qb\rangle\langle\qb|\rho^f_{\rm e,p}\}\neq \rho^{\rm no-meas}_{\rm p}$ that can be post-selected through the detection of a transmitted electron with, for example, a specific wave vector $\qb$; obviously, using more than one electron further increases the range of possible outcomes. Single-photon generation triggered by energy-momentum-resolved transfers from an electron to a waveguide constitutes a trivial example of this strategy \cite{paper180}. This approach has also been proposed to produce thermal, displaced Fock, displaced squeezed, and coherent sample states \cite{HRN21}.
\subsection{Manipulation of the Electron Density Matrix} If no measurement is performed on the sample, the interaction modifies the electron density matrix, which becomes $\rho^f_{\rm e}={\rm Tr}_{\rm p}\left\{\rho^f_{\rm e,p}\right\}$. For example, after PINEM interaction with laser light, we find (going to the Schr\"odinger picture) $\rho^f_{\rm e}(\rb,\rb',t)=\psi(\rb,t)\psi^*(\rb',t)$, where the wave function $\psi(\rb,t)$ (eq\ \ref{psiPINEM}) is controlled by a single coupling parameter $\beta$ (eq\ \ref{beta}). Also, the transformation of a general incident density matrix $\rho^i(\rb,\rb',t)$ is mediated by the factors defined in eq\ \ref{PPINEMd} as
\begin{align}
&\rho^f(\rb,\rb',t) \nonumber\\
&=\mathcal{P}_d[\beta(\Rb),\omega,z-vt]\;\mathcal{P}^*_d[\beta(\Rb'),\omega,z'-vt]\;\rho^i(\rb,\rb',t).
\nonumber
\end{align}
More complex forms of $\rho^f_{\rm e}$ are obtained when using nonclassical light. In this respect, recent advances in quantum light sources ({\it e.g.}, squeezed light generation \cite{AGM16}) provide a practical way to induce nonclassical sample states, which in turn modulate the electron density matrix through PINEM-like interaction \cite{paper360}. We illustrate this idea by showing in Figure\ \ref{Fig7}d the diagonal part of the density matrix ({\it i.e.}, the electron probability density) for both laser and nonclassical illumination. When the phase uncertainty in the light state is decreased (phase-squeezed and minimum-phase-uncertainty \cite{KK93} optical states), the electron density peaks are found to be more compressed in time, and in addition, because of conservation of the total probability, a complementary elongation takes place along the propagation direction. In contrast, the opposite trend is observed when using amplitude-squeezed light. In the limit of illumination with maximum phase uncertainty, such as Fock and thermal optical states, the electron does not undergo compression because there is no coherence among sample states of different energy \cite{paper360}.
If the length of the e-beam--specimen interaction region is sufficiently small that eq\ \ref{nonrecoil} can be assumed to hold during the passage of the electron, the real-space representations of the initial and final electron density matrices (before and after interaction) depend on time as $\rho^{i,f}(\rb-\vb t,\rb'-\vb t)$. Then, after linear interaction with a specimen prepared in the ground state, these quantities are related as
\begin{align}
&\rho^f(\rb-\vb t,\rb'-\vb t) \nonumber\\
&=\exp\big[K(\Rb,\Rb',z-z')\big]\;\rho^i(\rb-\vb t,\rb'-\vb t)
\nonumber
\end{align}
where
\begin{align}
&K(\Rb,\Rb',z-z')=\frac{2e^2}{\hbar} \nonumber\\
&\times\int_0^\infty d\omega\int_{-\infty}^\infty {\rm d}z''\int_{-\infty}^\infty {\rm d}z'''\;\ee^{{\rm i}\omega(z''-z''')/v}\nonumber\\
&\times\bigg[\ee^{-{\rm i}\omega(z-z')/v}
\;2\,{\rm Im}\big\{-G_{zz}(\Rb,z'',\Rb',z''',\omega)\big\} \nonumber\\
&\quad-{\rm i}\,G_{zz}(\Rb,z'',\Rb,z''',\omega)+{\rm i}\,G_{zz}^*(\Rb',z'',\Rb',z''',\omega)\bigg]
\nonumber
\end{align}
for $\vb$ along $z$. We have derived a linearized form of this expression ({\it i.e.}, with $\ee^K$ substituted by $1+K$) by assuming only time-reversal symmetry and the nonrecoil approximation, as a direct extension of the techniques used in the Appendix when proving eqs\ \ref{EELSQM} and \ref{Gammafi}. The full result (with $\ee^K$) was obtained elsewhere within a quantum-electrodynamics formalism \cite{paper357}. Reassuringly, we have $K(\Rb,\Rb,0)=0$, so the norm $\int d^3\rb\,\rho(\rb,\rb)=1$ is preserved. In addition, the property $K^*(\Rb,\Rb',z-z')=K(\Rb',\Rb,z'-z)$ guarantees the Hermiticity of the transformed density matrix. We note that the ${\rm Im}\{\dots\}$ term originates in inelastic scattering, while the remaining two terms are associated with elastic processes from the electron viewpoint, which are essential to conserve the norm.
For completeness, we note that, incorporating in eq\ \ref{nonrecoil} the lowest-order nonrecoil correction ({\it i.e.}, $\varepsilon_\qb-\varepsilon_{\qb'}\approx\vb\cdot(\qb-\qb')+(\hbar/2m_{\rm e}\gamma^3)\big(|\qb-\qb_0|^2-|\qb'-\qb_0|^2\big)$ with $\qb_0=m_{\rm e}\vb\gamma/\hbar$), free electron propagation over a distance $d$ transforms the density matrix as
\begin{align}
&\rho^f(\Rb,z,\Rb',z',t) \nonumber\\
&=\int_{-\infty}^\infty {\rm d}z''\int_{-\infty}^\infty {\rm d}z'''\;T(z-z'',z'-z''')\;\rho^i(\Rb,z'',\Rb',z''',t)
\nonumber
\end{align}
with $T(z,z')=(-{\rm i}\gamma^2q_0/2\pi d)\,\exp\big[({\rm i}\gamma^2q_0/2d)(z^2+z'^2)\big]$. In particular, this procedure readily yields eq\ \ref{PPINEM} from eq\ \ref{PPINEMd}.
\subsection{Nanoscale Sampling of the Nonlinear Optical Response} Electron beams potentially grant us access into the nonlinear response of materials with unprecedented nanoscale spatial resolution. Specifically, PINEM offers a possible platform to perform nonlinear nanoscale spectroscopy \cite{paper347} (Figure\ \ref{Fig7}e): under intense laser pulse irradiation, the sample can generate evanescent near fields not only at the fundamental frequency but also at its harmonics, which produce a departure from the gain-loss symmetry in the resulting EELS spectra. These types of asymmetries have already been demonstrated by performing PINEM with simultaneous $\omega$ and $2\omega$ external irradiation \cite{PRY17} ({\it i.e.}, through a combination of two PINEM interactions at such frequencies, as described by eq\ \ref{psiPINEM}, but with the $2\omega$ component now produced by external illumination having phase coherence relative to the $\omega$ laser field).
At lower kinetic energies, electrons produce an increasingly stronger perturbation on the sample, which has been speculated to eventually trigger a measurable nonlinear material response \cite{paper350}. The idea is that the electron acts as a relatively high-fluence optical pulse (Figure\ \ref{Fig7}f, left), so the resulting nonlinear field emanating from the sample could be traced through the shift in spectral features revealed by EELS or CL as the e-beam velocity or impact parameter are scanned (Figure\ \ref{Fig7}f, right).
In a related context, nanoscale ultrafast probing could eventually assist the exploration of quantum nonlinearities, such as those imprinted on bosonic cavity modes due to hybridization with two-level systems ({\it e.g.}, quantum emitters), which have been a recurrent subject of attention in recent years \cite{BHA05,DSF10,HCS15,paper176,paper339}.
\subsection{Optical Approach to Electron-Beam Aberration Correction} Advances in electron microscopy have been fueled by a sustained reduction in e-beam aberrations and energy spread. In particular, both aberration correction and lateral beam shaping rely on our ability to control the lateral electron wave function. This can be done with great precision using static microperforated plates, which, for example, enable the synthesis of highly chiral vortex electron beams \cite{VTS10,MAA11}. Dynamical control is, however, desirable for applications such as fast tracking of sample dynamics. Substantial progress in this direction is being made through the use of perforated plates with programmable potentials that add a position-dependent electric Aharonov-Bohm phase to the electron wave function \cite{VBM18}. In a separate development, intense laser fields have been used to optically imprint a ponderomotive phase on the electrons \cite{MJD10,SAC19,ACS20} ({\it i.e.}, as described by eq\ \ref{phase}). Combined with UTEM and structured illumination, one could use strong, spatially modulated lasers to imprint an on-demand transverse phase profile on the electron wave function in order to correct aberrations and customize the focal spot profile. This general approach has been theoretically explored through PINEM interaction for light reflected on a continuous thin foil \cite{paper351}, as well as by relying on the free-space ponderomotive elastic phase \cite{paperarxiv4}. A recent study also proposes the use of PINEM interactions with spectrally shaped light pulses to reduce e-beam energy spreading \cite{RK20}. These advances constitute promising directions to enhance our control over the wave function of free electrons for application in improved e-beam-based, spectrally resolved microscopies.
\subsection{Nanoscale Electron-beam Photon Sources} By interacting with material boundaries, the evanescent field carried by electrons is transformed into propagating CL light emission. This effect has been extensively exploited to produce efficient light sources \cite{LCS16,CTM20}, for example with the e-beam flying parallel to a grating surface (Smith-Purcell effect \cite{SP1953,paper027,paper252,paper273,RKT19}), where superradiance ({\it i.e.}, when the emission intensity scales quadratically with the e-beam current) has been demonstrated in the generation of THz radiation \cite{UGK98}. Electron wiggling caused by periodic structures is equally used in undulators at synchrotrons, while a nanoscale version of this effect has also been proposed \cite{WKI16}. A particularly challenging task is the production of X-ray photons with nanometer control, which recent studies have tackled following different strategies, such as through the simultaneous generation of polaritons in a nonlinear two-quanta emission process \cite{RWJ19}, or by an atomic-scale version of the Smith-Purcell effect using atomic planes in van der Waals materials as the periodic structure \cite{paper356}. Additionally, a quantum klystron has recently been proposed based on spatially modulated intense electron beams in a PINEM-related configuration followed by free-space propagation, giving rise to a periodic train of electron bunches that could trigger superradiance from two-level emitters \cite{RHS21}, in analogy to the intriguing Schwartz-Hora effect \cite{SH1969,FFK1971}, which modern technology could perhaps revisit.
\subsection{Towards Free-Space Nanoelectronics at Low Kinetic Energies} In nanophotonics, there is a plethora of photon sources that can be integrated in nanostructured environments to control the flow of light for information processing, sensing, and other applications. When using free electrons instead of photons, things become more complicated because of the unavailability of nanoscale sources. As a preliminary step to fill this gap, multiphoton photoemission amplified by strong plasmonic field enhancement at the nm-sized tips of metallic nanoparticles has been demonstrated to provide a localized source of free electrons that can be generated using relatively weak light intensities down to the continuous-wave limit \cite{paper310}. Free-space nanoelectronics, consisting of molding the flow of these electrons through nanostructured electric-potential and magnetic-field landscapes, thus emerges as an appealing research frontier with applications in micron-scale free-electron spectroscopy for sensing and detection devices.
In a parallel approach, electrical and magnetic manipulation of ballistic electrons has recently been achieved in graphene \cite{CHE16,LGR17,BHS17,BCS17} and other 2D materials \cite{BSB19}, sharing some of the properties of free electrons, including the possibility of generating single-electron wavepackets \cite{FSH13}. Based on these developments, we envision the implementation of photon-free spectroscopy performed within 2D material devices, whereby electrical generation and detection of inelastically scattered ballistic electrons provides spectral information on the surrounding environment. A recent exploration of this idea has resulted in the proposal of ultrasensitive chemical identification based on electrical detection of EELS-like vibrational fingerprints from analytes placed in the vicinity of a 2D semiconductor exposed to a nanostructured potential landscape that could be achieved using existing gating technology \cite{paper349}.
\section{Appendix}
\renewcommand{\thesection}{A}
\renewcommand{\theequation}{A\arabic{equation}}
\subsection{Expressing the EELS Probability in Terms of the Electromagnetic Green Tensor: First-Principles Derivation of Equation\ \ref{EELSQM}} We start from eq\ \ref{P0n} for the probability $\Gamma_n^0$ of exciting a mode $n$, which is in turn derived below. The spectrally resolved EELS probability is then given by
\begin{align}
\Gamma_{\rm EELS}(\omega)&=\sum_n\,\Gamma_n^0\,\delta(\omega-\omega_{n0}) \nonumber\\
&=\int d^3 \rb\,|\psi^0(\rb)|^2 \sum_n |\tilde{\beta}_n(\Rb)|^2 \delta(\omega -\omega_{n0}),
\label{PEELSintermediate}
\end{align}
where $\tilde{\beta}_n(\Rb)$ is defined in eq\ \ref{betan}. Starting from the Dirac equation, we derive an effective Schr\"odinger equation to describe the electron and its interaction with an external light field in the linearized-minimal-coupling and nonrecoil approximations (see details in ref\ \citenum{paper339}). The interaction Hamiltonian then reduces to
\begin{align}
\hat{\mathcal{H}}_1(\rb)=\frac{e\vb}{c}\cdot\hat{\Ab}(\rb),
\label{H1}
\end{align}
where $\hat\Ab(\rb)$ is the vector potential operator, using a gauge in which the scalar potential vanishes. Inserting eq\ \ref{H1} into eq\ \ref{betan}, and this in turn into eq\ \ref{PEELSintermediate}, we find
\begin{align}
&\Gamma_{\rm EELS}(\omega)=\frac{e^2}{\hbar^2c^2}\int d^3\rb|\psi^0(\rb)|^2 \label{EELSprefinal}\\
&\times\int_{-\infty}^\infty dz' \int_{-\infty}^\infty dz'' \ee^{{\rm i}\omega(z''-z')/v} \nonumber\\
&\times\sum_n \big\langle 0\big|\hat{A}_z(\Rb,z') \big|n\big\rangle\big\langle n\big|\hat{A}_z(\Rb,z'')\big|0\big\rangle\;\delta(\omega-\omega_{n0}),
\nonumber
\end{align}
where we have used the hermiticity of $\hat{\Ab}(\rb)$ and taken $\vb=v\zz$. This result can be expressed in terms of the electromagnetic Green tensor, implicitly defined in eq\ \ref{Green} for local media (and by an analogous relation when including nonlocal effects \cite{paper357}), by using the identity (see below)
\begin{align}
&\sum_n \big\langle 0\big|\hat{A}_z(\rb)|n\rangle\langle n|\hat{A}_z(\rb')\big|0\big\rangle \delta(\omega-\omega_{n0}) \nonumber\\
&=-4\hbar c^2\, {\rm Im}\{G_{zz}(\rb,\rb',\omega)\},
\label{AAG}
\end{align}
which is valid for reciprocal materials held at zero temperature, with $n=0$ referring to the sample ground state. Combining eqs\ \ref{EELSprefinal} and \ref{AAG}, we find
\begin{align}
&\Gamma_{\rm EELS}(\omega)=\frac{4e^2}{\hbar}\int d^3\rb|\psi^0(\rb)|^2 \nonumber\\
&\times\int_{-\infty}^\infty dz' \int_{-\infty}^\infty dz'' \cos\left[\omega(z''-z')/v\right]\nonumber\\
&\times{\rm Im}\{-G_{zz}(\Rb,z',\Rb,z'',\omega)\},
\nonumber
\end{align}
where we have transformed $\ee^{{\rm i}\omega(z''-z')/v}$ into a cosine function by exploiting the reciprocity relation $G_{zz}(\rb,\rb',\omega)=G_{zz}(\rb',\rb,\omega)$. Finally, eq\ \ref{EELSQM} is obtained by considering an electron wave function that is tightly confined around a lateral position $\Rb=\Rb_0$ ({\it i.e.}, for $\int_{-\infty}^\infty dz\,|\psi^0(\rb)|^2\approx\delta(\Rb-\Rb_0)$).
\subsection{Derivation of Equation\ \ref{AAG}} Starting with the definition of the retarded electromagnetic Green tensor in a gauge with zero scalar potential at zero temperature,
\begin{align}
G^{\rm R}_{aa'}({\bf r},{\bf r}',t-t')=-\frac{{\rm i}}{4\pi\hbar c^2}\langle0|[\hat{A}_a({\bf r},t),\hat{A}_{a'}({\bf r'},t')]|0\rangle \theta(t-t'),
\nonumber
\end{align}
where $a$ and $a'$ denote Cartesian components, whereas $\theta$ is the step function, we introduce a complete set of eigenstates $|n\rangle$ of the light+matter Hamiltonian $\hat{\mathcal{H}}_0$ ({\it i.e.}, $\hat{\mathcal{H}}_0|n\rangle=\hbar\omega_n|n\rangle$), use the relation $\hat{\Ab}({\bf r},t)=\ee^{{\rm i}\hat{\mathcal{H}}_0t/\hbar}\hat{\Ab}({\bf r})\ee^{-{\rm i}\hat{\mathcal{H}}_0t/\hbar}$ between operators in the Schr\"odinger and Heisenberg pictures, and apply the integral $\int_0^\infty dt \,\ee^{{\rm i} s t}={\rm i}/(s+{\rm i} 0^+)$ to write \cite{BT1962}
\begin{align}
&G^{\rm R}_{aa'}({\bf r},{\bf r}',\omega) \label{greenw} \\
&=\frac{1}{4\pi \hbar c^2}\int_0^\infty d\omega ' \left[\frac{J_{aa'}({\bf r},{\bf r}',\omega')}{\omega-\omega'+{\rm i}0^+}-\frac{J_{aa'}^*({\bf r},{\bf r}',\omega')}{\omega+\omega'+{\rm i}0^+}\right],\nonumber
\end{align}
where
\[J_{aa'}(\rb,\rb',\omega)=\sum_n \langle0| \hat{A}_a({\bf r})|n\rangle \langle n|\hat{A}_{a'}({\bf r}')|0\rangle \delta(\omega -\omega_{n0})\]
is the spectral tensor, $\omega_{n0}=\omega_n-\omega_0$, and $G^{\rm R}({\bf r},{\bf r}',\omega)=\int_{-\infty}^\infty dt\,\ee^{{\rm i}\omega t}\,G^{\rm R}({\bf r},{\bf r}',t)$. The electromagnetic Green tensor in eq\ \ref{greenw} can be shown \cite{AGD1965} to satisfy eq\ \ref{Green} ({\it i.e.}, we have $G^{\rm R}\equiv G$), provided the optical response of the system is assumed to be described by a local, frequency-dependent permittivity $\epsilon(\rb,\omega)$. Now, we introduce the quantum mechanical version of the time-reversal operator $\hat{\Theta}$. Under the assumption of time-reversal symmetry, we have $[\hat{\mathcal{H}}_0,\hat{\Theta}]=0$, and consequently, $\hat{\mathcal{H}}_0|\hat{\Theta} n\rangle=\hbar \omega_n|\hat{\Theta} n\rangle$. Furthermore, assuming a non-degenerate ground state $|0\rangle$, it must obviously satisfy $|\hat{\Theta}0\rangle =|0\rangle$, and therefore, because the time-reversed eigenstates form a complete basis set with the same energies, we can rewrite the spectral tensor as
\begin{align}
&J_{aa'}(\rb,\rb',\omega) \nonumber\\
&=\sum_n\langle\hat{\Theta}0|\hat{A}_a({\bf r})|\hat{\Theta}n\rangle\langle \hat{\Theta}n|\hat{A}_{a'}({\bf r}')|\hat{\Theta}0\rangle\delta(\omega-\omega_{n0}).
\nonumber
\end{align}
Then, using the relation \cite{S1994} $\langle n|\hat{O}|n'\rangle^*=\pm\langle \hat{\Theta} n|\hat{O}|\hat{\Theta} n'\rangle$, which is valid for any Hermitian operator $\hat{O}$ ({\it e.g.}, with $-$ for $\hat{O}=\hat{\Ab}$), we find that $J(\rb,\rb',\omega)=J^*(\rb,\rb',\omega)$ is real. Finally, taking the imaginary part of eq\ \ref{greenw} and using the above property of $J$, together with $1/(s+{\rm i} 0^+)=P[1/s]-{\rm i} \pi \delta(s)$, we obtain $J_{aa'}(\rb,\rb',\omega)=-4\hbar c^2\,{\rm Im}\left\{G_{aa'}({\bf r},{\bf r}',\omega)\right\}$, which reduces to eq\ \ref{AAG} for $a=a'=z$.
\subsection{Inelastic Electron Scattering at Finite Temperature: Derivation of Equation\ \ref{EELST}} The large kinetic energy of beam electrons allows us to safely distinguish them from other electrons in the sample. A free electron initially prepared in state $\qb$ can experience transitions to final states $\qb'$ accompanied by excitations $i$ in the sample. The most general Hamiltonian that describes this interaction, assuming linear coupling to the sample and neglecting electron spin-flips, can be written as
\begin{align}
\hat{\mathcal{H}}_1=\sum_{i\qb\qb'}c^\dagger_{\qb'}c_\qb\,\left(V_{i\qb\qb'}a_i+V_{i\qb'\qb}^*a^\dagger_i\right),
\nonumber
\end{align}
where $a_i$ and $c_\qb$ ($a^\dagger_i$ and $c^\dagger_\qb$) annihilate (create) an excitation $i$ and an electron in state $\qb$, respectively. The label $i$ runs over all possible modes in the system, including plasmons, excitons, phonons, and photons in the radiation field. The details of the interaction are fully contained in the coupling coefficients $V_{i\qb\qb'}$. Within the linear response approximation, and assuming the sample to be initially prepared in thermal equilibrium at temperature $T$, we can write the transition rate between $\qb$ and $\qb'$ electron states using the Fermi golden rule as
\begin{align}
&P_{\qb'\qb}=\frac{2\pi}{\hbar^2}\frac{1}{Z}\sum_{\{n'_i\}}\sum_{\{n_i\}}\exp\left(-\sum_i n_i\,\frac{\omega_i}{\omega_T}\right) \nonumber\\
&\times\left|\left\langle \qb',\{n'_i\}|\hat{\mathcal{H}}_1|\qb,\{n_i\}\right\rangle\right|^2\;\delta\big[\varepsilon_{\qb'}-\varepsilon_\qb+\sum_i(n'_i-n_i)\omega_i\big],
\nonumber
\end{align}
where $\omega_T=\kB T/\hbar$ is the thermal frequency, $\{n_i\}$ describes the initial state of the system through the occupation numbers $n_i$ of modes $i$ having energies $\hbar\omega_i$; the sum in $\{n'_i\}$ runs over all possible final occupations; we introduce the partition function $Z\equiv\sum_{\{n_i\}}\exp(-\sum_in_i\omega_i/\omega_T)=\prod_i\sum_{n_i}\ee^{-n_i\omega_i/\omega_T}$, which allows us to weight each initial configuration $\{n_i\}$ by $Z^{-1}\exp(-\sum_in_i\omega_i/\omega_T)$ (its statistical probability at temperature $T$); and the electron initial and final energies are denoted $\hbar\varepsilon_\qb$ and $\hbar\varepsilon_{\qb'}$, respectively. Now, given the linear dependence of $\hat{\mathcal{H}}_1$ on the operators $a_i$ and $a^\dagger_i$, the initial and final occupation numbers within each term of the sum in $P_{\qb'\qb}$ must differ only for a single $i$, with $n'_i=n_i\pm1$. We can factor out all other $i$'s and separate the rate in energy loss ($n'_i=n_i+1$) and gain ($n'_i=n_i-1$) contributions to write $P_{\qb'\qb}=\int_0^\infty d\omega\,P_{\qb'\qb}(\omega)$, where
\begin{align}
P_{\qb'\qb}(\omega)=N^+(\omega)P_{\qb'\qb,0}^+(\omega)+N^-(\omega)P_{\qb'\qb,0}^-(\omega)
\label{PP}
\end{align}
is the spectrally resolved transition rate,
\begin{align}
P_{\qb'\qb,0}^\pm(\omega)=\frac{2\pi}{\hbar^2}\sum_i|V_{i\qb'\qb}|^2\;\delta(\omega-\omega_i)\;\delta\left[(\qb'-\qb)\cdot\vb\pm\omega\right]
\label{PP0}
\end{align}
are temperature-independent loss (+) and gain (-) rates, and
\[N^\pm(\omega)=\frac{\sum_{n_i}\ee^{-n_i\omega/\omega_T}\,\left|\left\langle n_i\pm1|a^\dagger_i+a_i|n_i\right\rangle\right|^2}{\sum_{n_i}\ee^{-n_i\omega/\omega_T}}.\]
In the derivation of these expressions, we have adopted the nonrecoil approximation for the electron energy difference (eq\ \ref{nonrecoil}) and assumed the condition $\sum_{i}|V_{i\qb\qb'}|^2\delta(\omega-\omega_i)=\sum_{i}|V_{i\qb'\qb}|^2\delta(\omega-\omega_i)$ for each partial sum restricted to degenerate modes $i$ sharing a common frequency $\omega$. This condition, which is satisfied in reciprocal media, also renders $\sum_{\qb'}P_{\qb'\qb,0}^-(\omega)=\sum_{\qb'}P_{\qb'\qb,0}^+(\omega)$ after summing over final states $\qb'$. Finally, we obtain the EELS probability $\Gamma_{\rm EELS}$ by dividing the rates $P$ by the electron current.
For bosonic excitations ({\it e.g.}, photons, phonons, and plasmons), we have $|\langle n_i+1|a^\dagger_i+a_i|n_i\rangle|^2=n_i+1$ and $|\langle n_i-1|a^\dagger_i+a_i|n_i\rangle|^2=n_i$, which allow us to carry out the $n_i$ sums to find $N^+(\omega)=n_T(\omega)+1$ and $N^-(\omega)=n_T(\omega)$, where
\begin{align}
n_T(\omega)=\frac{1}{\ee^{\omega/\omega_T}-1}
\label{BE}
\end{align}
is the Bose-Einstein distribution function. Using these elements in combination with eqs\ \ref{PP} and \ref{PP0}, we directly obtain eq\ \ref{EELST} for the relation between the finite- and zero-temperature EELS probabilities.
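As a quick numerical sanity check of these thermal averages (a minimal Python sketch, with an arbitrarily chosen ratio $\omega/\omega_T$; it is not part of the derivation), one can verify that the truncated Boltzmann-weighted sums reproduce $N^+(\omega)=n_T(\omega)+1$ and $N^-(\omega)=n_T(\omega)$:
\begin{verbatim}
import numpy as np

ratio = 0.7                                # assumed value of omega / omega_T
x = np.exp(-ratio)                         # Boltzmann factor per quantum
n = np.arange(400)                         # occupations (sum truncated at 400)
w = x**n                                   # thermal weights exp(-n omega/omega_T)

N_plus = np.sum(w * (n + 1)) / np.sum(w)   # loss channel average
N_minus = np.sum(w * n) / np.sum(w)        # gain channel average
n_T = 1.0 / (np.exp(ratio) - 1.0)          # Bose-Einstein distribution, eq (BE)

assert np.isclose(N_plus, n_T + 1.0)
assert np.isclose(N_minus, n_T)
\end{verbatim}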
For fermionic excitations ({\it e.g.}, two-level atoms), $n_i$ can take the values 0 or 1, so we have instead $N^+(\omega)=1-n_T^{\rm F}(\omega)$ and $N^-(\omega)=n_T^{\rm F}(\omega)$, where $n_T^{\rm F}(\omega)=1/(\ee^{\omega/\omega_T}+1)$ is the Fermi-Dirac distribution function for zero chemical potential. The loss and gain probabilities are then given by eq\ \ref{EELST}, but with $n_T(\omega)$ substituted by $-n_T^{\rm F}(\omega)$.
\subsection{Derivation of Equation\ \ref{Gammafi}} For a free electron prepared in an initial monochromatic state $\psi_i(\rb)\ee^{-{\rm i}\varepsilon_it}$ of energy $\hbar\varepsilon_i$, the inelastic scattering probability can be decomposed in contributions arising from transitions to specific final states $\psi_f(\rb)\ee^{-{\rm i}\varepsilon_ft}$. Working within first-order perturbation theory, we consider the transition matrix element
\[\langle fn|\hat{\mathcal{H}}_1|i0\rangle=\frac{ev}{c}\int d^3\rb\;\psi_f^*(\rb)\psi_i(\rb)\,\left\langle n\left|\hat{A}_z(\rb)\right|0\right\rangle\]
for electron-sample transitions driven by the interaction Hamiltonian in eq\ \ref{H1}. From here, Fermi's golden rule yields the transition probability $\Gamma_{fi}=\int_0^\infty d\omega\,\Gamma_{fi}(\omega)$ with
\begin{align}
\Gamma_{fi}(\omega)=\frac{2\pi e^2vL}{\hbar^2c^2}&\sum_n\left|\int d^3\rb\;\psi_f^*(\rb)\psi_i(\rb)\,\left\langle n\left|\hat{A}_z(\rb)\right|0\right\rangle\right|^2 \nonumber\\
&\times\delta(\omega-\omega_{n0})\,\delta(\varepsilon_f-\varepsilon_i+\omega),
\label{gfi}
\end{align}
where $L$ is the quantization length along the e-beam direction and we have multiplied by the interaction time $L/v$ to transform the rate into a probability. Incidentally, this quantity is related to the EELS probability through $\Gamma_{\rm EELS}(\omega)=\sum_f\Gamma_{fi}(\omega)$. Now, expanding the squared modulus in eq\ \ref{gfi} and using eq\ \ref{AAG}, we find
\begin{align}
\Gamma_{fi}(\omega)=\frac{8\pi e^2vL}{\hbar}&\int d^3\rb\int d^3\rb'\;\psi_f(\rb)\psi_i^*(\rb)\psi_f^*(\rb')\psi_i(\rb') \nonumber\\
&\times{\rm Im}\{-G_{zz}(\rb,\rb',\omega)\}
\,\delta(\varepsilon_f-\varepsilon_i+\omega).
\label{Gammafi3D}
\end{align}
Finally, eq\ \ref{Gammafi} is derived from eq\ \ref{Gammafi3D} by factorizing the incident and final electron wave functions as $\psi_{i|f}(\rb)\propto\psi_{i|f\perp}(\Rb)\,\ee^{{\rm i} q_{i|f,z}z}/\sqrt{L}$, summing over final longitudinal wave vectors by means of the prescription $\sum_{q_{f,z}}\rightarrow(L/2\pi)\int_{-\infty}^\infty dq_{f,z}$, and using the $\delta$ function in combination with the nonrecoil approximation $\varepsilon_f-\varepsilon_i\approx(q_{f,z}-q_{i,z})\,v$ (eq\ \ref{nonrecoil}).
\subsection{Derivation of Equation\ \ref{P0n}} We calculate the excitation probability of a sample mode $n$ by tracing out over all final electron states as
\begin{align}
&\Gamma_n^0=\sum_\qb|\alpha_{\qb n}(\infty)|^2 \nonumber\\
&=\frac{(2\pi)^2}{\hbar^2}\sum_\qb\left|\sum_{\qb'}\delta(\varepsilon_\qb-\varepsilon_{\qb'}+\omega_{n0})\,\langle\qb n|\hat{\mathcal{H}}_1|\qb'0\rangle\,\alpha^0_{\qb'}\right|^2,
\label{S1}
\end{align}
where the rightmost expression is obtained by using eq\ \ref{alphainfinity}. We now apply the prescription $\sum_\qb\rightarrow V\int d^3\qb/(2\pi)^3$ to convert electron wave vector sums into integrals, adopt the nonrecoil approximation (eq\ \ref{nonrecoil}), and express the electron part of the matrix element in eq\ \ref{S1} as a real-space integral, using the representation $\langle\rb|\qb\rangle=V^{-1/2}\,\ee^{{\rm i}\qb\cdot\rb}$ for the electron momentum states. Then, taking the electron velocity vector $\vb$ along $\zz$, we obtain $\langle\qb n|\hat{\mathcal{H}}_1|\qb'0\rangle=V^{-1}\int d^3\rb\;\ee^{{\rm i}(\qb'-\qb)\cdot\rb}\,\langle n|\hat{\mathcal{H}}_1(\rb)|0\rangle$, and from here
\begin{align}
\Gamma_n^0=\frac{V}{(2\pi)^7\hbar^2v^2}&\int d^3\qb\;\bigg|\int d^3\qb'\,\delta(q_z-q'_z+\omega_{n0}/v) \nonumber\\
&\times\int d^3\rb\;\ee^{{\rm i}(\qb'-\qb)\cdot\rb}\,\langle n|\hat{\mathcal{H}}_1(\rb)|0\rangle\,\alpha^0_{\qb'}\bigg|^2.
\nonumber
\end{align}
We can use the $\delta$ function to perform the $q'_z$ integral and then change the integration variable from $q_z$ to $q_z+\omega_{n0}/v$, so $\Gamma_n^0$ becomes
\begin{align}
\Gamma_n^0=\frac{V}{(2\pi)^7\hbar^2v^2}&\int d^3\qb\bigg|\int d^2\Rb\;\ee^{-{\rm i}\qb_\perp\cdot\Rb}\int d^2\qb'_\perp \,\alpha^0_{(\qb'_\perp,q_z)} \nonumber\\
&\times\ee^{{\rm i}\qb'_\perp\cdot\Rb}\int_{-\infty}^\infty dz\;\ee^{{\rm i}\omega_{n0}z/v}\,\langle n|\hat{\mathcal{H}}_1(\rb)|0\rangle\bigg|^2,
\nonumber
\end{align}
where we adopt the notation $\rb=(\Rb,z)$ and $\qb=(\qb_\perp,q_z)$ with $\Rb$ and $\qb_\perp$ standing for real-space and wave-vector coordinate components in the plane perpendicular to the beam direction. This expression can be simplified using the relation $\int d^2\qb_\perp \left|\int d^2\Rb\,\ee^{-{\rm i}\qb_\perp\cdot\Rb}f(\Rb)\right|^2=(2\pi)^2\int d^2\Rb\,|f(\Rb)|^2$ and then changing $\qb'_\perp$ to $\qb_\perp$ to obtain
\begin{align}
\Gamma_n^0=\frac{V}{(2\pi)^5\hbar^2v^2}&\int d^2\Rb\left[\int_{-\infty}^\infty dq_z\left|\int d^2\qb_\perp \,\alpha^0_\qb\,\ee^{{\rm i}\qb_\perp\cdot\Rb}\right|^2\right]\nonumber\\
&\times\left[\left|\int_{-\infty}^\infty dz\;\ee^{{\rm i}\omega_{n0}z/v}\,\langle n|\hat{\mathcal{H}}_1(\rb)|0\rangle\right|^2\right].
\nonumber
\end{align}
Finally, using the identity $\int_{-\infty}^\infty dq_z\left|\int d^2\qb_\perp \,\alpha^0_\qb\,\ee^{{\rm i}\qb_\perp\cdot\Rb}\right|^2
=(2\pi)^{-1}\int_{-\infty}^\infty dz\left|\int d^3\qb \,\alpha^0_\qb\,\ee^{{\rm i}\qb\cdot\rb}\right|^2$, we find the result
\begin{align}
\Gamma_n^0=&\int d^3\rb\left[\left|V^{1/2}\int\frac{d^3\qb}{(2\pi)^3}\,\alpha^0_\qb\,\ee^{{\rm i}\qb\cdot\rb}\right|^2\right]\nonumber\\
&\times\left[\left|\frac{1}{\hbar v}\int_{-\infty}^\infty dz\;\ee^{-{\rm i}\omega_{n0}z/v}\,\langle 0|\hat{\mathcal{H}}_1(\rb)|n\rangle\right|^2\right],
\label{Pn0final}
\end{align}
which reduces to eq\ \ref{P0n} with $\psi^0(\rb)$ and $\tilde{\beta}_n(\Rb)$ defined by eqs\ \ref{psi0} and \ref{betan}.
\subsection{Derivation of Equations\ \ref{PNn}-\ref{Mnj}} A direct extension of the general formalism used in the previous paragraph allows us to deal with $N$ free independent electrons prepared in initial states (before interaction with the sample) described by their wave function coefficients $\alpha_\qb^j$ with $j=0,\dots,N-1$. The wave function of the combined system formed by the sample and the electrons can be written as
\begin{align}
|\psi(t)\rangle=\sum_{\{\qb\} n}\alpha_{\{\qb\} n}(t)\ee^{-{\rm i}\left(\sum_j\varepsilon_{\qb_j}+\omega_{n0}\right)t}|\{\qb\}n\rangle,
\nonumber
\end{align}
where $\{\qb\}$ denotes the ensemble of wave vectors $\qb_j$. Given the large size of the electron configuration space in a microscope, we consider that it is safe to disregard spin degrees of freedom and the Pauli exclusion principle ({\it i.e.}, we consider distinguishable electrons). We further neglect electron-electron Coulomb interaction in the beam. Additionally, we work in the weak coupling regime, under the assumption that the sample is excited once at most by the passage of the $N$ electrons, which is a good approximation for $N\ll 1/\Gamma_n^0$ (we note that typical excitation probabilities are $\Gamma_n^0\lesssim10^{-5}$ per electron for single sample modes $n$). This allows us to integrate the Schr\"odinger equation to find the wave function coefficients after interaction as a generalization of eq\ \ref{alphainfinity}:
\begin{align}
\alpha_{\{\qb\}n}(\infty)=-\frac{2\pi{\rm i}}{\hbar}&\sum_{\{\qb'\}}\delta\left(\omega_{n0}+{\sum}_j\varepsilon_{\qb_j\qb'_j}\right)\nonumber\\
&\times\langle\{\qb\}n|\hat{\mathcal{H}}_1|\{\qb'\}0\rangle\prod_j\alpha^j_{\qb'_j},
\label{alphanN}
\end{align}
where $\varepsilon_{\qb_j\qb'_j}=\varepsilon_{\qb_j}-\varepsilon_{\qb'_j}$. Now, each of the terms in the real-space representation of the interaction Hamiltonian $\hat{\mathcal{H}}_1({\rb})=\sum_j\hat{\mathcal{H}}_1(\rb_j)$ depends on just one of the electron coordinates, and thus, because of the orthogonality of the electron momentum states, $\{\qb\}$ and $\{\qb'\}$ in eq\ \ref{alphanN} differ by no more than one of the electron wave vectors. This allows us to recast eq\ \ref{alphanN} as
\begin{align}
\alpha_{\{\qb\}n}(\infty)=-\frac{2\pi{\rm i}}{\hbar}&\left({\prod}_j\alpha^j_{\qb_j}\right)\sum_{j}\sum_{\qb'_j}\delta\left(\omega_{n0}+\varepsilon_{\qb_j\qb'_j}\right)\nonumber\\
&\times\langle\qb_jn|\hat{\mathcal{H}}_1|\qb'_j0\rangle\left(\alpha^j_{\qb'_j}/\alpha^j_{\qb_j}\right).
\label{alphanNbis}
\end{align}
The excitation probability of sample mode $n$ is obtained by tracing out the final electron states as
\begin{align}
\Gamma_n^{\rm total}=\sum_{\{\qb\}}\left|\alpha_{\{\qb\}n}(\infty)\right|^2,
\label{PnN0}
\end{align}
which, in combination with eq\ \ref{alphanNbis} and the normalization condition of the initial states $\sum_\qb\left|\alpha^j_\qb\right|^2=1$, leads to (eq\ \ref{PNn})
\begin{align}
\Gamma_n^{\rm total}=\sum_j \Gamma_n^j + \sum_{j\neq j'} Q_n^jQ_n^{j'*},
\label{Pntotal}
\end{align}
where
\begin{align}
\Gamma_n^j=\frac{(2\pi)^2}{\hbar^2}\sum_{\qb_j}\left|\sum_{\qb'_j}\delta\left(\varepsilon_{\qb_j\qb'_j}+\omega_{n0}\right)\langle\qb_jn|\hat{\mathcal{H}}_1|\qb'_j0\rangle\,\alpha^j_{\qb'_j}\right|^2
\label{Pnj}
\end{align}
and
\begin{align}
Q_n^j=\frac{2\pi}{\hbar}\sum_{\qb_j\qb'_j}\delta\left(\varepsilon_{\qb_j\qb'_j}+\omega_{n0}\right)\langle\qb'_j0|\hat{\mathcal{H}}_1|\qb_jn\rangle\,\alpha^{j*}_{\qb'_j}\alpha^j_{\qb_j}.
\label{Pnjj}
\end{align}
Noticing that eq\ \ref{Pnj} is just like eq\ \ref{S1} with $\alpha^0_{\qb'}$ substituted by $\alpha^{j}_{\qb'_j}$, we can write from eq\ \ref{Pn0final}
\begin{align}
\Gamma_n^j=\int d^3\rb\;|\psi^j(\rb)|^2 |\tilde{\beta}_n(\Rb)|^2
\label{Pnjfinal}
\end{align}
with
\begin{align}
\psi^j(\rb)=V^{1/2}\int \frac{d^3\qb}{(2\pi)^3}\,\alpha^j_\qb\,\ee^{{\rm i}\qb\cdot\rb}.
\label{psij}
\end{align}
Now, using the nonrecoil approximation $\varepsilon_{\qb_j\qb'_j}=v(q_{jz}-q'_{jz})$, transforming wave vector sums into integrals, expressing matrix elements as real-space integrals, and proceeding in a similar way as in the derivation of eq\ \ref{Pn0final}, we can rearrange eq\ \ref{Pnjj} as
\begin{align}
Q_n^j=\frac{V}{(2\pi)^5\hbar v}&\int d^3\qb_j\int d^3\qb'_j\int d^2\Rb\;\ee^{{\rm i}(\qb_{j\perp}-\qb'_{j\perp})\cdot\Rb}\nonumber\\
&\times\alpha^{j*}_{\qb'_j}\alpha^j_{\qb_j} \; \delta(q_{jz}-q'_{jz}+\omega_{n0}/v) \nonumber\\
&\times\int_{-\infty}^\infty dz\;\ee^{-{\rm i}\omega_{n0}z/v}\langle0|\hat{\mathcal{H}}_1(\rb)|n\rangle \label{Qnj}
\end{align}
which reduces to eq\ \ref{Qj} with $\tilde{\beta}_n(\Rb)$ defined in eq\ \ref{betan}, whereas
\begin{align}
M_n^j(\Rb)&=\frac{V}{(2\pi)^{5}}\int d^3\qb_j\int d^3\qb'_j\;\ee^{{\rm i}(\qb_{j\perp}-\qb'_{j\perp})\cdot\Rb}\nonumber\\
&\quad\quad\quad\times\alpha^j_{\qb_j}\alpha^{j*}_{\qb'_j}\;\delta(q_{jz}-q'_{jz}+\omega_{n0}/v) \nonumber\\
&=\frac{V}{(2\pi)^{5}}\int d^3\qb_j\int d^3\qb'_j\;\ee^{{\rm i}(\qb_{j\perp}-\qb'_{j\perp})\cdot\Rb}\nonumber\\
&\quad\quad\quad\times\alpha^j_{\qb_j}\alpha^{j*}_{\qb'_j}\;\frac{1}{2\pi}\int_{-\infty}^\infty dz \;\ee^{{\rm i}(q_{jz}-q'_{jz}+\omega_{n0}/v)z} \nonumber\\
&=\int_{-\infty}^\infty dz \;\ee^{{\rm i}\omega_{n0}z/v} \bigg[V^{1/2}\int \frac{d^3\qb_j}{(2\pi)^3}\,\alpha^j_{\qb_j}\;\ee^{{\rm i}\qb_j\cdot\rb}\bigg]\nonumber\\
&\quad\quad\quad\quad\quad\times\bigg[V^{1/2}\int \frac{d^3\qb'_j}{(2\pi)^3}\,\alpha^{j*}_{\qb'_j}\;\ee^{-{\rm i}\qb'_j\cdot\rb}\bigg] \nonumber\\
&=\int_{-\infty}^\infty dz \;\ee^{{\rm i}\omega_{n0}z/v}\;|\psi^j(\rb)|^2
\nonumber
\end{align}
becomes eq\ \ref{Mnj}, the Fourier transform of the electron probability density in the incident electron wave function $j$.
\subsection{Derivation of Equations\ \ref{Pnlocal1} and \ref{Pnlocal2}} We consider electron wave functions constructed in terms of normalized Gaussian wavepackets of the form $\psi_G(\rb)=\psi_\perp(\Rb)\,\ee^{-z^2/2\Delta^2}/\pi^{1/4}\Delta^{1/2}$, where we factorize the transverse dependence in $\psi_\perp(\Rb)$. For simplicity, we approximate $|\psi_\perp(\Rb)|^2\approx\delta(\Rb)$ under the assumption that the transverse width $w$ is small compared with the characteristic length of variation of the electric field associated with the excited mode $n$, or equivalently, $|\nabla_\Rb\tilde{\beta}_n(\Rb)|\ll1/w$. The configurations discussed in Figures\ \ref{Fig5} and \ref{Fig6} involve electron wave functions of the general form
\begin{align}
\psi^j(\rb)=N_j^{-1}\sum_s\gamma_s^j\psi_G(\rb-\rb_s),
\label{psi11}
\end{align}
where we assume the same longitudinal wavepacket width $\Delta$ for all components, and $N_j=\big(\sum_{ss'}\gamma_s^j\gamma_{s'}^{j*}I_{ss'}\big)^{1/2}$ is a normalization constant that depends on the overlap integrals
\begin{align}
I_{ss'}=\left\{\begin{matrix}
\ee^{-(z_s-z_{s'})^2/4\Delta^2}, & \quad\quad\quad\text{if $\Rb_s=\Rb_{s'}$,} \\ \!\!\!0, & \quad\quad\quad\text{otherwise.}\end{matrix}\right.
\nonumber
\end{align}
Plugging eq\ \ref{psi11} into eqs\ \ref{Pnjfinal} and \ref{Qj}, we readily find
\begin{subequations}
\label{PandQ}
\begin{align}
\Gamma_n^j&=\frac{\sum_{ss'}\gamma_s^j\gamma_{s'}^{j*}I_{ss'}\,\left|\tilde{\beta}_n(\Rb_s)\right|^2}{\sum_{ss'}\gamma_s^j\gamma_{s'}^{j*}I_{ss'}} \nonumber\\
&\approx \frac{\sum_{s}|\gamma_s^j|^2\,\left|\tilde{\beta}_n(\Rb_s)\right|^2}{\sum_{s}|\gamma_s^j|^2},
\label{Pnjsup}\\
Q_n^j&=\sqrt{S}\;\frac{\sum_{ss'}\gamma_s^j\gamma_{s'}^{j*}I_{ss'}\,\ee^{{\rm i}\omega_{n0}(z_s+z_{s'})/2v}\,\tilde{\beta}_n(\Rb_s)}{\sum_{ss'}\gamma_s^j\gamma_{s'}^{j*}I_{ss'}} \nonumber\\
&\approx\sqrt{S}\;\frac{\sum_{s}|\gamma_s^j|^2\,\ee^{{\rm i}\omega_{n0}z_s/v}\,\tilde{\beta}_n(\Rb_s)}{\sum_{s}|\gamma_s^j|^2},
\label{Qnjsup}
\end{align}
\end{subequations}
where
\begin{align}
S
=\ee^{-\omega_{n0}^2\Delta^2/2v^2}.
\nonumber
\end{align}
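The factor $S$ can be cross-checked numerically: for a single Gaussian wavepacket, the Fourier transform of the longitudinal probability density (cf.\ eq\ \ref{Mnj}) satisfies $|M|^2=S$. The following minimal Python sketch (arbitrary illustrative values for $\Delta$, $v$, and $\omega_{n0}$; it is not part of the derivation) verifies this identity by direct quadrature:
\begin{verbatim}
import numpy as np

Delta, v, omega = 1.0, 1.0, 1.3          # arbitrary illustrative values
z = np.linspace(-12.0, 12.0, 20001)
rho = np.exp(-z**2 / Delta**2) / (np.sqrt(np.pi) * Delta)  # |psi_G(z)|^2
M = np.trapz(np.exp(1j * omega * z / v) * rho, z)          # eq (Mnj), one packet
S = np.exp(-omega**2 * Delta**2 / (2.0 * v**2))

assert np.isclose(abs(M)**2, S)
\end{verbatim}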
The rightmost approximations in eqs\ \ref{PandQ} correspond to the nonoverlapping wavepacket limit ({\it i.e.}, $|z_s-z_{s'}|\gg\Delta$ for $s\neq s'$ and $\Rb_s=\Rb_{s'}$), which yields $I_{ss'}=\delta_{s,s'}$. Now, we adopt this limit and specify eqs\ \ref{Pntotal} and \ref{PandQ} for the beams studied in Figures\ \ref{Fig5} and \ref{Fig6}:
\begin{itemize}
\item {\bf Figure\ \ref{Fig5}b.}
(1) We consider two Gaussian wavepackets $s=0,1$ with longitudinal coordinates $z_0=0$ and $z_1=a$, where $a\gg\Delta$ is the wavepacket separation, and the same lateral coordinates $\Rb_s=\bb$, so $\tilde{\beta}_n(\Rb_s)=\tilde{\beta}_n(\bb)$ is independent of $s$ and factors out in eqs\ \ref{PandQ}; in particular, eq\ \ref{Pnjsup} reduces to $\Gamma_n^j=\left|\tilde{\beta}_n(\bb)\right|^2$.
(2) For two electrons $j=0,1$, each of them fully contained in one of the two wavepackets, we have $|\gamma_s^j|^2=\delta_{s,j}$, so eq\ \ref{Qnjsup} gives $Q_n^0=\sqrt{S}\tilde{\beta}_n(\bb)$ and $Q_n^1=\sqrt{S}\tilde{\beta}_n(\bb)\ee^{{\rm i}\omega_{n0}a/v}$; inserting these expressions in eq\ \ref{Pntotal}, we find $\Gamma_n^{\rm total}=2\left|\tilde{\beta}_n(\bb)\right|^2\left[1+S\,\cos(\omega_{n0}a/v)\right]$ ({\it i.e.}, eq\ \ref{Pnlocal1} with the $+$ sign); incidentally, this result remains unchanged even when the wavepackets overlap, and a numerical sanity check of this interference formula is sketched right after this list.
(3) If each of the two electrons is equally shared among the two wavepackets, we have $|\gamma_s^j|^2=1/2$; evaluating eq\ \ref{Qnjsup} with these coefficients, we find $Q_n^0=Q_n^1=\sqrt{S}\tilde{\beta}_n(\bb)\left(1+\ee^{{\rm i}\omega_{n0}a/v}\right)/2$, which together with eq\ \ref{Pntotal} leads to the result $\Gamma_n^{\rm total}=2\left|\tilde{\beta}_n(\bb)\right|^2\,\left[1+S\cos^2(\omega_{n0}a/2v)\right]$ ({\it i.e.}, eq\ \ref{Pnlocal2}).
\item {\bf Figure\ \ref{Fig5}c.}
(1) We consider two wavepackets $s=0,1$ with $\Rb_0=-\Rb_1=\bb$, $z_0=0$, and $z_1=a$; because $\left|\tilde{\beta}_n(\Rb_s)\right|$ is also independent of $s$ (see below), we can factor it out in eq\ \ref{Pnjsup}, thus leading again to $\Gamma_n^j=\left|\tilde{\beta}_n(\bb)\right|^2$.
(2) To describe two electrons, each of them separated in different wavepackets, we take $|\gamma_s^j|^2=\delta_{s,j}$, so eq\ \ref{Qnjsup} yields $Q_n^0=\sqrt{S}\tilde{\beta}_n(\bb)$ and $Q_n^1=-\sqrt{S}\tilde{\beta}_n(\bb)\ee^{{\rm i}\omega_{n0}a/v}$, where we have used the property $\tilde{\beta}_n(-\bb)=-\tilde{\beta}_n(\bb)$ for the coefficient of coupling to an excitation with the transition dipole oriented as shown in Figure\ \ref{Fig5}; we thus find from eq\ \ref{Pntotal} the result $\Gamma_n^{\rm total}=2\left|\tilde{\beta}_n(\bb)\right|^2\,\left[1-S\,\cos(\omega_{n0}a/v)\right]$ ({\it i.e.}, eq\ \ref{Pnlocal1} with the $-$ sign).
(3) Proceeding as above for the configuration in which each of the two electrons is equally shared among the two wavepackets, we find $Q_n^0=Q_n^1=\sqrt{S}\tilde{\beta}_n(\bb)\left(1-\ee^{{\rm i}\omega_{n0}a/v}\right)/2$, which now results in $\Gamma_n^{\rm total}=2\left|\tilde{\beta}_n(\bb)\right|^2\,\left[1+S\sin^2(\omega_{n0}a/2v)\right]$ ({\it i.e.}, eq\ \ref{Pnlocal2} with cos replaced by sin).
\item {\bf Figure\ \ref{Fig6}.} In this configuration, the coupling coefficient has the same spatial periodicity as the excited mode ({\it i.e.}, $\tilde{\beta}_n(\Rb_s)=\tilde{\beta}_n(0)\,\ee^{{\rm i}\kb_{n\parallel}\cdot\Rb_s}$ picks up the mode propagation phase at the region of electron-sample interaction). With the same choice of wave function coefficients as in the above analysis of Figure\ \ref{Fig5}c, and considering a lateral separation $\bb=\Rb_0-\Rb_1$ between the two wavepackets, we straightforwardly find the same expressions for the excitation probability as in Figure\ \ref{Fig5}b, but with $\omega_{n0}a/v$ replaced by $\omega_{n0}a/v-\kb_n\cdot\bb$.
\end{itemize}
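As anticipated above, the two-electron interference formulas of Figure\ \ref{Fig5}b can be verified with a few lines of Python; the values of $\tilde{\beta}_n(\bb)$ and $S$ below are arbitrary placeholders, and the check simply confirms that eq\ \ref{Pntotal} with $Q_n^0=\sqrt{S}\tilde{\beta}_n(\bb)$ and $Q_n^1=\sqrt{S}\tilde{\beta}_n(\bb)\ee^{{\rm i}\phi}$ reproduces eq\ \ref{Pnlocal1}:
\begin{verbatim}
import numpy as np

beta, S = 0.3 + 0.2j, 0.8           # placeholder coupling and coherence factor
for phi in np.linspace(0.0, 2.0 * np.pi, 7):   # phi = omega_{n0} a / v
    Q0 = np.sqrt(S) * beta
    Q1 = np.sqrt(S) * beta * np.exp(1j * phi)
    total = 2.0 * abs(beta)**2 + 2.0 * np.real(Q0 * np.conj(Q1))  # eq (Pntotal)
    closed_form = 2.0 * abs(beta)**2 * (1.0 + S * np.cos(phi))    # eq (Pnlocal1)
    assert np.isclose(total, closed_form)
\end{verbatim}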
In the main text, we also discuss a generalization of Figure\ \ref{Fig5}b to a beam consisting of $N$ electrons ($j=0,\dots,N-1$), each of them distributed among $L$ periodically arranged wavepackets ($s=0,\dots,L-1$) with longitudinal spacing $a$ and the same lateral position $\Rb_s=\bb$ for all. Proceeding in a similar way as in the above analysis of Figure\ \ref{Fig5}b, we take $|\gamma_s^j|^2=1$ and find from eqs\ \ref{PandQ} the results $\Gamma_n^j=\left|\tilde{\beta}_n(\bb)\right|^2$ and $Q_n^j=\sqrt{S}\,\tilde{\beta}_n(\bb)\,(1/L)\sum_s\ee^{{\rm i} s\omega_{n0}a/v}$, which, combined with eq\ \ref{Pntotal}, leads to eq\ \ref{Gtotn}.
\section*{Acknowledgments}
We thank Fabrizio Carbone, Archie Howie, Ido Kaminer, Ofer Kfir, Mathieu Kociak, Albert Polman, Claus Ropers, Nahid Talebi, and Jo Verbeeck for helpful and enjoyable discussions. This work has been supported in part by the European Research Council (Advanced Grant 789104-eNANO), the European Commission (Horizon 2020 Grants FET-Proactive 101017720-EBEAM and FET-Open 964591-SMART-electron), the Spanish MINECO (MAT2017-88492-R and Severo Ochoa CEX2019-000910-S), the Catalan CERCA Program, and Fundaci\'{o}s Cellex and Mir-Puig. V.D.G. acknowledges financial support from the EU (Marie Sk\l{}odowska-Curie Grant 713729).
\label{intro}
Nonlinear parametrized partial differential equations (PDE($\boldsymbol{\mu}$)s) play a fundamental role in several fields, ranging from Continuum Mechanics to Quantum Mechanics, passing through Fluid Dynamics.
When compared to linear PDE($\boldsymbol{\mu}$)s with smooth parametric dependence, the most striking difference is that in the linear case a stable solution evolves in a continuous (and thus unique) manner when the parameter changes slightly. In contrast, in the nonlinear case the solution of a given PDE($\boldsymbol{\mu}$) for a given parameter $\bmu$ may not be unique. Indeed, the model can suddenly change its behavior, together with the stability properties of its solutions. Models with such a feature are called bifurcation problems \cite{Prodi,Caloz,seydel2009practical}. Examples of models which are characterized by the non-uniqueness of the solution are the von K\'arm\'an plate model for buckling \cite{vonka,bauerreiss,berger, pichirozza}, the Gross-Pitaevskii equation for Bose-Einstein condensates \cite{Middelkamp_et_al2011,doi:10.1137/1.9781611973945,Charalampidis_et_al2018,pichiquaini} and the Navier-Stokes equations in a channel \cite{AQpreprint, cardio, pintore2019efficient}.
The critical point at which the system loses its original features is called a bifurcation point, and it is usually denoted by $\bmu^*$. Many different bifurcating configurations can emerge from these points, and an intuitive way to visualize them is to plot a scalar output of the solution against the parameter value for which the solution has been computed, i.e.\ the so-called \textit{bifurcation diagram}. As an example, a branch of solutions can give rise at $\bmu^*$ to two further symmetric branches of solutions which coexist with the pre-existing one, and switch their stability properties with it. When such a situation occurs, the model is said to undergo a \textit{pitchfork bifurcation} phenomenon \cite{seydel2009practical}.
In particular, we are interested in an application in Fluid Dynamics, where we consider sudden-expansion channel flows, which are motivated by many practical scenarios. Flow profiles are described by the Navier-Stokes equations, parametrized by the viscosity value $\mu$.
Here we consider a simplified model of a cardiac disease, called mitral valve regurgitation, which may cause either a symmetric regurgitant jet or a wall-hugging, non-symmetric one.
The latter phenomenon, which can be clinically detected through echocardiography, is called the Coanda effect \cite{tritton2012physical}, and expresses the tendency of a fluid jet to be attracted to a nearby surface. This represents an issue from the medical point of view, because the wall-hugging jet might lead to inaccurate echocardiography measurements. It is therefore of the utmost practical interest to try to drive the system towards the branch of symmetric solutions, which are more favorable for the measurement process.
Towards this goal, parametrized optimal control problems (\ocp s) governed by PDE($\boldsymbol{\mu}$)s might be employed. Indeed, \ocp s can be interpreted as an input-output system which achieves an observable configuration \cite{bochev2009least, gunzburger2003perspectives, hinze2008optimization, lions1971, troltzsch2010optimal}. They have been exploited in several applications in different scientific fields, see e.g.\ \cite{leugering2014trends} for an overview.
The main goal of this work is to use \ocp s to drive bifurcating state profiles towards a different desired state, which might possibly belong to another state solution branch.
In such \ocp s, a parametric study of the state solution is necessary in order to understand the behavior of the system.
The complexity of this task can be tackled using a combination of existing methodologies, which allow the complete reconstruction of the aforementioned bifurcation diagram.
However, the discretization through standard techniques can lead to a possibly huge system to be solved for many values of the parameters. In this work we exploit a Galerkin Finite Element (FE) approach, which can be challenging when several instances of the parametrized problem have to be studied, most of all because the optimal control setting requires additional equations to be solved on top of the original PDE($\boldsymbol{\mu}$)s. For this reason we also propose Reduced Order Modeling (ROM) as a tool to overcome this issue, see \cite{hesthaven2015certified} as an introductory reference. Namely, we build a low-dimensional space from FE \emph{high-fidelity} optimal solutions and we perform a Galerkin projection onto a \emph{reduced space}, usually much smaller than the considered high-fidelity one. The main ingredients are the construction of the reduced space through the Proper Orthogonal Decomposition (POD) algorithm \cite{ballarin2015supremizer, burkardt2006pod, Chapelle2013} and, subsequently, the Galerkin projection that yields a reduced problem to be solved in the lower-dimensional reduced space for every parameter instance.
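For the reader's convenience, we sketch these two ROM ingredients in a few lines of Python; plain dense linear algebra is used here for illustration only, whereas the actual computations of this work rely on the RBniCS library, and all names are placeholders:
\begin{verbatim}
import numpy as np

def pod_basis(snapshots, N):
    # snapshots: dofs x n_samples matrix of FE high-fidelity optimal solutions
    U, _, _ = np.linalg.svd(snapshots, full_matrices=False)
    return U[:, :N]                       # first N POD modes (columns)

def galerkin_reduced_solve(A, f, Z):
    # project a (linearized) high-fidelity system A x = f onto span(Z)
    A_N = Z.T @ A @ Z                     # N x N reduced operator
    f_N = Z.T @ f                         # reduced right-hand side
    return Z @ np.linalg.solve(A_N, f_N)  # reduced solution, lifted back
\end{verbatim}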
The main novelty of this work is the formulation, numerical simulation and subsequent model reduction of \ocp s governed by bifurcating parametrized Navier-Stokes equations, together with the analysis of the properties of their eigenvalues. We develop several test cases corresponding to different control actions in order to understand and validate our findings. To the best of our knowledge, a thorough mathematical analysis and a widespread exploitation of \ocp s for bifurcating nonlinear PDE($\boldsymbol{\mu}$)s have not been extensively addressed in the literature. This work could then pave the way towards the application of \ocp s as a tool to drive the behavior of complex bifurcation problems to a desired branch of solutions.
The work is outlined as follows: in Section \ref{general_problem_sec} we describe the structure of \ocp s for general nonlinear bifurcating systems both at the continuous and discretized level, exploiting a branch-wise approach through a FE approximation. Furthermore, we introduce the basic notions of stability analysis and its connection to eigenvalue problems for the uncontrolled and the controlled systems. In Section \ref{sec:state} we introduce the uncontrolled sudden-expansion channel problem, while in Section \ref{NS_ocp} we build several \ocp s assigning different roles to the control variable. There we also perform a global eigenvalue analysis in order to understand the main features of the achieved optimal solutions. We present two different boundary control problems and two distributed ones, which give very different solution configurations. The ROM strategy is described in Section \ref{sec_ROM}: the presented approach is tested for all the numerical simulations performed at the FE level. Conclusions follow in Section \ref{conclusions}.
\section{Conclusions}
\label{conclusions}
In this work we proposed a first attempt to steer bifurcation phenomena arising from nonlinear PDE($\bmu$)s through an optimal control formulation. First of all, we built a general framework which can be applied to general nonlinear \ocp s, and we proposed a global stability analysis through the solution of the eigenvalue problem associated to the optimization system. We tested the stability analysis over four control problems governed by bifurcating Navier-Stokes equations in a sudden-expansion channel. We studied how the control can affect the classical stable wall-hugging solution of the uncontrolled state equation and we proposed some observations on different configurations and features of the optimal solution based on the eigenvalue systems. \\
Furthermore, we employed ROM for all the test cases, proposing it as a strategy to solve the parametrized analysis of the optimization system in a low-dimensional setting, while confirming the applicability of reduction strategies to complex nonlinear models. \\
We believe that this work is a first step towards a better comprehension of the very complex action of optimal control over a nonlinear bifurcating system, and that its content could pave the way to many improvements on the topic. Among them, one could be a deeper analysis of the Dirichlet test case, which gives the most unexpected and, consequently, most difficult to interpret results.
\section*{Acknowledgements}
We acknowledge the support by European Union Funding for Research and Innovation -- Horizon 2020 Program -- in the framework of European Research Council Executive Agency: Consolidator Grant H2020 ERC CoG 2015 AROMA-CFD project 681447 ``Advanced Reduced Order Methods with Applications in Computational Fluid Dynamics''. We also acknowledge the PRIN 2017 ``Numerical Analysis for Full and Reduced Order Methods for the efficient and accurate solution of complex systems governed by Partial Differential Equations'' (NA-FROM-PDEs) and the INDAM-GNCS project ``Tecniche Numeriche Avanzate per Applicazioni Industriali''.
The computations in this work have been performed with RBniCS \cite{rbnics} library, developed at SISSA mathLab, which is an implementation in FEniCS \cite{fenics} of several reduced order modelling techniques; we acknowledge developers and contributors to both libraries.
\bibliographystyle{abbrv}
\section{Steering bifurcating governing equations towards desired branches by means of Optimal Control}
\label{NS_ocp}
In this Section we focus on several \ocp s governed by Navier-Stokes equations \eqref{eq:NS_eq} in the geometrical configuration of a contraction-expansion channel. We aim at understanding how different control problems can affect the solution behavior discussed in Section \ref{sec:state} for the uncontrolled case, especially when bifurcation phenomena are taken into account.
This leads us to analyze the controlled systems, trying to reach state profiles which are different from the expected uncontrolled solution. Our goal is to investigate and better understand the role that optimal control plays as an attractor towards a desired configuration.\\
We thus follow the general procedure described in Section \ref{general_problem}, and discuss the specific case of optimal control for the Coanda effect. Nonetheless, the procedure adopted here is general and can be used in a wide variety of applications.\\
For all the applications, we will simulate the physical phenomenon over the domain $\Omega$ shown in Figure \ref{fig:channel}.
Moreover, in the \ocp\ structure, we will require
the velocity solution $v \in \mathbb V$ to be as close as possible to a desired profile $v_\text{d} \in \mathbb V_{\text{obs}} \eqdot (L^2(\Gamma_{\text{obs}}))^2$. The \emph{observation domain} $\Gamma_{\text{obs}}= \{47\}\times [0, 7.5]$ is a line near the end of the channel. This structure allows the control to change the solution at the outflow following a prescribed convenient configuration. In the rest of this work, we will employ two desired velocity profiles, which are shown in Figure \ref{fig:vd}: we will denote them as the \emph{symmetric desired profile (or target)} for Figure \ref{fig:vd_S} and the \emph{asymmetric desired profile (or target)} for Figure \ref{fig:vd_NS}. The former is the result of a Stokes system over $\Omega$ for $\mu = 1$ with the same boundary conditions of the Navier-Stokes uncontrolled equations \eqref{eq:NS_eq}. The latter is the physically stable solution of \eqref{eq:NS_eq} for $\mu = 0.49$. While the former choice aims at controlling the system towards a globally symmetric configuration with a weaker outgoing flux, the latter is set to achieve the opposite goal.
\begin{figure}
\centering
\begin{subfigure}[b]{0.49\textwidth}
\centering
\includegraphics[width=\textwidth]{/ocp/Stokes}
\caption{}
\label{fig:vd_S}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.49\textwidth}
\centering
\includegraphics[width=\textwidth]{/ocp/NavierStokes}
\caption{}
\label{fig:vd_NS}
\end{subfigure}
\caption{\emph{Desired velocity profiles}: (a) symmetric profile obtained as Stokes solution for $\mu = 1$; (b) asymmetric profile given by the physically stable Navier-Stokes solution for $\mu = 0.49$.}
\label{fig:vd}
\end{figure}
The purpose of steering the bifurcating behavior is summarized in the minimization of the functional
\begin{equation}
\label{eq:J_NS}
J_{\text{NS}}(v,u; v_\text{d}) = \half \norm{v - v_\text{d}}_ {\mathbb V_{\text{obs}} }^2+ \alf \norm{u}_{\control}^2,
\end{equation}
where $\control \eqdot (L^2(\Omega_u))^2$ with $\Omega_{u} \subset \overline \Omega$: indeed, the control action can be performed even over a portion of the boundary $\partial \Omega$. We will refer to $\Omega_u$ as the \emph{control domain}.
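At the discrete level, the functional \eqref{eq:J_NS} reduces to weighted norms induced by mass matrices; a minimal Python sketch (with hypothetical names: \texttt{M\_obs} is the velocity mass matrix restricted to $\Gamma_{\text{obs}}$ and \texttt{M\_u} the control mass matrix on $\Omega_u$) reads:
\begin{verbatim}
import numpy as np

def J_NS(v, v_d, u, M_obs, M_u, alpha):
    # discrete counterpart of the cost functional:
    # 0.5 * ||v - v_d||_obs^2 + 0.5 * alpha * ||u||_u^2
    dv = v - v_d
    return 0.5 * dv @ (M_obs @ dv) + 0.5 * alpha * (u @ (M_u @ u))
\end{verbatim}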
Within this study we will analyze how the choice of $\Omega_{u}$, combined with different values of the penalization parameter $\alpha$, affects the solution behavior of the system, compared to the uncontrolled Navier-Stokes state equation.
We remark that hypotheses (i)--(ix) are verified in this specific \ocp\ setting, see e.g.\ \cite{hinze2008optimization}.
Then, after a general introduction in Section \ref{sec:41}, we will provide the analysis of different optimal control systems, as follows.
\begin{enumerate}
\item[Section \ref{neumann}.] A \emph{weak control} is built by controlling a Neumann boundary, and the optimality system slightly affects the usual bifurcating nature of the uncontrolled Navier-Stokes equations.
\item[Section \ref{distributed}.] A \emph{strong control} effect can be observed over the classical bifurcating behavior of the uncontrolled solution by acting on the forcing term.
\item[Section \ref{inlet}.] The \emph{penalization parameter} $\alpha$ is analyzed while acting at the end of the inlet channel, and we discuss how changing $\alpha$ results in different orders of magnitude for the optimal control.
\item[Section \ref{dirichlet}.] We show how imposing different \emph{boundary flux} conditions completely changes the known behavior of the starting system.
\end{enumerate}
Finally, in Section \ref{comparison}, remarks and comparisons on the spectral analysis of the four test cases are presented.
\subsection{\ocp s governed by Navier-Stokes equations}
\label{sec:41}
We recast \ocp s constrained to Navier-Stokes equations in the algebraic formulation presented in Section \ref{general_problem}. \\
The steady and incompressible controlled Navier-Stokes equations in a given domain $\Omega$ are:
\begin{equation}
\label{eq:OCP_NS_eq}
\begin{cases}
-\mu \Delta v + v\cdot\nabla v + \nabla p=C(u) \quad &\text{in} \ \Omega, \\
\nabla \cdot v = 0 \quad &\text{in} \ \Omega, \\
\end{cases}
\end{equation}
accompanied by some boundary conditions. The control operator $C : \mathbb U \rightarrow \V\dual$ can represent an external forcing term or a boundary term. If $C$ is defined in the whole domain we will say that the control is \emph{distributed}, while if it is defined in a portion of the internal domain, we will deal with \emph{localized control}. Furthermore, we will refer to \emph{Neumann control} and \emph{Dirichlet control}, if the control acts as Neumann or Dirichlet boundary conditions, respectively.
The weak formulation of \eqref{eq:OCP_NS_eq} reads: given $\mu \in \mathcal{P}$, find $v \in \V$, $p \in \Q$ and $u \in \control$ such that
\begin{equation}
\label{eq:gal_ocp_ns2}
\begin{cases}
a(v,\psi; \mu) +s(v,v,\psi) +b(\psi,p) = c(u, \psi) \quad &\forall \, \psi \in \V, \\
b(v,\pi) = 0\quad &\forall \, \pi \in \Q ,
\end{cases}
\end{equation}
where $a (\cdot, \cdot; \mu)$, $b \cd$ and $s(\cdot, \cdot, \cdot)$
have already been defined in \eqref{eq:forms}, while $c : \control \times \state \rightarrow \mathbb R$ is a bilinear form associated to the operator $C$.
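For illustration, the state residual in \eqref{eq:gal_ocp_ns2} can be transcribed almost verbatim in FEniCS/UFL notation; the following sketch assumes that the symbols (state, control and test functions, the viscosity \texttt{mu}, and a measure \texttt{dx\_u} restricted to $\Omega_u$) have been defined on suitable function spaces:
\begin{verbatim}
from fenics import inner, grad, div, dot, dx

def state_residual(mu, v, p, u, psi, pi, dx_u=dx):
    # a(v,psi;mu) + s(v,v,psi) + b(psi,p) + b(v,pi) - c(u,psi)
    return (mu * inner(grad(v), grad(psi)) * dx
            + inner(dot(grad(v), v), psi) * dx
            - p * div(psi) * dx
            + pi * div(v) * dx
            - inner(u, psi) * dx_u)
\end{verbatim}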
First of all, to derive the optimality conditions, we need the adjoint variables $w \in \mathbb V$ and $q \in \mathbb Q$ for velocity and pressure, respectively. Let $X = ((v,p),u,(w,q)) \in \mathbb X \eqdot \mathbb Y \times \mathbb U \times \mathbb Y$ be an optimal solution, where $\state \eqdot \mathbb V \times \mathbb Q$. The Lagrangian functional for this specific problem is
\begin{equation}
\begin{aligned}
\label{lg_ocp_ns}
\Lg_{\text{NS}}(X; v_\text{d}, \bmu) = J_{\text{NS}}(v, u; v_\text{d}) \, + \,
\mu \int_\Omega\nabla v\cdot\nabla w \, & d\Omega \, +\,
\int_\Omega \left(v\cdot\nabla v\right)w \, d\Omega \\
& \, -\, \int_\Omega p\nabla\cdot w \, d\Omega
\,+\,
\int_\Omega q\nabla\cdot v \, d\Omega \,-\, c(u, w).
\end{aligned}
\end{equation}
The optimality system built through Fr\'echet differentiation is given by:
\begin{equation}
\label{KKT_NS}
\begin{cases}
D_{v}\Lg_{\text{NS}}(X; v_\text{d}, \bmu)[\varphi] = 0 & \forall \, \varphi \in \mathbb V,\\
D_{p}\Lg_{\text{NS}}(X; v_\text{d}, \bmu)[\xi] = 0 & \forall \, \xi \in \mathbb Q,\\
D_u\Lg_{\text{NS}}(X; v_\text{d}, \bmu)[\tau] = 0 & \forall \, \tau \in \control,\\
D_{w}\Lg_{\text{NS}}(X; v_\text{d}, \bmu)[\psi] = 0 & \forall \, \psi \in \mathbb V,\\
D_{q}\Lg_{\text{NS}}(X; v_\text{d}, \bmu)[\pi] = 0 & \forall \, \pi \in \mathbb Q,\\
\end{cases}
\end{equation}
where the first two equations form the \emph{adjoint system}, the last two form the \emph{state system} \eqref{eq:gal_ocp_ns2}, while differentiating w.r.t.\ the variable $u$ leads to the \emph{optimality equation}.
In particular, the adjoint system combined with the optimality equation has the following form:
\begin{equation}
\label{eq:gal_adj_ocp_ns2}
\begin{cases}
m(v, \varphi) + a(w,\varphi; \mu) +s(\varphi, v, w) +s(v,\varphi, w) +b(\varphi,q) = m(v_\text{d}, \varphi) \quad &\forall \, \varphi \in \V, \\
b(w,\xi) = 0\quad &\forall \, \xi \in \Q ,\\
\alpha r(u, \tau) = c(\tau, w) \quad & \forall \, \tau \in \control,
\end{cases}
\end{equation}
where the bilinear forms $m\goesto{\V}{\V}{\mathbb R}$ and $r\goesto{\control}{\control}{\mathbb R}$ come from
the Fr\'echet derivatives of \eqref{eq:J_NS} w.r.t.\ the velocity and the control, respectively. They represent the $L^2$ scalar product over
$\Gamma_{\text{obs}}$ and $\Omega_{u}$, respectively. Furthermore, we remark that $s(\varphi,v, w) + s(v,\varphi, w)$ is, by definition, the linearization around $v$ of the trilinear form $s(v,v,\varphi)$. Therefore, the strong formulation of \eqref{eq:gal_adj_ocp_ns2}
reads:
\begin{equation}
\label{eq:OCP_ADJ_OPT_NS_eq}
\begin{cases}
v\mathbb{I}_{\Omega_{\text{obs}}} -\mu \Delta w - v\cdot\nabla w + (\nabla v)^T w + \nabla q= v_\text{d} \mathbb{I}_{\Omega_{\text{obs}}} \quad &\text{in} \ \Omega, \\
\nabla \cdot w = 0 \quad &\text{in} \ \Omega, \\
\alpha u \mathbb{I}_{\Omega_u} = C^* w \quad &\text{in} \ \Omega, \\
\end{cases}
\end{equation}
where $\mathbb{I}_{\Omega_u}$ and $\mathbb{I}_{\Omega_{\text{obs}}}$ are the indicator functions of the control and observation domains, respectively. The detailed derivation of the optimality system is addressed in several works, see e.g.\ \cite{hinze2008optimization,Fursikov1998852,Gunzburger2000249}. The global optimization problem reads: given $\mu \in \Cal P$, find $X = ( (v, p),u, (w, q)) \in \mathbb X$ such that \eqref{eq:OCP_NS_eq} and \eqref{eq:OCP_ADJ_OPT_NS_eq} are verified.
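We note, in passing, that the discrete counterpart of the optimality equation in \eqref{eq:OCP_ADJ_OPT_NS_eq} allows one to recover the control directly from the adjoint; a minimal Python sketch (placeholder matrix names, consistent with the algebraic notation introduced below) reads:
\begin{verbatim}
import scipy.sparse.linalg as spla

def recover_control(alpha, M_u, Cv, w):
    # discrete optimality equation: alpha * M_u * u = Cv^T * w
    return spla.spsolve((alpha * M_u).tocsc(), Cv.T @ w)
\end{verbatim}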
\\We remark that, if we call $y \eqdot (v, p)$ and $z \eqdot (w, q)$, we recover the global algebraic formulation presented in Section \ref{general_problem} and the saddle point structure is preserved. Indeed,
let us suppose that we apply the Taylor-Hood approximation $\mathbb{P}^2$-$\mathbb{P}^1$ for state $y$ and adjoint variable $z$. Furthermore, we discretize the space $\control$ with FE using $\mathbb P^2$ polynomials.
Recalling the notation of Section \ref{FE}, we define the quantities
\begin{equation}
\mathsf y =
\begin{bmatrix}
\mathsf v \\
\mathsf p
\end{bmatrix} , \qquad
\mathsf z =
\begin{bmatrix}
\mathsf w \\
\mathsf q
\end{bmatrix}, \qquad
\mathsf M_{y} =
\begin{bmatrix}
\mathsf M_v & 0 \\
0 & 0
\end{bmatrix}, \quad \text{and} \quad
\mathsf C =
\begin{bmatrix}
\mathsf C_v \\
0
\end{bmatrix},
\end{equation}
where $\mathsf v, \mathsf p, \mathsf w, \mathsf q$ are the column vectors of FE coefficients for the state and adjoint velocities and pressures, respectively, while
$\mathsf M_{v}$ is the velocity mass matrix and $\mathsf C_v$ derives from the bilinear form $c\cd$.
Furthermore, the linearized state equation structure can now be expressed as
\begin{equation}
\label{eq:NS_matrix}
\mathsf {E}_{\textit{n}\ell}'[\mathsf y^j] + \mathsf E_\ell =
\begin{bmatrix}
\mathsf S[\mathsf v^j] & 0 \\
0 & 0
\end{bmatrix} +
\begin{bmatrix}
\mathsf K & \mathsf D^T \\
\mathsf D & 0
\end{bmatrix} =
\begin{bmatrix}
\mathsf K + \mathsf S[\mathsf v^j]& \mathsf D^T \\
\mathsf D & 0
\end{bmatrix}
,
\end{equation}
where $\mathsf K$ is the stiffness matrix associated to the bilinear form $a(\cdot, \cdot; \mu)$, $\mathsf D$ is the continuity equation matrix coming from $b \cd$ and $ \mathsf S[\mathsf v^j]$ is the algebraic formulation of $s(v, \cdot, \cdot) + s(\cdot, v, \cdot)$ evaluated at the FE velocity basis functions. It remains to understand the specific structure of $\mathsf D_{\mathsf y}( \mathsf {E}_{\textit{n}\ell}'[\mathsf y]^T)[\mathsf z^j]$ defined in \eqref{J_ocp}. To this end, we define $s_{\text{ad}}(v, w, \varphi)$ as the \emph{adjoint operator} of the linearized trilinear form $s(v, v, \cdot)$ around the state velocity $v$. Applying $s_{\text{ad}}(v, \cdot, \cdot)$ to the basis functions of $\mathbb V^{\Cal N_v}$
will result in $ \mathsf S[\mathsf v^j]^T$. In the Jacobian matrix evaluation, a linearization of $s_{\text{ad}}(w,v, \varphi)$ is performed not only in $w$, but also w.r.t.\ the variable $v$. This process will lead to
\begin{equation}
\mathsf D_{\mathsf y^j}( \mathsf {E}_{\textit{n}\ell}'[\mathsf y^j]^T)[\mathsf z^j] =
\begin{bmatrix}
\mathsf D_{\mathsf v}(\mathsf S[\mathsf v^j]^T)([\mathsf w^j]) & 0 \\
0 & 0 \\
\end{bmatrix},
\end{equation}
where $\mathsf D_{\mathsf v}(\mathsf S[\mathsf v^j]^T)([\mathsf w^j]) $ is given by the form $ s_{\text{ad}} (w, \cdot, \cdot)$ applied to the FE velocity basis. Then, the Jacobian reads
\begin{equation}
\label{eq:J_ocp_NS}
\mathsf{Jac}_{\text{NS}}(\mathsf X^j; \bmu) =
\begin{bmatrix}
\mathsf M_v + \mathsf D_{\mathsf v}(\mathsf S[\mathsf v^j]^T)[\mathsf w^j] & 0 & 0 & \mathsf K^T + \mathsf S[\mathsf v^j]^T & \mathsf D^T \\
0 & 0 & 0 & \mathsf D & 0 \\
0 & 0 & \alpha \mathsf M_u & - \mathsf C^T_v & 0 \\
\mathsf K + \mathsf S[\mathsf v^j] & \mathsf D^T & - \mathsf C_v & 0 & 0 \\
\mathsf D & 0 & 0& 0 & 0 \\
\end{bmatrix}
=
\begin{bmatrix}
\mathsf A & \mathsf B^T \\
\mathsf B & 0 \\
\end{bmatrix}
\end{equation}
where $\mathsf X$ is the FE coefficient vector of the optimal solution and
\begin{equation}
\mathsf A =
\begin{bmatrix}
\mathsf M_v + \mathsf D_{\mathsf v}(\mathsf S[\mathsf v^j]^T)([\mathsf w^j]) & 0 & 0 \\
0 & 0 & 0 \\
0 & 0 & \alpha \mathsf M_u \\
\end{bmatrix}
\quad \text{and} \quad \mathsf B =
\begin{bmatrix}
\mathsf K + \mathsf S[\mathsf v^j] & \mathsf D^T & - \mathsf C_v \\
\mathsf D & 0 & 0 \\
\end{bmatrix}.
\end{equation}
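A possible assembly of this block structure, sketched in Python with SciPy sparse matrices, is reported below; the input blocks are assumed to be already assembled by the FE backend, and all names are placeholders:
\begin{verbatim}
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def kkt_jacobian(Mv_lin, alpha_Mu, K_S, D, Cv):
    # Mv_lin ~ M_v + D_v(S[v]^T)[w],  K_S ~ K + S[v]  (preassembled blocks)
    n_p = D.shape[0]                      # number of pressure dofs
    Z_pp = sp.csr_matrix((n_p, n_p))      # explicit zero pressure block
    A = sp.bmat([[Mv_lin, None, None],
                 [None,   Z_pp, None],
                 [None,   None, alpha_Mu]])
    B = sp.bmat([[K_S, D.T, -Cv],
                 [D,   None, None]])
    return sp.bmat([[A, B.T], [B, None]], format="csc")

def newton_update(X, residual, Jac):
    # one step of the Newton iteration applied to the optimality system
    return X - spla.spsolve(Jac, residual)
\end{verbatim}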
As already specified in Section \ref{FE}, we assume that for $\mu \neq \mu^*$ the saddle point \eqref{eq:J_ocp_NS} is well-posed. Moreover, we highlight that we are dealing with a \emph{nested saddle point} structure: indeed, for the state equation \eqref{eq:NS_matrix} we require that, for a given $\mu \neq \mu^*$ and fixed $\mathsf v^j$,
the matrix $\mathsf K + \mathsf S[\mathsf v^j]$ is invertible and that the Brezzi inf-sup condition holds, i.e.
\begin{equation}
\label{NS_FE_lbb}
\beta_{\text{Br}, \text{NS}}\disc \eqdot \adjustlimits\inf_{\mathsf p \neq 0} \sup_{\mathsf v \neq 0} \frac{\mathsf p^T\mathsf D \mathsf v}{\norm{\mathsf v}_{\V}\norm{\mathsf p}_{ \Q}} \geq \overline \beta_{\text{Br}, \text{NS}} \disc > 0.
\end{equation}
This is indeed the case for the Taylor-Hood discretization introduced in Section \ref{sec:state}.
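In practice, the constant in \eqref{NS_FE_lbb} can be estimated by solving a generalized eigenvalue problem; a minimal dense Python sketch (with \texttt{Xv} and \texttt{Mq} denoting assumed velocity and pressure norm matrices) is:
\begin{verbatim}
import numpy as np
from scipy.linalg import eigh

def infsup_constant(D, Xv, Mq):
    # smallest generalized eigenvalue of  D Xv^{-1} D^T q = beta^2 Mq q
    S = D @ np.linalg.solve(Xv, D.T)      # pressure Schur-type operator
    lam = eigh(S, Mq, eigvals_only=True)  # generalized symmetric eigenproblem
    return np.sqrt(max(lam.min(), 0.0))
\end{verbatim}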
In the next subsections we analyze how the controlled problem behaves, comparing its properties with the ones of the uncontrolled system presented in Section \ref{sec:state}.
We will use the term \emph{natural optimal branch} to describe the branch that is obtained by running Algorithm \ref{alg:01} with a trivial initial guess. This branch may consist of either symmetric or asymmetric configurations, depending on the test case. Further branches may exist, but are much harder to compute in practice and require very tailored initial guesses that can be provided by running Algorithm \ref{alg:01} in a neighborhood of $\mu^*$; they will be named \emph{non-natural optimal branches}\footnote{For the sake of exposition, each branch is extended to $\mu > \mu^*$ with the unique solution.}. We interpret the concept of natural optimality as a \emph{numerical stability} property of the optimal control system. Indeed, as already specified in Section \ref{general_problem}, for \ocp s it makes no sense to talk about the physical stability of the global optimal solution. In fact, the system is ``artificially'' built by adding non-physical adjoint variables, with the aim of changing the system behavior.
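The branch-wise continuation logic underlying Algorithm \ref{alg:01} can be summarized by the following minimal Python sketch, where \texttt{solve\_newton} is a placeholder for the Newton solver applied to the optimality system \eqref{KKT_NS}:
\begin{verbatim}
def branch_continuation(mu_values, solve_newton, X_init):
    # the converged solution at mu_k seeds the Newton solver at mu_{k+1}
    branch, X = [], X_init
    for mu in mu_values:          # e.g. decreasing viscosities across mu*
        X = solve_newton(mu, X)
        branch.append((mu, X.copy()))
    return branch
\end{verbatim}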
\subsection{Neumann Control: weak steering}
\label{neumann}
The first test case we present is a Neumann control over the boundary $\Gamma_{\text{out}}$, where homogeneous Dirichlet conditions are applied to $\Gamma_{\text{wall}} \eqdot \Gamma_0 \cup \Gamma_{\text{D}}$.
More specifically, in this case, the optimality conditions read: given $\mu \in \Cal P$ find $X \in \mathbb X$ such that
\begin{equation}
\label{eq:Neumann_eq}
\begin{cases}
v\mathbb{I}_{\Gamma_{\text{obs}}} -\mu \Delta w - v\cdot\nabla w + (\nabla v)^T w + \nabla q= v_\text{d} \mathbb{I}_{\Gamma_{\text{obs}}} \quad &\text{in} \ \Omega, \\
\nabla \cdot w = 0 \quad &\text{in} \ \Omega, \\
w =0 \quad &\text{on} \ \Gamma_{\text{in}} \cup \Gamma_{\text{wall}}, \\
- qn + (\mu \nabla w) n = 0 \quad &\text{on} \ \Gamma_{\text{out}}, \\
\alpha u \mathbb{I}_{\Gamma_{\text{out}}} = w\mathbb{I}_{\Gamma_{\text{out}}} \quad &\text{in} \ \Omega, \\
-\mu \Delta v + v\cdot\nabla v + \nabla p=0 \quad &\text{in} \ \Omega, \\
\nabla \cdot v = 0 \quad &\text{in} \ \Omega, \\
v = v_{\text{in}} \quad &\text{on} \ \Gamma_{\text{in}}, \\
v = 0 \quad &\text{on} \ \Gamma_{\text{wall}}, \\
- pn + (\mu \nabla v) n = u \quad &\text{on} \ \Gamma_{\text{out}}.\\
\end{cases}
\end{equation}
The desired velocity $v_\text{d}$ will always be of the symmetric type for this specific example. In other words, we are studying which is the best choice of Neumann boundary condition to reach the exiting symmetric profile shown in Figure \ref{fig:vd_S}. We study the behavior of the controlled solution varying $\alpha = 1, 0.1, 0.01, 0.001$, where the greater the value of $\alpha$, the weaker the control.
In Figure \ref{fig:Neumann_solution} we show some representative solutions for $\alpha = 0.01$ and $\mu = 0.5$, for the state velocity and pressure variables. In this case, the natural optimal branch is composed of asymmetric solutions (Figure \ref{fig:Neumann_solution}, top), while there is a further non-natural optimal branch made up of symmetric solutions (Figure \ref{fig:Neumann_solution}, bottom). Results obtained following the natural optimal and non-natural optimal branches are shown in Figures \ref{fig:mag} and \ref{fig:s_mag}, respectively.
Therefore, we conclude that the Neumann control affects the system only \emph{weakly}, as it is not able to steer the system towards the desired symmetric configuration after the bifurcation has occurred, and thus does not drastically change the features already observed for the uncontrolled state equations (see Figure \ref{fig:bifurcation}).\\
The left plot of Figure \ref{fig:mag} depicts the velocity profile magnitude over $\Gamma_{\text{obs}}$ for the highest value of the Reynolds number when following the natural optimal branch. Even though the obtained velocity (marked by an orange line) is indeed different from the desired profile (denoted by a blue line), especially for what concerns peak values, we observe that the Neumann control straightens the flux near the end of the channel (compare the orange line to the green line, which represents the uncontrolled asymmetric profile), even when high Reynolds numbers are considered. The resulting profile is similar to the uncontrolled symmetric velocity (red line), even though full symmetry is not achieved. The action of the control variable is shown in the right plot of Figure \ref{fig:mag} when changing the parameter $\mu$ following the natural optimal branch: the control is stronger for $\mu < \mu^*$ (i.e., when the wall-hugging phenomenon occurs and straightening is necessary), while it remains low in magnitude for $\mu > \mu^*$.\\
Similarly, the left plot of Figure \ref{fig:s_mag} shows the velocity profile magnitude over $\Gamma_{\text{obs}}$ for the highest value of the Reynolds number when following the non-natural optimal branch. In this case, the controlled symmetric profile (orange line) coincides with the uncontrolled symmetric profile (red line). Furthermore, the right plot of Figure \ref{fig:s_mag} shows that, around the critical value $\mu^*$ (e.g., $\mu = 1$ and $\mu = 0.95$), the control variable is asymmetric in order to counteract the physically stable wall-hugging behavior of the uncontrolled system. We further remark that, compared to the natural optimal branch, the control variable of the non-natural optimal branch is much lower in magnitude.\\
Table \ref{Neumann_J} shows the value of the cost functional \eqref{eq:J_NS} for several values of $\mu$ (rows) and $\alpha$ (columns), following either the natural optimal or the non-natural optimal branch. The first two columns also show the value of the \emph{uncontrolled functional}, i.e.\ \eqref{eq:J_NS} evaluated for the uncontrolled velocity $v$ of equation \eqref{eq:NS_eq} and zero control.
The main observation is that decreasing the value of $\alpha$ results in lower cost functional values, since a lower value of $\alpha$ allows a stronger control to take place and drive the velocity to the desired configuration. In all cases, the non-natural branch presents lower values of the functional compared to the natural branch; this has to be expected, as the cost functional measures the deviation from a symmetric target, and the non-natural branch, being made of symmetric solutions, is clearly closer to the target (compare e.g.\ for $\mu = 0.5$ and $\alpha = 0.001$ the left panels of Figures \ref{fig:mag}-\ref{fig:s_mag}). However, the natural branch is the one for which the control procedure influences the cost functional values the most: for instance, for $\mu = 0.5$ and $\alpha = 0.001$, the cost functional is decreased by $6\%$ on the non-natural branch and by $55\%$ on the natural one w.r.t.\ the corresponding uncontrolled configuration. Again, this has to be expected from the previous discussion of Figures \ref{fig:mag}-\ref{fig:s_mag}, which shows a larger impact of the control procedure in straightening the solution on the natural branch. Finally, large values of $\mu$ yield negligible cost functionals, as the target velocity almost coincides with the uncontrolled velocity.
From such an analysis we deduce that, when bifurcating phenomena occur, a configuration can perform better than another one, and finding all the solution branches can be of great importance to understand the solution that best recover the desired profile.
\begin{figure}
\centering
\begin{subfigure}[b]{0.49\textwidth}
\centering
\hspace*{-2mm}\includegraphics[width=\textwidth]{/ocp/neumann_v_asy}
\caption{}
\label{fig:v_as}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.49\textwidth}
\centering
\hspace*{-6mm}\includegraphics[width=\textwidth]{/ocp/neumann_p_asy}
\caption{}
\label{fig:p_as}
\end{subfigure}
\begin{subfigure}[b]{0.49\textwidth}
\centering
\includegraphics[width=0.98\textwidth]{/ocp/neumann_v_sy}
\caption{}
\label{fig:v_s}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.49\textwidth}
\centering
\hspace{3mm}\includegraphics[width=0.98\textwidth]{/ocp/neumann_p_sy}
\caption{}
\label{fig:p_s}
\end{subfigure}
\hfill
\caption{\emph{Neumann Control}: optimal solutions with $\alpha = 0.01$ and $\mu=0.5$, belonging to the natural optimal (panels (a) and (b) for state velocity and pressure, respectively) and the non-natural optimal (panels (c) and (d)) branches.}
\label{fig:Neumann_solution}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.489\textwidth]{/ocp/Neumann_velocity_exit}
\includegraphics[width=0.46\textwidth]{/ocp/Neumann_asy_control_exit}
\caption{\emph{Neumann Control}. \emph{Left}: comparison of velocity profiles in the controlled and uncontrolled cases for $\alpha = 0.01$, $\mu = 0.5$ on $\Gamma_{\text{obs}}$ w.r.t.\ the desired profile when following the natural optimal branch. \emph{Right}: representation of control variable evolution for $\alpha = 0.01$, $\mu=2, 1, 0.95, 0.5$ over $\Gamma_{\text{out}}$ when following the natural optimal branch.}
\label{fig:mag}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.489\textwidth]{/ocp/Neumann_s_velocity_exit}
\includegraphics[width=0.46\textwidth]{/ocp/Neumann_s_control_exit}
\caption{\emph{Neumann Control}. \emph{Left}: comparison of velocity profiles in the controlled and uncontrolled cases for $\alpha = 0.01$, $\mu = 0.5$ on $\Gamma_{\text{obs}}$ w.r.t.\ the desired profile when following the non-natural optimal branch. The lines marked by ``Controlled Symmetric Velocity'' and ``Uncontrolled Symmetric Velocity'' overlap. \emph{Right}: representation of control variable evolution for $\alpha = 0.01$, $\mu=2, 1, 0.95, 0.5$ over $\Gamma_{\text{out}}$ when following the non-natural optimal branch.}
\label{fig:s_mag}
\end{figure}
\begin{table}
\caption{\emph{Neumann Control}: comparison of the functional value w.r.t. stable and unstable uncontrolled solutions. (Nat.) Natural optimal branch. (n-Nat.) Non-natural optimal branch.}
\label{Neumann_J}
\begin{center}
\tabcolsep=0.11cm
\footnotesize{
\begin{tabular}{|c||c|c||c|c||c|c||c|c||c|c||}
\hline
& Stable & \cellcolor[HTML]{E5E3E3}Unstable & Nat. & \cellcolor[HTML]{E5E3E3}n-Nat. & Nat. & \cellcolor[HTML]{E5E3E3}n-Nat. & Nat. & \cellcolor[HTML]{E5E3E3}n-Nat. & Nat. & \cellcolor[HTML]{E5E3E3}n-Nat. \\ \cline{2-11}
\multirow{-2}{*}{$\mu$} & \multicolumn{2}{c||}{Uncontrolled} & \multicolumn{2}{c||}{$\alpha = 1$} & \multicolumn{2}{c||}{$\alpha = 0.1$} & \multicolumn{2}{c||}{$\alpha = 0.01$} & \multicolumn{2}{c||}{$\alpha = 0.001$} \\ \hline
$2$ & 5.14e--9 & \cellcolor[HTML]{E5E3E3}5.14e--9 & 5.13e--9 & \cellcolor[HTML]{E5E3E3}5.13e--9 & 5.13e--9 & \cellcolor[HTML]{E5E3E3}5.13e--9 & 5.13e--9 & \cellcolor[HTML]{E5E3E3}5.13e--9 & 5.07e--9 & \cellcolor[HTML]{E5E3E3}5.07e--9 \\ \hline
$1.5$ & 4.38e--6 & \cellcolor[HTML]{E5E3E3}4.38e--6 & 4.38e--6 & \cellcolor[HTML]{E5E3E3}4.38e--6 & 4.38e--6 & \cellcolor[HTML]{E5E3E3}4.38e--6 & 4.37e--6 & \cellcolor[HTML]{E5E3E3}4.37e--6 & 4.28e--6 & \cellcolor[HTML]{E5E3E3}4.28e--6 \\ \hline
$1$ & 4.10e--3 & \cellcolor[HTML]{E5E3E3}4.10e--3 & 4.10e--3 & \cellcolor[HTML]{E5E3E3}4.10e--3 & 4.10e--3 & \cellcolor[HTML]{E5E3E3}4.10e--3 & 4.08e--3 & \cellcolor[HTML]{E5E3E3}4.10e--3 & 3.92e--3 & \cellcolor[HTML]{E5E3E3}3.92e--3 \\ \hline
$0.9$ & 3.33e--2 & \cellcolor[HTML]{E5E3E3}1.63e--2 & 3.33e--2 & \cellcolor[HTML]{E5E3E3}1.63e--2 & 3.30e--2 & \cellcolor[HTML]{E5E3E3}1.63e--2 & 3.15e--2 & \cellcolor[HTML]{E5E3E3}1.63e--2 & 2.93e--2 & \cellcolor[HTML]{E5E3E3}1.55e--2 \\ \hline
$0.8$ & 2.08e--1 & \cellcolor[HTML]{E5E3E3}6.52e--2 & 2.07e--1 & \cellcolor[HTML]{E5E3E3}6.52e--2 & 2.04e--1 & \cellcolor[HTML]{E5E3E3}6.51e--2 & 1.88e--1 & \cellcolor[HTML]{E5E3E3}6.51e--2 & 1.70e--1 & \cellcolor[HTML]{E5E3E3}6.15e--2 \\ \hline
$0.7$ & 1.01e+0 & \cellcolor[HTML]{E5E3E3}2.59e--1 & 1.01e+0 & \cellcolor[HTML]{E5E3E3}2.59e--1 & 9.80e--1 & \cellcolor[HTML]{E5E3E3}2.59e--1 & 8.63e--1 & \cellcolor[HTML]{E5E3E3}2.59e--1 & 7.67e--1 & \cellcolor[HTML]{E5E3E3}2.43e--1 \\ \hline
$0.6$ & 4.48e+0 & \cellcolor[HTML]{E5E3E3}1.70e+0 & 4.44e+0 & \cellcolor[HTML]{E5E3E3}1.02e+0 & 4.15e+0 & \cellcolor[HTML]{E5E3E3}1.02e+0 & 3.33e+0 & \cellcolor[HTML]{E5E3E3}1.02e+0 & 2.91e+0 & \cellcolor[HTML]{E5E3E3}9.57e--1 \\ \hline
$0.5$ & 1.88e+1 & \cellcolor[HTML]{E5E3E3}3.92e+0 & 1.83e+1 & \cellcolor[HTML]{E5E3E3}3.92e+0 & 1.50e+1 & \cellcolor[HTML]{E5E3E3}3.92e+0 & 9.61e+0 & \cellcolor[HTML]{E5E3E3}3.92e+0 & 8.54e+0 & \cellcolor[HTML]{E5E3E3}3.68e+0 \\ \hline
\end{tabular}
}
\end{center}
\end{table}
Concerning the stability of the solution, we performed the eigenvalue analysis described in Algorithm \ref{alg:01}. Several insights can be derived from Figure \ref{fig:eig_neumann}, which represents the spectrum of the global eigenvalue problem for the natural branch against the parameter $\mu$, restricted to $\Re(\sigma_{\mu}) \in [-0.01, 0.01]$.\\
We plot the first $N_{eig} = 100$ eigenvalues of the linearized system \eqref{eq:eigen_ocp} around the global optimal solution, using a Krylov-Schur algorithm.
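In practice, the eigenvalues closest to the imaginary axis are conveniently extracted through a shift--invert strategy. The following minimal Python sketch is purely illustrative of this step: \texttt{J} and \texttt{B} are placeholders for the assembled (sparse) Jacobian and generalized mass matrix of \eqref{eq:eigen_ocp}, and SciPy's ARPACK-based \texttt{eigs} is used here in place of a Krylov--Schur implementation.
\begin{verbatim}
import scipy.sparse.linalg as spla

def leading_eigenvalues(J, B, n_eig=100, shift=0.0):
    # Shift-invert around sigma = shift targets the eigenvalues closest
    # to Re(sigma) = 0, where changes of stability are detected.
    return spla.eigs(J, k=n_eig, M=B, sigma=shift,
                     return_eigenvectors=False)
\end{verbatim}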
From the plot, we observe two eigenvalues (highlighted with blue markers) approaching $\Re(\sigma_{\mu}) = 0$: we will refer to this behavior as the \emph{shears phenomenon}. Moreover, the number of positive eigenvalues grows as the penalization parameter decreases, and the negative eigenvalues move downwards, except for the negative shear eigenvalue.
Furthermore, the positive real eigenvalues accumulate around the value of $\alpha$: this is particularly clear in subplots \ref{fig:n_eig_1e2} and \ref{fig:n_eig_1e3}. In addition, a single eigenvalue (denoted by red markers) approaching zero is visible. \\
One conclusion we can draw from the global eigenvalue analysis is that the concentration of negative eigenvalues is affected by the stronger action of the control variable obtained by decreasing $\alpha$: for a fixed range of $\Re(\sigma_{\mu})$, decreasing $\alpha$ (i.e., a more strongly controlled system) results in a larger number of positive eigenvalues within that range.
Unfortunately, we cannot derive information about the physical stability of the global solution from the performed global eigenvalue analysis, since similar eigenvalue structures are observed for both the natural and non-natural branches (only the former is shown here for the sake of brevity). Therefore, our considerations throughout this work are limited to the numerical stability represented by natural optimality, as discussed above.
Thus, we conclude that the Neumann control is not able to fully steer uncontrolled solutions towards the desired symmetric configuration. However, this will be achieved in the next Section, where a stronger control action will be presented.
\begin{figure}
\centering
\begin{subfigure}[b]{0.49\textwidth}
\centering
\includegraphics[width=0.85\textwidth]{/ocp/plot_eig_r_neumann1_br}
\caption{}
\label{fig:n_eig1}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.49\textwidth}
\centering
\includegraphics[width=0.85\textwidth]{/ocp/plot_eig_r_neumann1e1_br}
\caption{}
\label{fig:n_eig_1e1}
\end{subfigure}\\
\begin{subfigure}[b]{0.49\textwidth}
\centering
\includegraphics[width=0.85\textwidth]{/ocp/plot_eig_r_neumann1e2_br}
\caption{}
\label{fig:n_eig_1e2}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.49\textwidth}
\centering
\includegraphics[width=0.85\textwidth]{/ocp/plot_eig_r_neumann1e3_br}
\caption{}
\label{fig:n_eig_1e3}
\end{subfigure}\\
\caption{\emph{Neumann Control}: spectral analysis with $\alpha = 1, 0.1, 0.01, 0.001$.}
\label{fig:eig_neumann}
\end{figure}
\subsection{Distributed Control: strong steering}
\label{distributed}
This Section deals with a distributed control in $\Omega_{u} \equiv \Omega$; thus, the control variable $u$ acts as an external forcing term on the whole domain.
Here we consider again $\Gamma_{\text{wall}} = \Gamma_{0} \cup \Gamma_{\text{D}}$. Given $\mu \in \Cal P$, the optimal solution $X \in \mathbb X$ satisfies the following system:
\begin{equation}
\label{eq:Distributed_eq}
\begin{cases}
v\mathbb{I}_{\Gamma_{\text{obs}}} -\mu \Delta w - v\cdot\nabla w + (\nabla v)^T w + \nabla q= v_\text{d} \mathbb{I}_{\Gamma_{\text{obs}}} \quad &\text{in} \ \Omega, \\
\nabla \cdot w = 0 \quad &\text{in} \ \Omega, \\
w =0 \quad &\text{on} \ \Gamma_{\text{in}} \cup \Gamma_{\text{wall}}, \\
- qn + (\mu \nabla w) n = 0 \quad &\text{on} \ \Gamma_{\text{out}}, \\
\alpha u = w \quad &\text{in} \ \Omega, \\
-\mu \Delta v + v\cdot\nabla v + \nabla p=u \quad &\text{in} \ \Omega, \\
\nabla \cdot v = 0 \quad &\text{in} \ \Omega, \\
v = v_{\text{in}} \quad &\text{on} \ \Gamma_{\text{in}}, \\
v = 0 \quad &\text{on} \ \Gamma_{\text{wall}}, \\
- pn + (\mu \nabla v) n = 0 \quad &\text{on} \ \Gamma_{\text{out}}.\\
\end{cases}
\end{equation}
First of all, we underline that in distributed \ocp s the action of the control is usually stronger, and it deeply affects the original system.
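We recall that, as in the previous Section, system \eqref{eq:Distributed_eq} is solved along each branch by the continuation step of Algorithm \ref{alg:01}: a Newton iteration at every parameter value, initialized with the optimal solution computed at the previous one. A schematic and purely illustrative Python version of this outer loop is sketched below, where \texttt{residual} and \texttt{jacobian} are placeholders for the assembled FE residual and Jacobian of the optimality system (dense algebra is used only for brevity).
\begin{verbatim}
import numpy as np

def newton(residual, jacobian, X0, mu, tol=1e-10, max_it=50):
    # Solve the optimality system at the parameter mu by Newton's
    # method, starting from the initial guess X0.
    X = X0.copy()
    for _ in range(max_it):
        r = residual(X, mu)
        if np.linalg.norm(r) < tol:
            return X
        X -= np.linalg.solve(jacobian(X, mu), r)
    raise RuntimeError("Newton iteration did not converge")

def natural_continuation(residual, jacobian, X_start, mu_values):
    # Trace the natural optimal branch: each solve is initialized
    # with the solution obtained at the previous (larger) mu.
    branch, X = [], X_start
    for mu in sorted(mu_values, reverse=True):
        X = newton(residual, jacobian, X, mu)
        branch.append((mu, X.copy()))
    return branch
\end{verbatim}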
\begin{figure}
\centering
\begin{subfigure}[b]{0.49\textwidth}
\centering
\includegraphics[width=\textwidth]{/ocp/dist_u_sy_low}
\caption{}
\label{fig:u_dist_s_low}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.49\textwidth}
\centering
\includegraphics[width=\textwidth]{/ocp/dist_u_sy_high}
\caption{}
\label{fig:u_dist_s_high}
\end{subfigure}\\
\begin{subfigure}[b]{0.49\textwidth}
\centering
\includegraphics[width=\textwidth]{/ocp/dist_u_asy_low}
\caption{}
\label{fig:u_dist_as_low}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.49\textwidth}
\centering
\includegraphics[width=\textwidth]{/ocp/dist_u_asy_high}
\caption{}
\label{fig:u_dist_as_high}
\end{subfigure}\\
\caption{\emph{Distributed Control}: optimal control profiles for $\alpha = 0.01$. Left: $\mu = 2$ in (a) and (c); right: $\mu = 0.5$ in (b) and (d). Top: symmetric target in (a) and (b); bottom: asymmetric target in (c) and (d).}
\label{fig:Dist_solution}
\end{figure}
To show this, we will steer the system towards either symmetric or asymmetric desired profiles $v_\text{d}$:
\begin{itemize}
\item[{$\small{\circ}$}] \emph{Symmetric target}: the aim of this setting is to steer the solution of \eqref{eq:Distributed_eq} to a symmetric profile. We plot two representative control solutions in Figures \ref{fig:u_dist_s_low} and \ref{fig:u_dist_s_high}, obtained for $\mu = 2$ and $\mu = 0.5$ when following the natural optimal branch, which is composed of symmetric solutions.
The stronger action of the control allows the controlled velocity profile to be more diffusive than the uncontrolled symmetric profile, as represented in the left plot of Figure \ref{fig:dist_s_mag}, which shows the velocity solution on the observed slice for $\mu = 0.5$: in this case the controlled velocity (orange line) and the symmetric target (blue line) almost coincide.
The right plot of Figure \ref{fig:dist_s_mag} shows that a slightly asymmetric control is required only near the critical value $\mu^*$ (also compare to Figures \ref{fig:u_dist_s_low} and \ref{fig:u_dist_s_high} for the cases $\mu = 2$ and $\mu = 0.5$).
Furthermore, the control action clearly grows as the Reynolds number increases. Indeed, for $\mu = 2$ the control acts exclusively in the proximity of $\Gamma_{\text{obs}}$, with a maximum magnitude of $1.8 \cdot 10^{-4}$, while for $\mu = 0.5$ its magnitude reaches the value $1.6$.\\
A further non-natural optimal branch exists, made of symmetric solutions, but it is hardly reachable by numerical continuation methods unless tailored initial guesses are provided by restarting Algorithm \ref{alg:01} in a small neighborhood of $\mu^*$.
\item[{$\small{\circ}$}] \emph{Asymmetric target}: in this case, we desire to recover the asymmetric target for all $\mu \in \mathcal P$.
We plot two representative control solutions in Figures \ref{fig:u_dist_as_low} and \ref{fig:u_dist_as_high}, obtained for $\mu = 2$ and $\mu = 0.5$ when following the natural optimal branch, which is made of asymmetric solutions.
The action of the control is also visible in the left plot of Figure \ref{fig:dist_as_mag}, obtained for $\mu = 2$: indeed, we see how the flux over $\Gamma_{\text{obs}}$ is pushed towards the domain wall (orange line), in contrast to the symmetric profile of the uncontrolled velocity (green line). Namely, also in this case the distributed control is able to drive the solution towards the desired state.
In order to do so, the control variable has to be large when $\mu > \mu^*$, i.e.\ when the uncontrolled configuration on $\Gamma_{\text{obs}}$ would lead to a symmetric profile. Indeed, in Figure \ref{fig:u_dist_as_low} the maximum control value reaches $7$ for $\mu = 2$ in the upper part of the domain. In contrast, in Figure \ref{fig:u_dist_as_high} it lowers to $10^{-11}$ for $\mu = 0.5$, when the stable asymmetric velocity solution does not need to be controlled by an external forcing term. This is confirmed in the right plot of Figure \ref{fig:dist_as_mag} for several values of $\mu$.\\
Also in this case a non-natural optimal branch (featuring symmetric solutions) continues to exist, but it is numerically difficult to reach.
\end{itemize}
\begin{table}
\caption{\emph{Distributed Control}: comparison of the functional value. (Sym.) Natural optimal branch for symmetric target. (Asym.) Natural optimal branch for asymmetric target. (Sym.-U.) Unstable uncontrolled solution with symmetric target. (Asym.-S.) Stable uncontrolled solution with asymmetric target. (B.M.E.) Below machine epsilon.}
\label{Distributed_J}
\begin{center}
\tabcolsep=0.11cm
\footnotesize{
\begin{tabular}{|c||c|c||c|c||c|c||c|c||c|c||}
\hline
& Sym.-U. & \cellcolor[HTML]{E5E3E3}Asym.-S. & Sym.& \cellcolor[HTML]{E5E3E3}Asym. & Sym. & \cellcolor[HTML]{E5E3E3}Asym. & Sym. & \cellcolor[HTML]{E5E3E3}Asym. & Sym. & \cellcolor[HTML]{E5E3E3}Asym. \\ \cline{2-11}
\multirow{-2}{*}{$\mu$} & \multicolumn{2}{c||}{Uncontrolled} & \multicolumn{2}{c||}{$\alpha = 1$} & \multicolumn{2}{c||}{$\alpha = 0.1$} & \multicolumn{2}{c||}{$\alpha = 0.01$} & \multicolumn{2}{c||}{$\alpha = 0.001$} \\ \hline
$2$ & 5.14e--9 & \cellcolor[HTML]{E5E3E3}1.88e+1 &5.06e--9 & \cellcolor[HTML]{E5E3E3}1.81e+1& 4.51e--9 & \cellcolor[HTML]{E5E3E3}1.36e+1 & 2.22e--9 & \cellcolor[HTML]{E5E3E3}4.23e+0 & 4.04e--10 & \cellcolor[HTML]{E5E3E3}5.66e--1 \\ \hline
$1.5$ & 4.38e--6 & \cellcolor[HTML]{E5E3E3}1.88e+1 & 4.29e--6 & \cellcolor[HTML]{E5E3E3}1.77e+1 & 3.61e--6 & \cellcolor[HTML]{E5E3E3}1.20e+1 & 1.46e--6 & \cellcolor[HTML]{E5E3E3}3.09e+0 & 2.28e--7& \cellcolor[HTML]{E5E3E3}3.87e--1 \\ \hline
$1$ & 4.10e--3 & \cellcolor[HTML]{E5E3E3}1.86e+1 & 3.95e--3 & \cellcolor[HTML]{E5E3E3}1.67e+1 & 2.99e--3 & \cellcolor[HTML]{E5E3E3}9.15e+0 & 9.14e--4& \cellcolor[HTML]{E5E3E3}1.86e+0& 1.23e--4 & \cellcolor[HTML]{E5E3E3}2.17e--1 \\ \hline
$0.9$ & 1.63e--2 & \cellcolor[HTML]{E5E3E3}1.84e+1 & 1.56e--2 & \cellcolor[HTML]{E5E3E3}1.54e+1 & 1.14e--2 & \cellcolor[HTML]{E5E3E3}7.88e+0 & 3.26e--3 & \cellcolor[HTML]{E5E3E3}1.50e+0 &4.26e--4 & \cellcolor[HTML]{E5E3E3}1.73e--1 \\ \hline
$0.8$ & 6.52e--2 & \cellcolor[HTML]{E5E3E3} 1.54e+1& 6.21e--2 & \cellcolor[HTML]{E5E3E3}1.31e+1 & 4.36e--2 & \cellcolor[HTML]{E5E3E3}6.06e+0 & 1.14e--2 & \cellcolor[HTML]{E5E3E3}1.08e+0& 1.45e--3 & \cellcolor[HTML]{E5E3E3}1.22e--1 \\ \hline
$0.7$ & 2.59e--1 & \cellcolor[HTML]{E5E3E3}1.15e+1 & 2.45e--1 & \cellcolor[HTML]{E5E3E3}9.28e+0 & 1.63e--1 & \cellcolor[HTML]{E5E3E3}3.68e+0 &3.93e--2& \cellcolor[HTML]{E5E3E3}6.16e--1 & 4.81e--3 & \cellcolor[HTML]{E5E3E3}6.90e--2\\ \hline
$0.6$ & 1.70e+0 & \cellcolor[HTML]{E5E3E3}5.34e+0 & 9.54e--1 & \cellcolor[HTML]{E5E3E3}3.76e+0 & 5.94e--1 & \cellcolor[HTML]{E5E3E3}1.24e+0 & 1.28e--1 & \cellcolor[HTML]{E5E3E3}2.00e--1 & 1.70e--2 & \cellcolor[HTML]{E5E3E3}2.22e--2 \\ \hline
$0.5$ & 3.92e+0 & \cellcolor[HTML]{E5E3E3}B.M.E. &3.59e+0 & \cellcolor[HTML]{E5E3E3}B.M.E. & 2.04e+0 & \cellcolor[HTML]{E5E3E3}B.M.E. & 3.92e--1 & \cellcolor[HTML]{E5E3E3}B.M.E. & 4.47e--2 & \cellcolor[HTML]{E5E3E3}B.M.E. \\ \hline
\end{tabular}
}
\end{center}
\end{table}
\begin{figure}
\centering
\includegraphics[width=0.489\textwidth]{/ocp/Distributed_velocity_exit}
\includegraphics[width=0.46\textwidth]{/ocp/Distributed_s_control_exit}
\caption{\emph{Distributed Control}. \emph{Left}: comparison of velocity profiles in the controlled and uncontrolled cases for $\alpha = 0.01$, $\mu = 0.5$ on $\Gamma_{\text{obs}}$ w.r.t.\ the symmetric desired profile when following the natural optimal branch. \emph{Right}: representation of control variable evolution for $\alpha = 0.01$, $\mu=2, 1, 0.95, 0.5$ for $x_1 = 45$ when following the natural optimal branch.}
\label{fig:dist_s_mag}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.489\textwidth]{/ocp/Distributed_asy_velocity_exit}
\includegraphics[width=0.46\textwidth]{/ocp/Distributed_asy_control_exit}
\caption{\emph{Distributed Control}. \emph{Left}: comparison of velocity profiles in the controlled and uncontrolled cases for $\alpha = 0.01$, $\mu = 2$ on $\Gamma_{\text{obs}}$ w.r.t.\ the asymmetric desired profile when following the natural optimal branch. \emph{Right}: representation of control variable evolution for $\alpha = 0.01$, $\mu=2, 1, 0.95, 0.5$ for $x_1 = 45$ when following the natural optimal branch.}
\label{fig:dist_as_mag}
\end{figure}
\begin{figure}
\centering
\includegraphics[scale=0.65]{ocp/Bifurcation_diagram_nu_5-eps-converted-to.pdf}
\caption{\emph{Distributed Control}: bifurcation diagram (upper branch only) for controlled state velocity obtained with $\alpha = 1, 0.1, 0.01, 0.001$ and asymmetric target, compared to the uncontrolled velocity.}
\label{fig:early_bif}
\end{figure}
We show the comparison of the values of the cost functional \eqref{eq:J_NS} in Table \ref{Distributed_J} for the natural branch reached for both the symmetric and asymmetric targets. Several values of $\mu$ (rows) and $\alpha$ (columns) have been analyzed and compared to the uncontrolled functional, computed as in the Neumann test case. As expected, we notice that the functional is lower for smaller $\alpha$.
For the symmetric target, the action of the distributed control is indeed able to steer the solution towards the desired symmetric profile. Indeed, for $\mu = 0.5$, the choice $\alpha = 0.01$ shows that the functional is decreased by $90\%$ w.r.t.\ its uncontrolled counterpart, while for $\alpha = 0.001$ the cost functional is decreased by almost $99\%$.
Similarly, for the asymmetric target, the maximum action of the control variable occurs at low Reynolds numbers: for $\mu = 2$ we observe a decrease of the functional of $77.5\%$ for $\alpha = 0.01$, up to $97\%$ for $\alpha = 0.001$. We remark that no control action is needed for $\mu = 0.5$ (more precisely, $\mu \approx 0.49$), the parameter value for which the asymmetric $v_\text{d}$ was computed: this is confirmed by values of \eqref{eq:J_NS} below machine precision.
\begin{figure}
\centering
\begin{subfigure}[b]{0.49\textwidth}
\centering
\includegraphics[width=0.85\textwidth]{/ocp/plot_eig_r_distributed1_b}
\caption{}
\label{fig:dist_eig_1}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.49\textwidth}
\centering
\includegraphics[width=0.85\textwidth]{/ocp/plot_eig_r_distributed1e1_b}
\caption{}
\label{fig:dist_eig_1e1}
\end{subfigure}\\
\begin{subfigure}[b]{0.49\textwidth}
\centering
\includegraphics[width=0.85\textwidth]{/ocp/plot_eig_r_distributed_storto1_b}
\caption{}
\label{fig:dist_a_eig_1}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.49\textwidth}
\centering
\includegraphics[width=0.85\textwidth]{/ocp/plot_eig_r_distributed_storto1e1_b}
\caption{}
\label{fig:dist_a_eig_1e1}
\end{subfigure}\\
\caption{\emph{Distributed Control}: spectral analysis with $\alpha = 1, 0.1$ (left to right) for the natural optimal branch with symmetric (top) and asymmetric (bottom) targets.}
\label{fig:dist_eig}
\end{figure}
The plots of Figure \ref{fig:dist_eig} represent the spectral analysis for this optimal control problem: in particular, Figures \ref{fig:dist_eig_1} ($\alpha = 1$) and \ref{fig:dist_eig_1e1} ($\alpha = 0.1$) are associated with the symmetric target when following the corresponding natural optimal branch, while Figures \ref{fig:dist_a_eig_1} ($\alpha = 1$) and \ref{fig:dist_a_eig_1e1} ($\alpha = 0.1$) consider the asymmetric target when following its natural optimal branch. As the behavior between the top and bottom panels of Figure \ref{fig:dist_eig} is comparable, we will only comment in the following on the role of $\alpha$.
We plot the eigenvalues for $\alpha = 1$ in $\Re(\sigma_{\mu}) \in [-0.01, 0.01]$ and for $\alpha = 0.1$ in $\Re(\sigma_{\mu}) \in [-0.005, 0.005]$. For this test case, the predominance of positive eigenvalues is visible also for large values of the penalization parameter. The smaller $\alpha$ is, the further the negative eigenvalues are pushed down. For the values of $\alpha$ taken into account, the shears phenomenon essentially disappears: for $\alpha = 1$ a small trace of the shears structure is still visible (highlighted in blue) in Figures \ref{fig:dist_eig_1} and \ref{fig:dist_a_eig_1}, where the bottom part of the shear is pushed away from $\Re(\sigma_{\mu}) = 0$. Instead, for $\alpha = 0.1$ the shears structure is completely lost: Figures \ref{fig:dist_eig_1e1} and \ref{fig:dist_a_eig_1e1} show that only one eigenvalue (representing the top of the shears, and marked in blue) approaches $\Re(\sigma_{\mu}) = 0$ without crossing it.
We finally notice that the point $\mu^{**}$, where the upper shears curve is closest to the axis $\Re(\sigma_{\mu}) = 0$, allows us to obtain further information on the bifurcating phenomenon. From Figure \ref{fig:dist_eig}, $\mu^{**} \approx 0.96$ for the symmetric target, regardless of $\alpha$, while requiring an asymmetric target leads to $\mu^{**} \in [1.0, 1.2]$ with a mild dependence on $\alpha$.
With the aid of Figure \ref{fig:early_bif}, which shows the bifurcation diagram for the controlled solution with asymmetric target (a similar plot can be obtained for the symmetric target as well, but it is omitted here because the lines almost overlap), we can state that optimal control is not only able to steer the state solution towards a desired branch, but may also affect the location of the bifurcation point.\\ The role of the penalization parameter $\alpha$ will be clarified in the next Section, and it will result in a completely new optimal solution behavior in Section \ref{dirichlet}.
\subsection{Channel Control: the $\boldsymbol \alpha$ effect}
\label{inlet}
This Section aims at describing how the value of the penalization parameter $\alpha$ can affect the natural convergence towards a symmetric target over $\Gamma_{\text{obs}}$.
Towards this goal, we analyzed the action of a control variable defined at the end of the inlet channel, i.e.\ $\Omega_u = \Gamma_{\text{ch}}$, as depicted in Figure \ref{fig:channel}. The boundary $\Gamma_{\text{wall}}$ is, once again, $\Gamma_{0} \cup \Gamma_{\text{D}}$.
Within this setting, the problem reads: given $\mu \in \Cal P$, find the optimal solution $X \in \mathbb X$ such that the following holds
\begin{equation}
\label{eq:Channel_eq}
\begin{cases}
v\mathbb{I}_{\Gamma_{\text{obs}}} -\mu \Delta w - v\cdot\nabla w + (\nabla v)^T w + \nabla q= v_\text{d} \mathbb{I}_{\Gamma_{\text{obs}}} \quad &\text{in} \ \Omega, \\
\nabla \cdot w = 0 \quad &\text{in} \ \Omega, \\
w =0 \quad &\text{on} \ \Gamma_{\text{in}} \cup \Gamma_{\text{wall}}, \\
- qn + (\mu \nabla w) n = 0 \quad &\text{on} \ \Gamma_{\text{out}}, \\
\alpha u \mathbb{I}_{\Gamma_{\text{ch}}}= w \mathbb{I}_{\Gamma_{\text{ch}}} \quad &\text{in} \ \Omega, \\
-\mu \Delta v + v\cdot\nabla v + \nabla p=u \mathbb{I}_{\Gamma_{\text{ch}}} \quad &\text{in} \ \Omega, \\
\nabla \cdot v = 0 \quad &\text{in} \ \Omega, \\
v = v_{\text{in}} \quad &\text{on} \ \Gamma_{\text{in}}, \\
v = 0 \quad &\text{on} \ \Gamma_{\text{wall}}, \\
- pn + (\mu \nabla v) n = 0 \quad &\text{on} \ \Gamma_{\text{out}}.\\
\end{cases}
\end{equation}
The optimal control acts as a forcing term capable of changing the way the flow enters the expansion channel. In Figure \ref{fig:Channel_solution} we show the adjoint velocity and pressure profiles obtained for $\mu = 0.5$ and two different penalization values, namely $\alpha = 1$ and $\alpha = 0.01$.
In the first case, following Algorithm \ref{alg:01}, the natural optimal branch presents a wall-hugging behavior, while for smaller values of $\alpha$ the control variable is able to drive the velocity towards a straight flux (see the left panels of Figures \ref{fig:ch_as_mag} and \ref{fig:ch_s_mag}). Therefore, for large values of $\alpha$ the natural optimal branch is composed of asymmetric solutions (i.e., far from the target), while for smaller values of $\alpha$ the natural optimal branch is made of symmetric solutions.
\begin{figure}
\centering
\begin{subfigure}[b]{0.49\textwidth}
\centering
\includegraphics[width=\textwidth]{/ocp/channel_w_asy}
\caption{}
\label{fig:w_channel_as}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.49\textwidth}
\centering
\includegraphics[width=\textwidth]{/ocp/channel_q_asy}
\caption{}
\label{fig:q_channel_as}
\end{subfigure}\\
\begin{subfigure}[b]{0.49\textwidth}
\centering
\includegraphics[width=\textwidth]{/ocp/channel_w_sy}
\caption{}
\label{fig:w_channel_s}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.49\textwidth}
\centering
\includegraphics[width=\textwidth]{/ocp/channel_q_sy}
\caption{}
\label{fig:q_channel_s}
\end{subfigure}\\
\caption{\emph{Channel Control}: two optimal solutions for adjoint velocity and pressure for $\mu = 0.5$: $\alpha = 1$ in (a) and (b), and $\alpha = 0.01$ in (c) and (d), respectively.}
\label{fig:Channel_solution}
\end{figure}
From the right plots of Figures \ref{fig:ch_as_mag} and \ref{fig:ch_s_mag}, the control is very sensitive close to $\mu^*$, as shown by its asymmetric configuration both for the wall-hugging solution and for the straight one. For $\alpha = 1, 0.1, 0.01$, we were able to detect two solutions by using different initial guesses in the continuation method, showing that symmetric and asymmetric features coexist for some values of $\mu < \mu^*$. The smaller $\alpha$ was, the more difficult it was to recover the non-natural branch. For example, when $\alpha = 0.001$, the action of the control variable drives the wall-hugging phenomenon towards a straight flux so strongly that we were not able to reconstruct the whole non-natural optimal branch. Indeed, Newton's solver either did not converge (this happens also for $\alpha = 0.1$ and $\mu = 0.5$, compare Table \ref{Inlet_J}) or converged to the natural branch consisting of symmetric features.
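The role of the initial guess can be visualized on a toy problem: in the pitchfork normal form $g(x;\mu) = (\mu^{*} - \mu)\,x - x^3$, the trivial root mimics the symmetric branch, while the nontrivial roots, which exist only for $\mu < \mu^{*}$, mimic the wall-hugging one. The Python sketch below (ours, and purely illustrative of the continuation strategy, not of the actual FE system) recovers both coexisting solutions by starting Newton-type iterations from two different guesses.
\begin{verbatim}
import numpy as np
from scipy.optimize import fsolve

MU_STAR = 0.96  # toy critical value

def g(x, mu):
    # Pitchfork normal form: x = 0 ("symmetric") always solves g = 0;
    # x = +/- sqrt(MU_STAR - mu) ("wall-hugging") exist for mu < MU_STAR.
    return (MU_STAR - mu) * x - x**3

for mu in (1.0, 0.8, 0.5):
    # Different initial guesses select different coexisting solutions.
    x_sym = fsolve(g, x0=0.0, args=(mu,))[0]
    x_asy = fsolve(g, x0=1.0, args=(mu,))[0]
    print(f"mu = {mu}: from 0.0 -> {x_sym:+.3f}, from 1.0 -> {x_asy:+.3f}")
\end{verbatim}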
\begin{figure}
\centering
\includegraphics[width=0.489\textwidth]{/ocp/Channel_as_velocity_exit}
\includegraphics[width=0.46\textwidth]{/ocp/Channel_as_control_exit}
\caption{\emph{Channel Control}. \emph{Left}: comparison of velocity profiles in the controlled and uncontrolled cases for $\alpha = 1$, $\mu = 0.5$ on $\Gamma_{\text{obs}}$ w.r.t.\ the symmetric desired profile when following the natural optimal branch. \emph{Right}: representation of control variable evolution for $\alpha = 1$, $\mu = 2, 1, 0.95, 0.5$ at $x_1 = 10$ when following the natural optimal branch.}
\label{fig:ch_as_mag}
\end{figure}
As usual, the role of $\alpha$ in reducing the objective functional is highlighted in Table \ref{Inlet_J}. As already specified in Sections \ref{neumann} and \ref{distributed}, the straight configuration lowers the functional much more than the asymmetric solution does, due to its similarity to the symmetric $v_\text{d}$, the fixed target for this test case. Here, the role of $\alpha$ is crucial in order to reach a solution which better represents the desired state. Indeed, the control was able to steer the solution to the symmetric profile for $\alpha = 0.1, 0.01, 0.001$. From the functional point of view, we do not observe a notable decrease, as shown in Table \ref{Inlet_J}, where the value of \eqref{eq:J_NS} is presented for different values of $\mu$ and $\alpha$ w.r.t.\ the uncontrolled problem solution. Yet, acting at the end of the inlet channel still allows the optimal solution to be driven towards a natural convergence to the symmetric $v_\text{d}$, even though the parabolic profile on $\Gamma_{\text{obs}}$ is not reached (the functional decreases only by about $10\%$ for $\mu = 0.5$ and $\alpha = 0.001$ w.r.t.\ the uncontrolled symmetric solution).
\begin{figure}
\centering
\includegraphics[width=0.489\textwidth]{/ocp/Channel_s_velocity_exit}
\includegraphics[width=0.46\textwidth]{/ocp/Channel_s_control_exit}
\caption{\emph{Channel Control}. \emph{Left}: comparison of velocity profiles in the controlled and uncontrolled cases for $\alpha = 0.01$, $\mu = 0.5$ on $\Gamma_{\text{obs}}$ w.r.t.\ the symmetric desired profile when following the natural optimal branch. \emph{Right}: representation of control variable evolution for $\alpha = 0.01$, $\mu=2, 1, 0.95, 0.5$ for $x_1 = 10$ when following the natural optimal branch.}
\label{fig:ch_s_mag}
\end{figure}
\begin{table}
\caption{\emph{Channel Control}: comparison of the functional value w.r.t.\ stable and unstable uncontrolled solutions. \emph{Headers}: (Nat.) Natural optimal branch. (n-Nat.) Non-natural optimal branch.
\emph{Trailing cell characters}: (s) The solution has a symmetric profile. (a) The solution has an asymmetric profile. (nat-C.) Converging to the natural branch despite a tailored guess. (non-C.) Non-converging Newton's solver for a tailored guess.}
\label{Inlet_J}
\begin{center}
\tabcolsep=0.09cm
\footnotesize{
\begin{tabular}{|c||c|c||c|c||c|c||c|c||c|c||}
\hline
& Stable & \cellcolor[HTML]{E5E3E3}Unstable & Nat. & \cellcolor[HTML]{E5E3E3}n-Nat. & Nat. & \cellcolor[HTML]{E5E3E3}n-Nat. & Nat. & \cellcolor[HTML]{E5E3E3}n-Nat. & Nat. & \cellcolor[HTML]{E5E3E3}n-Nat. \\ \cline{2-11}
\multirow{-2}{*}{$\mu$} & \multicolumn{2}{c||}{Uncontrolled} & \multicolumn{2}{c||}{$\alpha = 1$} & \multicolumn{2}{c||}{$\alpha = 0.1$} & \multicolumn{2}{c||}{$\alpha = 0.01$} & \multicolumn{2}{c||}{$\alpha = 0.001$} \\ \hline
$2$ & 5.14e--9 & \cellcolor[HTML]{E5E3E3}5.14e--9 & 5.14e--9s& \cellcolor[HTML]{E5E3E3}5.14e--9s& 5.14e--9s & \cellcolor[HTML]{E5E3E3}5.14e--9s & 5.14e--9s & \cellcolor[HTML]{E5E3E3}5.14e--9s & 5.07e--9s& \cellcolor[HTML]{E5E3E3}5.14e--9s \\ \hline
$1.5$ & 4.38e--6 & \cellcolor[HTML]{E5E3E3}4.38e--6 & 4.38e--6s& \cellcolor[HTML]{E5E3E3}4.38e--6s & 4.38e--6s & \cellcolor[HTML]{E5E3E3}4.38e--6s & 4.38e--6s & \cellcolor[HTML]{E5E3E3}4.38e--6s & 4.28e--6s & \cellcolor[HTML]{E5E3E3}4.38e--6s \\ \hline
$1$ & 4.10e--3 & \cellcolor[HTML]{E5E3E3}4.10e--3 & 4.10e--3s& \cellcolor[HTML]{E5E3E3}4.10e--3s & 4.10e--3s & \cellcolor[HTML]{E5E3E3}4.10e--3s & 4.08e--3s & \cellcolor[HTML]{E5E3E3}4.10e--3s & 3.92e--3s & \cellcolor[HTML]{E5E3E3}4.10e--3s \\ \hline
$0.9$ & 3.33e--2 & \cellcolor[HTML]{E5E3E3}1.63e--2 & 3.33e--2a& \cellcolor[HTML]{E5E3E3}1.63e--2s & 1.63e--1s & \cellcolor[HTML]{E5E3E3}3.33e--2a & 1.63e--1s & \cellcolor[HTML]{E5E3E3} nat-C. & 2.93e--2s & \cellcolor[HTML]{E5E3E3}non-C. \\ \hline
$0.8$ & 2.08e--1 & \cellcolor[HTML]{E5E3E3}6.52e--2 & 2.08e--1a& \cellcolor[HTML]{E5E3E3}6.52e--2s & 6.52e--2s& \cellcolor[HTML]{E5E3E3}2.07e--1a& 6.52e--2s & \cellcolor[HTML]{E5E3E3}2.04e--1a & 6.51e--2s & \cellcolor[HTML]{E5E3E3}nat-C. \\ \hline
$0.7$ & 1.01e+0 & \cellcolor[HTML]{E5E3E3}2.59e--1 & 1.01e+0a& \cellcolor[HTML]{E5E3E3}2.59e--1s & 2.59e--1s & \cellcolor[HTML]{E5E3E3}1.01e+0a& 2.59e--1s & \cellcolor[HTML]{E5E3E3}9.76e--1a & 2.24e--1s & \cellcolor[HTML]{E5E3E3}nat-C. \\ \hline
$0.6$ & 4.48e+0 & \cellcolor[HTML]{E5E3E3}1.70e+0 & 4.48e+0a& \cellcolor[HTML]{E5E3E3}1.02e+0s& 1.02e+0s & \cellcolor[HTML]{E5E3E3}4.43e+0a& 1.02e+0s & \cellcolor[HTML]{E5E3E3}4.03e+0a& 9.90e--1s & \cellcolor[HTML]{E5E3E3}nat-C.\\ \hline
$0.5$ & 1.88e+1 & \cellcolor[HTML]{E5E3E3}3.92e+0 & 1.87e+1a& \cellcolor[HTML]{E5E3E3}3.92e+0s & 3.92e+1s & \cellcolor[HTML]{E5E3E3}non-C. & 3.87e+0s & \cellcolor[HTML]{E5E3E3}non-C. &3.50e+0s & \cellcolor[HTML]{E5E3E3}nat-C. \\ \hline
\end{tabular}
}
\end{center}
\end{table}
Figure \ref{fig:eig_inlet} shows the eigenvalues of the global eigenproblem
in the range $\Re(\sigma_{\mu}) \in [-0.01, 0.01]$ when following the natural optimal branch.
For $\alpha = 1$, we can observe the shears phenomenon, which disappears for the other values of the penalization parameter. Lowering the value of $\alpha$ leads to an ensemble dominated by positive eigenvalues.
Furthermore, a clustering around the value of $\alpha$ can be observed in plots \ref{fig:in_eig_1e2} and \ref{fig:in_eig_1e3}. In the next Section, very peculiar features will be observed as well when changing the value of the penalization parameter $\alpha$ in a Dirichlet control setting.
\begin{figure}
\centering
\begin{subfigure}[b]{0.49\textwidth}
\centering
\includegraphics[width=0.85\textwidth]{/ocp/plot_eig_r_inlet1_b}
\caption{}
\label{fig:in_eig1}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.49\textwidth}
\centering
\includegraphics[width=0.85\textwidth]{/ocp/plot_eig_r_inlet1e1_b}
\caption{}
\label{fig:in_eig_1e1}
\end{subfigure}\\
\begin{subfigure}[b]{0.49\textwidth}
\centering
\includegraphics[width=0.85\textwidth]{/ocp/plot_eig_r_inlet1e2_b}
\caption{}
\label{fig:in_eig_1e2}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.49\textwidth}
\centering
\includegraphics[width=0.85\textwidth]{/ocp/plot_eig_r_inlet1e3_b}
\caption{}
\label{fig:in_eig_1e3}
\end{subfigure}\\
\caption{\emph{Channel Control}: spectral analysis with $\alpha = 1, 0.1, 0.01, 0.001$ following the natural branch.}
\label{fig:eig_inlet}
\end{figure}
\subsection{Dirichlet Control: flux action}
\label{dirichlet}
In this final example, we propose a Dirichlet control over the boundary portion $\Omega_u \equiv \Gamma_{\text{D}}$. We fix the symmetric configuration as the desired state $v_\text{d}$ over the line $\Gamma_{\text{obs}}$, while we set $\Gamma_{\text{wall}} = \Gamma_0$. In other words, we act on a Dirichlet boundary condition in order to lead the controlled solution towards the symmetric profile. The problem to be solved reads: given $\mu \in \Cal P$, find $X \in \mathbb X$ such that
\begin{equation}
\label{eq:Dirichlet_eq}
\begin{cases}
v\mathbb{I}_{\Gamma_{\text{obs}}} -\mu \Delta w - v\cdot\nabla w + (\nabla v)^T w + \nabla q= v_\text{d} \mathbb{I}_{\Gamma_{\text{obs}}} \quad &\text{in} \ \Omega, \\
\nabla \cdot w = 0 \quad &\text{in} \ \Omega, \\
w =0 \quad &\text{on} \ \Gamma_{\text{in}} \cup \Gamma_{\text{D}} \cup \Gamma_{\text{wall}}, \\
- qn + (\mu \nabla w) n = 0 \quad &\text{on} \ \Gamma_{\text{out}}, \\
\alpha u = w \quad &\text{on} \ \Gamma_D, \\
-\mu \Delta v + v\cdot\nabla v + \nabla p=0 \quad &\text{in} \ \Omega, \\
\nabla \cdot v = 0 \quad &\text{in} \ \Omega, \\
v = v_{\text{in}} \quad &\text{on} \ \Gamma_{\text{in}}, \\
v = u \quad &\text{on} \ \Gamma_{\text{D}}, \\
v = 0 \quad &\text{on} \ \Gamma_{\text{wall}}, \\
- pn + (\mu \nabla v) n = 0 \quad &\text{on} \ \Gamma_{\text{out}}.\\
\end{cases}
\end{equation}
Allowing the flux to freely enter or exit through the boundary $\Gamma_{\text{D}}$ drastically changes the behavior of the optimal solution. Since we are asking for a symmetric desired profile, the main action of the control is to straighten the flow: this behavior can be observed in Figure \ref{fig:diri_v_1} and in the left plot of Figure \ref{fig:diri_mag}. Indeed, even for large values of $\alpha$ the velocity profile reaches the symmetric configuration, while for lower values of the penalization parameter the velocity on $\Gamma_{\text{obs}}$ is parabolic.
This feature is also highlighted by the functional values in Table \ref{Diri_J}, where the functional \eqref{eq:J_NS} is shown for several $\mu$ (rows) and $\alpha$ (columns) w.r.t.\ the uncontrolled stable and unstable solutions. The cost functional decreases substantially for smaller values of $\alpha$, e.g.\ $\alpha = 0.001$: for example, focusing on $\mu = 0.5$, the functional decreases by only $18\%$ for $\alpha = 0.01$, in contrast to almost $82\%$ for $\alpha = 0.001$. Within the setting $\alpha = 0.001$, the system manifests an interesting and unexpected profile, shown in Figure \ref{fig:diri_v_1e3}. The flux presents an asymmetric configuration for low values of $\mu$. Namely, for low $\alpha$ a bifurcating solution appears, as depicted in Figure \ref{fig:diri_v_1e3}. The asymmetric behavior is due to the control variable, which not only allows the flow to exit from $\Gamma_{\text{D}}$ (in order to avoid the asymmetric recirculation of the wall-hugging solution), but also adds flux near the channel, in order to achieve the straight configuration and the parabolic velocity profile given by the symmetric target velocity over the observation domain, as represented in the right plot of Figure \ref{fig:diri_mag}.
\begin{figure}
\centering
\includegraphics[width=0.489\textwidth]{/ocp/Dirichlet_velocity_exit}
\includegraphics[width=0.46\textwidth]{/ocp/Dirichlet_control_exit_2}
\caption{\emph{Dirichlet Control}. \emph{Left}: comparison of velocity profiles in the controlled and uncontrolled cases for $\alpha = 1, 0.01$, $\mu = 0.5$ on $\Gamma_{\text{obs}}$ w.r.t.\ the symmetric desired profile when following the natural optimal branch. \emph{Right}: representation of control variable evolution for $\alpha = 1, 0.1, 0.01, 0.001$ and $\mu = 0.5$ at $x_1 = 10$ when following the natural optimal branch.}
\label{fig:diri_mag}
\end{figure}
\begin{figure}
\centering
\begin{subfigure}[b]{0.49\textwidth}
\centering
\includegraphics[width=\textwidth]{/ocp/diri_v_1}
\caption{}
\label{fig:diri_v_1}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.49\textwidth}
\centering
\includegraphics[width=\textwidth]{/ocp/diri_v_1e3}
\caption{}
\label{fig:diri_v_1e3}
\end{subfigure}
\caption{\emph{Dirichlet Control}: two optimal velocity solutions for $\mu=0.5$, with $\alpha = 1$ and $\alpha = 0.001$, left and right, respectively.}
\label{fig:Dirichlet_solution}
\end{figure}
\begin{figure}
\centering
\begin{subfigure}[b]{0.49\textwidth}
\centering
\includegraphics[width=0.85\textwidth]{/ocp/plot_eig_r_dirichlet1_b}
\caption{}
\label{fig:diri_eig1}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.49\textwidth}
\centering
\includegraphics[width=0.85\textwidth]{/ocp/plot_eig_r_dirichlet1e1_b}
\caption{}
\label{fig:diri_eig_1e1}
\end{subfigure}\\
\begin{subfigure}[b]{0.49\textwidth}
\centering
\includegraphics[width=0.85\textwidth]{/ocp/plot_eig_r_dirichlet1e2_b}
\caption{}
\label{fig:diri_eig_1e2}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.49\textwidth}
\centering
\includegraphics[width=0.85\textwidth]{/ocp/plot_eig_r_dirichlet1e3_b}
\caption{}
\label{fig:diri_eig_1e3}
\end{subfigure}\\
\caption{\emph{Dirichlet Control}: spectral analysis with $\alpha = 1, 0.1, 0.01, 0.001$.}
\label{fig:eig_diri}
\end{figure}
\begin{table}[b]
\caption{\emph{Dirichlet Control}: comparison of the functional value w.r.t. the stable and unstable uncontrolled solutions.}
\label{Diri_J}
\begin{center}
\footnotesize{
\begin{tabular}{|c||c|c||c|c||c|c||c|c||c|c||}
\hline
\multirow{2}{*}{$\mu$} &Stable & \cellcolor[HTML]{E5E3E3} Unstable & \multicolumn{8}{c||}{Controlled Solution} \\ \cline{2-11}
& \multicolumn{2}{c||}{Uncontrolled} & \multicolumn{2}{c||}{$\alpha = 1$} & \multicolumn{2}{c||}{$\alpha = 0.1$} & \multicolumn{2}{c||}{$\alpha = 0.01$} & \multicolumn{2}{c||}{$\alpha = 0.001$} \\ \hline
$2$ & { 5.14e--9 } & \cellcolor[HTML]{E5E3E3} { 5.14e--9 } & \multicolumn{2}{c||}{ 4.98e--9 } & \multicolumn{2}{c||}{ 4.83e--9 } & \multicolumn{2}{c||}{ 4.79e--9 } & \multicolumn{2}{c||}{ 4.79e--9 } \\ \hline
$1.5$ & { 4.38e--6 } & \cellcolor[HTML]{E5E3E3} { 4.38e--6 } & \multicolumn{2}{c||}{ 4.24e--6 } & \multicolumn{2}{c||}{ 4.10e--6} & \multicolumn{2}{c||}{ 4.07e--6} & \multicolumn{2}{c||}{ 4.06e--6} \\ \hline
$1$ & { 4.10e--3 } & \cellcolor[HTML]{E5E3E3} { 4.10e--3 } & \multicolumn{2}{c||}{ 3.94e--3 } & \multicolumn{2}{c||}{ 3.78e--3} & \multicolumn{2}{c||}{ 3.74e--3 } & \multicolumn{2}{c||}{ 3.72e--3} \\ \hline
$0.9$ & { 3.33e--2 } & \cellcolor[HTML]{E5E3E3} { 1.63e--2 } & \multicolumn{2}{c||}{ 1.56e--2 } & \multicolumn{2}{c||}{ 1.49e--2} & \multicolumn{2}{c||}{ 1.47e--2 } & \multicolumn{2}{c||}{ 1.45e--2 } \\ \hline
$0.8$ & { 2.08e--1 } & \cellcolor[HTML]{E5E3E3} { 6.52e--2} & \multicolumn{2}{c||}{ 6.20e--2 } & \multicolumn{2}{c||}{ 5.88e--2} & \multicolumn{2}{c||}{ 5.78e--2} & \multicolumn{2}{c||}{ 5.46e--2} \\ \hline
$0.7$ & { 1.01e+0} & \cellcolor[HTML]{E5E3E3} { 2.69e--1 } & \multicolumn{2}{c||}{ 2.44e--1 } & \multicolumn{2}{c||}{ 2.29e--1} & \multicolumn{2}{c||}{ 2.21e--1 } & \multicolumn{2}{c||}{ 1.82e--1 } \\ \hline
$0.6$ & { 4.48e+0} & \cellcolor[HTML]{E5E3E3} { 1.70e+0} & \multicolumn{2}{c||}{ 9.49e--1 } & \multicolumn{2}{c||}{ 8.73e--1} & \multicolumn{2}{c||}{ 8.09e--1 } & \multicolumn{2}{c||}{ 3.57e--1 } \\ \hline
$0.5$ & { 1.88e+1 } & \cellcolor[HTML]{E5E3E3} { 3.92e+0 } & \multicolumn{2}{c||}{ 3.58e+0 } & \multicolumn{2}{c||}{ 3.21e+0} & \multicolumn{2}{c||}{ 2.41e+0 } & \multicolumn{2}{c||}{ 4.73e--1 } \\ \hline
\end{tabular}
}
\end{center}
\end{table}
\begin{figure}
\centering
\includegraphics[width=0.489\textwidth]{/ocp/plot_eig_rc_dirichlet1_new_2}
\includegraphics[width=0.489\textwidth]{/ocp/plot_eig_rc_dirichlet1e3_new_2}
\caption{\emph{Dirichlet Control}. Eigenvalues of the state eigenproblem in the complex plane: asymmetric and symmetric solutions, left and right, respectively.}
\label{fig:dirichlet_eig_state}
\end{figure}
Figure \ref{fig:eig_diri} provides the eigenvalue analysis, where we show some close-ups starting with $\Re(\sigma_{\mu}) \in [-0.001, 0.001]$ for $\alpha = 1$ in the top-left panel, and shrinking the vertical interval in accordance with the order of magnitude of the lowered $\alpha$ in the remaining panels. As already noticed in Section \ref{distributed}, the stronger the control, the more eigenvalues become positive.
Furthermore, as already observed in the distributed control case with asymmetric target in Section \ref{distributed}, the value $\mu^*$ no longer seems to be relevant as an indication of the bifurcation point. Recalling the definition of $\mu^{**}$ in Section \ref{distributed} as the value of the parameter for which the top curve of the shears (marked in blue) approaches $\Re(\sigma_\mu) = 0$ from above, Figure \ref{fig:eig_diri} shows that such a curve moves away from $\Re(\sigma_\mu) = 0$ as $\alpha$ decreases, and thus no such point $\mu^{**}$ exists.
The results of the previous Sections have shown that the top shear structure is typically associated with the wall-hugging bifurcation, and that $\mu^{**}$ provides an indication of the bifurcation: we are thus led to believe that the standard bifurcating configuration observed in the uncontrolled case, consisting of a branch of symmetric solutions and a branch of wall-hugging ones, is not present here, with the latter branch disappearing. However, the system seems to feature a \emph{different bifurcation}, presented in Figure \ref{fig:diri_v_1e3}. Indeed, we can see an eigenvalue crossing the line $\Re(\sigma_\mu) = 0$ for the global eigenproblem in Figure \ref{fig:diri_eig_1e3} for $\alpha = 0.001$.
Therefore, if we plot the eigenvalues of the state eigenproblem of Algorithm \ref{alg:01} (Figure \ref{fig:dirichlet_eig_state}), we see that the eigenvalues of the symmetric profile never cross the imaginary axis, while those of the asymmetric solution in Figure \ref{fig:diri_v_1e3} for $\alpha=0.001$ do.
In the setting with the modified boundary conditions, physical stability is thus a feature of the straight profile.
Moreover, from Figure \ref{fig:dirichlet_eig_state} we can clearly observe a pair of complex-conjugate eigenvalues crossing the imaginary axis. This is, in fact, the paradigm of a Hopf bifurcation \cite{AQpreprint,seydel2009practical} and represents further evidence of how deeply the system changed its inner features.
\begin{remark}[Lagrange multipliers]
From a numerical point of view, we employed Lagrange multipliers to solve the optimality system \eqref{eq:Dirichlet_eq}. The condition $v = u$ on $\Gamma_{\text{D}}$ has been weakly imposed in integral form, which results in extra terms in the adjoint equations. Furthermore, we also weakly impose the boundary condition $w = 0$ on $\Gamma_{\text{D}}$ with another multiplier. The reason for this latter choice will be explained in Section \ref{sec_ROM}.
\end{remark}
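Schematically, and up to the precise functional setting, the weak imposition described in the previous Remark amounts to augmenting the Lagrangian of the problem with the boundary terms
\begin{equation*}
\int_{\Gamma_{\text{D}}} \lambda \cdot (v - u) \, d\Gamma + \int_{\Gamma_{\text{D}}} \eta \cdot w \, d\Gamma,
\end{equation*}
where $\lambda$ and $\eta$ denote the two multipliers: taking variations w.r.t.\ $\lambda$ and $\eta$ recovers the conditions $v = u$ and $w = 0$ on $\Gamma_{\text{D}}$ in weak form, while the variations w.r.t.\ $v$ and $w$ generate the aforementioned extra terms in the optimality system.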
\subsection{Comparative Eigenvalue Analysis}
\label{comparison}
In this Section, we sum up all the observations and results derived from the global eigenvalue analysis over the four test cases. We now list the similarities between them, especially concerning the variation with respect to different values of the penalization parameter $\alpha$:
\begin{itemize}
\item[{$\small{\circ}$}] the \emph{eigenvalue cluster} around the value of $\alpha$. This behavior is well represented in Figures \ref{fig:n_eig_1e2}, \ref{fig:n_eig_1e3}, \ref{fig:in_eig_1e2} and \ref{fig:in_eig_1e3}. These eigenvalues come from the optimality equation;
\item[{$\small{\circ}$}] the \emph{predominance of positive eigenvalues} over the negative ones. In all the performed spectral analyses we have observed that the control action lowers the negative eigenvalues. The stronger the control, the greater the number of positive eigenvalues, as represented in Figures \ref{fig:dist_eig_1e1} and \ref{fig:eig_diri};
\item[{$\small{\circ}$}] the \emph{shears effect} for weakly controlled systems. The shears configuration is characteristic of control problems which do not significantly change the uncontrolled system solution. This is the case for the Neumann control in Figure \ref{fig:n_eig1} and for the channel control with $\alpha = 1$, as shown in Figure \ref{fig:in_eig1}. For the other cases, the smaller $\alpha$ is, the less visible this eigenvalue configuration becomes: in some cases, the structure is completely broken;
\item[{$\small{\circ}$}] the $\mu^{**}$ \emph{identification}. It is clear that the shears (or their top curve, if the shears are broken) approach the axis $\Re(\sigma_{\mu}) = 0$ at the point $\mu^{**}$ for which the bifurcating phenomenon of the controlled system occurs. This is the same situation we found for the uncontrolled problem, in which the path of the eigenvalue identifies the value of the bifurcation parameter $\mu^*$. Moreover, this situation is often preserved regardless of $\alpha$. Indeed, the positive shears eigenvalue is still present in Figures \ref{fig:dist_eig_1e1}, \ref{fig:in_eig_1e1}, \ref{fig:in_eig_1e2} and even in the Dirichlet optimal control case, as shown in Figure \ref{fig:eig_diri}. In some cases, a shift of $\mu^{**}$ compared to $\mu^*$ has been observed.
\end{itemize}
Since the structure of the spectral analysis is highly influenced by the control strength, we also performed an eigenvalue analysis dealing with the state and adjoint equations only. For all the test cases, shears occur (see Figure \ref{fig:tenaglie}). The shears structure is symmetric when the solution shows the wall-hugging property, while it is slightly asymmetric when the state flow is straight. We conjecture that this behavior is due to the different reactions of the state and adjoint blocks to the bifurcating phenomena. Indeed, for the symmetric flux, the behavior of the state equation has to be preserved for all $\mu$, while the adjoint problem, which is strictly linked to the control variable, puts more effort into rebalancing the flux, resulting in an asymmetric contribution that causes the shears to be slightly asymmetric.
The spectral analysis of a nonlinear system is ultimately an indispensable tool to understand bifurcation phenomena. From the computational point of view, however, it is a very demanding task, above all for nonlinear \ocp s. Indeed, the FE discretization leads to huge systems to be solved for a wide sample of parameters $\mu \in \Cal P$. In Section \ref{sec_ROM} we propose ROMs as a suitable approach to overcome this issue.
\begin{figure}[b]
\centering
\includegraphics[width=0.489\textwidth]{/ocp/plot_eig_r_neumann1_f_b}
\includegraphics[width=0.489\textwidth]{/ocp/plot_eig_r_distributed1_f_b}
\caption{\emph{Comparative Analysis}. \emph{Left}: asymmetric velocity with Neumann control for $\alpha = 1$. \emph{Right}: symmetric velocity with distributed control for $\alpha = 1$. }
\label{fig:tenaglie}
\end{figure}
\section{Nonlinear Parametrized Optimal Control Problems and Bifurcating Systems}
\label{general_problem_sec}
In this Section we introduce a generic nonlinear \ocp . We focus on the minimization of a quadratic cost functional under a nonlinear PDE($\bmu$) constraint in Hilbert spaces, following the Lagrangian approach \cite{gunzburger2003perspectives, hinze2008optimization}. In Sections \ref{general_problem} and \ref{FE} we provide existence results and optimality conditions for nonlinear \ocp s in their continuous and discretized versions, respectively. Then, Section \ref{bif} describes the spectral properties of the optimization system at hand.
\subsection{Problem Formulation}
\label{general_problem}
Optimal control is a mathematical tool which aims at modifying the natural behavior of a system. Let us suppose we are given a \emph{state} PDE($\bmu$)
\begin{equation}
\label{eq:state}
G(y; \bmu) = f,
\end{equation}
with \emph{state variable} $y \eqdot y(\bmu) \in \mathbb Y$, i.e.
$G: \mathbb Y \times \mathcal P \rightarrow \mathbb Y\dual$ where $\mathbb Y$ is a Hilbert space, $f \in \state\dual$ is a forcing term,
$\mathcal P \subset \mathbb R^P$
is a parameter space of dimension $P \geq 1$, while $G(y; \bmu) =
E_{\textit{n}\ell}(y; \bmu) + E_{\ell}(y; \bmu)$ is the \emph{state operator}, with $E_{\ell} \in \Cal L(\state, \state \dual)$ and $E_{\textit{n}\ell}$ representing the linear and nonlinear contributions, respectively.
Here, we call $\Cal L \cd$ the space of linear continuous functions between two spaces.
We now want $y$ to be as close as possible to a known solution profile
$y_\text{d} \eqdot y_\text{d}(\bmu) \in \mathbb Y_{\text{obs}} \supseteq \mathbb Y$.
To this end, a new variable is introduced in the equation, the \emph{control variable} $u \eqdot u(\bmu) \in \mathbb U$, with $\mathbb U$ another, possibly different, Hilbert space. Let us define the \emph{controlled equation}
$\mathcal E: \mathbb Y \times \mathbb U \times \mathcal P \rightarrow \mathbb Y\dual$ as
\begin{equation*}
\mathcal E(y,u; \bmu) \eqdot\; G(y; \bmu) - C(u) - f = 0,
\end{equation*}
where $C \in \Cal L(\control, \state \dual)$ is the \emph{control operator} describing the action of the variable $u$ on the system\footnote{Parametrized control operators are also possible, with a straightforward extension of the methodology presented herein.}. In other words, we are trying to change the behavior of the state PDE($\bmu$) through $C(u)$.
The \ocp $\;$ reads: given a $\bmu \in \mathcal P$, find the pair $(y,u) \in \mathbb Y \times \mathbb U$ which solves
\begin{equation}
\label{min_problem}
\min_{y \in \mathbb Y, u \in \mathbb U} J(y,u; y_\text{d}) \text{ subject to } \mathcal E(y,u; \bmu) = 0,
\end{equation}
where $J: \mathbb Y \times \mathbb U \times \mathbb Y_{\text{obs}} \rightarrow \mathbb R$ is the \emph{objective functional} defined by
\begin{equation}
J(y,u; y_\text{d}) \eqdot \half \norm{y - y_\text{d}}_\mathbb {Y_{\text{obs}}}^2 + \alf \norm{u}_{\mathbb U}^2,
\end{equation}
and $\alpha \in (0, 1]$ is a \emph{penalization parameter}. The role of $\alpha$ is of great interest: indeed, a large value of $\alpha$ translates into a poor capability of the system to be controlled, while $\alpha \ll 1$ allows the functional to be minimized with larger values of the variable $u$. Problem \eqref{min_problem} admits a solution if \cite[Section 1.5.2]{hinze2008optimization}:
\begin{enumerate}[(i)]
\item $\mathbb U$ is convex, bounded and closed;
\item $\mathbb Y$ is convex and closed;
\item for every $\bmu \in \Cal P$, the controlled system $\mathcal E (y,u; \bmu) = 0$ has a bounded solution map
$u \in \mathbb U \mapsto y(u) \in \mathbb Y$;
\item for a given $\bmu \in \Cal P$, the map $(y,u, \bmu) \in \mathbb Y \times \mathbb U \times \mathcal P \rightarrow \mathcal E (y,u; \bmu) \in \mathbb Y\dual$ is weakly continuous with respect to (w.r.t.) the first two arguments;
\item for a given $y_{\text{d}} \in \mathbb Y_{\text{obs}}$, the objective functional $J(y,u; y_\text{d})$ is weakly lower semicontinuous w.r.t.\ $y$ and $u$.
\end{enumerate}
We now discuss the Lagrangian structure and the necessary first order optimality conditions. First of all, let $z \eqdot z(\bmu) \in {\mathbb Y^{\ast \ast}} = \mathbb Y$ be an arbitrary variable called \emph{adjoint variable}. Let us call $X = (y,u,z) \in \mathbb X \eqdot \mathbb Y \times \mathbb U \times \mathbb Y$ and let us build the \emph{Lagrangian functional}
$\Lg:\mathbb X \times \mathbb Y_{\text{obs}} \times \mathcal P \rightarrow \mathbb R$ as
\begin{equation}
\label{lagrangian_functional}
\Lg(X; y_\text{d}, \bmu) \eqdot J(y, u; y_\text{d}) + \la z, \Cal E(y,u; \bmu) \ra_{\mathbb Y \mathbb Y\dual},
\end{equation}
where $\la \cdot, \cdot \ra_{\mathbb Y \mathbb Y\dual}$ is the duality pairing of $\mathbb Y$ and $\mathbb Y\dual$. The introduction of the adjoint variable allows us to treat problem \eqref{min_problem} in an unconstrained fashion by finding the stationary points of \eqref{lagrangian_functional}. We remark that we consider $z$ in the same space as the state variable for a proper definition of the discretized problem: we will clarify the reason in Section \ref{FE}. Moreover, the variable $X$ inherits the parameter dependence by definition, i.e.\ $X \eqdot X(\bmu)$.
Furthermore, let us assume that the following hold:
\begin{enumerate}[resume*]
\item $\mathbb U$ is nonempty;
\item $J : \mathbb Y \times \mathbb U \times \mathbb Y_{\text{obs}} \rightarrow \mathbb R$ and $\mathcal E : \mathbb Y \times \mathbb U \times \mathcal P \rightarrow \mathbb Y\dual$ are continuously Fr\'echet differentiable w.r.t.\ $y$ and $u$;
\item given $\bmu \in \mathcal P$, the controlled system $ \mathcal E(y, u; \bmu) = 0$ has a unique solution $y = y(u) \in \mathbb Y$ for all $u \in \mathbb U$;
\item given $\bmu \in \mathcal P$, $D_y \mathcal E (y, u; \bmu) \in \mathcal L(\mathbb Y, \mathbb Y\dual)$ has a bounded inverse for all control variables $u$.
\end{enumerate}
The Fr\'echet derivative w.r.t.\ a variable $\star$ will be indicated as $D_{\star}$, as already done in (ix). Assuming
$({y}, u) \in \state \times \control$ to be a solution to \eqref{min_problem} for a given $\bmu \in \Cal P$, thanks to hypotheses
(vi) - (ix), there exists an adjoint variable $z \in \state$ such that the following variational system is satisfied \cite{hinze2008optimization}:
\begin{equation}
\label{KKT}
\begin{cases}
D_{y}\Lg(X; y_\text{d}, \bmu) [\omega] = 0 & \forall \omega \in \state,\\
D_u\Lg(X; y_\text{d}, \bmu) [\kappa] = 0 & \forall \kappa \in \control,\\
D_z\Lg(X; y_\text{d}, \bmu) [\zeta] = 0 & \forall \zeta \in \state,\\
\end{cases}
\qquad \text{or, in strong form,} \qquad
\begin{cases}
y + D_y \mathcal E (y, u; \bmu)\dual (z) = y_\text{d}, &\\
\alpha u - C\dual (z )= 0, &\\
\mathcal E(y,u;\bmu) = 0, &\\
\end{cases}
\end{equation}
where $D_y \mathcal E (y, u; \bmu) \dual \in \Cal L(\state, \state \dual)$ is the adjoint operator of the Fr\'echet linearization of $\mathcal E (y,u; \bmu)$ w.r.t.\ the state variable, while $C\dual \in \Cal L(\mathbb Y, \mathbb U\dual) $ is the adjoint of the control operator.
We will refer to problem \eqref{KKT} as \emph{optimality system}, in weak or strong form, respectively.
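As a concrete instance, for the Navier--Stokes test cases of Sections \ref{neumann}--\ref{dirichlet}, and up to pressure, incompressibility and boundary contributions, the linearized state operator and its adjoint read
\begin{equation*}
D_y \mathcal E(v,u;\mu)[\delta v] = -\mu \Delta \delta v + \delta v \cdot \nabla v + v \cdot \nabla \delta v,
\qquad
D_y \mathcal E(v,u;\mu)\dual(w) = -\mu \Delta w - v \cdot \nabla w + (\nabla v)^T w,
\end{equation*}
the latter being precisely the source of the transport and reaction terms appearing in the adjoint momentum equation of, e.g., \eqref{eq:Distributed_eq}.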
Moreover, writing the optimality system \eqref{KKT} in compact form, it reads: given $\bmu \in \Cal P$, find $X \in \mathbb X$ such that
\begin{equation}
\label{ocp}
\mathcal G(X; \bmu) = {\Cal F},
\end{equation}
with
\begin{equation*}
\mathcal G(X; \bmu) \eqdot \begin{bmatrix} y + D_y \mathcal E (y, u; \bmu)\dual (z) \\ \alpha u - C\dual (z) \\ G(y, \bmu) - C(u) \end{bmatrix} \quad
\text{and} \quad \Cal F \eqdot \begin{bmatrix} y_{\text{d}} \\ 0 \\ f \end{bmatrix}.
\end{equation*}
In the nonlinear case, even considering only the state equation, the solution for a given parameter $\bmu$ may not be unique. Therefore, the local well-posedness of problem \eqref{ocp}, which strongly relies on the local invertibility assumptions (viii) and (ix), can fail due to the singularity of the state equation.
Hence, we can talk about \emph{solution branches}, i.e.\ multiple solution behaviors for a given value of the parameter.
We denote by $k$ the number of branches, and by $\Cal X_i$, $i = 1, \hdots, k$ the set of each solution on the $i$-th branch. We call \emph{solution ensemble} the set of all the solution branches $\Cal X_i$:
\begin{equation}
\label{ensemble}
\Cal X \eqdot \bigcup_{i = 1}^k \{ X(\bmu) \in \Cal X_i \; | \; \bmu \in \mathcal P \}.
\end{equation}
In the next Section, we will discuss the FE approximation of a solution to the nonlinear \ocp\ for a fixed value of the parameter, restricting ourselves to the well-posed setting.
\subsection{The FE Approximation}
\label{FE}
We are interested in the numerical approximation of the solution ensemble \eqref{ensemble} of the nonlinear \ocp \, in \eqref{ocp}, defined over an open and bounded regular domain $\Omega \subset \mathbb R^d$.
Our aim is to discretize the problem at hand, in order to investigate its qualitative changes w.r.t.\ the values of the parameter.
We remark that, even considering a single branch, the fulfillment of the well-posedness conditions can fail at some critical point $\bmu^{*}$.
Hence, in the following, we assume $\bmu \neq \bmu^*$ and $X(\bmu) \in \Cal X_i$ for some $i \in \{1, \dots, k\}$, thus we call $\Cal X_i$ a \textit{non-singular branch}.
Furthermore, we assume the nonlinearity to be at most quadratic in the state variable, \A{guided by the numerical results we are going to present in the following Sections. However, the structure and the methodology do not change when dealing with nonlinearities of higher order.}
To approximate the system in \eqref{ocp},
first of all we define the triangulation $\Cal{T^{N_T}}$ of $\Omega$, where $K$ is an element of $\Cal{T^{N_T}}$ and $\Cal N_{\Cal T}$ is the number of cells. Then, let us consider the discrete spaces $\state \discy = \state \cap\mathbb K_{r_y}$
and $\control \discu = \control \cap \mathbb K_{r_u}$, where
$
\mathbb K_r = \{ v \in C^0(\overline \Omega) \; : \; v |_{K} \in \mbb P^r, \; \; \forall \, K \in \Cal{T^{N_T}} \},
$
and $\mbb P^r$ is the space of all polynomials of degree at most $r$. Let us consider the FE function space $\mbb X \disc = \state \discy \times \control \discu \times \state \discy \subset \mbb X$, of dimension
$\Cal N = 2\Cal N_y + \Cal N_u$, where $\Cal N_y$ and $\Cal N_u$ denote the dimensions of $\state \discy$ and $\control \discu$, respectively. The FE approximation of the parametrized problem \eqref{ocp} reads: given $\bmu \in \Cal P$, find $X\disc \eqdot X \disc (\bmu) \in \mathbb X\disc$ such that
\begin{equation}
\label{FE_ocp}
\mathcal G(X\disc; \bmu) = {\mathcal F}.
\end{equation}
We now want to make the algebraic structure of the system \eqref{FE_ocp} explicit. After the FE discretization, we can define $\mathsf y$, $\mathsf u$ and $\mathsf z$ as the column vectors whose entries are given by the FE coefficients of the state, control and adjoint variables in their approximated spaces, respectively. In the same fashion, we call $\mathsf y_\text{d}$ the column vector of the FE coefficients representing the desired state profile. Let us focus on the structure of the optimality problem. At the FE level, testing the controlled state equation against the FE basis functions, we can derive the matrices
$\mathsf {E}_{\textit{n}\ell} + \mathsf {E}_{\ell} - \mathsf C$ and the forcing term vector $\mathsf f$. Moreover, we define the mass matrices $\mathsf M_y$ and $\mathsf M_u$ for the state/adjoint variables and the control, respectively. We still need to understand the algebraic structure of $D_y \Cal E(y,u; \bmu)$.
The Fr\'echet derivative of the controlled state equation w.r.t.\ the state $y$ will be
$\mathsf {E}_{\textit{n}\ell}'[\mathsf y] + \mathsf E_{\ell}$. In other words, the linear state structure is preserved, the nonlinear operator is linearized in $\mathsf {E}_{\textit{n}\ell}'[\mathsf y] $ and the control contribution disappears. Then, the global matrix formulation of the optimization system \eqref{FE_ocp} is
\begin{equation}
\label{algebra_ocp}
\overbrace{
\begin{bmatrix}
\mathsf M_y & 0 & \mathsf {E}_{\textit{n}\ell}'[\mathsf y]^T + \mathsf E_{\ell}^T\\
0 & \alpha \mathsf M_u & - \mathsf C^T \\
\mathsf {E}_{\textit{n}\ell} + \mathsf {E}_{\ell} & - \mathsf C & 0 \\
\end{bmatrix}\underbrace{
\begin{bmatrix}
\mathsf y \\
\mathsf u \\
\mathsf z \\
\end{bmatrix}}_{\mathsf X}}^{\mathsf G(\mathsf X; \boldsymbol \mu)}
=
\overbrace{
\begin{bmatrix}
\mathsf M_y \mathsf y_\text{d} \\
0 \\
\mathsf f \\
\end{bmatrix}}^{\mathsf F},
\end{equation}
which in compact form reads:
\begin{equation}
\label{G_compact_FE}
\mathsf R(\mathsf X; \bmu)\eqdot \mathsf G(\mathsf X; \boldsymbol \mu) - \mathsf F = 0,
\end{equation}
where $\mathsf R(\mathsf X; \bmu)$ represents the \emph{global residual} of the optimality system.
To solve system \eqref{G_compact_FE}, we rely on Newton's method and we solve
\begin{equation}
\mathsf {X}^{j + 1} = \mathsf {X}^j+ \mathsf{Jac}(\mathsf X^{j}; \boldsymbol \mu)^{-1}(\mathsf F - \mathsf G(\mathsf X^j; \boldsymbol \mu)), \spazio j \in \mathbb N,
\end{equation}
until a residual based convergence criterion is satisfied.
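For concreteness, a minimal sketch of this iteration is reported below, assuming dense \texttt{numpy} arrays and user-supplied callables \texttt{G} and \texttt{Jac} evaluating $\mathsf G(\cdot; \bmu)$ and $\mathsf{Jac}(\cdot; \bmu)$; names and tolerances are illustrative and do not refer to the actual implementation.
\begin{verbatim}
import numpy as np

def newton(X0, mu, G, Jac, F, eps=1e-10, max_iter=50):
    # X^{j+1} = X^j - Jac(X^j; mu)^{-1} R(X^j; mu), with R = G - F
    X = X0.copy()
    for _ in range(max_iter):
        R = G(X, mu) - F              # global residual (G_compact_FE)
        if np.linalg.norm(R) < eps:   # residual-based stopping criterion
            break
        X = X - np.linalg.solve(Jac(X, mu), R)
    return X
\end{verbatim}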
Since the matrix $\mathsf {E}_{\textit{n}\ell}'[\mathsf y]^T$ still depends on $\mathsf y$, the Jacobian matrix will be of the following nature:
\begin{equation}
\label{J_ocp}
\mathsf{Jac}(\mathsf X^j; \bmu) = \begin{bmatrix}
\mathsf M_y + \mathsf D_{\mathsf y}( \mathsf {E}_{\textit{n}\ell}'[\mathsf y^j]^T)[\mathsf z^j] & 0 & \mathsf {E}_{\textit{n}\ell}'[\mathsf y^j]^T + \mathsf E_\ell^T\\
0 & \alpha \mathsf M_u & - \mathsf C^T \\
\mathsf {E}_{\textit{n}\ell}'[\mathsf y^j] + \mathsf E_\ell & - \mathsf C & 0 \\
\end{bmatrix},
\end{equation}
where the matrix $\mathsf D_{\mathsf y}( \mathsf {E}_{\textit{n}\ell}'[\mathsf y^j]^T)[\mathsf z^j]$ does not depend anymore on the state, but only on the $j$-th value of the adjoint variable.
Now, one can write
\begin{equation}
\label{J_saddle}
\mathsf{Jac}(\mathsf {X}^j; \bmu) =
\begin{bmatrix}
\mathsf A & \mathsf B^T \\
\mathsf B & 0 \\
\end{bmatrix},
\end{equation}
where
\begin{equation}
\mathsf A =
\begin{bmatrix}
\mathsf M_y + \mathsf D_{\mathsf y}( \mathsf {E}_{\textit{n}\ell}'[\mathsf y^j]^T)[\mathsf z^j] & 0\\
0 & \alpha \mathsf M_u & \\
\end{bmatrix}
\spazio \text{and} \spazio \mathsf B =
\begin{bmatrix}
\mathsf {E}_{\textit{n}\ell}'[\mathsf y^j] + \mathsf E_\ell & - \mathsf C
\end{bmatrix}.
\end{equation}
We remark that,
in the considered numerical settings, $\mathsf A$ is symmetric and, thus, we will always refer to \eqref{J_saddle} as a saddle point structure. To guarantee the solvability of the system we assume $\mathsf A$ to be invertible. Furthermore, we need the following \emph{Brezzi inf-sup condition} to be verified:
\begin{equation}
\label{FE_lbb}
\beta_\textit{Br} \disc(\bmu) \eqdot \adjustlimits \inf_{0 \neq \mathsf z} \sup_{0 \neq \mathsf x} \frac{\mathsf z^T\mathsf B \mathsf x}{\norm{\mathsf x}_{\state \times \control}\norm{\mathsf z}_{ \state}} \geq \hat{\beta}_\textit{Br} \disc > 0,
\end{equation}
where $\mathsf x = [{\mathsf y}^T, {\mathsf u}^T]^T$.
The assumption $z \in \state$ guarantees the fulfillment of the inf-sup stability condition in the FE approximation \cite{negri2015reduced,negri2013reduced}.
For general nonlinear problems, $\mathsf A$ may possibly be different from $\mathsf A^T$; however, the well-posedness results can be extended, and the interested reader may refer to \cite{Benzi, GeneralizedSaddlePoint}. \\
In the next Section, we will describe the branch-wise procedure implemented to reconstruct the bifurcation diagram.
\subsection{Bifurcation and stability analysis}
\label{bif}
For nonlinear PDE($\bmu$)s, the assumptions (vii)-(ix) ensure the applicability of the well-known Implicit Function Theorem \cite{ciarlet2013linear, Prodi}.
Indeed, under those assumptions, one expects that when the parameter changes slightly, a stable solution evolves continuously in a unique manner.
When such conditions fail to be fulfilled, the model may undergo \textit{bifurcation} phenomena.
In particular, we consider the case in which the state equation has a singularity at some parameter value $\bmu^*$, and for the sake of clarity, we will assume $f = 0$ (or, equivalently, include the forcing term $f$ in the expression of $G$).
Indeed, from the mathematical perspective, we can give a precise definition of such points \cite{Prodi}.
\begin{definition}
\label{de:bifurcation_points}
A parameter value $\bmu^* \in \Cal P$ is a \textit{bifurcation point} for \eqref{eq:state} from the solution $y^* \eqdot y(\bmu^*)$, if there exists a sequence $(y_n, \bmu_n) \in \mbb Y \times \Cal P$, with $y_n \neq y^*$, such that (i) $G(y_n; \bmu_n) = 0$, and (ii) $(y_n, \bmu_n) \to (y^*, \bmu^*)$.
\end{definition}
Thus, the bifurcation phenomenon is a paradigm for non-uniqueness in nonlinear analysis, and a necessary condition for its occurrence is the failure of the Implicit Function Theorem.
\begin{proposition}
\label{pr:ift}
A necessary condition for $\bmu^*$ to be a bifurcation point for $G$ is that the partial derivative $D_yG(y^*; \bmu^*)$ is not invertible.
\end{proposition}
Moreover, given the existence of many possible configurations for the system, a natural question is to understand which one inherits the stability of the unique solution, when it exists.
To perform a stability analysis, one of the most common and widely studied methods is the spectral analysis of the problem, which consists in the investigation of the eigenvalues of the system.
This technique, which has its roots in the theory of ordinary differential equations (ODEs) \cite{seydel2009practical,kuznetsov2004elements,kielhofer2006bifurcation}, allows one to understand the stability property of a solution to \eqref{eq:state} by means of the sign of the spectrum of the model operator.
In particular, for a general nonlinear uncontrolled problem, one linearizes the equation
around the solution under investigation, $\hat{y} = y(\hat{\bmu})$, and then solves the eigenvalue problem given by
\begin{equation}
\label{eq:eigen_state}
D_yG(\hat{y}; \hat{\bmu}) y_e = \rho_{ \hat{\bmu}} y_e ,
\end{equation}
where the pair $(\rho_{\hat \bmu}, y_e)$ represents respectively the eigenvalues and the eigenvector of $D_y G$ at $\hat{y}$ for each $\hat \bmu$ fixed.
This analysis provides us with information about the physical stability of the problem. Indeed, the stability analysis is strongly bound to the investigation of the solution features after a small perturbation. If the perturbation is small enough and the dynamics of the system remains in a neighborhood of the solution, then the latter will be called a \textit{stable solution}. Thus, it is fundamental to observe that, in connection with ODE stability theory, a positive eigenvalue gives an exponentially divergent behavior, while a negative one produces only small oscillations around the solution.
Therefore, it is clear that in order to have a stable solution, all eigenvalues must have negative real parts.
Dealing with the controlled problem \eqref{ocp} makes the analysis more involved. Indeed, the adjoint variable does not have a physical meaning, so that the considerations above are not directly applicable. Due to the high indefiniteness of the saddle point matrix \eqref{J_saddle}, a standard sign-analysis is no longer possible, see \cite{Benzi, BenziSimoncini, BenziWathen} as references on the topic.
Nevertheless, we can consider the eigenvalue problem for the system of the optimality conditions, in order to investigate \textit{a posteriori} the spectral property of \eqref{ocp}, as:
\begin{equation}
\label{eq:eigen_ocp}
D_X\Cal G(\hat{X}; \hat{\bmu}) X_e = \sigma_{\hat \bmu} X_e ,
\end{equation}
where $\hat{X} = X(\hat{\bmu})$ is the solution of which we are investigating the stability property and $(\sigma_{\hat \bmu}, X_e)$ is the eigenpair formed by the $\hat \bmu$-dependent eigenvalues $\sigma_{\hat \bmu}$ and eigenvectors $X_e$.
We will refer to \eqref{eq:eigen_state} as the \textit{state eigenvalue problem} and to \eqref{eq:eigen_ocp} as the \textit{global eigenvalue problem}.
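At the discrete level, \eqref{eq:eigen_state} and \eqref{eq:eigen_ocp} become generalized eigenvalue problems involving the scalar product matrices (cf.\ Algorithm \ref{alg:01} below). A hedged sketch of their solution, assuming \texttt{scipy} sparse matrices and using the implicitly restarted Arnoldi solver as a stand-in for the Krylov--Schur algorithm employed later, could read:
\begin{verbatim}
from scipy.sparse.linalg import eigs

def leading_eigenvalues(J, M, k=100):
    # solve J x = lambda M x for the k eigenvalues of largest real part
    return eigs(J, k=k, M=M, which="LR", return_eigenvectors=False)

rho    = leading_eigenvalues(Jac_y, Vy)  # state eigenproblem
sigma  = leading_eigenvalues(Jac, V)     # global eigenproblem
stable = (rho.real < 0).all()            # physical stability check
\end{verbatim}
Here \texttt{Jac\_y}, \texttt{Jac}, \texttt{Vy} and \texttt{V} are placeholders for the state/global Jacobians and the corresponding inner-product matrices introduced below.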
As we said, the main issue with bifurcating systems is the lack of invertibility of the Fr\'echet derivative of the operator $\Cal G$, in view of Proposition \ref{pr:ift}.
The invertibility of $D_X\Cal G$, being equivalent to its injectivity and surjectivity, can be rewritten in terms of the \textit{continuous Babu{\v s}ka inf-sup stability}: there exists an inf-sup constant $\hat{\beta}_{\textit{Ba}} > 0$ such that
\begin{equation}
\label{eq:inf-sup_1}
\beta_{\textit{Ba}}(\bmu) = \adjustlimits \inf_{X \in \mbb X} \sup_{Y \in \mbb X} \frac{\langle D_X\Cal G[\hat{X}](X; \bmu), Y \rangle_{\mathbb X \mathbb X\dual}}{\norm{X}_{\mbb X}\norm{Y}_{\mbb X}} \geq \hat{\beta}_{\textit{Ba}} \qquad \forall \, \bmu \in \Cal P ,
\end{equation}
\begin{equation}
\label{eq:inf-sup_2}
\adjustlimits \inf_{Y \in \mbb X} \sup_{X \in \mbb X} \frac{\langle D_X\Cal G[\hat{X}](X; \bmu), Y \rangle_{\mathbb X \mathbb X\dual}}{\norm{X}_{\mbb X}\norm{Y}_{\mbb X}} > 0 \qquad \forall \, \bmu \in \Cal P .
\end{equation}
It is clear that the inclusion property $\mathbb{X}^\Cal N \subset \mathbb{X}$
is only a necessary but not sufficient condition for \eqref{eq:inf-sup_1} and \eqref{eq:inf-sup_2} to hold at the discrete level.
Hence, an additional assumption has to be required for the \textit{discrete Babu{\v s}ka inf-sup stability} of $\Cal G$: there exists a constant $\hat{\beta}_\textit{Ba} \disc > 0$ such that
\begin{equation}
\label{eq:inf_sup_disc}
\beta_\textit{Ba} \disc (\bmu) = \adjustlimits \inf_{\mathsf{X} \neq 0} \sup_{\mathsf{Y} \neq 0} \frac{\mathsf{Y}^T \mathsf{Jac}\ \mathsf{X}}{\norm{\mathsf{X}}_{\mbb X^{\Cal N}}\norm{\mathsf{Y}}_{\mbb X^{\Cal N}}} \geq \hat{\beta}_\textit{Ba} \disc \qquad \forall \, \bmu \in \Cal P .
\end{equation}
We remark that the continuous condition on the surjectivity \eqref{eq:inf-sup_2} is no longer needed for the discrete inf-sup stability \eqref{eq:inf_sup_disc}. In fact, while the assumption \eqref{eq:inf_sup_disc} corresponds to the non singularity of the matrix $\mathsf{Jac}$, a discrete counterpart of the assumption \eqref{eq:inf-sup_2} would require its surjectivity, or equivalently the injectivity of the transpose matrix $\mathsf{Jac}^T$, which being square would be the same as requiring \eqref{eq:inf_sup_disc}.
We also highlight that both continuous and discrete inf-sup conditions are satisfied as long as $\bmu \neq \bmu^*$.
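For moderate dimensions, the constant $\beta_\textit{Ba}\disc(\bmu)$ can be evaluated directly as the smallest singular value of the Jacobian rescaled by the norm matrix; a dense sketch, assuming a symmetric positive definite scalar product matrix \texttt{M} inducing $\norm{\cdot}_{\mbb X^{\Cal N}}$, is the following:
\begin{verbatim}
import numpy as np
from scipy.linalg import cholesky, svdvals

def infsup_constant(Jac, M):
    # beta = smallest singular value of L^{-1} Jac L^{-T}, with M = L L^T
    L = cholesky(M, lower=True)
    Ji = np.linalg.solve(L, np.linalg.solve(L, Jac.T).T)
    return svdvals(Ji)[-1]   # singular values come in descending order
\end{verbatim}
This is only practical for small test problems; at the FE level one would rather solve the equivalent generalized eigenproblem $\mathsf{Jac}^T \mathsf M^{-1} \mathsf{Jac}\, \mathsf x = \beta^2\, \mathsf M \mathsf x$ iteratively.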
We can finally present the branch-wise procedure we developed in order to deal with the numerical computation of multiple branches of solutions. Such an approach requires the combination of different methodologies for the approximation of the bifurcation diagram and the analysis of its stability properties.
In order to keep the presentation simple, we assume that the first component $\mu$ of the parameter $\bmu \in \Cal P \subset \mathbb R^P$ is the one responsible for the bifurcating behavior of the model and, in order to follow a single branch, that all the $P-1$ remaining parameters (thus the global physical/geometrical configuration) are unchanged along the branch.
\A{Even though in this work we will only deal with bifurcation phenomena with co-dimension one, the following methodology can be adapted to the multi-parameter and/or co-dimension $>1$ case, see e.g.\ \cite{pichi2021artificial}, by carefully choosing the technique to follow the branching behavior.}
Algorithm \ref{alg:01} summarizes how to reconstruct each branch $\mathcal{X}_i$ of solutions. More precisely, we combine, respectively:
\begin{itemize}
\item[{$\small{\circ}$}] Newton's method, as the nonlinear solver,
\item[{$\small{\circ}$}] Galerkin FE method, as the discretization phase,
\item[{$\small{\circ}$}] simple continuation method, as the bifurcation path tracer,
\item[{$\small{\circ}$}] generalized eigenvalue problems, as the stability detectors.
\end{itemize}
At the very beginning, one has to choose the branch to approximate, and the most preferable way to ``guide'' the nonlinear solver to the desired configuration is through the initial guess.
Thus, in order to reconstruct a branch we consider the discrete version of the parameter space $\mathcal{P}_K = [\bmu_1, \dots, \bmu_K] \subset \mathcal{P}$ of cardinality $K$. We can take $\mathcal{P}_K$ as an ordered set, with the natural ordering induced by the first parameter component. Such ordering serves to assign the solution obtained for a given parameter $\bmu_{j-1}$ as the initial guess for the nonlinear solver at the next iteration for $\bmu_{j}$. This allows us to follow the bifurcation behavior of the model. We choose the simplest variant of the continuation methods \cite{allogwer}, where the parametric set is fixed a priori, since it works well with pitchfork-like bifurcations \cite{pichirozza,pichiquaini}. A more involved methodology has to be implemented when dealing with e.g.\ turning points or secondary bifurcations.
The next step is the actual discretization of the problem by means of the Newton-Kantorovich method \cite{ciarlet2013linear} combined with the Galerkin FE method. The initial guess for the former is set to the solution for the previous parameter value, while the latter projects the problem into a finite dimensional space, obtaining a linear system that we repeatedly solve until a convergence criterion is satisfied (here we chose a threshold tolerance $\epsilon$ for the norm of the global residual \eqref{G_compact_FE} at the $i$-th iteration of Newton's method).
In Algorithm \ref{alg:01}, we denote by $\mathsf{Jac_y(\hat y, \hat \bmu)}$ the Jacobian matrix of the state equation \eqref{eq:state}, and by $\mathsf V$ and $\mathsf {V_y}$ the scalar product matrices of the global optimization variable and of the state variable, respectively.
Finally, having computed a solution $\mathsf{X}_j$ of the problem \eqref{ocp} for the parameter $\bmu_j$, we can investigate its stability properties by solving two generalized eigenproblems, recovering the physical stability and the spectral properties from the state and global eigenvalue problems, respectively.
\begin{remark}
\label{re:branch}
We highlight that the choice of the initial guess is fundamental, but it is not always sufficient to recover the full bifurcation diagram. In such cases, different techniques were proposed, e.g.\ manipulating the set $\mathcal{P}_K$ through predictor-corrector continuation methods, which involve pseudo-arclength strategies and homotopy \cite{allogwer,seydel2009practical}. Finally, when it is difficult to choose a proper guess, one can rely on:
\begin{itemize}
\item[{$\small{\circ}$}] the discretized version of analytic expressions, resembling the main properties of the sought solution \cite{pichiquaini};
\item[{$\small{\circ}$}] a deflation method, which requires only one initial guess, and discovers the full diagram preventing the convergence to already discovered solutions, helping the solver to find new branches \cite{pintore2019efficient, Charalampidis_et_al2018};
\item[{$\small{\circ}$}] the eigenvectors of the global eigenvalue problem, that have been used in \cite{pichirozza} to obtain the direction of the bifurcation branch in a neighborhood of the bifurcation points.
\end{itemize}
\end{remark}
\begin{algorithm}[H]
\caption{A pseudo-code for the reconstruction of a branch}\label{alg:01}
\begin{algorithmic}[1]
\State{$\mathsf{X}_0=\mathsf{X}_{guess}$}\Comment{Initial guess}
\For{$\bmu_j \in \mathcal{P}_K$}\Comment{Continuation loop}
\State{{$\mathsf{X}_j^{(0)} = \mathsf{X}_{j-1}$}} \Comment{{Continuation guess}}
\While{$|| \mathsf R(\mathsf{X}_j^{(i)}; \bmu_j)|| > \epsilon$}\Comment{Newton's method}
\State{$\mathsf{Jac}(\mathsf{X}_j^{(i)}; \bmu_j)\delta \mathsf{X} = \mathsf{R}(\mathsf{X}_j^{(i)}; \bmu_j)$}\Comment{Galerkin FE method}
\State{$\mathsf{X}_j^{(i+1)} = \mathsf{X}_j^{(i)} - \delta \mathsf{X}$}
\EndWhile
\State{$\mathsf{Jac_{y}}(\mathsf y_j; \bmu_j)\, \mathsf y_e = \rho_{\bmu_j} \mathsf V_y \mathsf y_e$}\Comment{State eigenproblem}
\State{$\mathsf{Jac}(\mathsf X_j; \bmu_j)\, \mathsf X_e = \sigma_{\bmu_j} \mathsf V \mathsf X_e$}\Comment{Global eigenproblem}
\EndFor
\end{algorithmic}
\end{algorithm}
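For the reader's convenience, Algorithm \ref{alg:01} can be rendered in a few lines of Python, reusing the \texttt{newton} and \texttt{leading\_eigenvalues} routines sketched above; \texttt{assemble\_jacobians} is a hypothetical helper returning the state and global Jacobians at the computed solution, and \texttt{X\_guess}, \texttt{P\_K}, \texttt{G}, \texttt{Jac}, \texttt{F}, \texttt{Vy}, \texttt{V} are placeholders for the quantities defined in the text.
\begin{verbatim}
branch = []
X_prev = X_guess                             # initial guess (branch selection)
for mu in P_K:                               # continuation loop, ordered P_K
    X = newton(X_prev, mu, G, Jac, F)        # warm start from previous solution
    Jac_y, Jac_X = assemble_jacobians(X, mu) # hypothetical assembly helper
    rho = leading_eigenvalues(Jac_y, Vy)     # state eigenproblem
    sigma = leading_eigenvalues(Jac_X, V)    # global eigenproblem
    branch.append((mu, X, rho, sigma))
    X_prev = X                               # continuation guess for next mu
\end{verbatim}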
\section{ROMs for Nonlinear \ocp s}
\label{sec_ROM}
This Section introduces ROM approximation techniques for nonlinear \ocp s. The proposed reduced strategy is independent of the governing state equation. Indeed, we build on previous contributions to ROM for nonlinear \ocp s \cite{Strazzullo1, Strazzullo3, ZakiaMaria, Zakia}, extending them by means of the techniques proposed in \cite{HESS2019379, Hess2019, pichiquaini, pichirozza, pintore2019efficient, PR15} for bifurcating systems. Section \ref{sec_rom_gen} introduces the basic ideas of the ROM approach and the standard assumptions that guarantee its efficiency and applicability. Moreover, in Section \ref{POD}, we will describe the reduction strategy employed, relying on a POD-Galerkin basis construction, see \cite{ballarin2015supremizer, burkardt2006pod, Chapelle2013, hesthaven2015certified} as introductory references, combined with aggregated spaces techniques, following the linear \ocp s fashion, as already presented in \cite{bader2016certified,bader2015certified,dede2010reduced,gerner2012certified,karcher2014certified,karcher2018certified, negri2015reduced,negri2013reduced,quarteroni2007reduced}. Finally, numerical results are shown in Section \ref{rom_results}.
\subsection{General Reduction Strategy}
\label{sec_rom_gen}
In Section \ref{NS_ocp}, we analyzed how optimal flow control can modify an expected behavior in bifurcating systems. In such problems, a parametric study of the state solution is necessary in order to understand the solution properties. The FE method is not exploitable when several instances of the parametrized problem have to be studied. Indeed, the computational time required can be prohibitive, especially in an optimal control setting, where the state equation is embedded in an optimization system. ROM techniques aim at building a \emph{reduced} surrogate of the FE approximation. The main idea of ROM is to spend computational resources to build a new (smaller) model starting from FE solutions. Such a \emph{reduced system}, although having a considerably lower dimension, is built in such a way that it does not lose accuracy, and can be used to analyze several parametric configurations in a versatile low-dimensional framework.
We now briefly introduce ROM ideas in the \ocp s setting. At the continuous level, we have already defined the solution branch $\Cal X_i$, which represents how an optimal solution
$X(\boldsymbol \mu) = (y(\boldsymbol \mu), u(\boldsymbol \mu),z(\boldsymbol \mu))$ of \eqref{ocp} changes w.r.t.\ the parameter $\bmu \in \Cal P$. For the sake of clarity, in this Section we will underline the parameter dependence of our solution branch, since it is of primary importance to understand the ROM basics and notions. From now on we will refer to the FE approximation as the \emph{high-fidelity} approximation. Indeed, we suppose that the FE discretization reliably represents the solution branch $\Cal X_i$ through its discretized counterpart $\Cal X_i^{\Cal N}$. \\
The ROM aims at representing the high-fidelity branch $\Cal X_i^{\Cal N}$ through the construction of bases derived from \emph{snapshots}, i.e.\ properly chosen solutions of \eqref{FE_ocp}. The reduced function spaces are contained in the FE spaces: a standard Galerkin projection is performed, resulting in an efficient low-dimensional solution which is \A{still accurate w.r.t.\ }the high-fidelity model. We exploited a \emph{branch-wise reduction}, namely, for every bifurcating solution branch $\Cal X_i$, we build a different ROM. \\
Of course this is the best approach from the accuracy standpoint; in \cite{pichirozza,pichiquaini}, in a different context, a global approach was pursued, where the loss in accuracy is balanced by the construction of a single ROM.
We suppose that the reduced spaces $\state_N \subset \state {\discy} \subset \state$ and $\control{_N} \subset \control \discu \subset \control$ have already been constructed, the former for the state and adjoint variables, the latter for the control (the description of the construction of the reduced spaces is postponed to Section \ref{POD}). The \emph{function space dimension} $N$ is usually much lower than $\Cal N$. We will refer to this stage as the \emph{offline phase}: here, apart from the basis construction, all the parameter-independent quantities are assembled and stored. \\
After this reduced space building process, one can solve the following low-dimensional problem in an \emph{online phase}:
given $\boldsymbol \mu \in \Cal P$, find
$X_N (\boldsymbol \mu) = (y_N(\boldsymbol \mu), u_N(\boldsymbol \mu),z_N(\boldsymbol \mu)) \in
\mathbb X_N \eqdot
\state_N \times \control_N \times \state_N$ such that:
\begin{equation}
\label{ROM_ocp}
\begin{cases}
D_{y}\Lg(X_N; y_\text{d}, \bmu)[\omega] = 0 & \forall \omega \in \state{_N},\\
D_u\Lg(X_N; y_\text{d}, \bmu)[\kappa] = 0 & \forall \kappa \in \control{_N},\\
D_z\Lg(X_N; y_\text{d}, \bmu)[\zeta] = 0 & \forall \zeta \in \state{_N}.\\
\end{cases}
\end{equation}
Namely, at each new parametric instance $\bmu \in \Cal P$, the system \eqref{ROM_ocp}, which inherits its features from the high-fidelity dynamics \eqref{FE_ocp}, is assembled and solved. Also in this case, we can deal with the nonlinearity by applying Newton's method, as in the FE setting. Moreover, the stability analysis can be performed as described in Algorithm \ref{alg:01}, employing the Galerkin projection in the reduced space. It is clear that it is convenient to exploit the ROM only if one does not have to build the reduced model from scratch for each parametric instance.
For this reason, the system \eqref{ROM_ocp} is assumed to be affinely decomposed, i.e.\ all the variational forms involved can be written as products of $\bmu$-independent forms and
$\bmu$-dependent functions \cite{hesthaven2015certified}.
When the affine dependency assumption is verified, the online phase does not depend on $\Cal N$ and can be performed in a small amount of time. Conversely, the offline phase is performed only once and can take advantage of High Performance Computing (HPC) resources.
\begin{remark}
Our test cases deal with the Navier-Stokes equations \eqref{eq:NS_eq} as governing equations, whose nonlinear terms are at most quadratic, and the affine decomposition is not fulfilled. One can employ hyper-reduction techniques such as the Empirical Interpolation Method (EIM) to recover it. We refer the interested reader to \cite{barrault2004empirical} or \cite[Chapter 5]{hesthaven2015certified}.
\end{remark}
In the next Section, we will focus on the ROM offline and online phase, showing the strategy employed to build the reduced function spaces.
\subsection{Offline and online stages}
\label{POD}
In this Section, we present how to build a reduced space for \ocp s. We exploit here the POD algorithm: $N_{\text{max}}$ snapshots are sampled and then compressed in order to generate function spaces of dimension $N < N_{\text{max}}$.
It is well known that optimization governed by PDE($\bmu$) constraints leads to the solution of a saddle point system \cite{Benzi, bochev2009least, hinze2008optimization, Stoll}, as already specified in Section \ref{FE}. In order to guarantee the well-posedness of such a structure, the matrix $\mathsf B$ of system \eqref{J_saddle} must verify the inf-sup stability condition \eqref{FE_lbb} for every $\boldsymbol \mu \in \Cal P$.
In the FE approximation, the above-mentioned relation holds since state and adjoint spaces are equally discretized. The inf-sup stability must hold at the reduced level too, and the relation is provable if the reduced spaces for state and adjoint variables coincide. However, the standard POD construction process leads to reduced function spaces for state and adjoint which may be different, even under the assumption of the same starting FE spaces. To overcome this issue, the bases are usually manipulated in order to stabilize the system. Indeed, we apply the \emph{aggregated spaces} technique, as already done in several papers about ROM for \ocp s, see \cite{bader2016certified,bader2015certified,dede2010reduced,gerner2012certified,karcher2014certified,karcher2018certified, negri2015reduced,negri2013reduced,quarteroni2007reduced} as references. The strategy aims at building a common reduced space which is able to describe both the state and the adjoint variables. \\Let us suppose we have applied a standard POD for all the involved variables and defined the following spaces: $\label{state_r} {\state}_{N}= \text{span }\{\chi^{y}_n, \chi^{z}_n, \; n = 1, \dots, N\}$ and $\label{control_r} {\control}_{N} = \text{span}\{\chi^{u}_n, \; n = 1, \dots, N\}$, with
$$
\mathsf Z =
\begin{bmatrix}
\mathsf Z_{\mathsf x} \\
\mathsf Z_{\mathsf z}
\end{bmatrix},
\spazio \text{and} \spazio
\mathsf Z_{\mathsf x} =
\begin{bmatrix}
\mathsf Z_{\mathsf y} \\
\mathsf Z_{\mathsf u}
\end{bmatrix}
$$
where
$
\mathsf Z_{\mathsf y} \equiv \mathsf Z_{\mathsf z} = [\chi_{1}^{y} | \cdots | \chi_{N}^{y}| \chi_{1}^{z} | \cdots | \chi_{N}^{z}] \in \mathbb R^{\Cal N_{y} \times 2N}
$ and $
\mathsf Z_{\mathsf u} = [\chi_{1}^{u} | \cdots | \chi_{N}^{u}] \in \mathbb R^{\Cal N_{u} \times N}
$ are the reduced basis matrices for each variable and $\mathsf Z $ spans the global space $\mathbb X_N$.
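A possible realization of this construction, assuming a plain (unweighted) POD via a truncated SVD of snapshot matrices \texttt{Sy}, \texttt{Su}, \texttt{Sz}, whose columns collect the FE coefficients of state, control and adjoint snapshots, could look as follows; in practice the POD is performed w.r.t.\ the relevant scalar products, which we omit here for brevity.
\begin{verbatim}
import numpy as np

def pod(S, N):
    U, _, _ = np.linalg.svd(S, full_matrices=False)
    return U[:, :N]                  # first N POD modes

chi_y, chi_u, chi_z = pod(Sy, N), pod(Su, N), pod(Sz, N)
Zy = np.hstack([chi_y, chi_z])       # aggregated basis, in R^{Ny x 2N}
Zz = Zy                              # same space for state and adjoint
Zu = chi_u                           # control basis, in R^{Nu x N}
\end{verbatim}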
We want to solve the optimality system in a low-dimensional framework at each parametric instance.
To this end, we employ a Galerkin projection onto the reduced spaces, and the system \eqref{ocp} becomes
\begin{equation}
\label{G_compact_ROM}
\mathsf G_{N}(\mathsf X_N; \boldsymbol \mu) = \mathsf F_N,
\end{equation}
where
$$\mathsf G_{N}(\mathsf X_N; \boldsymbol \mu) \eqdot \mathsf Z^T \mathsf G(\mathsf Z \mathsf X_N; \boldsymbol \mu),
\spazio \text{and} \spazio \mathsf F_N \eqdot \mathsf Z^T \mathsf F.$$
The system \eqref{G_compact_ROM} is nonlinear, thus we apply Newton's method and we iteratively obtain
\begin{equation}
\mathsf {X}_N^{j + 1} = \mathsf {X}_N^j+ \mathsf{Jac}_N(\mathsf X_N^{j}; \boldsymbol \mu)^{-1}(\mathsf F_N - \mathsf G_N(\mathsf X_N^j; \boldsymbol \mu)), \spazio j \in \mathbb N.
\end{equation}
As in the FE approximation, the Fr\'echet derivative inherits the saddle point structure, i.e.
\begin{equation}
\label{Frechet_ROM}
\mathsf{Jac}_N (\mathsf X_N; \boldsymbol \mu) \mathsf X_N =
\begin{bmatrix}
\mathsf A_N & \mathsf B_N^T \\
\mathsf B_N & 0 \\
\end{bmatrix}
\begin{bmatrix}
\mathsf x_N \\
\mathsf z_N
\end{bmatrix},
\end{equation}
with $\mathsf{Jac}_N (\mathsf X_N; \boldsymbol \mu) = \mathsf Z^T \mathsf{Jac}(\mathsf Z \mathsf X_N; \bmu)\mathsf Z$, $\mathsf A_N = \mathsf Z_{\mathsf x}^T \mathsf A \mathsf Z_{\mathsf x}$ and $\mathsf B_N = \mathsf Z_{\mathsf z}^T \mathsf B \mathsf Z_{\mathsf x}$. \\
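A corresponding sketch of the online reduced Newton solver, with \texttt{Z} the block-diagonal global basis built from the blocks above, and \texttt{G\_vec} and \texttt{Jac\_mat} hypothetical callables evaluating $\mathsf G(\cdot;\bmu)$ and the matrix \eqref{J_ocp} at the reconstructed solution, might read:
\begin{verbatim}
from scipy.linalg import block_diag
import numpy as np

Z = block_diag(Zy, Zu, Zz)           # global reduced basis, 5N columns
F_N = Z.T @ F

def newton_rom(XN, mu, eps=1e-10, max_iter=50):
    for _ in range(max_iter):
        R_N = Z.T @ G_vec(Z @ XN, mu) - F_N     # reduced residual
        if np.linalg.norm(R_N) < eps:
            break
        Jac_N = Z.T @ Jac_mat(Z @ XN, mu) @ Z   # projected Jacobian
        XN = XN - np.linalg.solve(Jac_N, R_N)
    return XN
\end{verbatim}
Note that, without the affine decomposition discussed in Section \ref{sec_rom_gen}, the high-fidelity quantities are re-evaluated at each iteration, which is exactly what hyper-reduction aims to avoid.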
We now have all the ingredients to define a \emph{reduced Brezzi inf-sup condition} as follows
\begin{equation}
\label{ROM_infsup}
\beta_{\text{Br},N}(\bmu) \eqdot \adjustlimits \inf_{0 \neq \mathsf z_N} \sup_{0 \neq \mathsf x_N} \frac{\mathsf z_N^T\mathsf B_N \mathsf x_N}{\norm{\mathsf x_N}_{\state \times \control}\norm{\mathsf z_N}_{\state}} \geq \overline{\beta}_{\text{Br},N} > 0.
\end{equation}
If $\bmu \neq \bmu^{\ast}$, relation \eqref{ROM_infsup} is verified thanks to the aggregated space definition.
We remark that this technique increases the dimension of the global reduced system. However, the latter is usually still much smaller than $\Cal N$. For the sake of simplicity and for a consistent construction of the state and adjoint spaces, we always choose the same $N_{\text{max}}$ and $N$ for all the involved variables.
\begin{remark}[Supremizer Stabilization] Dealing with Navier-Stokes governing equations,
one has to take care not only of the global inf-sup condition \eqref{ROM_infsup}, but also of the state equation inf-sup condition. Indeed, the Navier-Stokes problem can be recast as a saddle point problem itself, which results in a nested saddle point structure when it is used as the state equation of an optimal control problem. The aggregated spaces technique has to be accompanied by a \emph{supremizer stabilization} of the reduced velocity space.
This approach \cite{rozza2007stability} consists in defining a supremizer operator
$T^{\boldsymbol \mu}: \mathbb P^{\Cal N_p} \rightarrow{{\mathbb V}}^{\Cal N_v}$ as
$$(T^{\boldsymbol \mu} s, \phi)_{\mathbb {V}} = b(\phi, s; \boldsymbol \mu), \quad \forall \phi \in {{\mathbb V}}^{\Cal N_v},$$
where $b\cd$ is the bilinear form representing the continuity equation defined in Section \ref{sec:NS}.
Then, we enrich the reduced velocity space through the pressure supremizers as follows:
$$
{{\mathbb V}}_N = \text{span}\{ \chi^{v}_n, \; \chi^{{T_p}}_n, \chi^{w}_n, \; \chi^{{T_q}}_n, \; n = 1,\dots,N\},
$$
where $\chi^{T_p}_n$ and $\chi^{T_q}_n$ are the supremizer basis functions obtained from state and adjoint pressure snapshots, respectively. Enlarging the reduced velocity space in this way guarantees inf-sup stability for the Navier-Stokes equations. This approach, i.e.\ supremizer stabilization combined with aggregated spaces, is the key to the well-posedness of the whole optimality system \eqref{NS_ocp}. This leads to a reduced system of dimension $13N$, which is still convenient compared to the global FE approximation dimension.
\end{remark}
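At the algebraic level, the supremizer computation amounts to one linear solve per pressure snapshot; a sketch under the notation above, with \texttt{Xv} the velocity scalar product matrix and \texttt{B} the continuity (divergence) matrix, so that $(T^{\bmu} s, \phi)_{\mathbb V} = b(\phi, s; \bmu)$ reads $\mathsf X_v \mathsf t = \mathsf B^T \mathsf s$, could be:
\begin{verbatim}
import numpy as np

def supremizers(Xv, B, S_p):
    # one supremizer per pressure snapshot (columns of S_p)
    return np.linalg.solve(Xv, B.T @ S_p)

T_p = supremizers(Xv, B, S_p)        # from state pressure snapshots
T_q = supremizers(Xv, B, S_q)        # from adjoint pressure snapshots
Zv = np.hstack([chi_v, pod(T_p, N), chi_w, pod(T_q, N)])
\end{verbatim}
Here \texttt{chi\_v} and \texttt{chi\_w} denote the POD modes of the state and adjoint velocity snapshots, and the supremizer snapshots are compressed with the same \texttt{pod} routine before the enrichment.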
\subsection{Numerical Results}
\label{rom_results}
In this Section we present the numerical results deriving from the reduction of the four controlled test cases described in Section \ref{NS_ocp}. For each numerical test case, the offline setting is given by $N_\text{max} = 51$ snapshots evaluated for equidistant parameters in the range $\mathcal P = [0.5, 2]$, and the POD algorithm is chosen for the ROM construction. Let us define the \emph{basis number} $\overline N$ as the maximum value of $N$, i.e.\ $N \in \{1, \hdots, \overline N\}$. For the Dirichlet test case we chose $\overline N = 12$ basis functions, while for the other test cases the basis number is $\overline N = 20$. \C{Such value is chosen in analogy with reduction results obtained in the uncontrolled scenario, see e.g.\ \cite{pichi2021artificial,pintore2019efficient,khamlich2021model}}. \A{The former choice is due to the presence of two multiplier variables, which increase the global ROM dimension (from $13 \overline N$ to $15 \overline N$) and jeopardize the robustness of the reduced nonlinear solver. We remark that the final reduced systems are still much smaller than their high-fidelity counterparts, which involve from 50 to 70 thousand degrees of freedom, depending on the type of control imposed, boundary or distributed, respectively.} We
perform an online phase solving \eqref{ROM_ocp} for $151$ equidistant values of $\mu$ in the same parameter space $\Cal P$. The performance has been tested through separate error analyses for each variable. The reliability of the ROM approach has been evaluated through
\begin{itemize}
\item[{$\small{\circ}$}] an average error over the parameter space against an increasing value of the reduced spaces dimension $N$ from one up to $\overline N$;
\item[{$\small{\circ}$}] a $\mu$-dependent error computed for the value $\overline N$.
\end{itemize}
The two error analyses highlight different features of the reduced system that we are going to discuss in the following.
Indeed, the average error gives us information about how the reliability changes as the behavior of the solution changes. The straight profile appears to be always the best approximated, due to its Stokes-like (symmetric) nature for all the values $\mu \in \Cal P$. This is the case of the Neumann and Channel controls, whose average errors are depicted in Figures \ref{fig:neumann_mean_s} and \ref{fig:channel_mean_s}. Their asymmetric counterparts, Figures \ref{fig:neumann_mean_as} and \ref{fig:channel_mean_as}, show that representing the two different features of the solution (a Stokes-like one for $\mu > \mu^{\ast}$ and a wall-hugging one for lower values of $\mu$) with the same value of $\overline N$ is more difficult than in the symmetric case. Nonetheless, the accuracy provided for basis size $\overline N$ is satisfactory for many practical applications in both target cases.
This argument applies to the control and adjoint variables, while the state variables are the best described by the ROM in all the test cases. Because of the optimality equation, the adjoint variables feel the direct influence of the control, which is the most challenging variable to be approximated by the reduced model due to its high variability in $\mu$. Indeed, the control variable presents a sort of \emph{on-off} behavior which drastically affects the efficiency of the reduced representation.
\begin{figure}
\centering
\begin{subfigure}[b]{0.49\textwidth}
\centering
\includegraphics[width=0.75\textwidth]{/rom/Neumann/mean_error_N_s}
\caption{}
\label{fig:neumann_mean_s}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.49\textwidth}
\centering
\includegraphics[width=0.75\textwidth]{/rom/Neumann/mean_error_N_as}
\caption{}
\label{fig:neumann_mean_as}
\end{subfigure}\\
\caption{Average error over $\mu$ with $\overline N = 20$ and $\alpha = 0.01$ for symmetric and asymmetric profile in (a) and (b) for Neumann control, respectively.}
\label{fig:rom_neumann_av}
\end{figure}
\begin{figure}
\centering
\begin{subfigure}[b]{0.49\textwidth}
\centering
\includegraphics[width=0.75\textwidth]{/rom/Neumann/error_s_state}
\caption{}
\label{fig:neumann_mu_s_state}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.49\textwidth}
\centering
\includegraphics[width=0.75\textwidth]{/rom/Neumann/error_s_adj}
\caption{}
\label{fig:neumann_mu_s_adj}
\end{subfigure}\\
\caption{The $\mu$-dependent error with $\overline N = 20$ and $\alpha = 0.01$ for symmetric profile for state variable in (a) and adjoint and control variables in (b) for Neumann control, respectively.}
\label{fig:rom_neumann_mu}
\end{figure}
\begin{figure}
\centering
\begin{subfigure}[b]{0.49\textwidth}
\centering
\includegraphics[width=0.75\textwidth]{/rom/Distributed/mean_error_N_s}
\caption{}
\label{fig:dist_mean_s}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.49\textwidth}
\centering
\includegraphics[width=0.75\textwidth]{/rom/Distributed/mean_error_N_as}
\caption{}
\label{fig:dist_mean_as}
\end{subfigure}\\
\caption{Average error over $\mu$ with $\overline N= 20$ and $\alpha = 0.01$ for symmetric and asymmetric profile in (a) and (b) for Distributed control, respectively.}
\label{fig:rom_dist_av}
\end{figure}
\begin{figure}
\centering
\begin{subfigure}[b]{0.49\textwidth}
\centering
\includegraphics[width=0.75\textwidth]{/rom/Distributed/error_s_state}
\caption{}
\label{fig:dist_mu_s_state}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.49\textwidth}
\centering
\includegraphics[width=0.75\textwidth]{/rom/Distributed/error_as_state}
\caption{}
\label{fig:dist_mu_s_adj}
\end{subfigure}\\
\caption{The $\mu$-dependent error with $\overline N= 20$ and $\alpha = 0.01$ for symmetric and asymmetric profile of the state variable in (a) and (b) for Distributed control, respectively.}
\label{fig:rom_dist_mu}
\end{figure}
\begin{figure}
\centering
\begin{subfigure}[b]{0.49\textwidth}
\centering
\includegraphics[width=0.75\textwidth]{/rom/Channel/mean_error_N_s}
\caption{}
\label{fig:channel_mean_s}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.49\textwidth}
\centering
\includegraphics[width=0.75\textwidth]{/rom/Channel/mean_error_N_as}
\caption{}
\label{fig:channel_mean_as}
\end{subfigure}\\
\caption{Average error with $\overline N= 20$ over $\mu$ for symmetric ($\alpha = 1$) and asymmetric ($\alpha = 0.01$) profile in (a) and (b) for Channel control, respectively.}
\label{fig:rom_channel_av}
\end{figure}
\begin{figure}
\centering
\begin{subfigure}[b]{0.49\textwidth}
\centering
\includegraphics[width=0.75\textwidth]{/rom/Channel/error_s_state}
\caption{}
\label{fig:channel_mu_s_state}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.49\textwidth}
\centering
\includegraphics[width=0.75\textwidth]{/rom/Channel/error_as_state}
\caption{}
\label{fig:channel_mu_s_adj}
\end{subfigure}\\
\caption{The $\mu$-dependent error for $\overline N= 20$ for symmetric and asymmetric profile of the state variable in (a) and (b) for Channel control, respectively.}
\label{fig:rom_channel_mu}
\end{figure}
\begin{figure}
\centering
\begin{subfigure}[b]{0.49\textwidth}
\centering
\includegraphics[width=0.75\textwidth]{/rom/Dirichlet/mean_error_N_1}
\caption{}
\label{fig:diri_mean_1}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.49\textwidth}
\centering
\includegraphics[width=0.75\textwidth]{/rom/Dirichlet/mean_error_N_1e3}
\caption{}
\label{fig:diri_mean_1e3_av}
\end{subfigure}\\
\caption{Average error over $\mu$ with $\overline N= 12$ for $\alpha = 1$ and $\alpha = 0.001$ in (a) and (b) for Dirichlet control, respectively.}
\label{fig:rom_diri}
\end{figure}
\begin{figure}
\centering
\begin{subfigure}[b]{0.49\textwidth}
\centering
\includegraphics[width=0.75\textwidth]{/rom/Dirichlet/error_1e3_state}
\caption{}
\label{fig:diri_mu_1e3_state}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.49\textwidth}
\centering
\includegraphics[width=0.75\textwidth]{/rom/Dirichlet/error_1e3_adj}
\caption{}
\label{fig:diri_mu_1e3_adj}
\end{subfigure}\\
\caption{The $\mu$-dependent error for $\alpha = 0.001$ with $\overline N = 12$ for state variable in (a) and adjoint and control variables in (b) for Dirichlet control, respectively.}
\label{fig:rom_diri_mu}
\end{figure}
For example, if we deal with the Stokes target $v_{\text{d}}$, the control is \emph{off} for high values of the viscosity, but, when $\mu \sim \mu^{\ast}$, it starts to grow in magnitude and to change its features drastically. This is represented in Figures \ref{fig:neumann_mu_s_adj} and \ref{fig:diri_mu_1e3_adj}, where the higher values of the control error, and thus of the adjoint one, occur for higher values of $\mu$.
Since the control magnitude is essentially zero for the Channel and Dirichlet test cases at low Reynolds number (high viscosity), we chose to plot the absolute errors instead of the relative ones, in order to prevent division by zero.
This is not the case for the Distributed control, see for example Figure \ref{fig:dist_mean_as}, which presents a good error decay for all the variables, since its strong action causes the control magnitude to always be a meaningful normalization factor.\\
The most challenging case to be approximated is the Dirichlet one for $\alpha = 0.001$. It features the most complex dynamics, where a new bifurcation appears. Indeed, the error can reach almost $10^{-3}$ for the controlled state. Even though this result is worse than the ones obtained for the other test cases, where the average errors range between $10^{-5}$ and $10^{-8}$, the accuracy provided by the ROM is still acceptable for many practical purposes. We remark that this performance is strictly correlated to the more complex features of the Dirichlet problem. This is evident also from the $\mu$-dependent errors in Figures \ref{fig:diri_mu_1e3_state} and \ref{fig:diri_mu_1e3_adj}. In both pictures, we see a large increment of the error for high Re. Although the phenomenon appears also for the other test cases, see Figures \ref{fig:dist_mu_s_adj}, \ref{fig:channel_mu_s_state} and \ref{fig:channel_mu_s_adj}, it is not as strong as in the Dirichlet case.
Furthermore, the $\mu$-dependent error gives \emph{a posteriori} information about the bifurcation point. Indeed, in order to have good accuracy properties, the ROM approach requires regularity of the parametric dependence of the solution rather than of the spatial one. This means that the reduced errors will generally exhibit a peak at $\mu^*$.
In fact, an increasing value of the error can be seen around $\mu^{\ast} \sim 0.96$ for Neumann, Distributed and Channel control \C{as one can observe from Figures \ref{fig:neumann_mu_s_state}, \ref{fig:dist_mu_s_state} and \ref{fig:channel_mu_s_state} for the state variable, respectively}.
This feature can be very useful when there is no previous knowledge about bifurcating behaviors. In this sense, ROM is not only a useful approach to quickly solve complicated and time-consuming systems, but also to detect parameters which can be related to the bifurcating nature of the problem at hand, since their instances will be the worst approximated. \C{Namely, the ROMs confirm ``a posteriori'' the location of the bifurcation points.} We conclude this analysis by noticing that the same considerations hold for the Dirichlet control, but this time at the left end of the parametric domain $\mathcal P$, where such phenomenon is clearly influenced by the new configuration observed in Figure \ref{fig:diri_v_1e3}.
\section{Bifurcations for Navier-Stokes Equations: the Coanda Effect}
\label{sec:state}
In this Section we analyze a bifurcating phenomenon deriving from Navier-Stokes equations in a sudden-expansion channel flow problem. Indeed, consider the channel geometry depicted in Figure \ref{fig:channel}. A fluid characterized by a high viscosity presents a jet which is symmetric w.r.t.\ the horizontal axis. Furthermore, a pair of vortices, called Moffatt eddies \cite{moffatt_1964}, form downstream of the expansion. Lowering the viscosity, the inertial effects of the fluid become more important and the two symmetric recirculation regions break the symmetry. Indeed, as the length of the recirculation zones increases, one can observe a non-uniform decrease of the pressure along the vertical axis. Thus, when we reach the aforementioned critical value, one recirculation zone expands whereas the other shrinks, giving rise to an asymmetric jet. This phenomenon is called the \textit{Coanda effect} and has been extensively studied in literature within different contexts \cite{tritton2012physical,AQpreprint,khamlich2021model,cardio,HESS2019379,pintore2019efficient,pichi2021artificial}.
From the mathematical point of view, this translates into a PDE($\bmu$) which, when the viscosity $\mu$ decreases below a certain critical value $\mu^{\ast}$, admits multiple solutions for the same value of $\mu \in \Cal P$.
During the study of the solution for different viscosity values, we expect the system to show two qualitatively different configurations:
\begin{itemize}
\item[{$\small{\circ}$}] a physically unstable configuration with a symmetric jet flow, the \emph{symmetric solution},
\item[{$\small{\circ}$}] a physically stable configuration with a wall-hugging jet, the \emph{asymmetric solution}.
\end{itemize}
These solutions, depicted in Figure \ref{fig:NS_sol_hf_bif}, coexist for parameter values below the critical one $\mu^{\ast}$ and belong to different branches that intersect in the bifurcation point, forming the so-called pitchfork bifurcation.
In the next Sections we introduce the mathematical formulation of the Navier-Stokes equations describing the flow in a channel. This will serve to highlight the bifurcating behavior of the system and its stability properties, and it will be fundamental to understand how different controls affect the original system.
\subsection{Navier-Stokes problem as the state equation}
\label{sec:NS}
Here we consider a simplified setting with a two-dimensional planar straight channel with a narrow inlet and a sudden expansion, depicted in Figure \ref{fig:channel}, which represents a simplification of the left atrium and the mitral valve, respectively.
We define $\Gamma_{\text{in}} = \{0\}\times[2.5, 5]$ and $\Gamma_{\text{out}} = \{50\}\times[0, 7.5]$, where inflow and outflow boundary conditions are imposed, respectively. We indicate with $\Gamma_{\text{wall}}$, the boundaries representing the walls, in this case $\Gamma_{\text{wall}}= \Gamma_{\text{D}} \cup \Gamma_{0}$, where $\Gamma_{\text{D}} = \{\{10\}\times[0, 2.5]\}\cup \{\{10\}\times[5, 7.5]\}$ and
$ \Gamma_{0} = \partial \Omega \setminus \{\Gamma_{\text{in}} \cup \Gamma_{\text{D}} \cup \Gamma_{\text{out}}\}$.
\begin{figure}[H]
\centering
\includegraphics[scale=0.4]{images/screen2.png}
\caption{\emph{Uncontrolled system}: domain $\Omega$ which represents a straight channel with a narrow inlet. }
\label{fig:channel}
\end{figure}
The steady and incompressible Navier-Stokes equations for a viscous flow in $\Omega$ read as:
\begin{equation}
\label{eq:NS_eq}
\begin{cases}
-\mu \Delta v + v\cdot\nabla v + \nabla p=0 \quad &\text{in} \ \Omega, \\
\nabla \cdot v = 0 \quad &\text{in} \ \Omega, \\
v = v_{\text{in}} \quad &\text{on} \ \Gamma_{\text{in}}, \\
v = 0 \quad &\text{on} \ \Gamma_{\text{wall}}, \\
- pn + (\mu \nabla v) n = 0 \quad &\text{on} \ \Gamma_{\text{out}},
\end{cases}
\end{equation}
where $v = (v_{x_1}, v_{x_2})$ is the velocity of the fluid, $p$ is its pressure normalized over a constant density and $\mu$ represents the kinematic viscosity.
We supplement the system \eqref{eq:NS_eq} with proper boundary conditions: a stress-free boundary condition on the velocity at the outlet $\Gamma_{\text{out}}$ with outer normal $n$, a no-slip (homogeneous) Dirichlet boundary condition on $\Gamma_{\text{wall}}$, and a non-homogeneous Dirichlet boundary condition $v_{\text{in}}$ at the inlet $\Gamma_{\text{in}}$ given by $v_{\text{in}}(x_2) = [20(5-x_2)(x_2 -2.5), 0]^T$.
For later convenience, we introduce the dimensionless Reynolds number, which represents the ratio between inertial and viscous forces, and is given by $\text{Re} = Uh / \mu$, where $U$ and $h$ are the characteristic velocity (i.e., the maximum inlet velocity, $U = 31.25$) and the characteristic length of the domain (i.e., the length of the inlet section, $h = 2.5$), respectively. In the following we will consider $\mu$ as the parameter.
Once the domain $\Omega$ is fixed, the flow regime varies as we consider different values of the viscosity $\mu$ in $\mathcal{P} \subset \mathbb{R}$.
As we said in the introduction to this Section, this model exhibits a bifurcating behavior. Indeed, we have existence and uniqueness of the solution only above a certain critical value of the viscosity, which for this test case corresponds to $\mu^* \approx 0.96$. {Such value has been found in different works and numerical contexts for this benchmark \cite{pintore2019efficient,khamlich2021model,pichi2021artificial}. It can be obtained either ``a posteriori'' by looking at the behaviour of the flow while varying the viscosity, or ``a priori'' by investigating the change of sign of the leading eigenvalue w.r.t.\ the parameter. }
To investigate the loss of uniqueness in a neighborhood of this pitchfork bifurcation, we set the parameter space as $\mathcal{P} = [0.5, 2.0]$, such that the first critical point $\mu^*$ is included.
These values for the viscosity correspond to Re in the interval [39.0, 156.0].
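Explicitly, from the definition above,
\begin{equation*}
\text{Re} = \frac{Uh}{\mu} = \frac{31.25 \cdot 2.5}{\mu} = \frac{78.125}{\mu}, \qquad \text{Re}\big|_{\mu = 2.0} = 39.0625, \qquad \text{Re}\big|_{\mu = 0.5} = 156.25,
\end{equation*}
consistently with the rounded interval reported above.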
Let $\V=\left(H^1(\Omega)\right)^2$, $\V_{\text{in}}=\{v \in \V \mid v=v_{\text{in}} \text{ on }\Gamma_{\text{in}}, v=0 \text{ on }\Gamma_{\text{wall}}\}$, $\V_0=\{v \in \V \mid v=0 \text{ on }\Gamma_{\text{in}} \cup \Gamma_{\text{wall}}\}$ be the function spaces for velocity. Furthermore, let $\Q=L^2(\Omega)$ be the function space for pressure. The weak formulation of \eqref{eq:NS_eq} reads as: given $\mu \in \mathcal{P}$, find $v \in \V_{\text{in}}$ and $p \in \Q$ such that
\begin{equation}
\label{eq:gal_ns}
\left\{
\begin{aligned}
\mu\int_\Omega\nabla v\cdot\nabla \psi \, d\Omega + \int_\Omega \left(v\cdot\nabla v\right)\psi \, d\Omega - \int_\Omega p\nabla\cdot \psi \, d\Omega = 0 \quad\quad &\forall \, \psi \in \V_0, \\
\int_\Omega \pi\nabla\cdot v \, d\Omega = 0\quad\quad &\forall \, \pi \in \Q.
\end{aligned}
\right.
\end{equation}
We can rewrite the formulation of \eqref{eq:gal_ns} in an equivalent way
as: given $\mu \in \mathcal{P}$, find $v \in \V_{\text{in}}$ and $p \in \Q$ such that
\begin{equation}
\label{eq:gal_ns2}
\begin{cases}
a(v,\psi; \mu) +s(v,v,\psi) +b(\psi,p) = 0 \quad &\forall \, \psi \in \V_0, \\
b(v,\pi) = 0\quad &\forall \, \pi \in \Q ,
\end{cases}
\end{equation}
having introduced the following bilinear and trilinear forms for all $v$, $\bar v$, $\psi \in \V$ and $p \in \Q$,
\begin{equation}
\label{eq:forms}
a(v, \psi; \mu) =\mu\int_\Omega\nabla v\cdot\nabla \psi \, d\Omega, \qquad
b(v, p) = -\int_\Omega(\nabla\cdot v) \hspace{.05cm}p \, d\Omega, \qquad
s(v, \bar v, \psi)=\int_\Omega \left(v\cdot\nabla \bar v\right) \psi \, d\Omega.
\end{equation}
\subsection{Numerical approximation of the problem}
We can now discuss the numerical approximation of the Navier-Stokes equation, that will be the state equation of the control problem in the next Sections. We consider a mesh on the domain $\Omega$ with $\Cal{N_T} =2785$ cells and $\Cal N_{y} = 24301$ degrees of freedom associated to a Taylor-Hood $\mathbb{P}^2$-$\mathbb{P}^1$ discretization of $\V \times \Q$. This choice is motivated by the well-known stability results of the Taylor-Hood Finite Element pair \cite{quarteroni2008numerical}.
In order to plot the bifurcation diagram, we choose an output value that serves as a symmetry indicator of the approximated solution. \A{This function is given by the value of the vertical component of the velocity at a point of the channel, i.e.\ $v_{x_2}$ evaluated at $(x_1, x_2) = (14, 4)$, or at the nearest node to that point, since the mesh is unstructured. We remark that this output provides a merely graphical intuition about the symmetry breaking, and can be chosen to be efficiently computed by means of the analysis in Section \ref{sec_ROM}.}
In Figure \ref{fig:bifurcation} we plot the bifurcation diagram with all the solution branches found for the system \eqref{eq:gal_ns2} in the viscosity range chosen.
The numerical approximation clearly shows that a supercritical pitchfork bifurcation occurs around the critical viscosity value $\mu^* \approx 0.96$.
It is evident that we have a unique solution for all $\mu > \mu^*$, i.e.\ when the fluid behaves like a Stokes one, while we find three qualitatively different solutions when increasing the Reynolds number. The bifurcation point $\mu^*$ is also responsible for the change in the stability properties of the model. Indeed, the unique symmetric solution remains stable until it encounters the critical value $\mu^*$, where it becomes unstable. Moreover, this feature is inherited by the bifurcating solutions, which evolve as physically stable branches.
\begin{figure}
\centering
\includegraphics[width=9cm]{images/Bifurcation_diagram_nu_no_control_new2-eps-converted-to.pdf}
\caption{\emph{Uncontrolled Navier-Stokes system}: Bifurcation diagram.}
\label{fig:bifurcation}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=7cm]{lower_branch_velocity_1}\qquad
\includegraphics[width=7cm]{lower_branch_pressure_1}
\\
\vspace{1em}
\includegraphics[width=7cm]{middle_branch_velocity_1}\qquad
\includegraphics[width=7cm]{middle_branch_pressure_1}
\caption{\emph{Uncontrolled Navier-Stokes system}: representative solutions for $\mu = 0.5$, velocity and pressure fields, lower and middle branch, top and bottom, respectively.}
\label{fig:NS_sol_hf_bif}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=7cm]{plot_eig_rc_nc.png}\qquad
\includegraphics[width=7cm]{plot_eig_rc_nc_rect.png}
\caption{\emph{Uncontrolled Navier-Stokes system}: eigenvalues of the state eigenproblem in the complex plane: stable and unstable solutions, left and right panels respectively.}
\label{fig:plot_eig_rc_nc}
\end{figure}
Some representative solutions for the lower and middle branch are presented in Figure \ref{fig:NS_sol_hf_bif}. Velocity and pressure fields belonging to different branches, for the same viscosity value $\mu = 0.5$, present qualitatively dissimilar behavior.
Indeed, the pressure for the lower branch decreases near the bottom-left corner of the expansion, causing the velocity to deflect and hug the lower wall. Finally, thanks to the no-slip boundary condition, the flux goes back to the mid line, ending with a non-axisymmetric outflow.
The stability analysis is performed through the eigenvalue analysis depicted in Section \ref{bif}, where Algorithm \ref{alg:01} has been applied exclusively to the state equation \eqref{eq:state}.
In particular, we analyzed the behavior of the first $N_{eig} = 100$ eigenvalues of \eqref{eq:eigen_state}, by means of the Krylov-Schur algorithm, varying the viscosity of the fluid.
Such eigenvalues are plotted in Figure \ref{fig:plot_eig_rc_nc} for the stable lower branch (left panel) and the unstable middle branch (right panel). Note that since the Navier-Stokes operator is not symmetric, both real and complex eigenvalues are present. Here we are just interested in the pitchfork bifurcation, thus in the zoom we follow the behavior of the largest real eigenvalue and its sign. When investigating the stability of the lower branch, all the eigenvalues of the Navier-Stokes system, linearized around this stable solution, have negative real part. From the considerations of Section \ref{bif}, we can assert the stability of the wall-hugging branch. Indeed, the zoom in the left plot of Figure \ref{fig:plot_eig_rc_nc} shows no crossing of negative-real-part eigenvalues. On the contrary, the close-up in the right plot, which corresponds to the symmetric flow, shows the sign change of the largest eigenvalue, thus characterizing a physically unstable solution.
\section{Introduction}
In relativistic nucleon--nucleon collisions dielectrons are emitted from various sources. Pseudoscalar and vector mesons can decay directly or via Dalitz decays into a real or a virtual photon that internally converts into an $\rm e^{+}e^{-}$ pair. In addition, semi-leptonic decays of open heavy-flavour (HF) hadrons can produce a correlated dielectron when following the hadronisation and decay pattern
\begin{equation}
c\overline{c} \rightarrow D\overline{D} \rightarrow XY e^{+}e^{-}.
\label{eq:cc2ee}
\end{equation}
The same holds true for dielectron pairs from open-beauty hadrons. The analysis of these pairs can then shed light on the correlation of the heavy-quark pairs, especially in the low transverse momentum regime, which is not easily accessible in other analyses.
In nucleon--nucleus collisions the aforementioned sources of dielectrons can be modified. Of particular interest are possible modifications of heavy-flavour production via cold nuclear matter effects, e.g.\ shadowing. Additional sources of dielectrons, such as thermal radiation from a hot medium (hadron gas or QGP) possibly formed in collisions of small systems, may also contribute.
\section{Data analysis}
We report on the results from two data taking periods~\cite{ref-ee7,ref-ee13}. In 2010, the ALICE detector recorded $370\times10^{6}$ minimum bias (MB) pp events at $\sqrt{s} = 7$\,TeV. In another pp data taking period, in 2016, a total of $455\times10^{6}$ MB events were recorded at $\sqrt{s} = 13$\,TeV. In addition, a dedicated high multiplicity (HM) trigger selected the 0.036\% highest multiplicity pp collisions and collected $79.2\times10^{6}$ HM events.
In ALICE electrons\footnote{Electrons here and in the whole document also refers to their anti-particles, positrons.} are identified in the central barrel using the Inner Tracking System (ITS), the Time Projection Chamber (TPC), and the Time-Of-Flight system (TOF) in a kinematic range of $p_{\rm T,e} > 0.2$\,GeV/$c$ and $|\eta_{\rm e}| < 0.8$. The selected electrons are then combined into opposite-sign (OS) pairs. The OS invariant mass distribution contains all correlated signal pairs, but also a combinatorial background. The background is estimated by constructing a spectrum of same-sign (SS) pairs.
Residual differences in the acceptance for SS and OS pairs are estimated using event mixing and taken into account during the subtraction of the background. The spectrum is then corrected for tracking and particle identification inefficiencies.
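For illustration, the subtraction described above reduces to the following sketch; the histogram arrays and the bin-wise acceptance-correction factor are placeholders rather than the actual analysis code.
\begin{verbatim}
import numpy as np

def signal_spectrum(os_same, ss_same, os_mixed, ss_mixed):
    # Inputs: per-mass-bin pair counts. os_*/ss_* denote
    # opposite-/same-sign pairs; *_same from the same event,
    # *_mixed from mixed events.
    # R corrects residual OS/SS acceptance differences.
    R = np.divide(os_mixed, ss_mixed,
                  out=np.ones_like(os_mixed, dtype=float),
                  where=ss_mixed > 0)
    # Subtract the acceptance-corrected combinatorial background.
    return os_same - R * ss_same
\end{verbatim}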
\section{Results}
Dielectrons were measured as a function of invariant mass ($m_{\rm ee}$) and pair transverse momentum ($p_{\rm T,ee}$) in both data taking periods. In addition, a measurement as a function of $\rm DCA_{ee}$, the distances of closest approach of the two electrons to the primary vertex, each normalised to its resolution and summed in quadrature, was performed on the 7\,TeV data sample. The $m_{\rm ee}$ spectra integrated over $p_{\rm T,ee}$ and $\rm DCA_{ee}$ are shown in Fig. \ref{fig:mee} in comparison with the expected cross sections from known hadronic sources, the hadronic cocktail.
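Spelled out, with $\mathrm{DCA}_{e_i}$ the distance of closest approach of electron $i$ and $\sigma_{e_i}$ its resolution, this pair variable reads
\begin{equation*}
\mathrm{DCA}_{ee} = \sqrt{\left(\frac{\mathrm{DCA}_{e_1}}{\sigma_{e_1}}\right)^{2} + \left(\frac{\mathrm{DCA}_{e_2}}{\sigma_{e_2}}\right)^{2}},
\end{equation*}
up to an overall normalisation (conventions differ by a factor $1/\sqrt{2}$).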
\begin{figure}[ht!]
\centering
\begin{minipage}{0.47\textwidth}
\includegraphics[scale=0.33,
trim = 0 100 10 130, clip]{./plots/2018-10-11-2018-09-03-invmassintegrated}
\end{minipage}
\begin{minipage}{0.47\textwidth}
\includegraphics[scale=0.33,
trim = 0 80 10 120, clip]{./plots/2018-10-11-2018-09-25-2018-09-25-Signal_cocktail_pt0_pythia.pdf}
\end{minipage}
\caption{Dielectron cross section in pp collisions at $\sqrt{s} = 7$\,TeV (left) and $\sqrt{s} = 13$\,TeV (right) as a function of $m_{\rm ee}$ in comparison with a cocktail of known hadronic sources~\cite{ref-ee7,ref-ee13}.}
\label{fig:mee}
\end{figure}
The measured $m_{\rm ee}$ spectra are well described by the hadronic cocktail within statistical and systematic uncertainties.
At intermediate mass both spectra are described by a contribution from charm and beauty calculated with PYTHIA6~\cite{ref-pythia6} with the Perugia2011 tune~\cite{ref-perugia2011} normalised to the cross sections measured in single heavy-flavour hadron measurements~\cite{ref-ccbar,ref-bbbar}.
In addition, it can be seen that at LHC energies the mass spectra are dominated over a wide mass range by the contribution from semi-leptonic decays of correlated open heavy-flavour hadrons.
This can be used to select and further study the production of heavy-flavour quarks in high-energy collisions.
In the mass window 1.1\,GeV/$c^{2}$ < $m_{\rm ee}$ < 2.7\,GeV/$c^{2}$, the so-called intermediate mass region (IMR), the contributions from HF hadrons can be selected without any significant contribution from other sources.
\begin{figure}[ht!]
\centering
\begin{minipage}{0.47\textwidth}
\includegraphics[scale=0.35,
trim = 0 100 0 130, clip]{./plots/2018-May-09-heavyflavourptee}
\end{minipage}
\begin{minipage}{0.47\textwidth}
\includegraphics[scale=0.35,
trim = 0 100 1 130, clip]{./plots/2018-May-09-heavyflavourdca}
\end{minipage}
\caption{Dielectron cross section in pp collisions at $\sqrt{s} = 7$\,TeV as a function of $p_{\rm T, ee}$ (left) and $\rm DCA_{ee}$ (right) in comparison with a cocktail of known hadronic sources~\cite{ref-ee7}.}
\label{fig:pteedca}
\end{figure}
In Fig. \ref{fig:pteedca}, the $p_{\rm T,ee}$ and $\rm DCA_{ee}$ spectra measured at $\sqrt{s} = 7$\,TeV in the IMR are depicted.
For both observables one can see that the contributions from charm and beauty have different spectral shapes. In the $p_{\rm T,ee}$ case the charm contribution dominates up to about 3\,GeV/$c$. Above this, the spectrum is dominated by the contribution from beauty quarks. For the $\rm DCA_{ee}$ observable, the crossing point is around 4$\sigma$. The distinct shapes of the two contributions in $p_{\rm T,ee}$ and $\rm DCA_{ee}$ are used to disentangle them with a two-component fit to either the $m_{\rm ee}$-$p_{\rm T,ee}$ or the $\rm DCA_{ee}$ distributions. The result is presented in Fig. \ref{fig:xsection} using heavy-flavour distributions obtained from PYTHIA, as described before, and POWHEG~\cite{ref-powheg}.
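Schematically, such a two-component fit is a linear template fit. A minimal sketch (with hypothetical histogram inputs and without the statistical weighting used in the real analysis) is:
\begin{verbatim}
import numpy as np

def fit_charm_beauty(data, charm_tmpl, beauty_tmpl):
    # data: measured yield over the flattened (m_ee, pT_ee)
    # or DCA_ee bins; templates: MC shapes from PYTHIA or
    # POWHEG normalized to the model cross sections.
    T = np.column_stack([charm_tmpl, beauty_tmpl])
    # Solve data ~= N_c * charm + N_b * beauty in the
    # least-squares sense.
    (N_c, N_b), *_ = np.linalg.lstsq(T, data, rcond=None)
    return N_c, N_b   # scale factors for ccbar and bbbar
\end{verbatim}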
\begin{figure}[ht!]
\centering
\begin{minipage}{0.47\textwidth}
\includegraphics[scale=0.35,
trim = 0 100 10 160, clip]{./plots/2018-May-09-oneSigmaPythiaDCA0to8}
\end{minipage}
\begin{minipage}{0.47\textwidth}
\includegraphics[scale=0.35,
trim = 0 100 10 160, clip]{./plots/2018-May-09-oneSigmaPowhegDCA0to8}
\end{minipage}
\caption{Total $\rm c\overline{c}$ and $\rm b\overline{b}$ cross sections with systematic and statistical uncertainties, extracted from fits of the measured dielectron yield from heavy-flavour hadron decays to ($m_{\rm ee}$, $p_{\rm T,ee}$) and to $\rm DCA_{ee}$ with PYTHIA (left) and POWHEG (right) in comparison with published cross sections from independent measurements (lines)~\cite{ref-ee7}.}
\label{fig:xsection}
\end{figure}
The two approaches are consistent within uncertainties, for PYTHIA as well as for POWHEG. However, we see a significant shift in the charm cross section when using POWHEG instead of PYTHIA.
This shift can be understood since PYTHIA calculates only the leading-order contributions, while POWHEG also includes the next-to-leading-order contributions, which change the overall correlation of the heavy-quark pair. A measurement of the HF cross section in the dielectron channel is thus sensitive to these different contributions. The cross sections extracted with both models are in agreement with independent measurements within uncertainties.
\begin{figure}[ht]
\centering
\begin{minipage}{0.47\textwidth}
\includegraphics[scale=0.35,
trim = 0 170 10 180, clip]{./plots/2018-May-10-Ratio_ptbin0}
\end{minipage}
\begin{minipage}{0.47\textwidth}
\includegraphics[scale=0.35,
trim = 0 170 10 180, clip]{./plots/2018-May-10-Ratio_ptbin4}
\end{minipage}
\caption{Ratio of dielectron production in high-multiplicity events over inelastic events integrated over $p_{\rm T,ee}$ (left) and for $p_{\rm T,ee} > 3$\,GeV/$c$ (right)~\cite{ref-ee13}.}
\label{fig:ratio}
\end{figure}
Similar findings can be reported for the 13\,TeV analysis, in which the production of dielectrons was studied as a function of $m_{\rm ee}$ and $p_{\rm T,ee}$ for a minimum bias and a high-multiplicity data sample.
In Fig. \ref{fig:ratio}, the ratio of the high-multiplicity dielectron spectrum to the inelastic one is shown as a function of $m_{\rm ee}$, integrated over $p_{\rm T,ee}$ and for $p_{\rm T,ee} > 3$\,GeV/$c$, left and right, respectively. The ratios are compared with ratios of the expected hadronic contributions. The cocktail ratio reflects modifications measured independently at high multiplicity.
We use a measurement of the multiplicity dependence of D mesons~\cite{ref-DmesonsMult} to scale the heavy-flavour production, including the B mesons. For the light-flavour part of the cocktail a measurement of the multiplicity dependence of the $p_{\rm T}$ spectra is used, which shows a hardening with multiplicity~\cite{ref-ptMult}.
No significant deviation from the cocktail is observed in either $p_{\rm T,ee}$ interval.
The high $p_{\rm T,ee}$ part of the spectrum, dominated by the heavy-flavour contribution from beauty quarks, can be described by a cocktail constructed from D-meson measurements. This suggests a similar scaling of charm and beauty production with multiplicity at LHC energies.
In p--Pb collisions, we can further study the modification of HF hadron production resulting, e.g., from the modification of the parton distribution functions. These modifications would be expected for small $Q^{2}$ and $x_{\rm Bj}$ in the production process of the HF quark pair, and thus at low $p_{\rm T}$. This makes dielectrons a prime probe for this sort of measurement, since the standard selection criteria in the analysis preserve most of the HF cross section.
\begin{figure}[ht!]
\centering
\begin{minipage}{0.47\textwidth}
\includegraphics[scale=0.35,
trim = 0 100 10 180, clip]{./plots/2018-09-27-PionImrComp_400}
\end{minipage}
\begin{minipage}{0.47\textwidth}
\includegraphics[scale=0.35,
trim = 0 100 10 180, clip]{./plots/2018-09-27-DCArms_400}
\end{minipage}
\caption{$\rm DCA_{ee}$ spectra in the $\pi$-mass region and the IMR normalised to unity (left) and the $\rm \langle DCA_{ee}\rangle$ as function of $m_{\rm ee}$ in p--Pb collisions at $\sqrt{s_{\rm NN}}=5.02$\,TeV.}
\label{fig:DCAppb}
\end{figure}
In Fig. \ref{fig:DCAppb} (left), we show the $\rm DCA_{ee}$ distributions for the mass region dominated by $\pi^{0}$ Dalitz decays in comparison with the HF-dominated IMR. The spectra are normalised to unity for a direct comparison of the shapes. It is apparent that the HF-dominated mass region has a much wider distribution. This will give the opportunity to disentangle not only the charm and beauty distributions, but in addition to study a possible prompt source, such as thermal radiation. The sensitivity of the $\rm DCA_{ee}$ to the mixture of prompt and non-prompt contributions to the spectrum can be derived from Fig. \ref{fig:DCAppb} (right), where we show the $\rm \langle DCA_{ee} \rangle$ as a function of $m_{\rm ee}$. For low masses ($< 0.5$\,GeV/$c^{2}$), we see a rather flat distribution. This is the region dominated by the Dalitz decays of $\pi^{0}$ and $\eta$ mesons, both prompt sources. With increasing mass the charm contribution becomes more significant, and with this the $\rm \langle DCA_{ee} \rangle$ rises. Significant drops in the distribution can be associated with the narrow contributions of the resonance decays of the $\rho, \omega, \phi$ mesons. At masses larger than that of the $\phi$, the spectrum is completely dominated by the charm and beauty contributions and the $\rm \langle DCA_{ee} \rangle$ reaches a maximum. The falling off of $\rm \langle DCA_{ee} \rangle$ could be interpreted as a rising prompt contribution in the radiative tail of the $J/\psi$, at whose mass the spectrum is completely dominated by a prompt source again.
\section{Conclusion}
We presented the measurement of the dielectron production cross section as a function of $m_{\rm ee}$, $p_{\rm T,ee}$, and $\rm DCA_{ee}$ in pp collisions at $\sqrt{s} = 7$\,TeV and as a function of $m_{\rm ee}$, $p_{\rm T,ee}$, and multiplicity at $\sqrt{s} = 13$\,TeV. The spectra are well described by a hadronic cocktail for all observables. Cross sections for the production of HF quarks were extracted and are in agreement with previous measurements of single HF hadrons. The strong model dependence of this measurement points to a sensitivity to the heavy-quark production mechanisms in these models. The comparison of the minimum bias and high-multiplicity $m_{\rm ee}$ spectra for $p_{\rm T,ee} > 3$\,GeV/$c$ suggests that the scaling of beauty production follows the previously observed modification of charm production. We do not observe an indication of an additional source of thermal radiation in high-multiplicity pp events within the precision of the data.
In p--Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$\,TeV, the $\rm DCA_{ee}$ distribution shows promising possibilities for further studies of possible modifications of the HF contributions due to cold nuclear matter effects and to disentangle prompt and non-prompt sources. The latter would be another possibility to study possible thermal radiation in small collision systems.
\section{Introduction}
A central theme in complex and symplectic geometry is to understand the stability of various properties under natural deformations, and the relationship to uniqueness and moduli problems. For instance, the classic result of Kodaira-Spencer \cite{KodairaSpencer} shows that small complex deformations of K\"ahler manifolds remain K\"ahler. The main result of this note is local stability of K\"ahler structures under symplectic deformations on Calabi-Yau manifolds:
\begin{thm} \label{t:mainthm} Let $(M^{2n}, \omega, J)$ be a compact K\"ahler manifold with $c_1(M, J) = 0$. There exists $\ge > 0$ depending on $\omega$ so that if $\omega'$ is another symplectic form on $M$ such that
\begin{align*}
\brs{\omega - \omega'}_{C^{\infty}(\omega)} < \ge,
\end{align*}
then there exists an integrable complex structure $J'$ on $M$ compatible with $\omega'$.
\end{thm}
\begin{rmk}
\begin{enumerate}
\item The proof of Theorem \ref{t:mainthm} is an elementary consequence of the dynamical stability of symplectic curvature flow (SCF) \cite{SCF} near Calabi-Yau metrics. This dynamical stability of the more general family of `almost Hermitian curvature flows' introduced in \cite{SCF} was shown in the thesis of Smith (cf. \cite{SmithSCF}). We sketch the proof below in the simplified case of SCF.
\item It follows from the proof that in fact the $C^{\infty}$ smallness of the perturbation can be weakened to smallness in an appropriate Sobolev space.
\item The space of deformations of symplectic cohomology classes which remain K\"ahler under deformation was studied in \cite{deBart}.
\item Theorem \ref{t:mainthm} follows in some cases using results from complex deformation theory, and our proof provides an alternative using a geometric flow adapted to almost K\"ahler geometry.
\item Recently, the result of Theorem \ref{t:mainthm} was shown in the case $n = 3$ in \cite{IIAflow} using a geometric flow of symplectic forms adapted to that dimension.
\end{enumerate}
\end{rmk}
To begin we recall fundamental properties of symplectic curvature flow \cite{SCF}. An almost K\"ahler structure is a pair $(\omega, J)$ of a symplectic structure together with a compatible almost complex structure $J$ such that $g = \omega J$ is a Riemannian metric. In general $J$ is not integrable and $N$ will denote the Nijenhuis tensor of $J$. Almost K\"ahler structures come equipped with a Chern connection, the unique connection $\nabla$ on the tangent bundle such that $\nabla g \equiv 0, \nabla J \equiv 0$, and $T^{1,1} = 0$, where $T^{1,1}$ denotes the $(1,1)$ component of the torsion of $\nabla$. Let $\Omega$ denote the curvature of $\nabla$, and define $P = \tr \Omega J \in \pi c_1(M, J)$. A one-parameter family of almost K\"ahler structures $(g_t, \omega_t, J_t)$ satisfies symplectic curvature flow if
\begin{gather} \label{f:SCF}
\begin{split}
\dt g =&\ -2 \Rc + \frac{1}{2} B^1 - B^2,\\
\dt \omega =&\ - P,\\
\dt J =&\ - D^* D J + \mathcal N + \mathcal R,
\end{split}
\end{gather}
where $D$ denotes the Levi-Civita connection, and
\begin{align*}
B^1_{ij} =&\ g^{kl} g_{mn} D_i J_k^m D_j J_l^n, \qquad \qquad B^2_{ij} = g^{kl} g_{mn} D_k J_i^m D_l J_j^n,\\
\mathcal N_i^j =&\ g^{jk} g_{mn} g^{pq} D_p J_r^m J_i^r D_q J_k^n, \qquad \mathcal R_i^j = J_i^k \Rc_k^j - \Rc_i^k J_k^j.
\end{align*}
Note that the our description of symplectic curvature flow is redundant as any two of $(g, \omega, J)$ suffices to recover the third by compatibility. The fundamental points (cf. \cite{SCF} Theorem 1.6) are that symplectic curvature flow is locally well-posed for arbitrary initial data on compact manifolds, preserves the almost K\"ahler conditions, and if $J_0$ is integrable reduces to K\"ahler-Ricci flow.
\begin{thm} \label{t:dynstabSCF} (cf. \cite{SmithSCF} Theorem 1.1) Let $(M^{2n}, \omega_{CY}, J_{CY})$ denote a compact Calabi-Yau manifold. There exists $\ge > 0$ so that if $(\omega, J)$ is an almost K\"ahler structure such that
\begin{align*}
\brs{\omega_{CY} - \omega}_{C^{\infty}(\omega_{CY})} + \brs{J_{CY} - J}_{C^{\infty}(\omega_{CY})} < \ge,
\end{align*}
then the solution to symplectic curvature flow with initial condition $(\omega, J)$ exists on $[0,\infty)$ and converges exponentially to a K\"ahler Calabi-Yau structure $(\omega_{\infty}, J_{\infty})$.
\end{thm}
\begin{proof}
The proof relies on ideas from parabolic regularity theory and so we work directly with the gauge-modified flow which is strictly parabolic. For any almost K\"ahler structure $(g, J)$ we define the vector field
\begin{align*}
X^k(g, J) = g^{ij} \left( \Gamma_{ij}^k - (\Gamma_{CY})_{ij}^k \right).
\end{align*}
An elementary but important point is that this vector field is equivalently expressed as
\begin{align*}
X^k =&\ \omega^{ij} \nabla^{CY}_i J_j^k.
\end{align*}
Using $X$ we define the gauge-fixed symplectic curvature flow:
\begin{gather} \label{f:gfSCF}
\begin{split}
\dt g =&\ -2 \Rc + \frac{1}{2} B^1 - B^2 + L_X g =: \mathcal F_1(g,J)\\
\dt J =&\ - D^* D J + \mathcal N + \mathcal R + L_X J =: \mathcal F_2(g,J)
\end{split}
\end{gather}
The analysis centers on a sharp characterization of the linearization of this flow. To find this fix a one-parameter family of almost K\"ahler structures $(g_t, J_t)$ such that $(g_0, J_0) = (g_{CY}, J_{CY})$ and $\dot{g} = h, \dot{J} = K$. Lengthy but straightforward computations using that $(g_0, J_0)$ is Calabi-Yau show (cf. \cite{SCF} proof of Theorem 1.6)
\begin{align*}
\mathcal L_{(g_{CY}, J_{CY})} \mathcal F_1 (h,K) =&\ \Delta h + 2 R \circ h =: \mathcal L_1(h),\\
\mathcal L_{(g_{CY}, J_{CY})} \mathcal F_2 (h,K) =&\ \Delta K + 2 R \circ K =: \mathcal L_2(K),
\end{align*}
where
\begin{align*}
(R \circ h)_{ij} =&\ R_{i k l j} h^{kl}, \qquad (R \circ K)_j^i = g^{kl} R_{j l m}^i J_k^m.
\end{align*}
To analyze this operator we recall the work of Koiso \cite{Koisodef}. The operator $\mathcal L_1$ is the Einstein deformation operator at a Ricci-flat metric, and splits according to the decomposition $h = h_S + h_A$ into the $J$-symmetric and $J$-antisymmetric pieces. The action on $h_S$ corresponds precisely to the Hodge Laplacian acting on the $(1,1)$-form $h_S J_{CY}$, which is negative semidefinite with kernel determined by the harmonic $(1,1)$-forms, canonically identified with $H^{1,1}(M, J_{CY})$. The action on $h_A$ is identified, after raising an index with $g_{CY}$, with the $\bar{\partial}$-Hodge Laplacian acting on $\Lambda^{0,1} \otimes T^{1,0}$, in this case restricted to symmetric endomorphisms. Furthermore, the operator $\mathcal L_2$ is again this same $\bar{\partial}$-Hodge Laplacian acting on $\Lambda^{0,1} \otimes T^{1,0}$, whose kernel is identified with the space of infinitesimal deformations of $J_{CY}$. This again is negative semidefinite with kernel identified with $H^{2,0}(M, \mathbb C)$.
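For the reader's convenience we record the pointwise splitting used above: with $J = J_{CY}$,
\begin{align*}
h_S(X,Y) =&\ \tfrac{1}{2}\left( h(X,Y) + h(JX,JY) \right), \\
h_A(X,Y) =&\ \tfrac{1}{2}\left( h(X,Y) - h(JX,JY) \right),
\end{align*}
so that, after raising an index with $g_{CY}$, $h_S$ commutes and $h_A$ anticommutes with $J_{CY}$.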
Thus we have shown that the linearized operator is negative semidefinite, with kernel identified with the space of Einstein deformations of the given Calabi-Yau. It follows from a result in \cite{Tiandeformation} that every such infinitesimal deformation is in fact integrable.
Given this weak linear stability, together with an explicit description of the kernel, which is integrable, the remainder of the proof follows standard lines (cf. for instance \cite{Lotaystab, Sesum, SmithSCF,HCF}). In particular, by treating the flow as a small perturbation of the linearized flow, and using the analysis of the linearized operator above, one can show exponential decay towards some Calabi-Yau structure. Given this exponential convergence, it is elementary to show that the family of diffeomorphisms relating (\ref{f:gfSCF}) and (\ref{f:SCF}) converges exponentially, and thus the solution to (\ref{f:SCF}) is also converging to a Calabi-Yau structure exponentially fast.
\end{proof}
We now prove Theorem \ref{t:mainthm} as a consequence of Theorem \ref{t:dynstabSCF}:
\begin{proof}[Proof of Theorem \ref{t:mainthm}] Given $(M^{2n}, \omega, J)$ a compact K\"ahler manifold with $c_1(M, J) = 0$, by Yau's theorem \cite{YauCC} there exists a unique Calabi-Yau metric $\omega_{CY} \in [\omega]$ compatible with $J$. Applying Moser's Lemma to the family of cohomologous symplectic forms $\omega_t = t \omega_{CY} + (1-t) \omega$ we obtain the existence of a diffeomorphism $\phi$ such that $\phi^* \omega_{CY} = \omega$. By construction the pair $(\omega, \phi^* J) = (\phi^* \omega_{CY}, \phi^* J)$ is K\"ahler, Calabi-Yau. Now fix $\omega'$ such that $\brs{\omega - \omega'}_{C^{\infty}(\omega)} < \ge$. For sufficiently small $\ge > 0$, the one-parameter family $\omega_t = t \omega' + (1-t) \omega$ consists of symplectic forms, and we deform $\phi^* J$ along this path to produce an almost complex structure $J'$ compatible with $\omega'$ such that $\brs{\phi^* J - J'}_{C^{\infty}(\omega)} < C(\omega) \ge$ (cf. \cite{SCF} Lemma 4.3). For $\ge$ chosen sufficiently small at the outset, the pair $(\omega', J')$ satisfies the hypothesis of Theorem \ref{t:dynstabSCF} relative to the Calabi-Yau structure $(\omega, \phi^* J)$, and thus the solution to symplectic curvature flow with initial condition $(\omega', J')$ exists globally and converges to a Calabi-Yau structure $(\omega_{\infty}, J_{\infty})$, which further satisfies $\brs{\omega - \omega_{\infty}}_{C^{\infty}(\omega)} < C(\omega) \ge$. By construction, since $J'$ is connected by a smooth path to $J$, it follows that $c_1(M, J') = 0$, and this property is preserved along the symplectic curvature flow. This in turn implies that the cohomology class of $[\omega']$ is preserved along the flow, thus $[\omega_{\infty}] = [\omega']$. Since both $\omega_{\infty}$ and $\omega'$ are $\ge$-close to the symplectic form $\omega$, it follows that the path $t \omega' + (1-t) \omega_{\infty}$ consists of cohomologous symplectic forms, again applying Moser's Lemma we obtain a diffeomorphism $\psi$ such that $\psi^* \omega_{\infty} = \omega'$. It follows that the pair $(\omega', \psi^* J_{\infty})$ is K\"ahler, in fact Calabi-Yau, finishing the proof.
\end{proof}
\section{Supplementary Experiments}
\begin{figure*}[!ht]
\vspace{-1em}
\centering
\begin{subfigure}[b]{0.245\textwidth}
\centering
\includegraphics[width=\textwidth]{{exp/compaction/W1_quantile_compaction_latency_comp}.eps}
\vspace{-1cm}
\caption{Overall compaction latency}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.245\textwidth}
\centering
\includegraphics[width=\textwidth]{{exp/compaction/W1_quantile_compaction_cpu_latency_comp}.eps}
\vspace{-1cm}
\caption{CPU latency for compactions}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.245\textwidth}
\centering
\includegraphics[width=\textwidth]{{exp/point_query/W1_filter_block_cache_miss_comp}.eps}
\vspace{-1cm}
\caption{Block misses for point queries}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.245\textwidth}
\centering
\includegraphics[width=\textwidth]{{exp/point_query/W1_index_block_cache_miss_comp}.eps}
\vspace{-1cm}
\caption{Index misses for point queries}
\end{subfigure}
\vspace{-0.2cm}
\caption{(a,b) shows that correlation between the overall latency for compactions and the CPU cycles spent for compactions; (b,c) shows how the misses to the filter and index blocks change across different compaction strategies as the proportion of non-empty and empty queries change in a lookup-only workload.}
\label{fig:W1-supp}
\end{figure*}
In this appendix, we present the supplementary results along with the auxiliary observations (\textbf{o}) that were omitted from the main paper due to space constraints.
In the interest of space, we limit our discussion to the most interesting results and observations.
For better readability, we re-use the subsection titles used in \S \ref{sec:results} throughout this appendix.
\subsection{Performance Implications}
Here we present the supplementary results for the serial execution of the ingestion-only and lookup-only workloads.
Details about the workload specifications along with the experimental setup can be found throughout \S \ref{subsec:performance}.
\Paragraph{\mob The CPU Cost for Compactions is Significant}
The CPU cycles spent on compactions (Fig. \ref{fig:W1-supp}(b)) account for close to $50\%$ of the overall time spent on compactions (Fig. \ref{fig:W1-supp}(a), which is the same as Fig. \ref{fig:W1}(c)) regardless of the compaction strategy.
During a compaction job CPU cycles are spent in (1) the preparation phase to obtain necessary locks and take snapshots, (2) sort-merging the entries during the compaction, (3) updating the file pointers and metadata, and (4) synchronizing the output files post compaction.
Among these, the time spent to sort-merge the data in memory dominates the other operations.
This explains the similarity in patterns between Fig. \ref{fig:W1-supp}(a) and \ref{fig:W1-supp}(b).
As both the CPU time and the overall time spent for compactions are driven by the total amount of data compacted, the plots look largely similar.
\Paragraph{\mob Dissecting the Lookup Performance}
To analyze the lookup performance presented in Fig.~\ref{fig:W1}(h), we further plot the block cache misses for the Bloom filter blocks in Fig.~\ref{fig:W1-supp}(c) and the index (fence pointer) block misses in Fig.~\ref{fig:W1-supp}(d).
Note that both empty and non-empty lookups must first fetch the filter blocks; hence, the filter block misses remain almost unaffected as we vary $\alpha$.
\texttt{Tier} has more misses because it maintains more sorted runs overall.
Subsequently, the index blocks are fetched only if the filter probe returns positive.
With $10$ bits-per-key the false positive rate is only $0.8\%$, and as we have more empty queries (increasing $\alpha$), fewer index blocks are accessed.
The filter blocks are maintained at the granularity of files and fetching one amounts to $20$ I/Os in our setup.
The index blocks are maintained for each disk page and fetching one amounts to $4$ I/Os, i.e., $1/5^{th}$ of the cost of fetching a filter block.\footnote{filter block size per file = \#entries per file $*$ bits-per-key = $512 * 128 * 10$ bits = $80$kB; index block size per file = \#pages per file $*$ (key size $+$ pointer size) = $512 * (16+16)$B = $16$kB.}
This $5\times$ cost difference, coupled with the probabilistic fetching of the index block (depending on $\alpha$ and the false positive rate of the filter, $FPR = 0.8\%$), leads to a non-monotonic latency curve for point lookups as $\alpha$ increases, and this behavior persists regardless of the compaction strategy.
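To make this cost structure concrete, a first-order model (ours; it ignores the block cache) for the expected I/Os incurred when probing a single file is
\begin{equation*}
\mathbb{E}[\textnormal{I/Os}] \approx \underbrace{20}_{\textnormal{filter}} + \; p \cdot \big( \underbrace{4}_{\textnormal{index}} + 1 \big),
\end{equation*}
where $p$ is the probability that the filter probe returns positive, i.e., $p = FPR = 0.8\%$ for files that do not contain the target key and $p = 1$ for the file that does, and the trailing $1$ accounts for the data page. The measured curves in Fig.~\ref{fig:W1}(h) also fold in block cache effects, which this simple model ignores.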
\begin{figure}
\vspace{-1em}
\centering
\begin{subfigure}[b]{0.23\textwidth}
\centering
\includegraphics[width=\textwidth]{{exp/W18/W18.1_mean_compaction_latency_comp}.eps}
\caption{Mean compaction latency}
\label{fig:W18_1_mean_compaction_latency}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.23\textwidth}
\centering
\includegraphics[width=\textwidth]{{exp/W18/W18.1_write_delay_comp}.eps}
\caption{Write delay}
\label{fig:W18_1_write_delay}
\end{subfigure}
\vspace{-1em}
\caption{Varying Block Cache (insert-only)}
\label{fig:W18.1}
\end{figure}
\begin{figure}
\vspace{-1em}
\centering
\begin{subfigure}[b]{0.23\textwidth}
\centering
\includegraphics[width=\textwidth]{{exp/W18/W18.2_mean_compaction_latency_comp}.eps}
\caption{Mean compaction latency}
\label{fig:W18_2_mean_compaction_latency}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.23\textwidth}
\centering
\includegraphics[width=\textwidth]{{exp/W18/W18.2_write_delay_comp}.eps}
\caption{Write delay}
\label{fig:W18_2_write_delay}
\end{subfigure}
\vspace{-1em}
\caption{Varying Block Cache (interleaving with $10\%$ point lookups)}
\label{fig:W18.2}
\end{figure}
We vary the block cache size for an insert-only and a mixed workload ($10\%$ point lookups on existing keys interleaved with insertions). For the mixed workload, the mean compaction latency remains stable as the block cache varies from $8$MB to $256$MB. However, for the insert-only workload, the mean compaction latency increases sharply once the block cache exceeds $32$MB (Figs. \ref{fig:W18_1_mean_compaction_latency} and \ref{fig:W18_2_mean_compaction_latency}). We also observe that for the insert-only workload, the write delay (also termed write stall) is more than twice that of the mixed workload (Figs. \ref{fig:W18_1_write_delay} and \ref{fig:W18_2_write_delay}). We leave this interesting phenomenon for future discussion. Compared to full and partial compaction, tiering is more stable with respect to the block cache size.
\subsection{Varying Page Size}
When we vary the page size, we observe almost consistent patterns across the different compaction strategies for all metrics (Figs. \ref{fig:W19_1_mean_compaction_latency} and \ref{fig:W19_1_write_ampli}). It turns out that the choice of compaction strategy does not play a big role for different page sizes.
\begin{figure}
\vspace{-1em}
\centering
\begin{subfigure}[b]{0.23\textwidth}
\centering
\includegraphics[width=\textwidth]{{exp/W19/W19.1_mean_compaction_latency_comp}.eps}
\caption{Mean compaction latency}
\label{fig:W19_1_mean_compaction_latency}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.23\textwidth}
\centering
\includegraphics[width=\textwidth]{{exp/W19/W19.1_write_ampli_comp}.eps}
\caption{Write amplification}
\label{fig:W19_1_write_ampli}
\end{subfigure}
\vspace{-1em}
\caption{Varying Page Size (insert-only)}
\label{fig:W19.1}
\end{figure}
\subsection{Varying Size Ratio}
We also compare the performance for different size ratios. According to Fig. \ref{fig:W20_1_mean_compaction_latency}, tiering has a higher mean compaction latency than the other strategies when the size ratio is at most $6$; beyond $6$, full compaction and oldest-file compaction become the two most time-consuming strategies. In terms of tail compaction latency (Fig. \ref{fig:W20_1_P100_compaction_latency}), tiering remains the worst among all strategies.
\begin{figure}
\vspace{-1em}
\centering
\begin{subfigure}[b]{0.23\textwidth}
\centering
\includegraphics[width=\textwidth]{{exp/compaction/W20.1_mean_compaction_latency_comp}.eps}
\caption{Mean compaction latency}
\label{fig:W20_1_mean_compaction_latency}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.23\textwidth}
\centering
\includegraphics[width=\textwidth]{{exp/W20/W20.1_P100_compaction_latency_comp}.eps}
\caption{Tail compaction latency}
\label{fig:W20_1_P100_compaction_latency}
\end{subfigure}
\vspace{-1em}
\caption{Varying Size Ratio (insert-only)}
\label{fig:W20.1}
\end{figure}
\subsection{Varying Bits Per Key (BPK)}
We also conduct an experiment to investigate the influence of BPK on compactions. From Figs. \ref{fig:W21_2_mean_compaction_latency} and \ref{fig:W21_2_P100_compaction_latency}, the mean and tail compaction latencies increase slightly with increasing bits per key, since larger filter blocks must be written; however, this increase is marginal because the filter blocks remain much smaller than the data blocks. At the same time, we also observe that the query latency even increases with increasing BPK (see Figs. \ref{fig:W21_2_empty_get_latency} and \ref{fig:W21_2_existing_get_latency}). This might stem from higher filter block misses (Fig. YY), and this pattern becomes more pronounced for queries on existing keys, for which accessing the filter blocks is purely an extra burden.
\begin{figure}
\vspace{-1em}
\centering
\begin{subfigure}[b]{0.23\textwidth}
\centering
\includegraphics[width=\textwidth]{{exp/W21/W21.2_mean_compaction_latency_comp}.eps}
\caption{Mean compaction latency}
\label{fig:W21_2_mean_compaction_latency}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.23\textwidth}
\centering
\includegraphics[width=\textwidth]{{exp/W21/W21.2_P100_compaction_latency_comp}.eps}
\caption{Tail compaction latency}
\label{fig:W21_2_P100_compaction_latency}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.23\textwidth}
\centering
\includegraphics[width=\textwidth]{{exp/W21/W21.3_empty_mean_get_latency_comp}.eps}
\caption{Mean get latency (empty queries)}
\label{fig:W21_2_empty_get_latency}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.23\textwidth}
\centering
\includegraphics[width=\textwidth]{{exp/W21/W21.3_existing_mean_get_latency_comp}.eps}
\caption{Mean get latency (existing queries)}
\label{fig:W21_2_existing_get_latency}
\end{subfigure}
\vspace{-1em}
\caption{Varying Bits Per Key (insert-only)}
\label{fig:W21.2}
\end{figure}
\subsection{Compaction Primitives}
We define a compaction strategy as \textit{an ensemble of design primitives that represents the fundamental decisions about the physical data layout and the data (re-)organization policy}.
Each primitive answers a fundamental design question.
\begin{itemize}[leftmargin=5mm]
\item[1)] \textit{Compaction trigger}: \textbf{When} to re-organize the data layout?
\item[2)] \textit{Data layout}: \textbf{How} to lay out the data physically on storage?
\item[3)] \textit{Compaction granularity}: \textbf{How much} data to move at-a-time during layout re-organization?
\item[4)] \textit{Data movement policy}: \textbf{Which} block of data to be moved during re-organization?
\end{itemize}
Together, these design primitives define \textit{when} and \textit{how} an LSM-engine re-organizes the data layout on the persistent media.
The proposed primitives capture any state-of-the-art LSM-compaction strategy and also enables synthesizing new or unexplored compaction strategies.
Below, we define these four design primitives.
\begin{figure*}
\vspace{-0.2in}
\centering
\includegraphics[width=\textwidth]{omnigraffle/primitives.pdf}
\vspace{-0.25in}
\caption{The primitives that define LSM compactions: trigger, data layout, granularity, and data movement policy.}
\label{fig:comp_prims}
\vspace{-0.1in}
\end{figure*}
\subsubsection{\textbf{Compaction Trigger}}
Compaction triggers refer to the set of events that can initiate a compaction job.
The most common compaction trigger is based on the \textit{degree of saturation} of a level in an LSM-tree~\cite{Alsubaiee2014,FacebookRocksDB,GoogleLevelDB,HyperLevelDB,Tarantool,Golan-Gueta2015,Sears2012}.
The degree of saturation for Level $i$ ($1 \leq i \leq L-1$) is typically measured as the ratio of the number of bytes of data stored in Level $i$ to the theoretical capacity in bytes for Level $i$.
Once the degree of saturation goes beyond a pre-defined threshold, one or more immutable files from Level $i$ are marked for compaction.
Some LSM-engines use the file count in a level to compute degree of saturation~\cite{GoogleLevelDB,Huang2019,HyperLevelDB,RocksDB2020,ScyllaDB}.
Note that the file count-based degree of saturation works only when all immutable files are of equal size, or for systems that have a tunable file size.
The ``\#sorted runs'' trigger initiates a compaction if the number of sorted runs (or ``tiers'') in a level exceeds a predefined threshold, regardless of the size of the level.
Other compaction triggers include the \textit{staleness of a file}, the \textit{tombstone-based time-to-live}, and \textit{space} and \textit{read amplification}.
For example, to ensure propagation of updates and deletes to the deeper levels of a tree, some LSM-engines assign a time-to-live (TTL) for each file during its creation.
Each file can live in a level for a bounded time, and once the TTL expires, the file is marked for compaction~\cite{FacebookRocksDB}.
Another delete-driven compaction trigger ensures bounded persistence latency for deletes in LSM-trees through a different timestamp-based scheme: each file containing at least one tombstone is assigned a special time-to-live in each level, and upon expiration of this timer, the file is marked for compaction~\cite{Sarkar2020}.
Below, we present a list of the most common \textbf{compaction triggers}:
\begin{itemize} [leftmargin=5mm]
\item[i)] \textit{\textbf{Level saturation}}: level size goes beyond a nominal threshold
\item[ii)] \textit{\textbf{\#Sorted runs}}: sorted run count for a level reaches a threshold
\item[iii)] \textit{\textbf{File staleness}}: a file lives in a level for too long
\item[iv)] \textit{\textbf{Space amplification (SA)}}: overall SA surpasses a threshold
\item[v)] \textit{\textbf{Tombstone-TTL}}: files have expired tombstone-TTL
\end{itemize}
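For concreteness, the most common trigger, the degree of saturation, reduces to a simple predicate; the sketch below (our naming, not any specific system's API) shows both the byte-based and the file-count-based variants.
\begin{verbatim}
def is_saturated(level_bytes, level_capacity_bytes,
                 threshold=1.0):
    # Degree of saturation of Level i: bytes stored relative
    # to the level's theoretical capacity; a compaction is
    # triggered once it crosses the threshold.
    return level_bytes / level_capacity_bytes >= threshold

def is_saturated_by_count(file_count, max_files):
    # File-count variant: meaningful only when all immutable
    # files are of (roughly) equal size.
    return file_count >= max_files
\end{verbatim}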
\subsubsection{\textbf{Data layout}}
The data layout is driven by the compaction eagerness, and determines the data organization on disk by controlling the number of sorted runs per level.
Compactions move data between storage and memory, consuming a significant portion of the device bandwidth.
There is, thus, an inherent competition for the device bandwidth between ingestion (external) and compaction (internal) -- a trade-off depending on the eagerness of compactions.
The data layout is commonly classified as \emph{leveling} and
\emph{tiering}~\cite{Dayan2017,Dayan2018a}.
With leveling, once a compaction is triggered in Level $i$, the file(s) marked for compaction are merged with the overlapping file(s) from Level $i+1$, and the result is written back to Level $i+1$.
As a result, Level $i+1$ ends up with a (single) longer sorted run of immutable files~\cite{FacebookRocksDB,Golan-Gueta2015,GoogleLevelDB,Huang2019,HyperLevelDB,Sears2012}.
For tiering, each level may contain more than one sorted runs with overlapping key domains.
Once a compaction is triggered in Level $i$, all sorted runs in Level $i$ are merged together and the result is written to Level $i+1$ as a new sorted run without disturbing the existing runs in that level~\cite{Alsubaiee2014,ApacheCassandra,ApacheHBase,FacebookRocksDB,ScyllaDB,Tarantool}.
A hybrid design is proposed in Dostoevsky~\cite{Dayan2018} where the last level is implemented as leveled and all the remaining levels on disk are tiered.
A generalization of this idea is proposed in the literature as a continuum of designs~\cite{Dayan2019,Idreos2019} that allows each level to separately decide between leveling and tiering.
Among production systems, RocksDB implements the first disk-level (Level $1$) as tiering~\cite{RocksDB2020}, and it is allowed to grow perpetually in order to avoid write-stalls~\cite{Balmau2019,Balmau2020,Callaghan2017} in ingestion-heavy workloads.
Below is a list of the most common options for \textbf{the data layout}:
\begin{itemize} [leftmargin=5mm]
\item[i)] \textit{\textbf{Leveling}}: one sorted run per level
\item[ii)] \textit{\textbf{Tiering}}: multiple sorted runs per level
\item[iii)] \textit{\textbf{\bm{$1$}-leveling}}: \textit{tiering} for Level $1$; \textit{leveling} otherwise
\item[iv)] \textit{\textbf{\bm{$L$}-leveling}}: \textit{leveling} for last level; \textit{tiering} otherwise
\item[v)] \textit{\textbf{Hybrid}}: a level can be \textit{tiering} or \textit{leveling} independently
\end{itemize}
\subsubsection{\textbf{Compaction Granularity}}
Compaction granularity refers to the amount of data moved during a single compaction job.
One way to compact data is by sort-merging and moving all data from a level to the next level -- we refer to this as \textit{full compaction}~\cite{Alkowaileet2020,Alsubaiee2014,Teng2017,WiredTiger}.
This results in periodic bursts of I/Os due to large data movement during compactions, and as a tree grows deeper, the latency spikes are exacerbated causing prolonged write stalls.
To amortize the I/O costs due to compactions, leveled LSM-based engines employ \textit{partial compaction}~\cite{FacebookRocksDB,GoogleLevelDB,Huang2019,ONeil1996,Sarkar2020,ScyllaDB}, where instead of moving a whole level, a smaller granularity of data participates in every compaction.
The granularity of data can be a single file~\cite{Dong2017,Huang2019,Sarkar2020} or multiple files~\cite{Alkowaileet2020,Alsubaiee2014,ApacheCassandra,ONeil1996} depending on the system design and the workload.
Note that, partial compaction does not radically change the total amount of data movement due to compactions, but amortizes this data movement uniformly over time, thereby preventing undesired latency spikes.
A compaction granularity of ``sorted runs'' applies principally to LSMs with lazy merging policies.
Once a compaction is triggered in Level $i$, all sorted runs (or tiers) in Level $i$ are compacted together, and the resulting entries are written to Level $i+1$ as a new immutable sorted run.
Below, we present a list of the most common \textbf{compaction granularity} options:
\begin{itemize} [leftmargin=5mm]
\item[i)] \textit{\textbf{Level}}: all data in two consecutive levels
\item[ii)] \textit{\textbf{Sorted runs}}: all sorted runs in a level
\item[iii)] \textit{\textbf{File}}: one sorted file at a time
\item[iv)] \textit{\textbf{Multiple files}}: several sorted files at a time
\end{itemize}
\subsubsection{\textbf{Data Movement Policy}}
When \textit{partial compaction} is employed, the data movement policy selects
which file(s) to compact.
While the literature commonly refers to this decision as \textit{file picking policy}~\cite{Dong2016}, we use the term \textit{data movement} to generalize for any possible data movement granularity.
A na\"ive way to choose file(s) is at random or by using a round-robin policy~\cite{GoogleLevelDB,HyperLevelDB}.
These data movement policies do not focus on optimizing for any particular performance metric, but help in reducing space amplification.
To optimize for read throughput, many production data stores~\cite{Huang2019,FacebookRocksDB} select the ``coldest'' file(s) in a level once a compaction is triggered.
Another common optimization goal is to minimize write amplification.
In this policy, files with the least overlap with the target level are marked for compaction~\cite{Callaghan2016,Dong2016}.
To reduce space amplification, some storage engines choose files with the highest number of tombstones and/or updates~\cite{FacebookRocksDB}.
Another delete-aware approach introduces a tombstone-age driven file picking policy that aims to timely persist logical deletes~\cite{Sarkar2020}.
Below, we present the list of the common \textbf{data movement policies}:
\begin{itemize} [leftmargin=5mm]
\item[i)] \textit{\textbf{Round-robin}}: chooses files in a round-robin manner
\item[ii)] \textit{\textbf{Least overlapping parent}}: file with least overlap with ``parent''
\item[iii)] \textit{\textbf{Least overlapping grandparent}}: as above with ``grandparent''
\item[iv)] \textit{\textbf{Coldest}}: the least recently accessed file
\item[v)] \textit{\textbf{Oldest}}: the oldest file in a level
\item[vi)] \textit{\textbf{Tombstone density}}: file with \#tombstones above a threshold
\item[vii)] \textit{\textbf{Tombstone-TTL}}: file with expired tombstones-TTL
\end{itemize}
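To illustrate, the ``least overlapping parent'' policy can be sketched as follows (our notation): each candidate file in Level $i$ is scored by the number of bytes its key range overlaps in Level $i+1$, and the minimum is picked.
\begin{verbatim}
def pick_least_overlapping(level_i, level_i_plus_1):
    # Files expose .min_key, .max_key, and .size_bytes.
    def overlap_bytes(f):
        return sum(g.size_bytes for g in level_i_plus_1
                   if g.min_key <= f.max_key
                   and f.min_key <= g.max_key)
    # File whose compaction merges the fewest parent bytes,
    # minimizing the write amplification of this job.
    return min(level_i, key=overlap_bytes)
\end{verbatim}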
\subsection{Compaction as an Ensemble of Primitives}
Every compaction strategy takes one or more values for each of the four primitives.
The trigger, granularity, and data movement policy are multi-valued primitives, whereas data layout is single-valued.
For example, a common LSM design~\cite{Alkowaileet2020} has a \textbf{leveled} LSM-tree (\textit{data layout}) that compacts \textbf{whole levels} at a time (\textit{granularity}) once a \textbf{level reaches a nominal size} (\textit{trigger}).
This design does not implement many subtle optimizations including partial compactions, and by definition, does not need a data movement policy.
A more complex example is the compaction strategy for a \textbf{leveled} LSM-tree (\textit{data layout}) in which compactions are performed at the \textit{granularity} of a \textbf{file}.
A compaction is \textit{triggered} if either (a) a \textbf{level reaches its capacity} or (b) a \textbf{file containing tombstones is retained in a level longer than a pre-set TTL}~\cite{Sarkar2020}.
Once triggered, the \textit{data movement policy} chooses (a) \textbf{the file with the highest density of tombstones}, if there is one or (b) \textbf{the file with the least overlap with the parent level}, otherwise.
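In code, a compaction strategy is then literally a four-tuple of primitive choices. The sketch below (our naming, not an actual system's API) instantiates the delete-aware example above:
\begin{verbatim}
from dataclasses import dataclass

@dataclass
class CompactionStrategy:
    triggers: list           # when to re-organize
    data_layout: str         # single-valued
    granularity: str         # how much data moves at a time
    movement_policies: list  # which data to move, in priority order

delete_aware = CompactionStrategy(
    triggers=["level saturation", "tombstone-TTL expiry"],
    data_layout="leveling",
    granularity="file",
    movement_policies=["highest tombstone density",
                       "least overlap with parent"],
)
\end{verbatim}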
\begin{table}[t]
\centering
\resizebox{0.475\textwidth}{!}{%
\LARGE
\begin{tabular}{l|c|ccccc|cccc|cccccccc}
\toprule
\multicolumn{1}{c|}{\multirow{9}{*}{\begin{tabular}[c]{@{}c@{}}\textbf{Database} \end{tabular}}}
& \multirow{9}{*}{\begin{tabular}[c]{@{}c@{}}\textbf{Data layout} \end{tabular}}
& \multicolumn{5}{c|}{\textbf{\multirow{1}{*}{\begin{tabular}[c]{@{}c@{}}\textbf{Compaction} \end{tabular}}}}
& \multicolumn{4}{c|}{\textbf{\multirow{1}{*}{\begin{tabular}[c]{@{}c@{}}\textbf{Compaction} \end{tabular}}}}
& \multicolumn{7}{c}{\textbf{\multirow{1}{*}{\begin{tabular}[c]{@{}c@{}}\textbf{Data Movement} \end{tabular}}}} \\
\multicolumn{1}{c|}{}
& \multicolumn{1}{c|}{}
& \multicolumn{5}{c|}{\textbf{Trigger}}
& \multicolumn{4}{c|}{\textbf{Granularity}}
& \multicolumn{7}{c}{\textbf{Policy}} \\ \cline{3-19}
&
& \rotatebox[origin=l]{90}{Level saturation}
& \rotatebox[origin=l]{90}{\#Sorted runs}
& \rotatebox[origin=l]{90}{File staleness}
& \rotatebox[origin=l]{90}{Space amp.}
& \rotatebox[origin=l]{90}{Tombstone-TTL\hspace*{1mm}}
& \rotatebox[origin=l]{90}{Level}
& \rotatebox[origin=l]{90}{Sorted run}
& \rotatebox[origin=l]{90}{File (single)}
& \rotatebox[origin=l]{90}{File (multiple)}
& \rotatebox[origin=l]{90}{Round-robin}
& \rotatebox[origin=l]{90}{Least overlap ($+$$1$) }
& \rotatebox[origin=l]{90}{Least overlap ($+$$2$)}
& \rotatebox[origin=l]{90}{Coldest file}
& \rotatebox[origin=l]{90}{Oldest file}
& \rotatebox[origin=l]{90}{Tombstone density}
& \rotatebox[origin=l]{90}{Expired TS-TTL}
& \rotatebox[origin=l]{90}{N/A (entire level)} \\ \hline \bottomrule
\multirow{2}{*}{RocksDB~\cite{FacebookRocksDB},}
& \multirow{1}{*}{Leveling /}
& \multirow{2}{*}{\cmark} & \multirow{2}{*}{} & \multirow{2}{*}{\cmark}
& \multirow{2}{*}{} & {} & {}
& {} & \multirow{2}{*}{\cmark} & \multirow{2}{*}{\cmark} & {}
& \multirow{2}{*}{\cmark} & \multirow{2}{*}{} & \multirow{2}{*}{\cmark}
& \multirow{2}{*}{\cmark} & \multirow{2}{*}{\cmark} & {} \\
\multirow{2}{*}{Monkey~\cite{Dayan2018a}}
& \multirow{1}{*}{1-Leveling} & & & & & & & & & & \\ \cline{2-19}
& \multirow{1.2}{*}{Tiering} & \multirow{1.2}{*}{}
& \multirow{1.2}{*}{\cmark} & \multirow{1.2}{*}{} & \multirow{1.2}{*}{\cmark} & \multirow{1.2}{*}{\cmark} & \multirow{1.2}{*}{}
& \multirow{1.2}{*}{\cmark} & \multirow{1.2}{*}{} & \multirow{1.2}{*}{} & \multirow{1.2}{*}{}
& \multirow{1.2}{*}{} & \multirow{1.2}{*}{} & {} & {} & {} & {} & \multirow{1.2}{*}{\cmark} \\ \midrule
\multirow{1}{*}{LevelDB~\cite{GoogleLevelDB},}
& \multirow{2}{*}{Leveling}
& \multirow{2}{*}{\cmark} & \multirow{2}{*}{} & {}
& {} & {} & {}
& {} & \multirow{2}{*}{\cmark} & \multirow{2}{*}{} & \multirow{2}{*}{\cmark}
& \multirow{2}{*}{\cmark} & \multirow{2}{*}{\cmark} & {}
& {} & {} & {} \\
\multirow{1}{*}{Monkey (J.)~\cite{Dayan2017}}
& {} & & & & & & & & & & \\ \midrule
SlimDB~\cite{Ren2017}
& {Tiering}
& \cmark & {} & {}
& {} & {} & {}
& {} & \cmark & \cmark & {}
& {} & {} & {}
& {} & {} & {} & \cmark \\ \midrule
Dostoevsky~\cite{Dayan2018}
& $L$-leveling & {\cmark$^{L}$} & {\cmark$^{T}$}
& {} & {} & {}
& {\cmark$^{L}$} & {\cmark$^{T}$} & {} & {}
& {} & {\cmark$^{L}$} & {}
& {} & {} & {} & {} & {\cmark$^{T}$} \\ \midrule
LSM-Bush~\cite{Dayan2019}
& {Hybrid leveling} & {\cmark$^{L}$} & {\cmark$^{T}$}
& {} & {} & {}
& {\cmark$^{L}$} & {\cmark$^{T}$} & {} & {}
& {} & {\cmark$^{L}$} & {}
& {} & {} & {} & {} & {\cmark$^{T}$} \\ \midrule
Lethe~\cite{Sarkar2020}
& Leveling
& \cmark & {}
& {} & {} & {\cmark} &
& {} & \cmark & \cmark & {}
& {\cmark} & & &
& & \cmark \\ \midrule
Silk~\cite{Balmau2019}, Silk+~\cite{Balmau2020}
& {Leveling}
& {\cmark} & {}
& {} & {} & {} & {}
& {} & {\cmark} & {\cmark} & {\cmark}
& {} & {} & {} & {}
& {} & {} \\ \midrule
HyperLevelDB~\cite{HyperLevelDB}
& {Leveling} & {\cmark} & {}
& {} & {} & {}
& {} & {} & {\cmark} & {}
& {\cmark} & {\cmark} & {\cmark}
& {} & {} & {} \\ \midrule
PebblesDB~\cite{Raju2017}
& {Hybrid leveling} & {\cmark} & {}
& {} & {} & {}
& {} & {} & {\cmark} & {\cmark}
& {} & {} & {}
& {} & {} & {} & {} & {\cmark} \\ \midrule
\multirow{2}{*}{Cassandra~\cite{ApacheCassandra}}
& \multirow{1}{*}{Tiering}
& {} & {\cmark}
& {\cmark} & {} & {\cmark}
& {} & {\cmark} & {} & {}
& {} & {} & {}
& {} & {} & {} &{} &{\cmark}
\\ \cline{2-19}
& \multirow{1.2}{*}{Leveling}
& \multirow{1.2}{*}{\cmark} & \multirow{1.2}{*}{}
& \multirow{1.2}{*}{} & \multirow{1.2}{*}{} & \multirow{1.2}{*}{\cmark}
& \multirow{1.2}{*}{} & \multirow{1.2}{*}{} & \multirow{1.2}{*}{\cmark} & \multirow{1.2}{*}{\cmark}
& \multirow{1.2}{*}{} & \multirow{1.2}{*}{\cmark} & \multirow{1.2}{*}{}
& \multirow{1.2}{*}{} & \multirow{1.2}{*}{} & \multirow{1.2}{*}{\cmark} &\multirow{1.2}{*}{\cmark}
\\ \midrule
WiredTiger~\cite{WiredTiger}
& {Leveling} & {\cmark} & {}
& {} & {} & {}
& {\cmark} & {} & {} & {}
& {} & {} & {}
& {} & {} & {} & {} & {\cmark} \\ \midrule
X-Engine~\cite{Huang2019}, Leaper~\cite{Yang2020}
& {Hybrid leveling} & {\cmark} & {}
& {} & {} & {}
& {} & {} & {\cmark} & {\cmark}
& {} & {\cmark} & {}
& {} & {} & {\cmark} \\ \midrule
HBase~\cite{ApacheHBase}
& {Tiering} & {} & {\cmark}
& {} & {} & {}
& {} & {\cmark} & {} & {}
& {} & {} & {}
& {} & {} & {} & {} & {\cmark} \\ \midrule
\multirow{2}{*}{AsterixDB~\cite{Alsubaiee2014}}
& \multirow{1}{*}{Leveling} & {\cmark} & {{}}
& {} & {} & {}
& {\cmark} & {} & {} & {}
& {} & {} & {}
& {} & {} & {} & {} &{\cmark} \\ \cline{2-19}
& \multirow{1.2}{*}{Tiering}
& \multirow{1.2}{*}{} & \multirow{1.2}{*}{\cmark}
& \multirow{1.2}{*}{} & \multirow{1.2}{*}{} & \multirow{1.2}{*}{}
& \multirow{1.2}{*}{} & \multirow{1.2}{*}{\cmark} & \multirow{1.2}{*}{} & \multirow{1.2}{*}{}
& \multirow{1.2}{*}{} & \multirow{1.2}{*}{} & \multirow{1.2}{*}{}
& \multirow{1.2}{*}{} & \multirow{1.2}{*}{} & \multirow{1.2}{*}{} & \multirow{1.2}{*}{} & \multirow{1.2}{*}{\cmark}
\\ \midrule
Tarantool~\cite{Tarantool}
& {$L$-leveling} & {\cmark$^L$} & {\cmark$^T$}
& {} & {} & {}
& {\cmark$^L$} & {\cmark$^T$} & {} & {}
& {} & {} & {}
& {} & {} & {} & {} & {\cmark} \\ \midrule
\multirow{2}{*}{ScyllaDB~\cite{ScyllaDB}}
& \multirow{1}{*}{Tiering} & {} & {\cmark}
& {\cmark} & {} & {\cmark}
& {} & {\cmark} & {} & {}
& {} & {} & {}
& {} & {} & {} & {} & {\cmark} \\ \cline{2-19}
& \multirow{1.2}{*}{Leveling}
& \multirow{1.2}{*}{\cmark} & \multirow{1.2}{*}{}
& \multirow{1.2}{*}{} & \multirow{1.2}{*}{} & \multirow{1.2}{*}{\cmark}
& \multirow{1.2}{*}{} & \multirow{1.2}{*}{} & \multirow{1.2}{*}{\cmark} & \multirow{1.2}{*}{\cmark}
& \multirow{1.2}{*}{} & \multirow{1.2}{*}{\cmark} & \multirow{1.2}{*}{}
& \multirow{1.2}{*}{} & \multirow{1.2}{*}{} & \multirow{1.2}{*}{\cmark} & \multirow{1.2}{*}{\cmark} &\multirow{1.2}{*}{} \\ \midrule
bLSM~\cite{Sears2012}, cLSM~\cite{Golan-Gueta2015}
& {Leveling} & {\cmark} & {}
& {} & {} & {}
& {} & {} & {\cmark} & {}
& {\cmark} & {} & {}
& {} & {} & {} \\ \midrule
Accumulo~\cite{ApacheAccumulo}
& {Tiering} & {\cmark} & {\cmark}
& {} & {} & {\cmark}
& {} & {\cmark} & {} & {}
& {} & {} & {}
& {} & {} & {} & {} & {\cmark} \\ \midrule
LSbM-tree~\cite{Teng2017,Teng2018}
& {Leveling} & {\cmark} & {}
& {} & {} & {}
& {\cmark} & {} & {} & {}
& {} & {} & {}
& {} & {} & {} & {} & {\cmark} \\ \midrule
SifrDB~\cite{Mei2018}
& {Tiering} & {\cmark} & {}
& {} & {} & {}
& {} & {} & {} & {\cmark}
& {} & {} & {}
& {} & {} & {} & {} & {\cmark} \\ \hline
\bottomrule
\end{tabular}
}
\caption{Compaction strategies in state-of-the-art systems. [\footnotesize{\cmark$^{L}$\normalsize: for levels with leveling; \footnotesize\cmark$^{T}$\normalsize: for levels with tiering.}\normalsize] \label{tab:db}}
\vspace{-0.35in}
\end{table}
\Paragraph{The Compaction Design Space Cardinality}
Two compaction strategies are considered different from each other if they differ in at least one of the four primitives.
Compaction strategies that
differ in only one primitive can have vastly different performance when subject to the same workload while running on identical hardware.
Plugging in some typical values for the cardinality of the primitives, we estimate the cardinality of the compaction universe to be $>10^{4}$, a vast yet largely unexplored design space.
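One way to arrive at such an estimate: with the cardinalities listed above ($5$ triggers, $5$ layouts, $4$ granularities, $7$ movement policies) and allowing any non-empty subset of triggers, one already obtains $(2^5-1) \cdot 5 \cdot 4 \cdot 7 = 4340$ strategies; letting the data movement policy be multi-valued as well yields $(2^5-1) \cdot 5 \cdot 4 \cdot (2^7-1) \approx 7.9 \times 10^4$. The exact count depends on which combinations are meaningful, hence we only claim the order of magnitude.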
Table \ref{tab:db} shows a representative part of this space, detailing the compaction strategies used in more than twenty academic and production systems.
\Paragraph{Compactions Analyzed}
For our analysis and experimentation, we select ten representative compaction strategies that are prevalent in production and academic LSM-based systems.
We codify and present these candidate compaction strategies in Table \ref{tab:comp_list}.
\texttt{Full} represents the compaction strategy for leveled LSM-trees that compacts entire levels upon invocation.
\texttt{LO+1} and \texttt{LO+2} denote two partial compaction routines that choose a file for compaction with the smallest overlap with files in the parent ($i+1$) and grandparent ($i+2$) levels, respectively.
\texttt{RR} chooses files for compaction in a round-robin fashion from each level.
\texttt{Cold} and \texttt{Old} are read-friendly strategies that mark the coldest and oldest file(s) in a level for compaction, respectively.
\texttt{TSD} and \texttt{TSA} are delete-driven compaction strategies with triggers and data movement policies that are determined by the density of tombstones and the age of the oldest tombstone contained in a file, respectively.
\texttt{Tier} represents a variant of tiered data layout, where compactions are triggered when either (a) the number of sorted runs in a level or (b) the estimated space amplification in the tree reaches certain thresholds.
This interpretation of tiering is also referred to as \textit{universal compaction} in systems like RocksDB~\cite{Kryczka2020,RocksDB2020}.
Finally, \texttt{1-Lvl} represents a hybrid data layout where the first disk level is realized as \textit{tiered} while the others as \textit{leveled}.
This is the default data layout for RocksDB~\cite{Kryczka2020,RocksDB2020a}.
\begin{table*}[!ht]
\centering
\resizebox{\textwidth}{!}{%
\begin{tabular}{L{2cm}|C{1.9cm}|C{1.6cm}|C{1.3cm}|C{1.2cm}|C{2.8cm}|C{1.5cm}|C{1.7cm}|C{2.8cm}|C{1.8cm}|C{2.9cm}}
\toprule
\multirow{2}{*}{\textbf{Primitives}} & \textbf{\texttt{Full} \cite{Alsubaiee2014,Teng2017,WiredTiger}} & \textbf{\texttt{LO+1} \cite{FacebookRocksDB,Dayan2018a,Sarkar2020}} & \textbf{\texttt{Cold} \cite{FacebookRocksDB}} & \textbf{\texttt{Old} \cite{FacebookRocksDB}} & \textbf{\texttt{TSD} \cite{FacebookRocksDB,Huang2019}} & \textbf{\texttt{RR} \cite{GoogleLevelDB,HyperLevelDB,Sears2012,Golan-Gueta2015}} & \textbf{\texttt{LO+2} \cite{GoogleLevelDB,HyperLevelDB}} & \textbf{\texttt{TSA} \cite{Sarkar2020}} & \textbf{\texttt{Tier} \cite{Ren2017,ApacheCassandra,HBase2013}} & \textbf{\texttt{1-Lvl} \cite{FacebookRocksDB,Kryczka2020,RocksDB2020a}} \\
\midrule
\multirow{2}{*}{Trigger} & \multirow{2}{*}{level saturation} & \multirow{2}{*}{level sat.} & \multirow{2}{*}{level sat.} & \multirow{2}{*}{level sat.} & \multirow{1}{*}{1. TS-density} & \multirow{2}{*}{level sat.} & \multirow{2}{*}{level sat.} & \multirow{1}{*}{1. TS age} & \multirow{1}{*}{1. \#sorted runs} & \multirow{1}{*}{1. \#sorted runs$^T$} \\
& & & & & 2. level sat. & & & 2. level sat. & \multirow{1}{*}{2. space amp.} & \multirow{1}{*}{2. level sat.$^L$} \\
\midrule
\multirow{1}{*}{Data layout} & leveling & leveling & leveling & leveling & leveling & leveling & leveling & leveling & tiering & hybrid \\
\midrule
\multirow{2}{*}{Granularity} & \multirow{2}{*}{levels} & \multirow{2}{*}{files} & \multirow{2}{*}{files} & \multirow{2}{*}{files} & \multirow{2}{*}{files} & \multirow{2}{*}{files} & \multirow{2}{*}{files} & \multirow{2}{*}{files} & \multirow{2}{*}{sorted runs} & \multirow{1}{*}{1. sorted runs$^T$} \\
& & & & & & & & & & \multirow{1}{*}{2. files$^L$} \\
\midrule
\multirow{1}{*}{Data~movement} & \multirow{2}{*}{N/A} & \multirow{1}{*}{least overlap.} & \multirow{2}{*}{coldest file} & \multirow{2}{*}{oldest file} & \multirow{1}{*}{1. most tombstones} & \multirow{2}{*}{round-robin} & \multirow{1}{*}{least overlap.} & \multirow{1}{*}{1. expired TS-TTL} & \multirow{2}{*}{N/A} & \multirow{1}{*}{1. N/A$^T$} \\
\multirow{1}{*}{policy} & & \multirow{1}{*}{parent} & & & \multirow{1}{*}{2.~least~overlap.~parent} & & \multirow{1}{*}{grandparent} & \multirow{1}{*}{2.~least~overlap.~parent} & & \multirow{1}{*}{2. least overlap. parent$^L$}\\
\bottomrule
\end{tabular}
}
\caption{Compaction strategies evaluated in this work. [\footnotesize{$^{L}$\normalsize: levels with leveling; \footnotesize$^{T}$\normalsize: levels with tiering.}\normalsize]\label{tab:comp_list}}
\vspace{-0.25in}
\end{table*}
\subsection{Write Performance}
The primary objective of compactions is to re-organize the data stored on disk to create fewer longer sorted runs.
However, as data on disk are stored in immutable files, in-place modifications are not supported in LSM-trees.
Each compaction job, thus, takes as input a number of immutable files that are read to memory from disk, and writes back a new set of immutable files to disk after the compaction job is completed.
From the stand-point of a write-optimized data store, this data movement due to compaction is often classified as superfluous and causes high write amplification, which results in under-utilization of the device bandwidth and leads to poor write throughput~\cite{Raju2017}.
Below, we discuss how the different dimensions of compactions affect the write performance, which is further summarized in Table \ref{tab:perf}.
\subsubsection{\textbf{Write amplification}} We define write amplification as the count for the number of times an entry is (re-)written without any modifications to disk during its lifetime (i.e., until it is physically deleted from the disk).
While in an ideal write-optimized store, write amplification should be $1$ (i.e., entries are written in a log), periodic (re-)organization of data in LSM-trees leads to significantly high write amplification~\cite{Raju2017}.
\textit{\blue{Data layout}.} In a leveled LSM-tree, every time a Level $i$ reaches its capacity, all (or a subset of) files from Level $i$ are compacted with all (or the overlapping) files from Level $i+1$; thus, on average each entry is written $T$ times within a level, which leads to an average-case write amplification of $\mathcal{O}(T \cdot L)$.
For a tiered LSM, each level may have up to $T$ sorted runs with overlapping key-ranges; thus, each entry is written at least once per level resulting in an average-case write amplification of $\mathcal{O}(L)$.
An $l$-leveled LSM-tree has its last $l$ levels implemented as leveled with the remaining shallower $L-l$ levels as tiering; and thus, the average-case write amplification in an $l$-leveled tree is given as $\mathcal{O}(L-l) + \mathcal{O}(T \cdot l)$.
Similarly, for a hybrid LSM-tree, the average-case write amplification can be expressed as $\mathcal{O}(L-i) + \mathcal{O}(T \cdot i)$, where $L-i$ denotes the number of tiered levels in the tree.
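As a concrete illustration with typical (assumed, not measured) values $T=10$ and $L=4$, the average number of times an entry is rewritten is
\begin{align*}
\text{leveling: } & T \cdot L = 10 \cdot 4 = 40, \\
\text{tiering: } & L = 4, \\
\text{hybrid ($L-i=1$ tiered, $i=3$ leveled): } & (L-i) + T \cdot i = 1 + 30 = 31,
\end{align*}
which quantifies why tiering is the write-optimized extreme of the design space.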
\textit{Compaction trigger.} Compaction triggers that relate to level capacity are the primary trigger in all LSM-trees, and thus, constitute the baseline for our write amplification analysis.
Any additional trigger, such as staleness of a file, expired tombstone TTLs, space amplification, or read amplification, manifests in more frequent compactions, which leads to higher write amplification.
However, such secondary triggers often optimize for a different performance metric, which may amortize the write amplification over time~\cite{Sarkar2020}.
\textit{Compaction granularity.} Compaction granularity controls the average amount of data movement per compaction job, and is typically driven by the size of the immutable files.
The granularity of compaction does not affect the write amplification, as the total amount of data compacted over time remains the same regardless of the granularity of data movement.
\textit{Data movement policy.} The data movement policy in LSM-based storage engines is typically chosen to optimize for a set of performance metrics, and thus, plays a crucial role in the overall performance of the engine, including write amplification.
The average write amplification remains $\mathcal{O}(T \cdot L)$ for leveling and $\mathcal{O}(L)$ for tiering when files are chosen for compaction at random or in a round-robin manner once a compaction job is triggered.
However, choosing the file with the least overlap with its parent or grandparent level optimizes for write amplification, and thus, reduces the data movement due to compactions.
For compaction strategies that optimize for other performance goals, such as read performance, space amplification, and delete performance, the write amplification is often measured to be higher than that of the average case.
\subsubsection{\textbf{Write Throughput}} The write throughput of an LSM-tree is principally driven by the degree of utilization of the device bandwidth by writes.
The bandwidth utilization, in turn, is affected by (i) any write stalls resulting from the compactions and (ii) the absolute bandwidth support provided by the device.
While the time taken to complete a compaction job influences the frequency and amplitude of the write stalls, the overall data movement due to compaction determines the bandwidth of the device that can be used for writing the ingested data.
Thus, for a given device, write throughput is affected by compactions in the same way as write amplification.
\textit{\blue{Data layout}.} A leveled LSM-tree performs compactions eagerly whenever the memory buffer is full or a disk level reaches a nominal capacity.
This triggers compactions frequently, which consumes the device bandwidth at a greater degree, and affects the write throughput adversely.
In contrast, in tiering, compactions are less frequent and with the device bandwidth mostly free of compaction traffic, the write throughput is significantly improved.
\textit{Compaction trigger.} In the presence of secondary compaction triggers (alongside saturation-driven primary triggers), the data movement due to compactions may increase, which leads to reduced write throughput.
However, in most practical cases, this additional data movement due to secondary triggers amortizes significantly over time, and thus, in the long run, the write throughput remains almost unaffected by their presence.
\textit{Compaction granularity.} Larger compaction granularity results in long-living compaction jobs, which often leads to prolonged write stalls, and thereby, a steep drop in the write throughput for a significant duration, increasing the tail-latency for writes.
Such latency spikes are generally infrequent, but highly prevalent in leveled LSM-trees that compact whole levels at a time~\cite{ONeil1996}, leveled LSM-trees with partial compaction routines that compact several files at a time, and in most tiered LSM-trees~\cite{ApacheCassandra,ApacheHBase}.
A smaller compaction granularity amortizes such latency spikes over time by performing smaller but more frequent compactions throughout the workload execution~\cite{FacebookRocksDB}.
However, it is noteworthy that the overall amount of data movement due to compactions always remains the same regardless of the compaction granularity, and thus, the overall write throughput remains unaffected by the granularity of compactions.
Rather, a smaller compaction granularity avoids undesired latency spikes and write stalls, thereby ensuring bounded tail-latency for writes.
\textit{Data movement policy.} Similarly to write amplification, the data movement policy also plays a critical role on the write throughput of a storage engine.
Partial compaction routines that choose the files with minimal overlap with the target level saturate the device bandwidth with compaction traffic the least, and hence, have the highest write throughput.
Data movement policies that optimize for other performance metrics (or for none), on the other hand, increase the compaction traffic considerably, and thus, demonstrate a reduced throughput for writes.
\subsubsection{\textbf{SSD Lifetime}}
Compactions also affect device endurance. Flash cells sustain a limited number of program/erase cycles, so the repeated rewriting of unmodified entries during compactions (i.e., write amplification) directly consumes the endurance budget of an SSD. Compaction strategies with lower write amplification, such as tiering and overlap-minimizing partial compactions, therefore also prolong SSD lifetime.
\begin{table*}[ht]
\centering
\resizebox{\textwidth}{!}{%
\begin{tabular}{ccl|ccccccc|ccccccc|ccccccc}
\toprule
\MCL{3}{c|}{\MRW{2.5}{\begin{tabular}[c]{@{}c@{}}\textbf{Compaction} \end{tabular}}}
& \MCL{7}{c|}{\textbf{\MRW{1}{\begin{tabular}[c]{@{}c@{}}\textbf{Workload} \end{tabular}}}}
& \MCL{7}{c|}{\textbf{\MRW{1}{\begin{tabular}[c]{@{}c@{}}\textbf{Workload} \end{tabular}}}}
& \MCL{7}{c}{\textbf{\MRW{1}{\begin{tabular}[c]{@{}c@{}}\textbf{Workload} \end{tabular}}}} \\
\MCL{3}{c|}{\MRW{2.5}{\begin{tabular}[c]{@{}c@{}}\textbf{Knob} \end{tabular}}}
& \MCL{7}{c|}{\textbf{A}}
& \MCL{7}{c|}{\textbf{B}}
& \MCL{7}{c}{\textbf{C}} \\ \cline{4-24}
& & & Write & PL & SRQ & LRQ & SA & WA & DPL
& Write & PL & SRQ & LRQ & SA & WA & DPL
& Write & PL & SRQ & LRQ & SA & WA & DPL \\\hline \bottomrule
\MRW{5}{\rotatebox[origin=c]{90}{\textbf{Compaction}}}
& \MCL{1}{c|}{\MRW{5}{\hspace*{-12pt} \rotatebox[origin=c]{90}{\textbf{\blue{Data layout}}}}}
& \MRW{1.5}{Leveling}
& \MRW{1.5}{\Up} & \MRW{1.5}{\Upp} & \MRW{1.5}{\Up} & \MRW{1.5}{\Dww}
& \MRW{1.5}{\Up} & \MRW{1.5}{--} & \MRW{1.5}{\Up}
& \MRW{1.5}{\Dwww} & \MRW{1.5}{\Dwww} & \MRW{1.5}{\Up} & \MRW{1.5}{\Up}
& \MRW{1.5}{\Dw} & \MRW{1.5}{\Dw} & \MRW{1.5}{\Dw}
& \MRW{1.5}{\Dwww} & \MRW{1.5}{\Up} & \MRW{1.5}{\Up} & \MRW{1.5}{\Upp}
& \MRW{1.5}{\Upp} & \MRW{1.5}{\Upp} & \MRW{1.5}{\Upp} \\
& \MCL{1}{c|}{}
& \MRW{2}{Tiering}
& \MRW{2}{\Up} & \MRW{2}{\Upp} & \MRW{2}{\Up} & \MRW{2}{\Dww}
& \MRW{2}{\Up} & \MRW{2}{--} & \MRW{2}{\Up}
& \MRW{2}{\Dwww} & \MRW{2}{\Dwww} & \MRW{2}{\Up} & \MRW{2}{\Up}
& \MRW{2}{\Dw} & \MRW{2}{\Dw} & \MRW{2}{\Dw}
& \MRW{2}{\Dwww} & \MRW{2}{\Up} & \MRW{2}{\Up} & \MRW{2}{\Upp}
& \MRW{2}{\Upp} & \MRW{2}{\Upp} & \MRW{2}{\Upp} \\
& \MCL{1}{c|}{}
& \MRW{2.5}{$l$-leveling}
& \MRW{2.5}{\Up} & \MRW{2.5}{\Upp} & \MRW{2.5}{\Up} & \MRW{2.5}{\Dww}
& \MRW{2.5}{\Up} & \MRW{2.5}{--} & \MRW{2.5}{\Up}
& \MRW{2.5}{\Dwww} & \MRW{2.5}{\Dwww} & \MRW{2.5}{\Up} & \MRW{2.5}{\Up}
& \MRW{2.5}{\Dw} & \MRW{2.5}{\Dw} & \MRW{2.5}{\Dw}
& \MRW{2.5}{\Dwww} & \MRW{2.5}{\Up} & \MRW{2.5}{\Up} & \MRW{2.5}{\Upp}
& \MRW{2.5}{\Upp} & \MRW{2.5}{\Upp} & \MRW{2.5}{\Upp} \\
& \MCL{1}{c|}{}
& \MRW{3}{Hybrid}
& \MRW{3}{\Up} & \MRW{3}{\Upp} & \MRW{3}{\Up} & \MRW{3}{\Dww}
& \MRW{3}{\Up} & \MRW{3}{--} & \MRW{3}{\Up}
& \MRW{3}{\Dwww} & \MRW{3}{\Dwww} & \MRW{3}{\Up} & \MRW{3}{\Up}
& \MRW{3}{\Dw} & \MRW{3}{\Dw} & \MRW{3}{\Dw}
& \MRW{3}{\Dwww} & \MRW{3}{\Up} & \MRW{3}{\Up} & \MRW{3}{\Upp}
& \MRW{3}{\Upp} & \MRW{3}{\Upp} & \MRW{3}{\Upp} \\
& \MCL{1}{c|}{} & & & & & & & & & & & & & & & & \\ \midrule
\MRW{6}{\rotatebox[origin=c]{90}{\textbf{Compaction}}}
& \MCL{1}{c|}{\MRW{6}{\hspace*{-12pt} \rotatebox[origin=c]{90}{\textbf{Trigger}}}}
& \MRW{1.25}{\#Bytes}
& \MRW{1.25}{\GA} & \MRW{1.25}{\GA} & \MRW{1.25}{\GA\GA\GA} & \MRW{1.25}{\RA}
& \MRW{1.25}{\NA} & \MRW{1.25}{\NA} & \MRW{1.25}{\BA}
& \MRW{1.25}{\GA} & \MRW{1.25}{\RA} & \MRW{1.25}{\BA} & \MRW{1.25}{\BA}
& \MRW{1.25}{\GA} & \MRW{1.25}{\GA} & \MRW{1.25}{\GA}
& \MRW{1.25}{\GA} & \MRW{1.25}{\BA} & \MRW{1.25}{\BA} &
& \MRW{1.25}{\GA\GA} & \MRW{1.25}{\GA\GA} & \MRW{1.25}{\GA\GA} \\
& \MCL{1}{c|}{}
& \MRW{1.5}{\#Files}
& \MRW{1.5}{\GA} & \MRW{1.5}{\GA} & \MRW{1.5}{\GA\hspace*{-1pt}\GA\hspace*{-1pt}\GA} & \MRW{1.5}{\RA}
& \MRW{1.5}{\NA} & \MRW{1.5}{\NA} & \MRW{1.5}{\BA}
& \MRW{1.5}{\GA} & \MRW{1.5}{\RA} & \MRW{1.5}{\BA} & \MRW{1.5}{\BA}
& \MRW{1.5}{\GA} & \MRW{1.5}{\GA} & \MRW{1.5}{\GA}
& \MRW{1.5}{\GA} & \MRW{1.5}{\BA} & \MRW{1.5}{\BA} &
& \MRW{1.5}{\GA\GA} & \MRW{1.5}{\GA\GA} & \MRW{1.5}{\GA\GA} \\
& \MCL{1}{c|}{}
& \MRW{1.75}{\#Sorted runs}
& \MRW{1.75}{\GA} & \MRW{1.75}{\GA} & \MRW{1.75}{\GA\GA\GA} & \MRW{1.75}{\RA}
& \MRW{1.75}{\NA} & \MRW{1.75}{\NA} & \MRW{1.75}{\BA}
& \MRW{1.75}{\GA} & \MRW{1.75}{\RA} & \MRW{1.75}{\BA} & \MRW{1.75}{\BA}
& \MRW{1.75}{\GA} & \MRW{1.75}{\GA} & \MRW{1.75}{\GA}
& \MRW{1.75}{\GA} & \MRW{1.75}{\BA} & \MRW{1.75}{\BA} &
& \MRW{1.75}{\GA\GA} & \MRW{1.75}{\GA\GA} & \MRW{1.75}{\GA\GA} \\
& \MCL{1}{c|}{}
& \MRW{2}{Staleness of file}
& \MRW{2}{\GA} & \MRW{2}{\GA} & \MRW{2}{\GA\GA\GA} & \MRW{2}{\RA}
& \MRW{2}{\NA} & \MRW{2}{\NA} & \MRW{2}{\BA}
& \MRW{2}{\GA} & \MRW{2}{\RA} & \MRW{2}{\BA} & \MRW{2}{\BA}
& \MRW{2}{\GA} & \MRW{2}{\GA} & \MRW{2}{\GA}
& \MRW{2}{\GA} & \MRW{2}{\BA} & \MRW{2}{\BA} &
& \MRW{2}{\GA\GA} & \MRW{2}{\GA\GA} & \MRW{2}{\GA\GA} \\
& \MCL{1}{c|}{}
& \MRW{2.25}{Space amp.}
& \MRW{2.25}{\GA} & \MRW{2.25}{\GA} & \MRW{2.25}{\GA\GA\GA} & \MRW{2.25}{\RA}
& \MRW{2.25}{\NA} & \MRW{2.25}{\NA} & \MRW{2.25}{\BA}
& \MRW{2.25}{\GA} & \MRW{2.25}{\RA} & \MRW{2.25}{\BA} & \MRW{2.25}{\BA}
& \MRW{2.25}{\GA} & \MRW{2.25}{\GA} & \MRW{2.25}{\GA}
& \MRW{2.25}{\GA} & \MRW{2.25}{\BA} & \MRW{2.25}{\BA} &
& \MRW{2.25}{\GA\GA} & \MRW{2.25}{\GA\GA} & \MRW{2.25}{\GA\GA} \\ \vspace*{-2pt}
& \MCL{1}{c|}{} & & & & & & & & & & & & & & & & \\ \midrule
\MRW{5}{\rotatebox[origin=c]{90}{\textbf{Compaction}}}
& \MCL{1}{c|}{\MRW{5}{\hspace*{-12pt} \rotatebox[origin=c]{90}{\textbf{Granularity}}}}
& \MRW{1.5}{Level}
& \MRW{4}{\cmark} & \MRW{4}{\cmark} & {}
& \MRW{4}{\small{\ding{109}}} & {} & {}
& {} & \MRW{4}{\cmark} & \MRW{4}{\small{\ding{109}}} & {}
& {} & \MRW{4}{\cmark} & \MRW{4}{\cmark}
& \MRW{4}{\cmark} & \MRW{4}{\cmark} & {} & & & & & \\
& \MCL{1}{c|}{}
& \MRW{2}{File (single)} & \Up & & & & & & & & & & & & & & \\
& \MCL{1}{c|}{}
& \MRW{2.5}{File (multiple)} & & & & & & & & & & & & & & & \\
& \MCL{1}{c|}{}
& \MRW{3}{Sorted run} & {}
& {} & \cmark & \small{\ding{109}} & \cmark & {}
& \cmark & {} & {} & {}
& {} & {} & {} & & & & & \\
& \MCL{1}{c|}{} & & & & & & & & & & & & & & & & \\ \midrule
\MRW{6}{\rotatebox[origin=c]{90}{\textbf{Data Movement}}}
& \MCL{1}{c|}{\MRW{6}{\hspace*{-12pt} \rotatebox[origin=c]{90}{\textbf{Policy}}}}
& \MRW{1}{Round-robin}
& \MRW{4}{\cmark} & \MRW{4}{\cmark} & {}
& \MRW{4}{\small{\ding{109}}} & {} & {}
& {} & \MRW{4}{\cmark} & \MRW{4}{\small{\ding{109}}} & {}
& {} & \MRW{4}{\cmark} & \MRW{4}{\cmark}
& \MRW{4}{\cmark} & \MRW{4}{\cmark} & {} & & & & & \\
& \MCL{1}{c|}{}
& \MRW{1}{Least overlap} & & & & & & & & & & & & & & & \\
& \MCL{1}{c|}{}
& \MRW{1}{Coldest file} & & & & & & & & & & & & & & & \\
& \MCL{1}{c|}{}
& \MRW{1}{Oldest file} & & & & & & & & & & & & & & & \\
& \MCL{1}{c|}{}
& \MRW{1}{File w/ most TS} & {}
& {} & \cmark & \small{\ding{109}} & \cmark & {}
& \cmark & {} & {} & {}
& {} & {} & {} & & & & & \\
& \MCL{1}{c|}{}
& \MRW{1}{Expired TS-TTL} & & & & & & & & & & & & & & & & \\ \midrule
\bottomrule
\end{tabular}
}
\caption{Implications of compaction knobs on performance in LSM-based systems.
\label{tab:perf}}
\vspace{-0.25in}
\end{table*}
\subsection{Read Performance}
The fundamental purpose of re-organizing of data on disk through compactions is to facilitate efficient point lookup and range scan operations in LSM-trees.
However, as reads and writes share the same device bandwidth, compactions affect the read performance for mixed workloads that have ingestion and lookups interleaved.
Even for a read-only workload, the compaction strategy determines the position of the data in the tree as well as the number of tree-levels, which, in turn, affect the point and range lookup performance. Below, we present how the different dimensions of compactions affect the read performance of an LSM-tree.
\subsubsection{\textbf{Point Lookups}} The point lookup performance of an LSM-tree is enhanced at a cost of extra main memory space that is used to store auxiliary data structures, such as Bloom filters and fence pointers.
Bloom filters probabilistically reduce the number of runs to be probed for a point lookup, while fence pointers ensure that we perform only a single I/O per sorted run if a Bloom filter probe returns positive.
Compacting files often pushes the entries in the files to a deeper level, and this may asymptotically increase the cost for lookups.
\textit{\blue{Data layout}.}
The average cost for a point lookup operation on a non-existing key is given as $\mathcal{O}(L \cdot e^{-BPK})$ for leveling and $\mathcal{O}(T \cdot L \cdot e^{-BPK})$ in case of tiering.
A point lookup on an existing key must always perform at least one I/O to fetch the target key, and thus, its average cost is given as $\mathcal{O}(1 + L \cdot e^{-BPK})$ for leveling and $\mathcal{O}(1 + T \cdot L \cdot e^{-BPK})$ for tiering.
For a hybrid design with $L-i$ tiered and $i$ leveled levels, the average lookup cost becomes $\mathcal{O}\big((T \cdot (L-i) + i) \cdot e^{-BPK}\big)$ for non-existing keys and $\mathcal{O}\big(1 + (T \cdot (L-i) + i) \cdot e^{-BPK}\big)$ for existing keys, as every tiered level contributes up to $T$ sorted runs and every leveled level contributes one.
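To make these asymptotics tangible, consider illustrative values $T=10$, $L=4$, and $10$ bits-per-key Bloom filters with a false positive rate of roughly $0.8\%$ (the configuration we also use in our experiments). A zero-result point lookup then costs on average about
\begin{align*}
\text{leveling: } & L \cdot FPR = 4 \cdot 0.008 \approx 0.03 \text{ I/Os}, \\
\text{tiering: } & T \cdot L \cdot FPR = 40 \cdot 0.008 \approx 0.32 \text{ I/Os},
\end{align*}
while lookups on existing keys add one mandatory I/O to either figure.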
\textit{Compaction trigger.} Compaction triggers affect the point lookup performance insignificantly. In the presence of secondary triggers, compaction jobs may become more frequent, which consumes a significant proportion of the device bandwidth.
While this affects the point lookup throughput marginally, it constitutes a serious performance bottleneck for scan-heavy workloads, which we discuss in Section \ref{sec:4.2.2}.
\textit{Compaction granularity.} The granularity of data movement during a compaction job affects the point lookup performance on existing keys to some extent.
For tiered LSMs and leveled LSMs with a full compaction routine, once a Level $i$ reaches a nominal capacity, all entries from that level are compacted and moved to Level $i+1$ (rendering Level $i$ empty).
Compacting all files from a level regardless of the ``hotness'' of the file causes recently and frequently accessed files to move to lower levels of the tree, which is particularly undesirable when the goal is to maximize the read throughput.
In contrast, partial compaction allows for choosing files for compaction while optimizing for read performance.
\textit{Data movement policy.} For partial compaction strategies, the choice of files to compact influences the performance for point lookups on existing keys significantly.
Choosing the ``coldest'' file, i.e., the least recently accessed file, for compaction ensures that the frequently read files are retained in the shallower levels, which reduces the average cost for point lookups.
This improvement becomes even more pronounced for lookup-intensive workloads that have a high degree of temporality~\cite{Cao2020}, reducing the cost for point lookups on existing keys to $\mathcal{O}(1)$ in leveling and $\mathcal{O}(T)$ for tiering without any assistance from caching.
Compaction strategies that optimize for different performance metrics, such as write amplification, space amplification, or delete performance, however, may choose a ``hot'' file for compaction, which adversely affects the lookup performance.
\subsubsection{\textbf{Range Lookups}} \label{sec:4.2.2}
LSM-trees support range lookups by sort-merging all the qualifying runs (partly or entirely) in memory and then returning the recent-most version of the qualifying entries.
Range lookups benefit from fence pointers in efficiently identifying the first page of qualifying entries in each level, but are not assisted by Bloom filters.
The cost for range lookups depends on the selectivity of the range query and the number of sorted runs in a tree.
Compactions affect the range lookup performance very differently from point lookups.
For our analysis, we distinguish a short range lookup from a long range lookup in terms of the number of disk pages per sorted run affected by the range query.
A short range query should not have more than two qualifying pages for each sorted run present in a tree~\cite{Dayan2018}.
\textit{\blue{Data layout}.} \blue{The data layout} controls the number of sorted runs in an LSM-tree, and therefore, influences the cost for range lookups.
The average cost for a long range lookup is given as $\mathcal{O}(\tfrac{s \cdot N}{B})$ for leveling, and that for tiering is $\mathcal{O}(\tfrac{T \cdot s \cdot N}{B})$, where $s$ denotes the selectivity of the range query.
For short range queries, the average cost is simply proportional to the number of sorted runs, and is given as $\mathcal{O}(L)$ for leveling and $\mathcal{O}(T \cdot L)$ for tiering.
For hybrid designs, the cost for range lookups falls in between that for leveling and tiering, and depends on the exact design of the tree.
\textit{Compaction trigger.} The effect of secondary compaction triggers is more pronounced on range query performance than on point queries.
While the amount of data movement per operation for a point lookup and a short range lookup is comparable, long range lookups with moderate to large selectivity read a significantly larger amount of data from the disk.
In the presence of secondary compaction triggers, compactions and long range queries contend for the same device bandwidth, which causes a drop in the read throughput.
\textit{Compaction granularity.} A larger granularity of data movement during compaction leads to large amounts of data movement periodically, and thus, causes a drop in the lookup performance as a result of bandwidth sharing.
However, such compactions always reduce the number of non-empty levels in a tree by at least one, and occasionally by more.
Reducing the number of levels in a tree improves the range lookup performance dramatically, as it mitigates reading superfluous data from disk and also requires fewer CPU-cycles for the sort-merge operation.
In the best case, the cost for long range lookups drops down to $\mathcal{O}(s)$ for leveling and $\mathcal{O}(T \cdot s)$ for tiering, and that for short range lookups becomes $\mathcal{O}(1)$ for leveling and $\mathcal{O}(T)$ in case of tiering.
Partial compactions, in contrast, always have a logarithmically increasing number of non-empty levels in the tree, and thus, the cost for range lookups remains unchanged.
\textit{Data movement policy.} As range lookups require sort-merging the records spread across all sorted runs in a tree, the position of a qualifying entry in the tree does not influence the performance of range lookups.
Also, as the data movement policies pertain only to partial compactions, where the number of non-empty levels in a tree follows a strictly increasing trend, each range lookup must always take into account qualifying entries from all tree-levels.
Thus, the range lookup performance remains agnostic of the data movement policy.
\subsection{Space Utilization} Following prior work~\cite{Sarkar2020}, we define space amplification as the ratio between the size of superfluous entries and the size of the unique entries in the tree.
Mathematically, space amplification ranges in $[0, \infty)$; if all inserted keys are unique, there is no space amplification.
However, as the fraction of updates in a workload increases, the space amplification also increases, and it grows further in the presence of point and range deletes.
Compactions influence the space amplification of a tree along several dimensions.
\textit{\blue{Data layout}.} In presence of only updates, the worst-case space amplification in a leveled LSM-tree is $\mathcal{O}(1/T)$ and $\mathcal{O}(T)$ for tiering~\cite{Dayan2018}.
However, with the addition of deletes, the space amplification increases significantly, and is given as $\mathcal{O}(\tfrac{N}{1 - \lambda})$ for leveling and $\mathcal{O}(\tfrac{(1 - \lambda) \cdot N + 1}{\lambda \cdot T})$ for tiering, where $\lambda$ denotes the ratio of the size of a tombstone to the average size of a key-value pair~\cite{Sarkar2020}.
The space amplification for hybrid designs lies between the two extremes, and depends heavily on the exact tree design.
\textit{Compaction trigger.} Compactions can be triggered as a function of the overall space amplification of an LSM-tree, which is particularly useful for tiered implementations that have a higher headroom for space amplification~\cite{RocksDB2020}.
Further, triggering compactions based on the number of tombstones in a file~\cite{Dong2016} or the age of the oldest tombstone contained in a file~\cite{Sarkar2020} propels the tombstones eagerly to the deeper levels of the tree, which reduces the space amplification by compacting the invalidated entries early.
\textit{Compaction granularity.} Compacting data at a large granularity often reduces the number of non-empty levels in a tree, and in the process, arranges the data across fewer but larger and more compact sorted runs.
For example, in tiering, once a level reaches a nominal capacity all $T-1$ sorted runs are compacted together and written as a single compact and larger run in the following level, rendering the child level empty.
Similarly, in leveling with full compaction, every time a level reaches saturation, sorted runs from one or more levels are merged with the level of larger capacity, reducing the number of sorted runs.
Any superfluous entry from the input runs is removed during this process, which leads to reduced space amplification.
For leveling with full compactions, periodically all levels are merged together to a single long level (when the saturation trigger for all levels are triggered simultaneously), which yields a compact tree with no space amplification.
However, for partial compaction, as the number of sorted runs always follows an increasing trend, the worst-case space amplification remains the same.
\textit{Data movement policy.} While compacting data at the granularity of a file, choosing files for compaction based on the number of tombstones they contain or on tombstones with an expired time-to-live reduces the space amplification.
However, optimizing for other performance goals, such as write amplification or read throughput, does not necessarily bring down the space amplification.
\subsection{Delete Performance} While deletes affect the performance of an LSM-based engine across several facets, here we focus on persistently deleting entries within a time-limit in order to analyze the implications of compactions from a privacy standpoint.
To ensure timely and persistent deletion, a tombstone must participate in a compaction involving the last tree-level within the threshold time.
\textit{\blue{Data layout}.} The average time taken to persistently delete an entry from a tree is given by $\mathcal{O}(\tfrac{T^{L-1} \cdot P \cdot B}{I})$ for leveling and $\mathcal{O}(\tfrac{T^L \cdot P \cdot B}{I})$ for tiering~\cite{Sarkar2020}, where $I$ denotes the rate of ingestion of unique entries to the database.
Note that, while for leveling propelling the tombstone to the last level ensures persistent deletion of the target entry, in case of tiering, the tombstone must participate in a compaction involving all the sorted runs from the last level to guarantee delete persistence.
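As a back-of-the-envelope illustration with assumed (not measured) values: for a buffer of $P \cdot B = 10^6$ entries, size ratio $T=10$, $L=4$ levels, and a unique-insert rate of $I = 10^5$ entries/s, a delete is persisted after roughly
\begin{equation*}
\frac{T^{L-1} \cdot P \cdot B}{I} = \frac{10^{3} \cdot 10^{6}}{10^{5}}\,\text{s} = 10^4\,\text{s} \approx 2.8\,\text{hours}
\end{equation*}
under leveling, and an order of magnitude later ($T^{L}$ in the numerator) under tiering, which motivates the explicit delete-driven triggers discussed next.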
\textit{Compaction trigger, granularity, and data movement policy.} Saturation-based compaction triggers ensure that all superfluous entries are purged every time a new level is added to a tree, but only when the compactions are performed at the granularity of levels or sorted runs.
For partial compaction strategies, a secondary compaction trigger, that compacts files with tombstones eagerly, must be invoked to ensure delete persistence~\cite{RocksDBTS,Sarkar2020}.
Otherwise, in the worst-case, deletes may not be persisted at all in an LSM-tree based on partial compactions.
\subsection{Summarizing the Implications}
Overall, the data layout primarily dictates the trade-off between ingestion and lookup performance; the compaction granularity determines the frequency and size of compaction jobs, and thereby the tail latency; and secondary triggers along with specialized data movement policies trade additional data movement for better space utilization, read performance, or timely delete persistence (Table \ref{tab:perf}).
\subsection{Standardization of Compaction Strategies}
We choose RocksDB \cite{FacebookRocksDB} as our experimental platform, as it (i) is open-source, (ii) is widely used across industry and academia, and (iii) has a large active community.
To ensure fair comparison we implement all compaction
strategies under the same LSM-engine.
\Paragraph{Implementation}
We integrate our codebase into RocksDB v6.11.4.
We assign to \textit{compactions a higher priority than writes}
to accurately benchmark them, while always maintaining the
LSM structure~\cite{Sarkar2021}.
\Paragraphit{Compaction Trigger}
The default compaction trigger for (hybrid) leveling in RocksDB is level saturation~\cite{RocksDB2020a}, and for the universal compaction is space amplification~\cite{RocksDB2020}.
RocksDB also supports delete-driven compaction triggers, specifically whether the \#tombstones in a file goes beyond a threshold.
We further implement a trigger based on the age of the oldest tombstone in a file to facilitate timely deletes~\cite{Sarkar2020}.
\Paragraphit{Data layout}
By default, RocksDB supports only two different data layouts: \textit{hybrid leveling} (tiered first level, leveled otherwise)~\cite{RocksDB2020a} and a variation of \textit{tiering} (with a different trigger), termed \emph{universal compaction}~\cite{RocksDB2020}.
We also implement pure \textit{leveling} by limiting the number of first-level runs to one, and triggering a compaction when the number of first-level files is more than one.
\Paragraphit{Compaction Granularity}
The compaction granularity is a \textit{file} for leveling and a \textit{sorted run} for tiering.
To implement classical leveling, we mark all files of a level for compaction.
We ensure that ingestion may resume only after all the compaction-marked files are compacted, thereby replicating the behavior of the full compaction routine.
\Paragraphit{Data Movement Policy}
RocksDB (v6.11.4) provides four different data movement policies: a file (i) with least overlap with its parent level, (ii) least recently accessed, (iii) with the oldest data in a level, and (iv) that has more tombstones than a threshold.
We also implement partial compaction strategies that choose a file (v) in a round-robin manner, (vi) with the least overlap with its grandparent level, and (vii) based on the age of the tombstones in a file.
\Paragraph{Designing the Compaction API}
We expose the compaction primitives through a new API
as configurable knobs.
An application can configure the desired compaction strategy and initiate workload execution.
The API also allows the application to change the compaction strategy for an existing database.
Overall, our experimental infrastructure allows us (i) to ensure an identical underlying structure while setting the compaction benchmark, and (ii) to tune and configure the design of the LSM-engine as necessary.
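For reference, several of these knobs already map onto options of the stock RocksDB C++ API (v6.x); the sketch below shows that mapping with illustrative values, while the custom primitives we added (e.g., grandparent overlap and tombstone age) are exposed through our extended API instead.
\begin{verbatim}
#include <rocksdb/options.h>

rocksdb::Options MakeOptions() {
  rocksdb::Options opt;
  // Data layout: (hybrid) leveling vs. universal (tiering-like).
  opt.compaction_style = rocksdb::kCompactionStyleLevel;
  // opt.compaction_style = rocksdb::kCompactionStyleUniversal;

  // Trigger: level saturation; a single-file first level
  // approximates pure leveling (cf. our implementation above).
  opt.level0_file_num_compaction_trigger = 1;
  opt.max_bytes_for_level_multiplier = 10;  // size ratio T

  // Data movement policy for leveled compactions:
  // kMinOverlappingRatio ~ LO+1, kOldestLargestSeqFirst ~ Cold,
  // kOldestSmallestSeqFirst ~ Old.
  opt.compaction_pri = rocksdb::kMinOverlappingRatio;
  return opt;
}
\end{verbatim}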
\subsection{Performance Metrics}
We now present the performance metrics used in our analysis.
\Paragraph{Compaction Latency}
The compaction latency includes the time taken to (i) identify the files to compact, (ii) read the participating files to memory, (iii) sort-merge (and remove duplicates from) the files, (iv) write back the result to disk as new files, (v) invalidate the older files, and (vi) update the metadata in the manifest file~\cite{FacebookRocksDB}.
\textit{The RocksDB metric \texttt{rocksdb.compaction.times.micros} is used to measure the compaction latency.}
\Paragraph{Write Amplification (WA)}
The repeated reads and writes due to compaction cause high WA~\cite{Raju2017}.
We formally define WA as \textit{the number of times an entry is (re-)written without any modifications to disk during its lifetime}.
\textit{We use \texttt{rocksdb.compact.write.bytes} and the actual data size to compute WA.}
\Paragraph{Write Latency}
Write latency is driven by the device bandwidth utilization, which depends on (i) write stalls due to compactions and (ii) the sustained device bandwidth.
\textit{We use the \texttt{rocksdb.db.write.micros} histogram to measure the average and tail of the write latency.}
\Paragraph{Read Amplification (RA)}
RA is the ratio between the total number of disk pages read for point lookups and the pages that should be read \emph{ideally}.
\textit{We use \texttt{rocksdb.bytes.read} to compute RA.}
\Paragraph{Point Lookup Latency}
Compactions determine the position of the files in an LSM-tree which affects point lookups on entries contained in those files.
\textit{Here, we use the \texttt{rocksdb.db.get.micros}.}
\Paragraph{Range Lookup Latency}
The range lookup latency depends on the selectivity of the range query, but is affected by the data layout.
\textit{We also use the \texttt{rocksdb.db.get.micros} histogram for range lookups.}
\Paragraph{Space Amplification (SA)}
SA depends on the data layout, compaction granularity, and the data movement policy.
SA is defined as \textit{the ratio between the size of logically invalidated entries and the size of the unique entries in the tree}~\cite{Dayan2018}.
\textit{We compute SA using the size of the database and the size of the logically valid entries.}
\Paragraph{Delete Performance}
We measure the degree to which the tested compaction strategies
persistently delete entries within a time-limit~\cite{Sarkar2020} in order to analyze the implications of compactions from a privacy standpoint~\cite{CCPA2018,Deshpande2018,Kraska2019a,Sarkar2018,Schwarzkopf2019,Wang2019}.
\textit{We use the RocksDB file metadata \texttt{age} and a delete persistence threshold.}
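The counters above can be read off RocksDB's statistics object; a minimal sketch (RocksDB v6.x C++ API, error handling elided) follows:
\begin{verbatim}
#include <cstdint>
#include <rocksdb/options.h>
#include <rocksdb/statistics.h>

rocksdb::Options opt;
opt.statistics = rocksdb::CreateDBStatistics();
// ... open the database with opt and run the workload ...

// Tickers (monotone counters) for WA and RA.
uint64_t compact_write =
    opt.statistics->getTickerCount(rocksdb::COMPACT_WRITE_BYTES);
uint64_t bytes_read =
    opt.statistics->getTickerCount(rocksdb::BYTES_READ);

// Histograms for mean and tail latencies.
rocksdb::HistogramData comp, get, put;
opt.statistics->histogramData(rocksdb::COMPACTION_TIME, &comp);
opt.statistics->histogramData(rocksdb::DB_GET, &get);
opt.statistics->histogramData(rocksdb::DB_WRITE, &put);
// e.g., comp.average, get.percentile99, put.max
\end{verbatim}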
\subsection{Benchmarking Methodology}
We now discuss the methodology for varying the
key input parameters for our analysis: \textit{workload} and the \textit{LSM tuning}.
\subsubsection{\textbf{Workload}}
A typical key-value workload comprises five primary operations: inserts, updates, point lookups, range lookups, and deletes.
Point lookups target keys that may or may not exist in the database -- we refer to these as \textit{non-empty} and \textit{empty point lookups}, respectively.
Range lookups are characterized by their \textit{selectivity}.
To analyze the impact of each operation, we vary the \textit{fraction} of each operation as well as their qualitative characteristics (i.e., selectivity and entry size).
We further vary the \textit{data distribution} of ingestion and queries focusing on (i) uniform, (ii) normal, and (iii) Zipfian distributions.
Overall, our custom-built benchmarking suite is a superset of the influential YCSB benchmark~\cite{Cooper2010} as well as the insert benchmark~\cite{Callaghan2017a}, and supports a number of parameters that are missing from existing workload generators, including deletes.
Our workload generator exposes over $64$ degrees of freedom, and is available via GitHub~\cite{Sarkar2021a} for dissemination, testing, and adoption.
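A condensed sketch of the operation-mix logic is shown below (a simplified illustration of the generator, not its actual code; the suite additionally parameterizes entry sizes, selectivities, and distributions):
\begin{verbatim}
#include <cstdint>
#include <random>

enum class Op { kInsert, kUpdate, kPointLookup, kRangeLookup, kDelete };

// Draw the next operation according to the configured fractions
// (the remaining probability mass goes to deletes).
Op NextOp(std::mt19937_64& rng, double f_ins, double f_upd,
          double f_pl, double f_rl) {
  double x = std::uniform_real_distribution<double>(0.0, 1.0)(rng);
  if (x < f_ins) return Op::kInsert;
  if (x < f_ins + f_upd) return Op::kUpdate;
  if (x < f_ins + f_upd + f_pl) return Op::kPointLookup;
  if (x < f_ins + f_upd + f_pl + f_rl) return Op::kRangeLookup;
  return Op::kDelete;
}

// Draw a key from the configured distribution (uniform shown;
// normal and Zipfian draws are handled analogously).
uint64_t NextKey(std::mt19937_64& rng, uint64_t domain) {
  return std::uniform_int_distribution<uint64_t>(0, domain - 1)(rng);
}
\end{verbatim}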
\subsubsection{\textbf{LSM Tuning}}
We further study the interplay of LSM tuning and compaction strategies.
We consider questions like
\textit{which compaction strategy is appropriate for a specific LSM design and a given workload?}
To answer such questions, we vary key LSM tuning parameters in our experimentation, such as (i) the memory buffer size, (ii) the block cache size, and (iii) the size ratio of the tree.
\subsection{Performance Implications}
\label{subsec:performance}
We first analyze the implications of compactions on the ingestion, lookup, and overall performance of an LSM-engine.
\subsubsection{\textbf{Data loading}}
In this experiment, we insert $10$M uniformly generated key-value entries into an empty database to quantify the raw ingestion and compaction performance.
\Paragraph{\Ob Compactions Cause High Data Movement}
Fig.~\ref{fig:W1}(a) shows that the overall (read and write) data
movement due to compactions is significantly larger than the actual
size of the data ingested.
Among the leveled LSM-designs, \texttt{Full} moves $63\times$ ($32\times$ for reads and $31\times$ for writes) the data originally ingested.
The data movement is significantly smaller for \texttt{Tier}; however, it remains $23\times$ the ingested data size.
The data movement for \texttt{1-Lvl} is similar to that of the leveled strategies in partial compaction.
These observations conform to prior work~\cite{Raju2017}, but also highlight the problem of \textit{read amplification due to compactions} leading to poor device bandwidth utilization.
\Paragraph{\Ob Partial Compaction Reduces Data Movement at the Expense of Increased Compaction Count}
We now shift our attention to the different variations
of leveling.
Fig. \ref{fig:W1}(a) shows that leveled partial compaction leads to $34\%$--$56\%$ less data movement than \texttt{Full}.
The reason is twofold:
(1) A file with no overlap with its parent level is only logically merged. Such \textit{pseudo-compactions} require simple metadata (file pointer) manipulation in memory, and no I/Os.
(2) A smaller compaction granularity reduces overall data movement by choosing a file with (i) the least overlap, (ii) the most updates, or (iii) the most tombstones for compaction.
Specifically, \texttt{LO+1} (and \texttt{LO+2}) is designed to pick files with the least overlap with the parent $i+1$ (and grandparent $i+2$) level.
They move $10\%$--$23\%$ less data than other partial compaction strategies.
Fig. \ref{fig:W1}(b) shows that the partial compaction strategies as well as \texttt{1-Lvl} perform $4\times$ more compaction jobs than \texttt{Full}, a factor equal to the number of tree-levels.
Note that for an LSM-tree with partial compaction, every
buffer flush triggers cascading compactions to all $L$ levels, while in a
full-level compaction system this happens when a level is full (every $T$
compactions).
Finally, since both \texttt{Tier} and \texttt{Full} are full-level
compactions the compaction count is similar.
\vspace{1mm}
\noindent\fbox{%
\parbox{0.465\textwidth}{%
\small
\Paragraph{\TA \textit{Larger compaction granularity leads to fewer but larger compactions}} \textit{Full-level compactions perform about $1/L$ as many compactions as partial compaction routines; however, a full-level compaction moves nearly $2L$ times more data per compaction.}
}%
\normalsize
}
\vspace{0.1mm}
\begin{figure*}[!ht]
\vspace{-0.3in}
\centering
\includegraphics[width=0.98\textwidth]{omnigraffle/Plot-1.pdf}
\label{fig:W1-bytes-comp}
\vspace*{-0.25in}
\caption{Compactions influence the ingestion performance of LSM-engines heavily in terms of (a) the overall data movement, (b) the compaction count, (c) the compaction latency, and (d) the tail latency for writes, as well as (e, f) the point lookup performance. The range scan performance (g) remains independent of compactions as the amount of data read remains the same. Finally, the lookup latency (h) depends on the proportion of empty queries ($\alpha$) in the workload.}
\label{fig:W1}
\vspace{-0.1in}
\end{figure*}
\Paragraph{\Ob Full Leveling has the Highest Mean Compaction Latency}
As expected, \texttt{Full} compactions have the highest average latency
($1.2$--$1.9\times$ higher than partial leveling, and $2.1\times$ than tiering).
The mean compaction latency is observed to be directly proportional to the average amount of data moved per compaction.
\texttt{Full} can neither take advantage of pseudo-compactions nor optimize the data movement during compactions, hence, on average the data movement per compaction remains large.
\texttt{1-Lvl} provides the most predictable performance in terms of compaction latency.
Fig. \ref{fig:W1}(c) shows the mean compaction latency for all
strategies as well as the median (P50), the $90^{th}$ percentile (P90),
the $99^{th}$ percentile (P99), and the maximum (P100).
The tail compaction latency largely depends on the amount of data moved by the largest compaction jobs triggered during the workload execution.
We observe that the tail latency (P90, P99, P100) is more predictable for \texttt{Full}, while partial compactions, and especially, tiering have high variability due to differences in the data movement policies.
The compaction latency presented in Fig. \ref{fig:W1}(c) can be broken to
IO time and CPU time. We observe that the CPU effort is about $50\%$
regardless of the compaction strategy.
During a compaction, CPU cycles are spent in (1) obtaining locks and taking
snapshots, (2) merging the entries, (3) updating file
pointers and metadata, and (4) synchronizing output files post compaction.
Among these, the time spent to sort-merge the data in memory dominates.
\Paragraph{The Tail Write Latency is Highest for Tiering}
Fig. \ref{fig:W1}(d) shows that the tail write latency is highest for tiering.
The tail write latency for \texttt{Tier} is $\sim$$2.5\times$ greater than \texttt{Full} and $5$--$12\times$ greater than partial compactions.
Tiering in RocksDB~\cite{RocksDB2020} optimizes for writes and opportunistically seeks to compact all data to a large single level.
This design achieves lower average write latency (Fig. \ref{fig:W1-mixed}(b)) at the expense of prolonged write stalls in the worst case, which is when the overlap between two consecutive levels is very high.
\texttt{Full} also has $2$--$5\times$ higher tail write stalls than partial compactions because when multiple consecutive levels are close to saturation, a buffer flush can result in a cascade of compactions.
\texttt{1-Lvl} also has a higher tail write latency, as its first level is realized as tiered.
\vspace{1mm}
\noindent\fbox{%
\parbox{0.465\textwidth}{%
\small
\Paragraph{\TA \textit{\texttt{Tier} may cause prolonged write stalls}}
\textit{Tail write stall for \texttt{Tier} is $\sim$$25$ms, while for partial leveling (\texttt{Old}) it is as low as $1.3$ms.}
}%
\normalsize
}
\vspace{1mm}
\subsubsection{\textbf{Querying the Data}}
In this experiment, we perform $1$M point lookups on the previously generated
preloaded database (with $10$M entries). The lookups are uniformly distributed
in the domain and we vary the fraction of empty lookups $\alpha$ between 0 and 1.
Specifically, $\alpha = 0$ indicates that we consider only non-empty lookups,
while for $\alpha = 1$ we have lookups on non-existing keys. We also execute
$1000$ range queries, while varying their selectivity.
\Paragraph{\Ob The Point Lookup Latency is Highest for Tiering and Lowest for Full-Level Compaction}
Fig. \ref{fig:W1}(e) shows that point lookups perform the best for \texttt{Full}, and the worst for tiering.
The mean latency for point lookups with tiering is between
$1.1$--$1.9\times$ higher than that with leveled compactions for lookups on existing keys, and $\sim$$2.2\times$ higher for lookups on non-existing keys.
Note that lookups on existing keys must always perform at least one I/O per
lookup (unless they are cached).
For non-empty lookups in a tree with size ratio $T$, theoretically,
the lookup cost for tiering should be $T\times$ higher than its leveling
equivalent~\cite{Dayan2017}.
However, this \textit{worst-case} cost is not always accurate; in practice it depends on (i) the block cache size and the caching policy, (ii) the temporality of the lookup keys, and (iii) the implementation of the compaction strategies.
RocksDB-tiering has overall fewer sorted runs than textbook tiering.
Taking into account the block cache and temporality in the lookup workload,
the observed tiering cost is less than $T\times$ the cost observed for
\texttt{Full}. In addition, the lookup latency for \texttt{Full} is $3\%$--$15\%$ lower than for the partial
compaction routines, because during normal operation of \texttt{Full} some levels
might be entirely empty, while for partial compaction all levels are always
close to being full.
Finally, we note that the choice of data movement policy does not affect the
point lookup latency significantly, which always benefits from Bloom filters
($10$ bits-per-key) and the block cache ($0.05\%$ of the data size).
\begin{figure*}[!ht]
\vspace{-0.3in}
\centering
\includegraphics[width=1.02\textwidth]{omnigraffle/Plot-3.pdf}
\vspace*{-0.35in}
\caption{(a-c) The average ingestion performance for workloads with interleaved inserts and queries is similar to that of an insert-only workload, but (d) with worse tail performance. However, (e) interleaved lookups are significantly faster.}
\vspace*{-0.1in}
\label{fig:W1-mixed}
\end{figure*}
\Paragraph{Point Lookup Latency Increases for Comparable Number of Empty and Non-Empty Queries} A surprising result for point lookups
that is also revealed in Fig. \ref{fig:W1}(e) is that they perform worse when the
fraction of empty and non-empty lookups is balanced.
Intuitively, one would expect that as we have more empty queries (that is, as $
\alpha$ increases) the latency would decrease since the only data accesses needed
by empty queries are the ones due to Bloom filter false
positives~\cite{Dayan2017}.
To further investigate this result, we plot in
Fig.~\ref{fig:W1}(h) the $90^{th}$
percentile ($P90$) latency which shows a similar
curve for point lookup latency as we vary $\alpha$.
In our configuration, each file uses $20$ pages for its Bloom filters and $4$ pages for its index blocks, and the false positive rate is $FPR=0.8\%$.
A non-empty query needs to load the Bloom filters of the levels it visits until it terminates.
For all intermediate levels, it accesses the index and data blocks with probability $FPR$, and then fetches the index and data blocks for the target level.
On the other hand, an empty query probes the Bloom filters of all levels before returning an empty result. Note that for each level it also accesses the index and data blocks with probability $FPR$.
The counter-intuitive shape is a result of the
non-empty lookups not needing to load the Bloom filters
for all levels when $\alpha=0$ and the
empty lookups
accessing index and data only when there is a false
positive when $\alpha=1$.
Fig. \ref{fig:W1}(h) also shows the highly predictable point lookup performance of \texttt{1-Lvl}.
\vspace{2mm}
\noindent\fbox{%
\parbox{0.465\textwidth}{%
\small
\Paragraph{\TA \textit{The point lookup latency is largely unaffected by the data movement policy}} \textit{In the presence of Bloom filters (with high enough memory) and a small enough block cache, the point query latency remains largely unaffected by the data movement policy as long as the number of sorted runs in the tree remains the same. This is because block-wise caching of the filter and index blocks significantly reduces the time spent performing disk I/Os.}
}%
\normalsize
}
\vspace{0.7mm}
\Paragraph{\Ob Read Amplification is Influenced by the Block Cache Size and File Structure, and is Highest for Tiering}
Fig. \ref{fig:W1}(f) shows that the read amplification across different
compaction strategies for non-empty queries ($\alpha=0$) is between $3.5$ and
$4.4$. This is attributed to the size of filter and index blocks which
are $5\times$ and $1\times$ the size of a data block, respectively.
Each non-empty point lookup fetches between $1$ and $L$ filter blocks depending
on the position of the target key in the tree, and up to $L \cdot FPR$
index and data blocks.
Further, the read amplification increases exponentially with $\alpha$, reaching up to $14.4$ for leveling and $21.3$ for tiering (for $\alpha=0.8$).
Fig. \ref{fig:W1}(f) also shows that the estimated read amplification for point lookups is between $1.2\times$ and $1.8\times$ higher for \texttt{Tier} than for leveling strategies.
This higher read amplification for \texttt{Tier} is owing to the larger number of sorted runs in the tree, and is in line with \textbf{O4}.
\Paragraph{The Effect of Compactions on Range Scans is Marginal}
To answer a range query, LSM-trees instantiate multiple \textit{run-iterators} scanning all sorted runs containing qualifying data.
Thus, its performance depends on (i) the iterator scan time (which relates to selectivity) and (ii) the time to merge the data.
The number of sorted runs in a
leveled LSM-tree remains the same, which results in similar range query latency for all leveled variations, especially for larger selectivity (Fig. \ref{fig:W1}(g)).
Note that without updates or deletes, the amount of data qualifying for a range query remains largely identical for different data layouts despite the number of runs being different.
The $\sim$$5\%$ higher average range query latency for \texttt{Tier}
is attributed to the additional I/Os needed to handle partially
qualifying disk pages from each run ($O(L\cdot T)$ in the worst case).
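The run-iterator mechanism can be sketched as a $k$-way merge over the sorted runs through a min-heap, which makes explicit why the work grows with the number of runs (a simplified illustration that omits versioning and tombstone handling):
\begin{verbatim}
#include <cstdint>
#include <queue>
#include <utility>
#include <vector>

using Entry = std::pair<uint64_t, uint64_t>;  // (key, value)

struct RunIterator {
  const std::vector<Entry>* run;  // one sorted run
  size_t pos = 0;
  bool valid() const { return pos < run->size(); }
  uint64_t key() const { return (*run)[pos].first; }
};

// k-way merge of all sorted runs over the range [lo, hi).
std::vector<Entry> RangeScan(std::vector<RunIterator> its,
                             uint64_t lo, uint64_t hi) {
  auto cmp = [](const RunIterator* a, const RunIterator* b) {
    return a->key() > b->key();  // min-heap on the current key
  };
  std::priority_queue<RunIterator*, std::vector<RunIterator*>,
                      decltype(cmp)> heap(cmp);
  for (auto& it : its) {
    while (it.valid() && it.key() < lo) ++it.pos;  // seek (fence pointers)
    if (it.valid() && it.key() < hi) heap.push(&it);
  }
  std::vector<Entry> out;
  while (!heap.empty()) {
    RunIterator* it = heap.top();
    heap.pop();
    out.push_back((*it->run)[it->pos]);  // newest-version logic omitted
    ++it->pos;
    if (it->valid() && it->key() < hi) heap.push(it);
  }
  return out;
}
\end{verbatim}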
\subsubsection{\textbf{Executing mixed workloads}}
We now discuss the performance implications when ingestion and queries are mixed.
We interleave the ingestion of $10$M
unique key-value entries with $1$M point lookups.
The ratio of empty to non-empty lookups is varied across experiments.
All lookups are performed after $L-1$ levels are full.
Fig.~\ref{fig:W1-mixed} compares side by side the results for serial and interleaved execution of workloads with same specifications.
\begin{figure*}[t]
\vspace{-0.3in}
\centering
\includegraphics[width=0.95\textwidth]{omnigraffle/Plot-5.pdf}
\vspace*{-0.25in}
\caption{As the ingestion distribution changes to (a-d) PrefixZipf and (e-h) normal with standard deviation, the ingestion performance of the database remains nearly identical with improvement in the lookup performance.}
\vspace*{-0.1in}
\label{fig:W5}
\end{figure*}
\begin{figure*}[h]
\centering
\includegraphics[width=0.95\textwidth]{omnigraffle/Plot-6.pdf}
\vspace*{-0.25in}
\caption{Skewed lookup distributions like Zipfian (a, b) and normal (c, d) improve the lookup performance dramatically in the presence of a block cache and with the assistance of Bloom filters.}
\vspace*{-0.1in}
\label{fig:W6}
\end{figure*}
\Paragraph{\Ob Mixed Workloads have Higher Tail Write Latency}
Figures~\ref{fig:W1-mixed}(a) and (b) show that the
mean latency of compactions that are interleaved with
point queries is only marginally affected for all
compaction strategies. This is also corroborated by the
write amplification remaining unaffected by mixing
reads and writes as shown in Fig.~\ref{fig:W1-mixed}(c).
On the other hand, Fig.~\ref{fig:W1-mixed}(d) shows that
the tail write latency increases by $2$--$15\times$.
This increase is attributed to (1) the need of point
queries to access filter and index blocks that requires
disk I/Os that compete with writes and saturate the
device, and (2) the delay of
memory buffer flushing during lookups.
\Paragraph{Interleaving Compactions and Point Queries Helps Keeping the Cache Warm}
Since in this experiment we start the point queries when
$L-1$ levels of the tree are full, we expect that the
interleaved read query execution will be faster than
the serial one, by $1/L$ (25\% in our configuration) which
corresponds to the difference in the height of the trees.
However, Fig. \ref{fig:W1-mixed}(e) shows this
difference to be between $26\%$ and $63\%$ for non-empty
queries and between $69\%$ and $81\%$ for empty queries.
The reasons interleaved point query execution is faster
than expected are that (1) about $10\%$ of lookups
terminate within the memory buffer, without requiring any
disk I/Os, and (2) the block cache is pre-warmed with
filter, index, and data blocks cached during compactions.
Fig. \ref{fig:W1-mixed}(d) and \ref{fig:W1-mixed}(e) show how \texttt{1-Lvl} brings together \textit{the best of both worlds} and offer reasonably good ingestion and lookup performance simultaneously.
\vspace{1mm}
\noindent\fbox{%
\parbox{0.465\textwidth}{%
\small
\Paragraph{\TA \textit{Compactions help lookups by warming up the caches}} \textit{As the file metadata is updated during compactions, the block cache is warmed up with the filter, index, and data blocks, which helps subsequent point lookups.}
}%
\normalsize
}
\subsection{Workload Influence}
\label{subsec:workload}
Next, we analyze the implications of the workloads on compactions.
\subsubsection{\textbf{Varying the Ingestion Distribution}}
In this experiment, we use an interleaved workload that varies the ingestion distribution (\textit{Zipfian} with $s=1.0$, \textit{normal} with $34\%$ standard deviation), while keeping the lookup distribution uniform.
We use a variant of the Zipfian distribution, called \emph{PrefixZipf}, where the key prefixes follow a Zipfian distribution while the suffixes are generated uniformly.
This allows us to avoid having too many updates in the workload.
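A sketch of the PrefixZipf draw, reconstructed from this description (the prefix count and key encoding are illustrative assumptions):
\begin{verbatim}
#include <cmath>
#include <cstdint>
#include <random>
#include <string>
#include <vector>

// Key = Zipfian prefix (s = 1.0) + uniformly drawn suffix.
std::string PrefixZipfKey(std::mt19937_64& rng,
                          size_t num_prefixes, double s) {
  // Zipf weights; precompute these once in real code.
  std::vector<double> w(num_prefixes);
  for (size_t i = 0; i < num_prefixes; ++i)
    w[i] = 1.0 / std::pow(static_cast<double>(i + 1), s);
  std::discrete_distribution<size_t> zipf(w.begin(), w.end());
  uint64_t suffix = std::uniform_int_distribution<uint64_t>()(rng);
  return std::to_string(zipf(rng)) + "-" + std::to_string(suffix);
}
\end{verbatim}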
\Paragraph{Ingestion Performance is Agnostic to Insert Distribution}
Figures \ref{fig:W1}(a), \ref{fig:W5}(a), and \ref{fig:W5}(e) show that the total data movement during compactions remains virtually identical for (unique) insert-only workloads generated using uniform, PrefixZipf, and normal distributions, respectively.
Further, we observe that the mean and tail compaction latencies are agnostic of the ingestion distribution
(Fig. \ref{fig:W1}(c), \ref{fig:W5}(b), and \ref{fig:W5}(f) are almost identical as well).
As long as the data distribution does not change over
time, the entries in each level follow the
same distribution and the overlap between
different levels remains the same.
\textit{Therefore, for an ingestion-only workload
the data distribution does not influence the
choice of compaction strategy.}
\Paragraph{\Ob Insert Distribution Influences Point Queries}
Figure \ref{fig:W5}(c) shows that while tiering has a
slightly higher latency for point lookups, the relative
performance of the compaction strategies is close to each
other for any fraction of non-empty queries in the
workload (all values of $\alpha$).
This is because when empty queries are drawn uniformly from the key domain, the level-wise metadata and index blocks help to entirely avoid a vast majority of unnecessary disk accesses (including fetching index or filter blocks).
In Fig.~\ref{fig:W5}(d), we observe that the read
amplification remains comparable to that in
Fig. \ref{fig:W1}(f) (\textit{uniform} ingestion) for $\alpha = 0$ and even $\alpha = 0.4$.
However, for $\alpha = 0.8$, the read amplification in
Fig. \ref{fig:W5}(d) becomes $65\%$-$75\%$ smaller than
in the case of uniform inserts. The number of I/Os performed to
fetch the filter blocks is close to zero.
\textit{This shows that all compaction strategies perform equally well while executing an empty query-heavy workload on a database pre-populated with PrefixZipf inserts.}
In contrast, when performing lookups on a database pre-loaded with normal ingestion, the point lookup performance (Fig. \ref{fig:W5}(g)) largely resembles its uniform equivalent (Fig. \ref{fig:W1}(h)), as the ingestion-skewness is comparable.
The filter and index block hits are $\sim10\%$ higher for the normal distribution compared to uniform for larger values of $\alpha$, which explains the comparatively lower read amplification shown in Fig. \ref{fig:W5}(h).
This plot also shows the first case of unpredictable behavior of \texttt{LO+2} for $\alpha=0$ and $\alpha=0.2$.
We observe more instances of such unpredictable behavior for \texttt{LO+2}, which probably explains why it is rarely used in new LSM stores.
Once again, for both the compaction and tail lookup performance, \texttt{1-Lvl} offers highly predictable performance.
\subsubsection{\textbf{Varying the Point Lookup Distribution}}
In this experiment, we change the point lookup
distribution to \textbf{Zipfian} and \textbf{normal},
while keeping the ingestion distribution as uniform.
\Paragraph{The Distribution of Point Lookups Significantly Affects Performance}
Zipfian point lookups on uniformly populated data
lead to low-latency point queries for all compaction
strategies, as shown in Fig. \ref{fig:W6}(a), because the
block cache is large enough to hold the popular blocks in all cases,
as also shown by the low read amplification in
Fig. \ref{fig:W6}(b).
On the other hand, when queries follow the normal
distribution, partial compaction strategies \texttt{LO+1}
and \texttt{LO+2} dominate all other approaches, while
\texttt{Tier} is found to perform significantly
slower than all other approaches, as shown in
Fig. \ref{fig:W6}(c) and \ref{fig:W6}(d).
\vspace{1mm}
\noindent\fbox{%
\parbox{0.465\textwidth}{%
\small
\Paragraph{\TA \textit{For skewed ingestion/lookups, all compaction strategies behave similarly in terms of lookup performance}} \textit{While the ingestion distribution does not influence compaction performance, heavily skewed ingestion or lookups impact query performance due to the block cache and file metadata.}
}%
\normalsize
}
\vspace{1mm}
\begin{figure*}[h]
\vspace{-0.3in}
\centering
\includegraphics[width=\textwidth]{omnigraffle/Plot-7.pdf}
\vspace*{-0.2in}
\caption{Experiments with varying workload and data characteristics (a-l) and LSM tuning (m-r) show that there is no perfect compaction strategy -- choosing the appropriate compaction strategy is subject to the workload and the performance goal.}
\label{fig:W7}
\vspace*{-0.1in}
\end{figure*}
\subsubsection{\textbf{Varying the Proportion of Updates}}
We now vary the update-to-insert ratio, while interleaving
queries with ingestion. An update-to-insert ratio $0$ means
that all inserts are unique, while a ratio $8$ means that
each unique insert receives $8$ updates on average.
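For concreteness, the following minimal Python sketch illustrates how such a workload can be generated; \texttt{make\_workload} and its parameters are illustrative and are not part of our benchmarking harness.
\begin{verbatim}
import random

def make_workload(n_unique, update_ratio, key_space=2**32):
    # Ratio 0: all inserts are unique; ratio 8: each unique key
    # receives 8 updates on average (updates are blind writes,
    # so their position relative to the insert does not matter).
    keys = random.sample(range(key_space), n_unique)
    ops = [("insert", k) for k in keys]
    n_updates = int(n_unique * update_ratio)
    ops += [("update", random.choice(keys)) for _ in range(n_updates)]
    random.shuffle(ops)  # interleave updates with the inserts
    return ops

ops = make_workload(n_unique=1000, update_ratio=8)
\end{verbatim}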
\Paragraph{\Ob For Higher Update Ratio Compaction Latency for Tiering Drops; \texttt{LO+2} Dominates the Leveling Strategies}
As the fraction of updates increases, the mean compaction latency decreases
significantly for tiering because we discard multiple
updated entries in every compaction (Fig. \ref{fig:W7}(a)).
We observe similar
but less pronounced trends for \texttt{Full} and
\texttt{LO+2}, while the remaining leveling strategies
remain largely unchanged.
\textit{Overall, larger
compaction granularity helps to exploit the presence of
updates by invalidating more entries at a time.}
Among the leveling strategies, \texttt{LO+2} performs best as
it moves $\sim$$20\%$ less
data during compactions, which also affects write
amplification as shown in Fig. \ref{fig:W7}(b).
As the fraction of updates increases, all compaction strategies including
\texttt{Tier} have lower tail compaction latency.
Fig. \ref{fig:W7}(c) shows that \texttt{Tier}'s tail compaction latency
drops from $6\times$ higher than \texttt{Full} to $1.2\times$ for an
update-to-insert ratio of $8$, which demonstrates that \texttt{Tier} is
most suitable for update-heavy workloads. We also observe that lookup
latency and read amplification also decrease for update-heavy workloads.
\Paragraph{The Point Lookup Latency Stabilizes with the Level Count}
Fig. \ref{fig:W7}(d) shows that as the update-to-insert ratio increases, the mean point lookup latency decreases sharply before stabilizing.
The initial sharp fall in the latency is attributed to a decrement in the number of levels (from $4$ to $3$) in the LSM-tree, when the update-to-insert ratio increases from $0.4$ to $1$.
The latency then stabilizes because non-empty point lookups perform at least one disk I/O, which, in turn, dominates the overall lookup cost.
\vspace{1mm}
\noindent\fbox{%
\parbox{0.465\textwidth}{%
\small
\Paragraph{\TA \textit{Tiering dominates the performance for update-intensive workloads}} \textit{When subject to update-intensive workloads, \texttt{Tier} exhibits superior compaction performance along with comparable lookup performance (as leveled LSMs), which allows it to dominate the overall performance space.}
}%
\normalsize
}
\vspace{1mm}
\subsubsection{\textbf{Varying Delete Proportion}}
We now analyze the impact of deletes, which manifest as out-of-place
invalidations with special entries called tombstones \cite{Sarkar2020}.
We keep the same data size and vary the proportion of point
deletes in the workload. All deletes are issued on existing keys and are
interleaved with the inserts.
\Paragraph{\texttt{TSD} and \texttt{TSA} Offer Superior Delete Performance}
We quantify the efficacy of deletion using the number of tombstones at the end
of the workload execution.
The lower this number, the faster deleted data has
been purged from the database, which in turn reduces space, write, and read
amplification.
Fig. \ref{fig:W7}(e) shows that \texttt{TSD} and \texttt{TSA} maintain the fewest tombstones at the end of the experiment.
For a workload with $10\%$ deletes, \texttt{TSD} purges $16\%$ more tombstones than \texttt{Tier} and $5\%$ more tombstones than \texttt{LO+1} by picking the files that have a tombstone density above a pre-set threshold for compaction.
For \texttt{TSA}, we experiment with two different thresholds for delete persistence: \texttt{TSA}$_{33}$ and \texttt{TSA}$_{50}$ are set to $33\%$ and $50\%$ of the experiment run-time, respectively.
As \texttt{TSA} guarantees persistent deletes within the thresholds set, it compacts more data aggressively, and ends up with $7$--$10\%$ fewer tombstones as compared to \texttt{TSD}.
\texttt{Full} manages to purge more tombstones than any partial compaction routine, as it periodically compacts entire levels.
\texttt{Tier} retains the highest number of tombstones as it maintains the highest number of sorted runs overall.
As the proportion of deletes in the workload increases, the number of tombstones remaining in the LSM-tree (after the experiment is over) increases.
\texttt{TSA} and \texttt{TSD} along with \texttt{Full} scale better than the partial compaction routines and tiering.
By compacting more tombstones, \texttt{TSA} and \texttt{TSD} also purge more invalid data, reducing space amplification, as shown in Fig. \ref{fig:W7}(f).
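The following Python sketch summarizes the two file-picking policies as described above; the field names and threshold values are illustrative assumptions, not the exact implementation in the storage engine.
\begin{verbatim}
from dataclasses import dataclass

@dataclass
class FileMeta:
    tombstone_density: float     # fraction of entries that are tombstones
    oldest_tombstone_age: float  # age of the oldest tombstone in the file

def pick_tsd(files, density_threshold):
    # TSD: compact files whose tombstone density exceeds a pre-set threshold
    return [f for f in files if f.tombstone_density > density_threshold]

def pick_tsa(files, persistence_threshold):
    # TSA: compact files holding tombstones older than the delete-persistence
    # threshold (e.g., 33% or 50% of the experiment run-time), so that every
    # delete reaches the last level within the threshold
    return [f for f in files if f.oldest_tombstone_age > persistence_threshold]
\end{verbatim}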
\Paragraph{\Ob Optimizing for Deletes Comes at a (Write) Cost}
The reduced space amplification offered by \texttt{TSA} and \texttt{TSD} is achieved by compacting the tombstones eagerly, which increases the overall amount of data moved due to compaction.
Fig. \ref{fig:W7}(g) shows that \texttt{TSD} and \texttt{TSA}$_{50}$ compact
$18\%$ more data than the write-optimized \texttt{LO+1} (for \texttt{TSA}$_{33}$ this becomes $35\%$).
Thus, \texttt{TSD} and \texttt{TSA} are useful when the objective is to (i) persist deletes timely or (ii) reduce space amplification caused by deletes.
\vspace{1mm}
\noindent\fbox{%
\parbox{0.465\textwidth}{%
\small
\Paragraph{\TA \textit{\texttt{TSD} and \texttt{TSA} are tailored for deletes}} \textit{\texttt{TSA} and \texttt{TSD}, by design, choose files with tombstones for compactions to reduce space amplification. \texttt{TSA} ensures timely persistent deletion by compacting more data eagerly for smaller persistence thresholds, which increases the write amplification.}
}%
\normalsize
}
\vspace{1mm}
\subsubsection{\textbf{Varying the Ingestion Count}}
We now report the scalability results by varying the data size from $2^{27}$B to $2^{35}$B.
\Paragraph{\Ob \texttt{Tier} Scales Poorly Compared to Leveled and Hybrid Strategies}
The mean compaction latency scales sub-linearly for all compaction strategies barring \texttt{Tier}, as shown in Fig. \ref{fig:W7}(h).
\textit{The relative advantages of compaction strategies with leveled and hybrid data layouts remain similar regardless of the data size.}
This observation is further backed up by Fig. \ref{fig:W7}(i) which shows how write amplification scales.
We also observe that the advantages of the RocksDB-implementation of tiering (i.e., \textit{universal compaction})~\cite{RocksDB2020} diminish as the data size grows beyond $8$GB.
Fig.~\ref{fig:W7}(j) shows that as the data size increases, the tail compaction latency for \texttt{Tier} increases, as the worst-case overlap between files from consecutive levels increases significantly.
This makes \texttt{Tier} unsuitable for latency-sensitive applications.
When the data size reaches $2$GB, \texttt{Full} triggers a \textit{cascading compaction} that writes all data to a new level, causing spikes in write amplification and compaction latency.
\subsubsection{\textbf{Varying Entry Size}}
Here, we keep the key size constant ($4$B) and vary the value from $4$B to
$1020$B to vary the entry size.
\Paragraph{\Ob For Smaller Entry Size, Leveling Compactions are More Expensive}
Smaller entry size increases the number of
entries per page, which in turn, leads to (i) more keys to be compared during
merge and (ii) bigger Bloom filters that require more space per file and more CPU for hashing.
Fig. \ref{fig:W7}(k) shows these trends. We also observe similar
trends for write amplification in Fig. \ref{fig:W7}(l) and for query latency.
They both decrease as the entry size increases.
However, as the overall data size increases with the entry size, we observe the compaction latency and write amplification to increase steeply for \texttt{Tier} (similarly to Fig. \ref{fig:W7}(h) and (i)).
\subsection{LSM Tuning Influence}
\label{subsec:tuning}
In the final part of our analysis, we discuss the interplay of
compactions with the standard LSM tuning knobs, such as memory buffer size, page size, and size ratio.
\Paragraph{\Ob Compactions with Tiering Scale Better with Buffer Size}
Fig. \ref{fig:W7}(m) shows that as the buffer size increases, the mean compaction latency increases across all compaction strategies.
The size of the buffer dictates the size of the files on disk, and
a larger file size leads to more data being moved per compaction.
Also, for larger file size, the filter size per file increases along with the time spent for hashing, which increases compaction latency.
Further, as the buffer size increases, the mean compaction latency for \texttt{Tier} scales better than the other strategies.
Fig. \ref{fig:W7}(n) shows that the high tail compaction latency for \texttt{Tier} plateaus quickly as the buffer size increases, and eventually crosses over with that of the more eager compaction strategies when the buffer size reaches $64$MB.
We also observe in Fig. \ref{fig:W7}(o) that among the partial compaction routines \texttt{Old} experiences an increased write amplification throughout, while \texttt{LO+1} and \texttt{LO+2} consistently offer lower write amplification and guarantee predictable ingestion performance.
Fig. \ref{fig:W7}(p) shows that as the memory buffer size increases, the mean point lookup latency increases superlinearly.
This is because, for larger memory buffers, the files on disk hold a greater number of pages, and thereby, more entries.
Thus, the combined size of the index block (one index per page) and filter block (typically, $10$ bits per entry) per file grows proportionally with the memory buffer size.
The time elapsed in fetching the index and filter blocks causes the mean latency for point lookups to increase significantly.
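A back-of-the-envelope calculation illustrates this growth; the entry, page, and index-entry sizes in the sketch below are illustrative assumptions.
\begin{verbatim}
def per_file_metadata_bytes(buffer_bytes, entry_bytes=128,
                            page_bytes=1024, bits_per_key=10,
                            index_entry_bytes=24):
    # Assumes file size == memory buffer size.
    entries = buffer_bytes // entry_bytes
    pages = buffer_bytes // page_bytes
    filter_block = entries * bits_per_key / 8  # ~10 bits per entry
    index_block = pages * index_entry_bytes    # one index entry per page
    return filter_block + index_block

for mb in (8, 16, 32, 64):
    kb = per_file_metadata_bytes(mb * 2**20) / 2**10
    print(f"{mb} MB buffer -> ~{kb:.0f} KB of metadata per file")
\end{verbatim}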
\Paragraph{All Compaction Strategies React Similarly to Varying the Page Size}
In this experiment, we vary the logical page size, which in turn, changes the number of entries per page.
The smaller the page size, the larger the number of pages per file -- meaning more I/Os are required to access a file on the disk.
For example, when the page size shrinks from $2^{10}$B to $2^9$B, the number of pages per file doubles.
With a smaller page size, the index block size per file increases as more pages must be indexed, which also contributes to the increased I/Os.
Thus, an increase in the logical page size reduces the mean compaction latency, as shown in Fig.~\ref{fig:W7}(q).
In Fig.~\ref{fig:W7}(r), we observe that as the page size increases, the size of the index block per file decreases, and on average fewer I/Os are performed to fetch the metadata block overall for every point lookup.
\Paragraph{Miscellaneous Observations}
We also vary LSM tuning parameters such as the size ratio, the memory allocated
to Bloom filters, and the size of the block cache.
We observe that changing the values of these knobs
affects the different compaction strategies similarly, and
hence, does not influence the choice of the appropriate
compaction strategy for any particular set up.
\section*{Acknowledgment}
We thank the reviewers for their valuable feedback.
We are particularly thankful to Guanting Chen for his contributions in the early stages of this project.
This work was partially funded by National Science Foundation under Grant No. IIS-1850202 and a Facebook Faculty Research Award.
\subsection*{4.1* Compaction Eagerness}
Below, we discuss how the performance of an LSM-based storage engine is affected by the compaction eagerness.
\Paragraph{Leveling} A leveled LSM-tree eagerly merges the overlapping sorted runs every time a level is saturated and affects the performance the storage engine as follows.
\textit{Write amplification}: In a leveled LSM-tree, every time a Level $i$ reaches its capacity, all (or a subset of) files from Level $i$ are compacted with all (or the overlapping) files from Level $i+1$; thus, on average each entry is written $T$ times within a level, which leads to an average-case write amplification of $\mathcal{O}(T \cdot L)$.
\textit{Write throughput}: A leveled LSM-tree performs compactions eagerly whenever the memory buffer is full or a disk level reaches a nominal capacity.
This triggers compactions frequently, which consumes the device bandwidth to a greater degree, and affects the write throughput adversely.
\textit{Point lookups}: For leveling, the average cost for a point lookup on a non-existing key is given as $\mathcal{O}(L \cdot e^{-BPK})$, and that on an existing key is $\mathcal{O}(1 + L \cdot e^{-BPK})$, as it must always perform at least one I/O to fetch the target key.
\textit{Range lookups}: Compaction eagerness controls the number of sorted runs in an LSM-tree, and therefore, influences the cost for range lookups.
The average cost for a long range lookup is given as $\mathcal{O}(\tfrac{s \cdot N}{B})$ for leveling, with $s$ being the average selectivity of the range queries.
For short range queries, the average cost is simply proportional to the number of sorted runs, and is given as $\mathcal{O}(L)$.
\textit{Space amplification}: In the presence of only updates and no deletes in a workload, the worst-case space amplification in a leveled LSM-tree is $\mathcal{O}(1/T)$~\cite{Dayan2018}.
However, with the addition of deletes, the space amplification increases significantly, and is given as $\mathcal{O}(\tfrac{N}{1 - \lambda})$ for leveling, where $\lambda$ is the ratio of the size of a tombstone and the average size of a key-value pair~\cite{Sarkar2020}.
\textit{Delete performance}: The average time taken to persistently delete an entry from a tree is given by $\mathcal{O}(\tfrac{T^{L-1} \cdot P \cdot B}{I})$ for leveling~\cite{Sarkar2020}, where $I$ denotes the rate of ingestion of unique entries to the database.
Note that, for leveling, propelling the tombstone to the last level ensures the persistent deletion of the target entry.
\Paragraph{Tiering} A tiered LSM merges the sorted runs lazily and is optimized for writes.
\textit{Write amplification}: For a tiered LSM, each level may have up to $T$ sorted runs with overlapping key-ranges; thus, each entry is written at least once per level resulting in an average-case write amplification of $\mathcal{O}(L)$.
\textit{Write throughput}: For tiering, compactions are less frequent and with the device bandwidth mostly free of compaction traffic, the write throughput is significantly improved.
\textit{Point lookups}: The average point lookup cost for a tiered LSM is $\mathcal{O}(T \cdot L \cdot e^{-BPK})$ for lookups on a non-existing key, and $\mathcal{O}(1 + T \cdot L \cdot e^{-BPK})$ for existing keys.
\textit{Range lookups}: For tiering, the average cost for a long range lookup is given as $\mathcal{O}(\tfrac{T \cdot s \cdot N}{B})$, and that for short range lookups is $\mathcal{O}(T \cdot L)$.
\textit{Space amplification}: The worst-case space amplification in a tiered LSM-tree is $\mathcal{O}(T)$~\cite{Dayan2018} for workloads with updates but no deletes; this increases to $\mathcal{O}(\tfrac{(1 - \lambda) \cdot N + 1}{\lambda \cdot T})$ in the presence of deletes.
\textit{Delete performance}: The average latency for delete persistence for a tiered LSM is given by $\mathcal{O}(\tfrac{T^L \cdot P \cdot B}{I})$, as in case of tiering, the tombstone must participate in a compaction involving all the sorted runs from the last level to guarantee delete persistence.
\Paragraph{Hybrid Designs} Hybrid LSM designs are generalized by $l$-leveling, which includes lazy leveling, RocksDB-style hybrid leveling with only the first level being tiered, and other LSM-variants proposed in \cite{Dayan2019} and \cite{Idreos2019}.
\textit{Write amplification}: An $l$-leveled LSM-tree has its last $l$ levels implemented as leveled with the remaining shallower $L-l$ levels as tiering; and thus, the average-case write amplification in an $l$-leveled tree is given as $\mathcal{O}(L-l) + \mathcal{O}(T \cdot l)$.
For lazy leveling, $l = 1$, which asymptotically makes the write amplification of the LSM-tree similar to that of a tiered LSM; for RocksDB-style hybrid leveling, $l = L-1$ makes it closely resemble that of a leveled LSM.
\textit{Write throughput}: The write throughput of hybrid LSM-designs falls in between that of leveling and tiering, and depends on the number of levels implemented as tiered or leveled.
\textit{Point lookups}: For a hybrid design with $L-l$ tiered and $l$ leveled levels, the average lookup cost for non-existing keys becomes $\mathcal{O}((T \cdot (L-l) + l) \cdot e^{-BPK})$, and that for existing keys is $\mathcal{O}(1 + (T \cdot (L-l) + l) \cdot e^{-BPK})$.
\textit{Range lookups}: For hybrid designs, the cost for range lookups fall in between that for leveling and tiering, and depends on the exact design of the tree.
\textit{Space amplification}: The space amplification for hybrid designs lies between the two extremes, and depends heavily on the exact tree design.
\textit{Delete performance}: For hybrid designs, the latency to persist deletes depends only on the implementation of the last level of the tree, i.e., if the last level is implemented as tiered, the latency is the same as for a tiered LSM; otherwise, it is similar to that of a leveled LSM-tree.
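To make the asymptotic comparison above concrete, the following Python sketch evaluates the leading-order cost terms for an $l$-leveled tree; constant factors are dropped, and the linear interpolation of the long-range-lookup cost between the two extremes is our own simplifying assumption.
\begin{verbatim}
import math

def lsm_costs(N, T, L, l, BPK, s, B):
    # l = L: pure leveling; l = 0: pure tiering.
    write_amp = (L - l) + T * l             # O(L-l) + O(T*l)
    runs = T * (L - l) + l                  # sorted runs probed by a lookup
    zero_lookup = runs * math.exp(-BPK)     # non-existing key
    existing_lookup = 1 + zero_lookup       # at least one I/O for the key
    short_range = runs                      # one seek per sorted run
    long_range = (runs / L) * s * N / B     # s*N/B (leveling) .. T*s*N/B (tiering)
    return dict(write_amp=write_amp, zero=zero_lookup,
                existing=existing_lookup, short=short_range, long=long_range)

print(lsm_costs(N=1e8, T=10, L=4, l=4, BPK=10, s=1e-4, B=64))  # leveling
print(lsm_costs(N=1e8, T=10, L=4, l=0, BPK=10, s=1e-4, B=64))  # tiering
\end{verbatim}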
\begin{table*}[t]
\centering
\resizebox{\textwidth}{!}{%
\begin{tabular}{l|l}
\toprule
\multirow{1}{*}{\textbf{Knobs}} & \textbf{Remark} \\
\midrule
\multirow{1}{*}{Size ratio} & \shortstack[l]{
\textit{Space-amplification}: Larger size ratio → fewer levels → reduce space-amp \\
\textit{Write-amplification}: Larger size ratio → each entry is merged more times on average → increased write amplification. \\
\textit{Update cost}: Larger size ratio → each updated entry is merged more times on average before it is consolidated → higher average worst-case update cost \\
\textit{Read cost}: Larger size ratio → fewer levels → lower expected lookup cost O(L). \\
\textit{Individual compaction bytes}: Larger size ratio → more entries to be copied during a compaction → more total compaction bytes. \\
\textit{*Update throughput}: ? \\
\textit{Modeling graph: https://www.desmos.com/calculator/q4ko9j1wmp}
} \\
\midrule
\multirow{1}{*}{Memory buffer} & \shortstack[l]{
\textit{Space-amplification}: Larger memory buffer → fewer levels → reduce space-amp in general and can be slightly optimized for skewed workloads. \\
\textit{Write-amplification}: Larger memory buffer → every entry participates in averagely the same number of compaction in total → No changes. \\
\textit{Update cost}: Larger memory buffer → fewer levels → an updated entry participates in fewer compactions in total before it is consolidated → lower average worst-case update cost \\
\textit{Read cost}: Larger memory buffer → fewer levels → lower expected lookup cost O(L). \\
\textit{Individual compaction bytes}: Larger memory buffer → Larger file size → more entries to be copied during a compaction → more individual compaction bytes. \\
\textit{*Read throughput}: ? \\
\textit{*Update throughput}: ? \\
\textit{Modeling graph: https://www.desmos.com/calculator/udmwdg5dt1}
} \\
\midrule
\multirow{1}{*}{File size} & \shortstack[l]{
\textit{Space-amplification}: No changes. \\
\textit{Write-amplification}: No changes. \\
\textit{Update cost}: No changes. \\
\textit{Read cost}: No changes. \\
\textit{Individual compaction bytes}: Larger file size → more entries to be copied during a compaction → more individual compaction bytes \\
\textit{*Read throughput}: ? \\
\textit{*Update throughput}: ? \\
\textit{Others}:
} \\
\midrule
\multirow{1}{*}{Bloom filters} & \shortstack[l]{
\textit{Space-amplification}: Larger bytes per key → increase space-amplification. \\
\textit{Write-amplification}: No changes. \\
\textit{Update cost}: No changes. \\
\textit{Read cost}: Larger bytes per key → more memory allocated to the Bloom filters → lower read cost O(L) \\
\textit{Individual compaction bytes}: Larger bytes per key → more bytes written in individual compactions. \\
\textit{*Read throughput}: ? \\
\textit{*Update throughput}: ? \\
\textit{Others}:
} \\
\midrule
\multirow{1}{*}{Block cache} & \shortstack[l]{
\textit{Space-amplification}: No changes. \\
\textit{Write-amplification}: No changes. \\
\textit{Update cost}: No changes. \\
\textit{Read cost}: Cache indexes, filters, data blocks for reads → significantly better read performance. \\
\textit{Read throughput}: Directly read from memory → significantly larger read throughput. \\
\textit{Individual compaction bytes}: No changes.\\
\textit{*Update throughput}: ? \\
\textit{Others}: Reads are more optimized for skewed workloads.
} \\
\midrule
\multirow{1}{*}{Threads} & \shortstack[l]{
\textit{Space-amplification}: No changes. \\
\textit{Write-amplification}: No changes. \\
\textit{Update cost}: No changes. \\
\textit{Read cost}: Shorten compaction duration → Timely reclamation that can potentially reduce read amplification if a burst of reads arrives right after multi-level compactions that are triggered by intensive updates. \\
\textit{Individual compaction bytes}: No changes. \\
\textit{Read throughput}: Shorten compaction duration → increase read throughput for interleaved workloads.\\
\textit{Update throughput}: Shorten compaction duration → increase overall write throughput. \\
\textit{*Others}: Shorten compaction duration → consolidate duplicate and discarded entries timely → reduce read amplification and lookup cost → and better space amplification?
} \\
\midrule
\multirow{1}{*}{File size multiplier} & \shortstack[l]{
\textit{Space-amplification}: No changes. \\
\textit{Write-amplification}: Larger file size multiplier → larger level contributes to a larger number of entries copied during a compaction with the same compaction frequency → increase write-amplification. \\
\textit{Update cost}: Larger file size multiplier → larger level contributes to a larger number of entries copied during a compaction with the same compaction frequency → more update cost. \\
\textit{Read cost}: No changes. \\
\textit{Individual compaction bytes}: Larger file size multiplier → larger level has larger compaction bytes but with the same compaction frequency. \\
\textit{Read throughput}: No changes. \\
\textit{Update throughput}: Shorten compaction duration → increase overall write throughput. \\
\textit{Others}:
} \\
\midrule
\multirow{1}{*}{Level$_0$ compaction trigger} & \shortstack[l]{
\textit{Space-amplification}: More Level$_0$ files → increase space-amplification. \\
\textit{Write-amplification}: Level$_0$ self compactions → increase write-amplification. \\
\textit{Update cost}: More Level$_0$ files → more compactions needed to consolidate an existing entry. \\
\textit{Read cost}: More Level$_0$ files → increase space-amplification and read amplification → higher read cost. \\
\textit{Individual compaction bytes}: More compaction bytes between Level$_0$ and Level$_1$ compactions.\\
\textit{Update throughput}: More Level$_0$ files → increase insert/update throughput. \\
\textit{*Read throughput}: ? \\
\textit{Others}:
} \\
\midrule
\multirow{1}{*}{Max compaction bytes} & \shortstack[l]{
\textit{Space-amplification}: If too small, redundant entries might not be compacted in a timely manner → increase space-amplification. \\
\textit{*Write-amplification}: ? \\
\textit{*Update cost}: ? \\
\textit{Read cost}: If too small, redundant entries might not be reclaimed in time → increase read cost. \\
\textit{*Read throughput}: ? \\
\textit{Individual compaction bytes}: No changes.\\
\textit{Update throughput}: If too small, further compactions need to be performed → reduced update throughput. \\
\textit{Others}:
} \\
\midrule
\multirow{1}{*}{Compaction readahead size} & \shortstack[l]{
\textit{Space-amplification}: No changes. \\
\textit{Write-amplification}: No changes. \\
\textit{Update cost}: No changes. \\
\textit{Read cost}: Perform bigger reads when doing compactions → increase read amplification and read cost. \\
\textit{*Read throughput}: ? \\
\textit{Individual compaction bytes}: No changes.\\
\textit{Update throughput}: No changes. \\
\textit{Others}:
} \\
\bottomrule
\end{tabular}
}
\caption{LSM-Tree compaction tuning knobs. \label{tab:cost}}
\vspace{-0.2in}
\end{table*}
\section{Introduction}
\label{sec:introduction}
\input{1-introduction}
\section{Background}
\label{sec:background}
\input{2-background}
\section{The Compaction Design Space}
\label{sec:compaction}
\input{3-compaction}
\section{Benchmarking Compactions}
\label{sec:methodology}
\input{4-methodology}
\section{Experimental Evaluation}
\label{sec:results}
\input{5-results}
\input{5-results_temp}
\section{Discussion}
\label{sec:tuning}
\input{6-tuning}
\section{Conclusions}
\label{sec:conclusion}
\input{8-conclusion}
\balance
{
\input{biblio}
}
\ifx\mode\undefined
\section{Introduction} \label{sec:intro}
The discovery of a $z>6$ gamma-ray burst (GRB) is a rare occurrence
that, thanks to the extreme luminosity of these sources, offers a window into the infant Universe, which is otherwise difficult to observe.
Long GRBs, with gamma-ray emission generally
longer than 2~s \citep{Kouveliotou1993a},
originate from the explosions of very massive stars \citep{Hjorth2003b,Stanek2003ApJ,WoosleyBloom2006a}.
Under the assumptions that the stellar initial mass function (IMF) in distant galaxies is not broadly different from that of closer objects and
that the opening angles do not evolve strongly with redshift,
the rate of GRBs can be used both to estimate the star-formation rate (SFR) \citep{Kistler2009a,Robertson2012a} and to study the effects of metallicity on supernovae (SNe)-Ibc and GRB progenitors
\citep{Grieco2012a}.
The SFR is expected to change at very high redshift with the transition from the first massive population III (pop-III) stars in the remote Universe to pop-II and pop-I stars \citep{Salvaterra2015a,Fryer2022a}.
How this happens remains an open question that may be addressed through GRB studies.
We note that the prompt emission is not affected by dust extinction, and thus GRBs can provide a census of obscured star formation at all redshifts \citep{Blain2000a}.
Due to their immense brightness, GRBs can also act as beacons illuminating the local circumburst medium
\citep[e.g.][]{Savaglio2003a,Prochaska2008a,Schady2011a,Watson2013a,Heintz2018e},
the interstellar medium (ISM) of their hosts
\citep[e.g.][]{Fynbo2006a,Savaglio2012a,Cucchiara2015a,Bolmer2019a,Heintz2019a}, and the surrounding intergalactic medium (IGM) in the line of sight
\citep{Totani2006a,Hartoog2015a}. They are therefore powerful probes of the ionisation and chemical enrichment history of the early universe.
To shed light on these open issues through very high-redshift GRBs, several mission concepts have been studied and proposed
\citep[e.g.][]{Amati2018a,Tanvir2021a,White2021a}.
So far, out of the $\approx555$ GRBs with a well-constrained spectroscopic redshift (as of 20 July 2022), only five have been detected\footnote{See \url{http://www.mpe.mpg.de/~jcg/grbgen.html}} at $z\gtrsim6$:
GRB 050904 \citep[$z=6.295$,][]{Kawai2006a,Tagliaferri2005a},
GRB 080913 \citep[$z=6.733$,][]{Greiner2009a,Patel2010AA},
GRB 090423A \citep[$z=8.23$,][]{Tanvir2009Nature,Salvaterra2009a},
GRB 130606A \citep[$z=5.913$,][]{Hartoog2015a,Chornock2013a}\footnote{We consider this burst to be at $z\sim6$ since it lies just below this threshold.}, and GRB 140515A \citep[$z=6.327$,][]{Chornock2014arXiv,Melandri2015a}.
An additional four have very low signal-to-noise spectra or photometric redshifts:
GRB 090429B \citep[$z\simeq9.4$,][]{Cucchiara2011a},
GRB 100905A \citep[$z\simeq7.88$,][]{Bolmer2018a},
GRB 120521C \citep[$z\simeq6$,][]{Laskar2014a}, and
GRB 120923A \citep[$z\simeq7.8$,][]{Tanvir2018a}.
Some of these events show larger prompt energetics than those at low redshift, but this is likely the result of observational biases, and a cosmic evolution of the GRB energy release function has not been confirmed yet \citep[e.g.][and references therein]{Tsvetkova2017,Tsvetkova2021}.
In fact, also very-high redshift GRBs follow the $E_{\mathrm{peak,}z}-E_\mathrm{iso}$ and $E_{\mathrm{peak,}z}-L_\mathrm{iso}$ correlations
\citep[`Amati' and `Yonetoku' correlations; ][]{Amati2002a,Yonetoku2004ApJ}.
The same is true for the afterglow luminosity (Kann et al. 2022a, in prep.), which is larger only when compared with the low-luminosity local events ($z<0.2$).
The large prompt energy release is well matched by a larger X-ray luminosity of their afterglows, as indeed the $L_{\mathrm{X}}/E_{\mathrm{iso}}$ is similar to that of low-redshift events. These results suggest that the powering mechanisms and progenitors do not evolve with redshift.
On the other hand, some studies have suggested that jets from GRBs in the high-redshift universe are more narrowly collimated than those at lower redshifts \citep[e.g.][]{Lloyd-Ronning2019a,Laskar2014a,Laskar2018a}.
Here we present a follow-up of the bright GRB 210905A, the tenth burst with redshift $z\gtrsim6$ detected in the last 16 years.
It was detected by the
\textit{Neil Gehrels Swift Observatory} ~\citep[][\textit{Swift} hereafter]{Gehrels2004a} and Konus-\textit{Wind} ~\citep{Aptekar1995a}. X-ray as well as optical and near-infrared (NIR) follow-up observations of its bright afterglow led us to determine a spectroscopic redshift of $z=6.312$ \citep[refined with respect to][]{Tanvir2021GCN30771}.
The burst was also detected by the Cadmium Zinc Telluride Imager (CZTI) on-board {\it Astrosat} \citep{Prasad2021GCN30782} and, following the detection by the \textit{Swift}{} Burst Alert Telescope (BAT, \citealt{Barthelmy2005SSRv}), it was also found via a targeted search in data of the Gamma-ray Burst Monitor (GBM) on-board {\it Fermi} \citep{Veres2021GCN30779}.
In \S\ref{sec:data} we describe the observations of both the GRB and the afterglow, and in \S\ref{sec:mod} we present the analysis of the data. In \S\ref{sec:dis} we discuss the results and compare them with other bursts at low and high redshift, and we draw our conclusions in \S\ref{sec:con}.
Throughout this work, the flux density of the afterglow is described as $F_\nu (t) \propto t^{-\alpha} \nu^{-\beta}$. A $\Lambda$CDM cosmological model with $\Omega_M = 0.308$, $\Omega_{\Lambda} = 0.692$, and $H_0 = 67.8$ km s$^{-1}$ Mpc$^{-1}$ \citep{Planck2016a} has been assumed for calculations.
All data are in the observer frame and $1\sigma$ errors are used throughout the paper, unless otherwise specified.
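For reference, the luminosity distance implied by these cosmological parameters, and the conversion from a bolometric fluence $S$ to an isotropic-equivalent energy, can be reproduced with the following Python sketch (using \texttt{astropy}; the fluence value is a placeholder, not a measurement):
\begin{verbatim}
import numpy as np
import astropy.units as u
from astropy.cosmology import FlatLambdaCDM

# Cosmology adopted in this paper (Planck 2016)
cosmo = FlatLambdaCDM(H0=67.8 * u.km / u.s / u.Mpc, Om0=0.308)

z = 6.312
d_l = cosmo.luminosity_distance(z).to(u.cm)

S = 1.0e-5                                      # placeholder fluence [erg/cm^2]
E_iso = 4 * np.pi * d_l.value**2 * S / (1 + z)  # standard k=1 conversion
print(f"d_L = {d_l:.3e}, E_iso = {E_iso:.3e} erg")
\end{verbatim}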
\section{Observations}\label{sec:data}
\subsection{Gamma-ray and X-ray observations.}
\label{sec:kw}
GRB 210905A was discovered by BAT on-board \textit{Swift}{} at $T_0=00$:12:41.3 UT on 5 September 2021 \citep{Sonbas2021GCN30765}.
The BAT light curve shows a complex
structure with three pulses, detected until $\sim800$ s after the burst trigger.
Since GRB 210905A was too weak to trigger\footnote{See \S4.3 of \cite{Tsvetkova2021} for details on the KW trigger sensitivity.} Konus-\textit{Wind}{} (KW),
the burst data are available only from the instrument's waiting mode, as first reported by \cite{Frederiks2021GCN30780}.
In this mode, count rates with a coarse time resolution of 2.944~s are recorded continuously
in three energy bands: G1 ($20-100$~keV), G2 ($100-400$~keV), and G3 ($400-1500$~keV).
A Bayesian block analysis of the KW waiting mode data in S1 (one of the two NaI(Tl) detectors)
reveals three (separated in time) emission episodes,
each featuring a statistically significant count rate increase in the combined G1+G2 band (Figure~\ref{fig:gammaXopt}),
while no statistically significant emission was detected in the G3 band throughout the burst.
The first episode, which triggered \textit{Swift}/BAT,
started at $\simeq T_0-30$~s and ended at $\simeq T_0+11$~s (hereafter Pulse~1).
The weaker second episode ($\sim T_0+344$~s to $\sim T_0+426$~s; Pulse~2) coincided in time with the bright flare in the XRT windowed-timing (WT) mode light curve around $T_0+400$~s (Figure~\ref{fig:gammaXopt}).
The onset of the final emission episode, observed by KW from $\sim T_0+747$~s to $\sim T_0+862$~s (Pulse~3),
is clearly visible in the BAT mask-weighted data, which are available up to $\sim$800~s after the trigger.
The $T_\mathrm{90}$ duration\footnote{The total duration ($T_\mathrm{100}$) derived from the KW observation is $\sim$890~s (at the $5\sigma$ level).} of the GRB~210905A prompt emission derived from the KW observation is $\sim870$~s.
\textit{Swift}/XRT started observing the BAT error circle $91.7$~s after the trigger and found an unknown X-ray source at the UVOT-enhanced position coordinates RA (J2000) = 20$^{\rm h}$36$^{\rm m}$11$\fs$64, Dec. (J2000) = $-$44$^{\circ}$26\arcmin24\farcs3 with a final uncertainty of 1\farcs5 \citep[][\textit{Swift}/XRT catalogue]{Beardmore2021GCN30768}.
Pointed \textit{Swift}{} observations continued until $3.8$ Ms after the GRB, when the source became too faint to be detected. Light curves and spectra, as well as the result of their modelling, have been obtained from the \textit{Swift}/XRT repository \citep{Evans2007a,Evans2009a}.
However, to build more accurate multi-wavelength spectral energy distributions (SEDs), given that some data available in the \textit{Swift}/XRT repository suffer from bad centroid determination, we have processed the \textit{Swift}{} data corresponding to the epochs of our SED analysis (obs. IDs 01071993001/002/003, Fig. \ref{fig:sedoptx}).
To reduce the data, the software package \texttt{HeaSoft} 6.29 was used\footnote{
\url{http://heasarc.gsfc.nasa.gov/docs/software/lheasoft}} with
the latest calibration file available\footnote{\textit{Swift}/XRT calibration files: 20210915.}. For the data processing, we used standard procedures, consisting of the use of the package {\tt xrtpipeline}, available within the {\sc FTOOLS} distribution\footnote{\url{http://heasarc.gsfc.nasa.gov/ftools/}}, with standard-grade filtering.
Using the most refined position provided by the \textit{Swift}{} team, the selection of the GRB position in the X-ray data
and the extraction of both source and background spectra were done with the {\tt xselect} package, while for the construction of the corresponding ancillary response file (.arf) we used {\tt xrtmkarf} on each corresponding epoch exposure file.
In the following, a Galactic equivalent hydrogen column density of $N_H=3.38\times10^{20}\, \textnormal{cm}^{-2}$ is adopted \citep{Willingale2013a}.
\input{texdata.tex}
\subsection{Optical/NIR imaging and photometry}
\label{sec:optnir}
\textit{Swift}/UVOT started observing about 156 s after the trigger but no credible afterglow candidate was found \citep{Siegel2021GCN30785}. The MASTER Global Robotic Net \citep{Lipunov2010a} was also pointed at GRB 210905A 6 s after notice time and 414 s after trigger time but could not detect any afterglow candidate \citep{Lipunov2021GCN30766}.
We obtained optical/NIR observations with the $0.6$m robotic Rapid Eye Mount telescope \citep[REM,][]{Zerbi2001a}, starting 428~s after the burst.
A transient source was detected immediately in the $H$ band and later in $i^\prime z^\prime ZJK$ bands (i.e. all except $g^\prime$ and $r^\prime$).
Observations continued for about 3 hr before the declining afterglow brightness fell below the instrument detection limits in all filters \citep{Davanzo2021GCN30772}. Images were automatically reduced using the jitter script of the \texttt{eclipse} package \citep{Devillard1997a}, which aligns and stacks the images to obtain one average image for each sequence. A combination of the IRAF \citep{Tody1993} and SExtractor \citep{bertin2010sextractor} packages was then used to perform aperture photometry.
We triggered Bessel $R$- and $I$-band observations with the 1m telescope of the Las Cumbres Observatory Global Telescope (LCOGT) network, equipped with the Sinistro instrument, at the Cerro Tololo Inter-American Observatory (CTIO), Chile.
The midpoints of the first epoch are $t_I=1.06$ hr and $t_R=1.29$ hr, in the $I$ and $R$ bands respectively.
The data provided by the LCO are reduced using the BANZAI pipeline \citep{mccully2018real} that performs bias and dark subtraction, flat-fielding, bad-pixel masking, and astrometric calibration.
Afterwards, we use our own pipeline, which aligns and stacks the images using the astroalign Python package \citep{beroiz2020astroalign}, and afterwards uses SExtractor to perform the photometry and calibration against a sample of USNO-B catalogue stars \citep{monet2003usno}.
Using the data-reduction pipeline from LCO, and our relative photometry pipeline\footnote{The photometry was confirmed after the cross-calibration mentioned below.}, we calculate a magnitude of $I=19.46\pm0.15$ mag and a $3\sigma$ upper limit of $R>22.44$ mag. The lack of an $R$-band detection alerted us to the possibility that this burst may lie at very high redshift \citep[$z>5$, first reported by ][]{Strausbaugh2021GCN30769,Strausbaugh2021GCN30770}.
GRB 210905A was observed simultaneously in $g^\prime r^\prime i^\prime z^\prime JHK$ with
GROND \citep[Gamma-Ray Burst Optical Near-Infrared Detector;][]{Greiner2008a,Greiner2019PASP}
mounted on the 2.2m MPG telescope at ESO La Silla Observatory in Chile \citep{Nicuesa2021GCN30781}. The first epoch observations were done around 23 hr after the GRB trigger. The afterglow was detected only in the $z^\prime JHK$ bands. A second set of observations obtained 7 hr later was shallower and yielded only upper limits. Subsequent follow-up observations were obtained on 7 and 8 September 2021, but the afterglow was also not detected in the latter epochs.
We continued our ground-based follow-up using both the \textit{VLT}/HAWK-I \citep[High Acuity Widefield K-band Imager,][]{Pirard2004a} NIR imager on Paranal, as well as the
Dark Energy Camera (DECam) mounted on the 4m Victor Blanco telescope at CTIO.
We also used the acquisition camera of the ESO \textit{VLT}/X-shooter spectrograph to obtain $g^\prime r^\prime I_{\rm Bessel} z^\prime $ imaging before moving on to spectroscopy.
We obtained a last ground-based observation 87 d after the GRB with \textit{VLT}/FORS2 in the $I_{\rm Bessel}$ band.
Finally, the field was observed with the \textit{Hubble Space Telescope (HST)} on 24 April 2022. At this epoch four dithered observations with a total duration of 4797 s were obtained in the $F140W$ filter. The data were obtained from the MAST archive and processed with {\tt astrodrizzle} to create a final combined
charge transfer efficiency corrected
image with a pixel scale of 0\farcs07/pixel.
Aperture photometry was performed with a radius of 0\farcs4 to minimise any contribution from the nearby sources (see Figure \ref{fig:forshawk}).
X-shooter and GROND optical/NIR images were reduced in a standard manner using PyRAF/IRAF \citep{Tody1993}. In particular, GROND data reduction was done with a customised pipeline \citep{Kruhler2008a} that is based on standard routines in IRAF.
FORS $I$-band and HAWK-I $JHK_s$-band data have been reduced using the ESO Reflex environment \citep{esoreflex2013a}.
We obtained PSF photometry with the DAOPHOT and ALLSTAR tasks of IRAF. PSF-fitting was used to measure the magnitudes of the GRB afterglow.
Only for the late-time FORS2 observation in $I_{\rm Bessel}$ at 87 days and \textit{HST}-$F140W$ at 232 days did we use aperture photometry.
All optical photometry except $I_{\rm Bessel}$-band data were calibrated against the SkyMapper catalogue \citep{Wolf2018a}, while the ground-based NIR photometric calibration was performed against the 2MASS catalogue \citep{Skrutskie2006a}. This procedure results in a typical systematic accuracy of 0.04~mag in $g^\prime r^\prime i^\prime z^\prime$, 0.06~mag in $JH$ and 0.08 mag in $K_s$.
To cross-calibrate all the $I$-band imaging we
applied the Lupton formulae to a set of local standard stars from the SkyMapper catalogue.
The $I$ filters used by X-shooter and LCO extend beyond 10000~{\AA}. Therefore, we expect that not all the flux is dimmed by the Lyman-$\alpha$ dropout at $\sim8900$~{\AA} in these filters. On the contrary, the FORS2 $I$-band filter has negligible transmission above Lyman-$\alpha$ (at the redshift of GRB 210905A). Therefore, we speculate that the possible (note the large error) late $I$-band emission (see \S~\ref{sec:constant}) does not originate from the afterglow, but instead from a foreground source.
The optical/NIR afterglow lies at coordinates RA (J2000) = $20^h36^m11\fs57$, Dec. (J2000) = $-44^{\circ}26\arcmin24\farcs7$
as measured in our first HAWK-I image and calibrated against field stars in the GAIA DR2 catalogue \citep{Gaia2018a} with the astrometric precision being 0\farcs15. This refines the position
reported by LCO \citep{Strausbaugh2021GCN30769} and is in agreement with the more
precise localisation provided by ALMA \citep{Laskar2021GCN30783}.
Table~\ref{tab:photall} provides a summary of all photometry of the transient (non-relevant upper limits are not reported). All reported magnitudes are in the AB photometric system and not corrected for the Galactic foreground extinction of $E(B-V)=0.029$ mag \citep{SchlaflyFinkbeiner2011a}.
\subsection{X-shooter spectroscopy and redshift}
Starting $\sim2.53$ hr after the GRB detection, we obtained UV to NIR spectroscopy
of the afterglow with the X-shooter instrument \citep{Vernet2011a} mounted on the \textit{VLT} on Cerro Paranal (ESO, Chile), via the Stargate Large Programme for GRB studies.
The afterglow is well detected in the red part of the visible arm.
A clear break is detected around 9000~{\AA}, which we interpret as the Lyman-$\alpha$ break (first reported in \citealt{Tanvir2021GCN30771}). Other lines such as
\ion{Fe}{ii}, \ion{Al}{ii}, \ion{C}{iv} and \ion{Si}{ii}
and fine structure lines are visible and display two velocity components, which belong to the ISM of the same galaxy. All these lines allow us to determine $z=6.312$ as the redshift of the GRB.
A very strong foreground system at $z = 2.8296$ (\ion{Mg}{ii}, \ion{Fe}{ii} lines)
and another at $z = 5.7390$ (\ion{C}{ii}, \ion{Fe}{ii}, \ion{C}{iv}, \ion{Si}{iv} lines) are also present.
The details concerning the reduction and analysis of the absorption lines in the X-shooter spectra are given in \cite{Saccardi2022a}.
This high redshift explains the non-detection by UVOT and MASTER and the red $R_C-I_C$ and $r^\prime-z^\prime$ colours found with LCO and X-shooter as due to Lyman dropout. In \cite{Fausey2022a} we will study the IGM neutral fraction in light of the GRB 210905A afterglow spectrum.
\begin{figure}
\begin{center}
\includegraphics[width=\columnwidth,angle=0]{sed-prompt.pdf}
\caption{Optical/NIR to gamma-ray SEDs of the prompt phase at five different epochs (see \S~\ref{sec:gammaxo}). All SEDs have been modelled with a double broken power-law following the expectations from synchrotron theory. Note that we could not constrain the low-energy break during the X-ray flare. In the fourth SED, we have simply scaled the solution from the last epoch (there is no KW detection during this epoch). Note that the photon indices described in the text correspond to spectral indices $1/3$, $-1/2$, $-1.3$ shown here. X-ray data are corrected for Galactic and intrinsic absorption.}
\label{fig:sedgammaxo}%
\end{center}
\end{figure}
\begin{table*}
\centering
\footnotesize
\caption{Fits to the prompt emission spectra.}
\label{tabFits}
\setlength{\tabcolsep}{0.6em}
\begin{threeparttable}
\begin{tabular}{lcccccccc}
\toprule
Spectrum$^a$ & Instruments &Model$^{f}$ &Time interval & $\alpha$ & $E_\textrm{peak}$ or $\nu_m$ & $E_\textrm{break}$ or $\nu_c$ & Flux (15--1500 keV) &$\chi ^{2} (\textrm{d.o.f.})$ \\
& & & (relative to $T_0$, s) & (photon index) & (keV) & (keV) &$(10^{-7}$ erg cm$^{-2}$ s$^{-1})$ &\\
\midrule
`peak'$^b$& BAT+KW&CPL & [$-0.465$, $2.479$] &$-0.66_{-0.30}^{+0.35}$ & $144_{-28}^{+56}$ & & $2.83_{-0.40}^{+0.56}$ &40.2 (58)\\[2ex]
\midrule \\
Pulse~1& BAT+KW &CPL & [$-29.905$, $11.311$] & $-0.99_{-0.17}^{+0.18}$ & $127_{-19}^{+31}$ & & $1.14_{-0.11}^{+0.14}$ &36.5 (58)\\
Pulse~1& BAT+KW& DBPL & [$-29.905$, $11.311$] & & 127 & $27.09_{-3.34}^{+3.41}$ & & 35.2 (56)\\[4ex]
X-ray flare$^{c}$& XRT+KW& DBPL & [$80.0$, $120.0$] & & $1.46_{-0.26}^{+0.21}$ & unconstrained & & 225.9 (281)\\[4ex]
Pulse~2$^{c}$& XRT+BAT+KW& CPL& [$343.983$, $426.415$] & $-0.80_{-0.47}^{+0.58}$ & $70_{-13}^{+22}$ & & $0.28_{-0.05}^{+0.06}$&48.6 (58)\\
Pulse~2$^{c}$& XRT+BAT+KW&DBPL & [$343.983$, $426.415$] & & 50 & $1.13_{-0.10}^{+0.11}$ & & 714.2 (714)\\[4ex]
Pulse~3$^{d}$& BAT&CPL & [$747.311$, $797.359$] & $-0.89_{-0.30}^{+0.37}$ & $154_{-36}^{+77}$ & & $0.76_{-0.11}^{+0.17}$& 55.2 (58)\\
Pulse~3$^{e}$& KW &CPL & [$747.311$, $862.127$] & $-0.88_{-0.30}^{+0.76}$ & $167_{-61}^{+88}$ & & $0.97_{-0.23}^{+0.24}$ & 0 (0)$^{e}$\\
Pulse~3$^{d,g}$& REM+BAT+KW&DBPL & [$747.311$, $797.359$] & & 154 & $0.006$\tnote{g} & & 59.3 (56)\\
\bottomrule
\end{tabular}
\begin{tablenotes}
\item[a]{All spectra, except the first, are time-averaged.}
\item[b]{This spectrum was used to calculate the peak energy flux.}
\item[c]{The interval covered by XRT.}
\item[d]{The interval covered by BAT.}
\item[e]{KW-only fit; for the CPL model, the 3-channel fit has 0 degrees of freedom.}
\item[f]{CPL stands for cut-off power-law. DBPL stands for double-broken power-law, used for the synchrotron model. In this last case, the power-law indices were fixed as described in \S\ref{sec:gammaxo}.}
\item[g]{This break was fixed to match the $H$-band data.
}
\end{tablenotes}
\end{threeparttable}
\end{table*}
\section{Modelling and results}\label{sec:mod}
\subsection{Joint BAT-KW modelling}\label{sec:kwbat}
To derive the broad-band spectral parameters of the prompt emission of this burst, we performed a joint spectral analysis of the BAT data ($15-150$ keV) and the KW waiting-mode data ($20-1500$~keV)
for all three prompt emission episodes in a way similar to that described in \cite{Tsvetkova2021}.
The spectral data from the two instruments were simultaneously fit in
\texttt{Xspec v12.12.0} using three different spectral models (see below), all normalised to the energy flux in the $15-1500$ keV range.
The most reliable results for all three emission episodes were obtained with a power-law function with high-energy exponential cutoff (CPL).
Compared to the CPL, a simple power-law (PL) function fits the data with significantly worse statistics ($\Delta \chi^2 > 7$ in all cases)
and systematically overestimates the high-energy part of the spectra.
The Band function \citep{Band1993a} does not improve the fit statistics as compared to the CPL.
For all spectra, the Band fits\footnote{The Band function has parameters
$\alpha$, $\beta$ and $E_\mathrm{peak}$, not to be confused with the decay and spectral indexes of the afterglow, defined in \S\ref{sec:intro}.} provide values of the index $\alpha$ and $E_\mathrm{peak}$ almost identical to the CPL fits (and consistent within the large errors),
and set only an upper limit to the high-energy photon index ($\beta < -2.3$),
due to the sparse KW data which do not provide enough sensitivity and spectral resolution to constrain the spectral index above 100 keV.
Our fits with the CPL function are summarised in Table~\ref{tabFits}.
The time-averaged spectrum of the brightest episode (Pulse~1) is best described by $\alpha \sim -0.99$ and observed $E_\mathrm{peak} \sim$127~keV.
The spectrum of the weaker episode (Pulse~2) is characterised by a similar, within errors, $\alpha$, and an about halved $E_\mathrm{peak} \sim$70~keV.
The third emission episode is $\sim 115$~s long and
only partially covered by \textit{Swift}/BAT. In this case, we analysed the spectra extracted for two time intervals:
the first spectrum corresponds to the time interval of joint KW and BAT detection ($\alpha \sim -0.89$, $E_\mathrm{peak} \sim$154~keV),
and the second one covers the whole third emission episode ($\alpha \sim -0.88$, $E_\mathrm{peak} \sim$167~keV).
For the latter interval, the fits were made using the KW 3-channel spectrum alone and the obtained model flux was used to calculate the Pulse~3 energy fluence.
The $15-1500$ keV energy fluences of Pulses~1, 2, and 3, derived from our time-averaged fits, are summarised in Table \ref{tabEiso}, together with the
fluence integrated over all three emission episodes. We use these results to calculate the isotropic energy (see also \S\ref{sec:con-prompt}).
The spectrum in the interval ($T_0-0.465$ s, $T_0+2.479$ s) inside Pulse~1, which corresponds to the peak count rate,
is characterised by $\alpha \sim -0.66$ and $E_\mathrm{peak} \sim$144~keV.
Using this spectrum and the BAT light curve, we estimate the 1~s peak energy flux of GRB~210905A to be $3.83_{-0.54}^{+0.73} \times 10^{-7}$~erg~cm$^{-2}$~s$^{-1}$ ($15-1500$ keV).
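For reference, the CPL (Comptonized) model used in these fits can be sketched in its standard parametrization as follows; the normalisation and pivot energy below are illustrative values.
\begin{verbatim}
import numpy as np

def cpl(E, alpha, Epeak, A=1.0, Epiv=100.0):
    # N(E) = A (E/Epiv)^alpha exp(-(2+alpha) E / Epeak), E in keV;
    # with this parametrization the nu F_nu spectrum peaks at Epeak.
    return A * (E / Epiv)**alpha * np.exp(-(2.0 + alpha) * E / Epeak)

E = np.geomspace(15.0, 1500.0, 512)               # BAT+KW band [keV]
nufnu = E**2 * cpl(E, alpha=-0.66, Epeak=144.0)   # 'peak' spectrum fit
print(E[np.argmax(nufnu)])                        # ~144 keV, as expected
\end{verbatim}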
\subsection{Joint modelling of the prompt emission from gamma-rays to the optical}
\label{sec:gammaxo}
In the previous section we have analysed the gamma-ray spectra during the three pulses and found that they can be modelled
almost equally well with a CPL or a Band function with very similar (within errors) low-energy photon index $\sim-0.8<\alpha<-1.2$ and $E_{\rm peak}$.
The high-energy index of the Band function is $\beta<-2.3$, poorly constrained by the sparse KW data.
Values of $-1$ and $-2.3$ are very typical low- and high-energy photon indices for GRBs \citep[e.g.][]{Preece1998a,Nava2011a}. Following early works \citep{Frontera2000a,Rossi2011a,Zheng2012a}, recently \cite{Oganesyan2019a}
have shown that the low-energy spectra ($<100$ keV) of the majority of \textit{Swift}/BAT GRBs actually have
a low-energy spectral break in the $2-30$ keV range, in addition to the typical break
corresponding to the peak energy at larger energies.
Such a break has also been discovered at higher energies, up to few hundreds of keV, in \textit{Fermi} bursts \citep{Ravasio2018a,Ravasio2019a}, and has been studied in detail \citep{Gompertz2022} in the temporally long merger event GRB 211211A \citep{Rastinejad2022}. It has been suggested to be a common feature of GRB prompt emission spectra \citep{Toffano2021a}.
Therefore, the low-energy part of the spectrum, with photon index $-1$, splits into two power-law segments below and above the
low-energy break, whose photon indices have distributions centred around $-2/3$ and $-3/2$ (or $1/3$ and $-1/2$ for the flux density spectrum $F_\nu$), respectively.
These indices are the same as those
below and above the cooling break $\nu_c$
and expected by the synchrotron theory in the fast-cooling regime \citep[see also][]{Ravasio2018a,Ravasio2019a}.
Further confirmation of these empirical fits was obtained by direct fitting of prompt GRB spectra with a synchrotron model \citep{Ronchi2020a,Burgess2020a} and the synchrotron interpretation is discussed for example in \cite{Ghisellini2020a}.
To determine if the prompt emission of GRB 210905A is in agreement with these theoretical expectations,
we have modelled the NIR and X- to gamma-ray SEDs of five epochs during the whole prompt emission with a double broken power-law with photon indices fixed to the synchrotron model predictions.
That the optical-to-gamma emission is the result of a common radiative process is justified by the simultaneous evolution of the optical, X-ray and gamma-ray prompt emission.
The selected epochs are the three gamma-ray pulses, the first X-ray flare at $\sim120$~s and
an additional epoch at $\sim630-690$ s simultaneous to an $H$-band observation. This is the only
epoch before the last pulse with few counts in the BAT spectrum. We have fixed the high-energy break ($\nu_m$, the frequency corresponding to the minimum injection energy in a fast-cooling synchrotron model) to the break energy in the Band modelling above.
The high-energy photon index above this break has been fixed to $-2.4$, that is also consistent with the Band fit.
The results are shown in Table \ref{tabFits} and in Figure \ref{fig:sedgammaxo}.
The analysis of the X-ray flare alone shows that it is well modelled by $\nu_m$ at $\sim1$ keV, a photon index\footnote{We could not constrain the low-energy break for this epoch.} $-2.4$
and intrinsic absorption $N_H=7.7^{+3.6}_{-3.2}\times10^{22}\, \textnormal{cm}^{-2}$.
In the following, we fixed the intrinsic hydrogen column density to this value.
During the first two pulses the data are consistent with a broken power-law with photon index 0.5.
In the last two SEDs, we also include the $H$-band follow-up obtained with REM (Figure~\ref{fig:gammaXopt}).
Note that in the fourth SED we have simply scaled the solution from the last epoch, because there are basically just two measurements for three possible free parameters\footnote{Two breaks and the peak flux.}, not enough to constrain all breaks. Therefore, this is not shown in Table~\ref{tabFits}.
For these last SEDs (i.e. before and during Pulse 3) the $H$-band observation is below the extrapolation of the photon index from the gamma-rays, and thus $\nu_c$ must be in between the $H$ and X-ray bands. We further discuss the implications of this finding in \S\ref{sec:ori-prompt}.
Unfortunately, for both SEDs the lack of any colour information and possible contribution from the emerging afterglow
in the observed optical/NIR prevents us from affirming without doubt that the low-energy photon index is $-2/3$. However, we can confirm that for both SEDs the synchrotron model is in agreement with the observations.
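For illustration, the double broken power-law used for these SEDs can be sketched in flux density as follows (sharp breaks for brevity; the normalisation is arbitrary):
\begin{verbatim}
import numpy as np

def sync_dbpl(nu, nu_c, nu_m, A=1.0):
    # Fast-cooling synchrotron shape: F_nu ~ nu^(1/3) below nu_c,
    # nu^(-1/2) between nu_c and nu_m, and nu^(-1.4) above nu_m
    # (photon indices -2/3, -3/2, and -2.4, respectively).
    low = (nu / nu_c)**(1.0 / 3.0)
    mid = (nu / nu_c)**(-0.5)
    high = (nu_m / nu_c)**(-0.5) * (nu / nu_m)**(-1.4)
    return A * np.where(nu < nu_c, low, np.where(nu < nu_m, mid, high))
\end{verbatim}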
\begin{figure*}
\begin{center}
\includegraphics[width=0.99\textwidth,angle=0]{lcox.pdf}
\caption{Optical and X-ray light curves. The dashed lines show the fit to each single band assuming a smoothly broken power-law model. The grey intervals are not considered in the first modelling of the light curves. Those in light blue have been used for the SED fitting (see \S \ref{sec:xo}). The X-ray light curve is computed at 1.73 keV, the log-mean of the XRT band. The last $H$-band data corresponds to the \textit{HST}/F140W detection. No colour correction was necessary as explained in \S\ref{sec:break}.}
\label{fig:lcoptx}%
\end{center}
\end{figure*}
\begin{figure}
\begin{center}
\includegraphics[width=\columnwidth,angle=0]{sed-aft.pdf}
\caption{Optical/NIR to X-ray SEDs of GRB 210905A at three different epochs (0.1, 1.0, 2.18 d). The best fit with a broken power-law is shown in all three epochs, and the best-fit parameters are shown in Table \ref{tab:sed} (see \S \ref{sec:xo}). The dotted- and dashed-lines show the absorbed and unabsorbed models, respectively.
}
\label{fig:sedoptx}%
\end{center}
\end{figure}
\begin{table}
\centering
\caption{Optical/NIR to X-ray modelling of the afterglow (see Figure~\ref{fig:sedoptx}).}
\begin{threeparttable}
\setlength{\tabcolsep}{0.3em}
\begin{tabular}{lccccc}
\toprule
Model &Time & $\beta_\mathrm{opt}$\tnote{a} & \multicolumn{2}{c}{Break (keV)} & $\chi^2/$ d.o.f. \\
&d & & obs. & Theor.\tnote{b} & \\
\midrule
BPL& 0.1 & $0.62\pm0.04$ & $1.7_{-1.6}^{+2.6}$ & $1.4\pm1.2$ & 9.8/16 \\
SPL& 0.1 & $0.63\pm0.03$ & -- & -- & 9.9/17 \\
BPL& 1.0 & $0.60\pm0.04$ & $0.35_{-0.13}^{+0.28}$ & 0.35 & 63.1/53 \\
SPL& 1.0 & $0.71\pm0.02$ & -- & -- & 75.8/54 \\
BPL& 2.2 & $0.56\pm0.16$ & $0.18_{-0.03}^{+0.06}$ & $0.22\pm0.19$ & 22.8/25 \\
SPL& 2.2 & $0.80\pm0.03$ & -- & -- & 19.5/26 \\
\bottomrule
\end{tabular}
\begin{tablenotes}
\footnotesize
\item[a] In the broken power-law we assumed $\beta_{X}=\beta_\mathrm{opt}+0.5$.
\item[b] Obtained from the best-fit value at 1 d with $\nu(t)=\nu(1d)(t/1d)^{-k}$, with $k=0.5$ after 1 d (ISM, slow cooling scenario) and $k=0.6$ before 1 d assuming energy injection (see \S\ref{sec:eninj}).
\end{tablenotes}
\label{tab:sed}
\end{threeparttable}
\end{table}
\subsection{Joint afterglow light curve and SED}\label{sec:xo}
Figure \ref{fig:lcoptx} shows both optical/NIR and X-ray light curves of the afterglow. Regions in grey have not been considered in this section because of: i) the presence of flares likely due to long-lasting activity from the central engine
and ii) a possible late break when compared to the earlier evolution (Figure~\ref{fig:lcoptx}) that we discuss below in \S \ref{sec:break}.
A complete understanding of the afterglow behaviour would require a full numerical simulation. Nevertheless, we can derive some conclusions by modelling the SEDs and light curves of the afterglow.
We modelled the afterglow SED from NIR to X-ray frequencies at three different epochs, 0.1, 1.0, and 2.18 days, using \texttt{Xspec v12.12.0} \citep{Arnaud1996a}. We have not considered the optical data ($z$-band and bluer bands) because they are affected by the Lyman-$\alpha$ break and thus do not add anything useful to this modelling.
The redshift was fixed to 6.312 and we fixed the Galactic and intrinsic hydrogen column density (see \S\ref{sec:kw}).
To avoid being affected by the uncertain gas absorption, we have not considered data below 0.5 keV in the modelling.
We have modelled the NIR-to-X-ray SED
both with a single and a broken power-law with $\beta_{X}=\beta_{\rm opt}+0.5$, at all three epochs. The best-fits
are shown in Figure~\ref{fig:sedoptx} and their
parameters are shown in Table~\ref{tab:sed}.
All fits give negligible dust extinction $A_V\lesssim0.03$ mag, independent of the extinction
law\footnote{In the \texttt{zdust} model.}, which is not unusual for high-$z$ GRB afterglows (see \S\ref{sec:avnh}).
It is not straightforward to decide between the single and broken power-law models as the SEDs are fit comparably well in both cases. However, we note that in the first epoch they give basically the same value for the low-energy spectral index. Therefore, we conclude that $\nu_c$ is within or above the X-ray band at $0.1$~d. That $\nu_c$ then lies between the two bands is even clearer in the second SED at 1~d, whose best fit gives $\beta_\mathrm{opt}=0.60\pm0.04$, and thus an electron index $p=2.20\pm0.08$. To confirm these findings, we need to also consider the light-curve evolution.
We have modelled the optical and NIR light curves simultaneously with a smoothly broken power-law \citep{Beuermann1999a}:
$F = (F_1^{-\kappa}+ F_2^{-\kappa})^{-1/\kappa}$,
where $F_\textrm{x}=f_\textrm{break}(t/t_\textrm{break})^{-\alpha_x}$,
$f_\textrm{break}$ being the flux density at break time $t_\textrm{break}$, $\kappa$ the break smoothness parameter, and the subscripts $1,2$ indicate pre- and post-break, respectively. We find a shallow break with large uncertainty at $t_\textrm{break,~opt} =0.99\pm 0.73$ d ($85.9 \pm62.7$~ks)
and decay indices $\alpha_{1,\rm opt}=0.69\pm0.04$ and $\alpha_{2,\rm opt}= 0.94 \pm 0.04$, with break smoothness $\kappa=10$ fixed \citep{Zeh2006a}\footnote{We have also evaluated smaller fixed $\kappa$ values (5, 2, 1) and find that $\chi^2/\mathrm{d.o.f.}$ increases while
$t_b$ remains similar; however, already at $\kappa=5$ the uncertainty on the break time exceeds its value, and it grows further for smaller $\kappa$.}.
With respect to a simple power-law, the $\chi^2/\mathrm{d.o.f.}$ decreases from $1.36$ to $0.92$.
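As a concrete illustration of this fitting procedure, the short Python sketch below implements the Beuermann profile and fits it with \texttt{scipy}; the data file name and starting values are hypothetical, and this is not the pipeline actually used for the published fit. Because the initial-guess vector has four entries, \texttt{curve\_fit} leaves $\kappa$ fixed at its default value of 10, as in the text.
\begin{verbatim}
# Illustrative sketch of the smoothly broken power-law fit
# (hypothetical file name and starting values).
import numpy as np
from scipy.optimize import curve_fit

def beuermann(t, f_b, t_b, alpha1, alpha2, kappa=10.0):
    # F = (F1^-kappa + F2^-kappa)^(-1/kappa), F_x = f_b (t/t_b)^(-alpha_x)
    f1 = f_b * (t / t_b) ** (-alpha1)   # pre-break segment
    f2 = f_b * (t / t_b) ** (-alpha2)   # post-break segment
    return (f1 ** -kappa + f2 ** -kappa) ** (-1.0 / kappa)

# t in days; f, ferr in flux-density units
t, f, ferr = np.loadtxt("lightcurve.txt", unpack=True)
popt, pcov = curve_fit(beuermann, t, f, sigma=ferr,
                       p0=[f.max(), 1.0, 0.7, 0.9], absolute_sigma=True)
\end{verbatim}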
The X-ray light curve shows an initial
peak at $97$~s followed by the typical steep decay \citep{Tagliaferri2005Nature,Barthelmy2005ApJ} with $\alpha=2.37^{+0.15}_{-0.16}$ until $\sim270$ s after the burst, when it is interrupted by a flare also visible in gamma-ray data.
After $\sim3000$ s, it is best modelled by a broken power-law with a
shallow break at $\simeq 1$~d: from $\alpha_{1,X} = 0.74^{+0.03}_{-0.01}$ to $\alpha_{2,X} = 1.10\pm 0.04$, with $t_\textrm{break,~X}=60\pm30$~ks\footnote{As shown by the \textit{Swift}/XRT light curve repository \citep{Evans2007a,Evans2009a}.} (1$\sigma$ errors).
Finally, we note that modelling simultaneously the X-ray and optical bands with the best-fit indices found above, the shallow break is seen at a common time of $0.70\pm0.26$ days ($60.5\pm22.5$ ks).
In Table \ref{tab:closure}, we compare the observed evolution with the predicted values of the temporal slopes in the optical/NIR and the X-ray bands for various slow-cooling afterglow scenarios \citep[see, e.g. ][]{Zhang2006c,Schulze2011a} and the electron index $p=2.20\pm0.08$. We cannot find a good solution for the data before $0.7$ d.
However, after the first modest break
the data are best modelled within a scenario where the jet is expanding into a constant-density medium
(hereafter referred to as the interstellar medium
or ISM environment). A single power-law SED solution
cannot explain the observed temporal decay index in X-rays, $\alpha_{2,X}=1.1$, with emission below the cooling frequency, $\nu_c$. Moreover, within this solution $\beta_\mathrm{opt}$ should be constant, but instead it evolves with time.
These results indicate that $\nu_c$ should lie between the optical and X-ray bands (see also Figure \ref{fig:sedoptx}).
A $\nu_c$ that has moved out of the X-ray band can explain the difference in the temporal decay index between optical and X-rays after the shallow break, therefore, we consider a broken power-law as the best description for the optical-to-X-ray SED.
For an upper branch\footnote{The spectral index $\beta_\mathrm{X}=0.90\pm0.15$ at $22.8$~ks reported in the XRT pages is well in agreement with this result.} $\beta_{X}=1.10\pm0.04$, obtained at 1d (the epoch with the best statistics), the electron index is $p=2.20\pm0.08$.
The large errors on the cooling frequency do not allow us to test whether the break shifts in time as $t^{-1/2}$, although the results seem consistent with such a relation (see Table \ref{tab:sed}).
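The $\sigma$-levels listed in Table \ref{tab:closure} (note a) can be reproduced, up to rounding and the sign convention, with a few lines of Python; the theoretical slopes used here are the standard slow-cooling closure relations $\alpha(p)$, with their errors propagated from $p=2.20\pm0.08$:
\begin{verbatim}
import numpy as np

p, ep = 2.20, 0.08
models = {  # (alpha_theory, its error) for each scenario
    "ISM/wind, nu > nu_c": ((3 * p - 2) / 4, 3 * ep / 4),  # 1.15 +/- 0.06
    "ISM,      nu < nu_c": (3 * (p - 1) / 4, 3 * ep / 4),  # 0.90 +/- 0.06
    "wind,     nu < nu_c": ((3 * p - 1) / 4, 3 * ep / 4),  # 1.40 +/- 0.06
}
a_obs, e_obs = 0.69, 0.04  # alpha_1,opt
for name, (a_th, e_th) in models.items():
    sigma = (a_th - a_obs) / np.hypot(e_th, e_obs)
    print(f"{name}: {sigma:+.2f}")  # ~6.4, ~2.9, ~9.8 (cf. the Table)
\end{verbatim}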
\begin{table}
\centering
\caption{Closure relations.}
\centering
\small
\setlength{\tabcolsep}{0.3em}
\begin{threeparttable}
\begin{tabular}{lccc}
\toprule
Afterglow model & Theoretical & \multicolumn{2}{c}{Observed} \\
& $\alpha$ & $\alpha_\textrm{1,opt}=0.69\pm0.04$ & $\alpha_\textrm{1,X}=0.74\pm0.03$ \\
& &$\sigma$-level\tnote{a}& $\sigma$-level\tnote{a}\\
\midrule
ISM\tnote{c}, wind, $\nu>\nu_c$ &$1.15\pm0.06 $ &$6.38$ & $6.11$ \\
ISM, $\nu<\nu_c$ &$0.90\pm0.06 $ &$2.88$& $2.36$ \\
wind, $\nu<\nu_c$ &$1.40\pm0.06 $ &$-9.76$ & $-11.03$ \\
\midrule
& & $\alpha_\textrm{2,opt}=0.93\pm0.04$ & $\alpha_\textrm{2,X}=1.10\pm0.04$ \\
\midrule
ISM, wind, $\nu>\nu_c$ &$1.15\pm0.06 $ &$3.02$& $\mathbf{0.69}$\tnote{b} \\
ISM, $\nu<\nu_c$ &$0.90\pm0.06$ &$\mathbf{-0.50}$\tnote{b}& $-2.77$\\
wind, $\nu<\nu_c$ &$1.40\pm0.06 $ &$-13.17$& $-15.25$ \\
\bottomrule
\end{tabular}
\begin{tablenotes}
\footnotesize
\item[a] The $\sigma$-level is the difference of the predicted and
the observed temporal slope, normalised to the square root of the sum of their quadratic errors.
\item[b] The solution that matches the closure relations within 1 $\sigma$ is highlighted in bold (see \S\ref{sec:xo}).
\item[c] We follow the common use and refer to the constant-density medium as ISM.
\end{tablenotes}
\label{tab:closure}
\end{threeparttable}
\end{table}
\begin{figure}[htp]
\centering
\includegraphics[width=0.95\columnwidth,angle=0]{fors+hst-rings.pdf}
\caption{Zoom-in to the $10\arcsec\times10\arcsec$ region centred on the afterglow.
\textit{Top:} deep FORS2 $I_{\rm Bessel}$-band image obtained 87 d after the GRB trigger. The red point indicates the ALMA localisation of the afterglow. A faint source, highlighted with a black circle, lies
$\sim1\farcs5$ NW of the afterglow position.
The radius of the circle is that of the aperture used for photometry.
\textit{Bottom:} For comparison, the \textit{HST}/$F140W$ image shows several sources
at the position of the $I_{\rm Bessel}$-band source.
The cyan circle shows the location of the NIR afterglow and its error (\S\ref{sec:optnir}), measured in the first HAWK-I $H$-band observation.
}
\label{fig:forshawk}
\end{figure}
\subsection{The late NIR imaging}\label{sec:hst}
In Figure \ref{fig:forshawk} we show
the most recent observation of the field
obtained with \textit{HST} in the $F140W$ band.
At 0\farcs09$\pm0\farcs02$ from the NIR afterglow position we clearly detect an extended source ($F140W=25.66\pm0.05$ mag). The relative offset is measured comparing the centroids in the first HAWK-I image and the \textit{HST} image, after aligning these two images using a common set of sources.
It is slightly elongated in the NNE-SSW direction and has FWHMs of 0\farcs4 and 0\farcs3 along the two axes, larger than the FWHM of field stars ($0\farcs25\pm0\farcs02$).
Therefore, we conclude that the \textit{HST} detection is dominated by a constant source.
The statistical probability of chance alignment is $P_{cc} (<r)=0.03$ \citep{Bloom2002a}, which has been obtained using the projected angular separation ($r=0\farcs4$), the apparent magnitude ($H_{AB}=25.8$ mag, see \S\ref{sec:break}), and the $H$-band galaxy counts from \cite{Frith2006a}.
This is lower than the threshold commonly used to establish an association. Therefore, this source is likely the host galaxy of the GRB. It is not detected in the $I_{\rm Bessel}$-band in an observation
obtained 87 d after the GRB with the \textit{VLT}/FORS2 instrument down to a $3\sigma$ upper limit of $26.0$ mag (AB).
We also note a more complex structure
which extends up to 2\farcs2 to the NW of the afterglow position.
This extended structure is weakly detected in the deep $I_{\rm Bessel}$ observation
with a similar brightness ($I_{\rm Bessel,AB}=24.84\pm0.18$ mag and $F140W_{AB}=24.7\pm0.04$ mag), and therefore is unlikely to reside at $z=6.3$.
This group of sources, or at least part of them,
could also be responsible for the foreground intervening system found in X-shooter spectra at $z=2.8$ with high-EW \mbox{Mg\,{\sc ii}} absorption \citep[see][]{Saccardi2022a}. In this case, the cold gas observed in absorption can also be offset from the hot and bright region observed in the $I$-band, or occupy a larger region of the same foreground galaxy.
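The chance-alignment probability quoted above follows \cite{Bloom2002a}, $P_{cc}(<r)=1-\exp(-\pi r^2 \sigma_{\leq m})$, where $\sigma_{\leq m}$ is the surface density of galaxies brighter than the measured magnitude. A minimal sketch, in which the galaxy surface density is a placeholder value chosen only to reproduce the quoted $P_{cc}\simeq0.03$, is:
\begin{verbatim}
import numpy as np

r = 0.4        # projected angular separation, arcsec
sigma = 6.1e-2 # placeholder galaxy surface density (arcsec^-2) to H_AB = 25.8
p_cc = 1.0 - np.exp(-np.pi * r**2 * sigma)
print(f"P_cc(<r) = {p_cc:.2f}")  # ~0.03 for these inputs
\end{verbatim}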
\subsection{Constraints on the jet break}\label{sec:break}
A sizeable number of GRB afterglow light curves
break to steeper power-law decays, usually within a few days after the trigger.
These breaks have generally been interpreted as due to the outflow being collimated in a jet, where the break occurs when the relativistic beaming angle becomes wider than
the jet's half-opening angle $\theta_{\rm jet}$ \citep{Rhoads1997a,Sari1999a}.
In the forward-shock model the jet breaks have to be achromatic, thus to have the same slope (and slope change) simultaneously in all bands\footnote{The value of the light-curve post-jet-break slope depends on $\nu_\mathrm{obs}$ being above or below $\nu_m$ and $\nu_{sa}$, where $\nu_{sa}$ is the synchrotron self-absorption frequency. Optical and X-ray afterglow SEDs are usually observed to be above $\nu_m$ \citep[e.g.][]{Greiner2011a}.}.
In \S\ref{sec:xo} we have shown that a moderate break is present in both optical and X-rays at a common time of $\simeq 0.7$ d.
However, the post-break slope in both the X-rays and the optical is only $\simeq 1$, which is too shallow for a jet break,
both observationally \citep{Wang2015a}
and theoretically \citep{SariPiran1999a,Zhang2006c,Panaitescu2007a}.
Instead, the last XRT detection, together with the late observation by the \textit{Chandra} X-ray Observatory \citep[][]{Laskar2021GCN31127},
shows that the light curve breaks at $\sim30$ d.
However, the NIR light curve, taken up to
232 d in the observer frame, shows no simultaneous steep break (Figure~\ref{fig:lcoptx}).
In the following, we assume that the break in X-rays is indicative of an achromatic break, and the last NIR detection is likely dominated by another component
(see \S\ref{sec:hst}, \ref{sec:constant}). Note that we do not apply any colour correction between the $F140W$ and $H$ bands because the UV slope is basically flat for GRB hosts and star-forming galaxies \citep[e.g.][]{Schulze2015a}\footnote{Using the spectral slope $\beta_\mathrm{opt}=0.6$ obtained from the SED fitting of the afterglow, the colour correction is just $H-F140W=-0.10$ mag, and thus will not make an appreciable difference in our analysis.}.
To better constrain the break time, we modelled jointly the $H$-band and X-ray light curves after the early break at $0.7$ d with a smoothly broken power-law
$F = (F_2^{-\kappa}+ F_3^{-\kappa})^{-1/\kappa}$, following the definition in \S\ref{sec:xo}
but with the subscripts $2,3$ indicating the pre- and post-jet break, respectively. We fixed the pre-jet-break indices to the model values $\alpha_\mathrm{2,opt}=0.9$ and $\alpha_\mathrm{2,X}=1.15$ (see Table \ref{tab:closure}).
In our analysis we adopt the jet model (with sideways expansion) and slow cooling \citep[e.g.][]{Sari1998a,Zhang2004a}.
Therefore, we assume that the post jet-break index is $\alpha_\mathrm{3,opt}=\alpha_\mathrm{3,X} \simeq p=2.2$ (see \S\ref{sec:xo}).
Note that the sparse data after the break prevent us from constraining the $\kappa$ parameter. Therefore, we
let it vary within the interval $1<\kappa<5$, whose extremes are consistent with the values expected for emission on either side of $\nu_c$ and for a typical GRB observation angle \citep{vanEerten2013a, Lamb2021a}. We note that, from the models, it is difficult to obtain $\kappa>5$ \citep{vanEerten2013a}.
In the $H$-band we have considered an additional constant component (see \S \ref{sec:constant}).
In summary, the only free parameters are the break time, the flux at the jet break and the flux of the constant source.
The best-fit break time in the observer frame is\footnote{Assuming no sideways expansion, and thus $\alpha_3=\alpha_2+3/4$ \citep{Panaitescu2007a}, we find a similar solution $t_{\rm jet}=36.4\pm21.6$\,d and thus similar half opening angle and conclusions.}
$t_{\rm jet}=46.2\pm16.3$\,d
with the constant source having $H_{AB}= 25.8\pm0.2$ mag.
The modelling is shown in Figure \ref{fig:latelchx}.
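Schematically, the late-time model fitted here is a smoothly broken power-law plus a constant host term, with only the break time, the normalisation, and the host flux left free; the sketch below (not the actual fitting code) shows this composition and the conversion of $H_{AB}=25.8$ mag to a flux density:
\begin{verbatim}
import numpy as np

def sbpl(t, f_b, t_b, a2, a3, kappa):
    # smoothly broken power-law, as defined in the SED/light-curve section
    f2 = f_b * (t / t_b) ** (-a2)
    f3 = f_b * (t / t_b) ** (-a3)
    return (f2 ** -kappa + f3 ** -kappa) ** (-1.0 / kappa)

def h_band_model(t, f_b, t_jet, f_host, kappa=1.0):
    # pre-break index fixed to 0.9, post-break to p = 2.2 (see text)
    return sbpl(t, f_b, t_jet, 0.9, 2.2, kappa) + f_host

# host AB magnitude -> flux density in microJansky (AB zero point 3631 Jy)
f_host_uJy = 3631e6 * 10 ** (-25.8 / 2.5)   # ~0.17 microJy
\end{verbatim}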
\begin{figure}
\centering
\includegraphics[width=\columnwidth,angle=0]{lcjoint-hx.pdf}
\caption{Observer-frame $H$-band (purple) and X-ray (black) light curves. Solid lines show the joint fit with a smoothly broken power-law, assuming common achromatic breaks. The dashed line shows the $H$-band light curve without the constant component.
The horizontal dotted line shows the modelled $H$-band constant component.
Following the slow-cooling scenario (\S\ref{sec:xo}), we fixed the pre-break decay indices to $\alpha_\mathrm{opt}=0.9$ and $\alpha_\mathrm{X}=1.15$. The last break is interpreted as a jet break, and thus the post-break decay index has been fixed to $\alpha_\mathrm{opt,X}=p=2.2$. The late flattening in the $H$-band light curve can be explained by a constant contribution from a host or intervening system. See \S\ref{sec:break} for details.
}
\label{fig:latelchx}
\end{figure}
\section{Discussion}\label{sec:dis}
\begin{figure}
\begin{center}
\includegraphics[width=0.48\textwidth,angle=0]{Fig7v4ed.pdf}
\caption{GRB~210905A prompt-emission light curve (top panel) in the context of eight high-redshift GRBs ($z\geq6$). Arbitrarily scaled BAT count rates ($15-350$ keV) are plotted in red against time in the GRB rest frame. The KW light curve of GRB~210905A is plotted in black. The vertical dotted line shows the trigger time.
}
\label{fig:lcgammaz}%
\end{center}
\end{figure}
\begin{figure*}
\begin{center}
\includegraphics[width=0.89\textwidth,angle=0]{FigAmati_20220613ed.pdf}
\caption{Rest-frame energetics of GRB~210905A in the $E_{\mathrm{peak,}z}$--$E_\mathrm{iso}$ and $E_{\mathrm{peak,}z}$--$L_\mathrm{iso}$ planes (red stars).
Brown stars in the left panel show the values derived for the individual Pulses 1, 2, and 3.
The rest-frame parameters of 315 long KW GRBs with known redshifts \citep{Tsvetkova2021} are shown with circles; the colour of each data point represents the burst's redshift.
In the left plot, rest-frame peak energy values are derived from time-averaged spectral fits ($E_{\mathrm{p,i,}z}$).
In the right plot, they are derived from spectra, corresponding to the burst's peak count rate ($E_{\mathrm{p,p,}z}$).
The `Amati' and `Yonetoku' relations for this sample are plotted with dashed lines and the dark- and light-grey shaded areas show their 68\% and 90\% prediction intervals, respectively. The error bars are not shown here for reasons of clarity.
}
\label{fig:amati}%
\end{center}
\end{figure*}
\subsection{The nature of the prompt emission}
\label{sec:ori-prompt}
GRB 210905A is among the few exceptional cases where optical data could be obtained during a gamma-ray pulse (Figure~\ref{fig:gammaXopt}). In the past, modelling of the prompt emission from optical/NIR to gamma-rays has been possible in only about a dozen cases,
such as GRBs 990123, 041219A, 060526, 080319B, 080603A, 080928, 090727, 091024, 110205A, 111209A, 130427A, and the more recent GRBs 160625B and 180325A
\citep[e.g.][]{SariPiran1999a,Vestrand2005a,Thoene2010a,Racusin2008Nature,Guidorzi2011a,Rossi2011a,Kopac2013a,Virgili2013a,Stratta2013a,Kann2011a,Zheng2012a,Gendre2013a,Vestrand2014a,Troja2017Nature,Becerra2021a}.
At $z>6$, this analysis was possible only for GRB 050904 \citep{Boer2006a}.
In all these cases, modelling of the data with a broken power-law shows that the X-to-gamma-ray SED of the prompt pulses is in agreement with synchrotron emission, and in particular with fast cooling.
This is in agreement with studies on large samples
as we have mentioned in \S~\ref{sec:gammaxo}
\citep[see][ for a discussion on the possible implications]{Ghisellini2020a}.
However, when including the optical data the situation can be more complex: for example, the main and earlier pulses of GRBs 990123 \citep{SariPiran1999a,Galama1999a,Corsi2005a,Maiorano2005a}, 080319B \citep{Racusin2008Nature,Bloom2009ApJ}, 110205A \citep{Zheng2012a}, 130427A \citep{Vestrand2014a}, 160625B \citep{Troja2017Nature}, and 180325A \citep{Becerra2021a} show a convex spectrum between optical and X/gamma-rays. Although different interpretations are also possible \citep[e.g.][]{Guiriec2016a}, this feature can be explained by synchrotron emission from internal forward shocks dominating the gamma-ray and X-ray prompt emission, while the early optical flashes are generated by a reverse shock.
The analysis of Pulse 3 of GRB 210905A is however clearly in disagreement with this latter scenario, with the $H$-band emission being fainter than the extrapolation of the power-law modelling the gamma-rays (Figure \ref{fig:sedgammaxo}).
Therefore, although simultaneous optical-to-gamma coverage of the first prompt pulses is missing in the case of GRB 210905A, we show that at least during the last pulse there is no indication that the NIR data have an origin different from the X/gamma-ray emission, and all the observed epochs during the prompt phase can be explained by synchrotron emission from internal shocks.
This is not surprising, as in several events (e.g. GRBs 990123, 130427A, 160625B) the optical-to-gamma SED later evolves and can be entirely explained as emission from the forward shock. \cite{Oganesyan2019a} have shown that the later SEDs are consistent with being produced through synchrotron emission in the moderately fast-cooling regime from the same emission region.
\subsection{Prompt emission in context}
\label{sec:con-prompt}
Using $z=6.312$, we estimate the rest-frame properties
of the burst prompt emission.
Isotropic-equivalent energy release ($E_\mathrm{iso}$) and rest-frame spectral peak energies $E_{\mathrm{peak,}z} = (1+z) E_\mathrm{peak}$
for the individual emission episodes were calculated
from the CPL spectral fits (\S\ref{sec:kwbat}); they are listed in Table~\ref{tabEiso}.
Integrated over the three intervals, the total energy release of GRB~210905A in $\gamma$-rays is $E_\mathrm{iso} = 1.27_{-0.19}^{+0.20} \times 10^{54}$~erg,
which is within the highest $\sim$7\% for the KW sample of 338 GRBs with known redshifts \citep{Tsvetkova2017,Tsvetkova2021}.
Since $E_\mathrm{peak}$ obtained from our fits differs between the individual emission episodes,
we used the spectral peak energy value weighted by the episode fluence, $E_\mathrm{peak} \sim $ 145~keV, to estimate the burst time-averaged $E_{\mathrm{peak,}z}$ to $\sim1060$~keV.
This intrinsic peak energy is among the highest $\sim15\%$ of long KW GRBs.
Derived from the peak energy flux, the peak $\gamma$-ray luminosity of the burst is $L_\mathrm{iso} = 1.87_{-0.26}^{+0.36} \times 10^{53}$~erg~s$^{-1}$.
The rest-frame $E_\mathrm{peak}$ corresponding to the time interval around the peak luminosity is $\sim1050$~keV.
The reported values of $E_\mathrm{iso}$ and $L_\mathrm{iso}$ were calculated in the rest frame 1 keV--10 MeV range.
All the quoted errors are at the $1\sigma$ confidence level.
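The order of magnitude of these energetics can be checked from the tabulated fluence alone, via $E_\mathrm{iso} \simeq 4\pi d_L^2 S/(1+z)$; the sketch below uses the \texttt{astropy} Planck 2018 cosmology as an assumption, whereas the published values rely on the cosmology and k-corrections of \cite{Tsvetkova2021}, so only approximate agreement should be expected:
\begin{verbatim}
import numpy as np
import astropy.units as u
from astropy.cosmology import Planck18

z = 6.312
S = 1.82e-5 * u.erg / u.cm**2      # total observed fluence (Table with pulses)
d_L = Planck18.luminosity_distance(z).to(u.cm)
E_iso = (4 * np.pi * d_L**2 * S / (1 + z)).to(u.erg)
print(f"E_iso ~ {E_iso:.2e}")      # ~1.2e54 erg, close to the quoted value
\end{verbatim}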
With these estimates, GRB~210905A as well as its individual episodes lie inside the 68\% prediction interval (PI) of the
$E_{\mathrm{peak,}z}-E_\mathrm{iso}$
(`Amati' relation; Figure~\ref{fig:amati})
for 315 long KW GRBs with known redshifts \citep{Tsvetkova2021}.
Likewise, the burst peak luminosity and the corresponding $E_{\mathrm{peak,}z}$ perfectly fit the `Yonetoku' relation for the sample.
Figure~\ref{fig:lcgammaz} shows the GRB 210905A prompt emission in the context of eight GRBs at $z\gtrsim6$.
With the rest-frame duration
$T_\mathrm{90} /(1+z) \sim 119$~s
GRB~210905A is the intrinsically longest high-$z$ GRB detected to date and is also among the longest $\sim$3\% of bursts as compared to the whole KW catalogue\footnote{This sample does not include six KW ultra-long ($T_\mathrm{100} > 1000$~s) bursts, all at low-to-moderate redshifts $z\lesssim2$.}
\citep{Tsvetkova2017,Tsvetkova2021}, which covers the range $0.04\leq z \leq 9.4$.
In this high-redshift sample, GRBs 210905A and 130606A are the only bursts with well-separated emission episodes. Except for this feature, they are similar to all other bursts which show short spikes with only moderate energy release. The exception is GRB 050904, which is similar to GRB 210905A in terms of energy released ($E_\mathrm{iso}=(1.33\pm0.14)\times10^{54}$ erg) but shows a $\sim30$ s long emission episode with two extended peaks (and at least a third episode observed in X-rays).
The most powerful burst at low redshift is GRB 130427A at $z=0.3399$ \citep{Selsing2019a}. This GRB can be considered as a good analogue of the energetic high-$z$ population because of its high energy release \citep[][]{Perley2014a,dePasquale2016a}.
Its prompt emission parameters are similar to those of Pulse 3 of GRB 210905A (see Table~\ref{tabEiso}):
$E_{\mathrm{peak,}z}\sim1415$ keV and
$E_\mathrm{iso}\sim9.4\times10^{53}$ erg \citep[][]{Tsvetkova2017}.
Accordingly, GRB 130427A and Pulse 3 lie very close
in the $E_{\mathrm{peak,}z}-E_\mathrm{iso}$ plane.
We should note, however, that the intrinsic durations of GRB 130427A and Pulse 3 of GRB 210905A differ by a factor of two ($T_{90,z}\sim10$ s for GRB 130427A versus $\sim19$ s for Pulse 3).
The initial light curve of GRB 130427A is somewhat similar to that of GRB 210905A since it starts with a large structured peak $\sim$20 s long in the rest-frame, followed by a second peak starting at $\sim$100 s. This second pulse is, however, orders of magnitude weaker than the main pulse. So, GRB 130427A is not a `genuine' multi-episode GRB such as this work's burst or GRB 130606A, and is instead more similar to the other high-redshift GRBs.
\begin{table}
\centering
\setlength{\tabcolsep}{0.4em}
\caption{Parameters of the individual prompt emission pulses.}
\label{tabEiso}
\begin{threeparttable}
\begin{tabular}{lccc}
\toprule
Episode & $E_{\textrm{peak,}z}$ & Fluence ($15-1500$ keV)\tnote{a} & $E_\textrm{iso}$\\
& (keV) & ($10^{-5}$~erg~cm$^{-2}$) & ($10^{53}$ erg) \\
\midrule
Pulse~1 & $930_{-140}^{+230}$ &$0.471_{-0.046}^{+0.052}$ &$3.40_{-0.33}^{+0.41}$\\
Pulse~2 & $510_{-95}^{+160}$ &$0.245_{-0.035}^{+0.050}$ &$1.73_{-0.33}^{+0.37}$\\
Pulse~3\tnote{b} & $1220_{-450}^{+640}$ &$1.11_{-0.26}^{+0.28}$ &$7.62_{-1.81}^{+1.89}$\\[2ex]
\midrule
Total\tnote{c} & $1060_{-320}^{+470}$ &$1.82_{-0.28}^{+0.29}$ &$12.7_{-1.9}^{+2.0}$\\
\bottomrule
\end{tabular}
\begin{tablenotes}
\footnotesize
\item[a] Fluences were calculated using the fits with the CPL function from Table~\ref{tabFits}.
\item[b] Only the KW 3-channel spectrum is used.
\item[c] This fluence is integrated over all three emission episodes.
\end{tablenotes}
\end{threeparttable}
\end{table}
\subsection{Collimation--corrected energy and central engine}
\label{sec:beam}
Knowing the value of the jet opening angle is
crucially important because it enables us to estimate the `true', collimation-corrected, energetics of the outflow \citep{Frail2001a,Ghirlanda2007a}.
Numerical and analytical calculations \citep[e.g.][]{Sari1999a} have shown that the half-opening angle of the jet is related to the jet-break time. Following \cite{ZhangMacFadyen2009a} we calculate this angle $\theta_{\rm jet}$ using the following equation for a uniform jet expanding in a constant-density medium:
\begin{equation}\label{angle}
\frac{\theta_\text{jet}}{\text{rad}}
=0.12~\left(\frac{E_{\rm kin,iso}}{10^{53}\,{\rm erg}}\right)^{-1/8} \left(\frac{n}{\rm cm^{-3}}\right)^{1/8}~
\left(\frac{t_{\rm jet}}{\rm day}\right)^{3/8} (1+z)^{-3/8} \,,
\end{equation}
\noindent where $E_{\rm kin,iso}$ is the kinetic energy of the outflow assuming isotropy; $n=1\,\rm cm^{-3}$
is the number density of the medium, assumed to be constant; $t_{\rm jet}$ is the jet-break time (observer frame, see \S\ref{sec:break}), while $z=6.312$ is the redshift of the event.
The kinetic energy is the one left after the prompt phase, and which later dissipates in the afterglow. Together with
the energy released as gamma-rays in the prompt phase\footnote{Here, $E_{\gamma,\rm iso}$ is the same as $E_{\rm iso}=12.7\times10^{53}$ erg of \S\ref{sec:con-prompt}.} $E_{\gamma,\rm iso}$, it represents part of the total GRB fireball energy
$E_{\rm total,iso}=E_{\rm kin,iso}+E_{\gamma,\rm iso}$ \citep[e.g.][]{Zhang2004a,dePasquale2016a}.
Assuming an efficiency\footnote{And thus $E_{\rm kin,iso}=(1/\eta -1)\,E_{\gamma,\rm iso}$.} $\eta=E_{\gamma,\rm iso}/E_{\rm total,iso}=0.2$
we derive $\theta_{\rm jet}=0.147\pm0.017$~rad,
or $8.41\pm0.97$ degrees.
If we consider that the outflow is collimated, the
`true' gamma-ray energy of the jet is
$E_{\gamma}=E_{\rm \gamma,iso} ~(1-\cos(\theta_{\rm jet}))
\simeq 1\times10^{52}$~erg.
The assumed efficiency is justified theoretically \citep[e.g.][]{Guetta2001a} and by recent studies of GRB afterglows in the optical, X-rays and GeV gamma-rays \citep[e.g.][]{Beniamini2015a}.
However, higher values are also possible, as suggested by some observations \citep{Zhang2007b,Lu2018a} and theoretical models \cite[e.g.][]{KobayashiSari2001a,ZhangYan2011a}.
As shown in \cite{dePasquale2016a}, the minimum $E_{\rm total}$ is obtained for $\eta = 3/4$. Lower efficiencies correspond to higher total energies.
Therefore, with $\eta=0.2-0.75$ we can estimate the `total collimated energy' of the jet to be $E_{\rm total} \simeq E_{\rm \gamma}/\eta \simeq 3$--$8\times10^{52}$~erg.
We note that the dependence of $\theta_{\rm jet}$ on $n$ and the kinetic energy is rather weak (Eq. \ref{angle}).
Thus, the total energy is not sizeably affected by the exact values of $n$ and $E_{\rm kin}$\footnote{Please note that assuming here a larger density would cause an even larger $\theta_{\rm jet}$ and $E_{\rm total}$ \citep{Cenko2011a,Granot-vanderHorst2014a}.}.
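For transparency, the numbers above follow directly from Eq. \ref{angle} and the collimation correction; a minimal sketch with the quantities quoted in the text is:
\begin{verbatim}
import numpy as np

z, t_jet, E_gamma_iso, eta, n = 6.312, 46.2, 12.7e53, 0.2, 1.0
E_kin_iso = (1.0 / eta - 1.0) * E_gamma_iso                  # erg
theta_jet = (0.12 * (E_kin_iso / 1e53) ** (-1 / 8) * n ** (1 / 8)
             * t_jet ** (3 / 8) * (1 + z) ** (-3 / 8))       # rad
E_gamma = E_gamma_iso * (1 - np.cos(theta_jet))
print(f"theta_jet = {theta_jet:.3f} rad = {np.degrees(theta_jet):.1f} deg")
print(f"E_gamma ~ {E_gamma:.2e} erg")                        # ~1.4e52 erg
\end{verbatim}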
The most widely discussed models
of central engines of GRBs are accreting magnetars or accreting black holes.
We can assume for a standard neutron star with mass $M\sim1.4\,M_\odot$ the maximum rotation energy to be in the range $3\times10^{52}$ erg \citep{LattimerPrakash2016a} -- $7\times10^{52}$ erg \citep{Haensel2009a}.
Therefore, our analysis allows us to disfavour a standard magnetar as central engine of this GRB. Only the most extreme magnetar models with $M\gtrsim2.1\,M_\odot$ and rotation energy $\sim10^{53}$ erg are not excluded \citep[see][]{Metzger2015a,Dallosso2018a,Stratta2018a}.
On the other hand, according to the Kerr metric \citep{Kerr1963a} the rotational energy $E_\textrm{rot}$ of a black hole can reach up to 29\% of its total mass, which exceeds that of neutron stars by a full order of magnitude. Indeed, rotating black holes of mass $M\sim3\,M_\odot$ possess rotational energies up to $E_\textrm{rot}\sim 10^{54}$ erg \citep[e.g.][]{vanPuttenDellaValle2017a}.
Therefore, an energy budget of $\sim10^{53}$ erg can be conveniently extracted via the Blandford--Znajek mechanism \citep{BlandfordZnajek1977a}, thereby suggesting that the central engine of GRB 210905A may well be a rotating black hole.
\subsection{The early X-ray and optical/NIR afterglow}\label{sec:eninj}
As shown in \S\ref{sec:xo}, although the optical-to-X-ray SED at 0.1 d is in agreement with the cooling break lying within the X-ray band, the light curves in both bands are not well explained by the standard fireball scenario before the common shallow break at $\sim0.7$ d.
Here, we can investigate whether our data can justify the early decay and the shallow break.
First, the early break at $\sim0.7$ d is not well constrained but we can exclude that it is due to
a wind-to-constant-density transition as the light-curve decline, in such a scenario, would become shallower and not steeper \citep[e.g.][]{Panaitescu2007a,Schulze2011a}.
The times and the slopes instead make it an example of a `canonical' GRB X-ray afterglow light curve \citep{Nousek2006a,Zhang2007a}.
Studying the canonical light curve, \cite{Zhang2007a} interpreted the break between the shallow segment with $\alpha\simeq0.7$ to the more `normal' segment with $\alpha\simeq1$ as the end of an `energy injection' phase.
During energy injection, the ejecta is still receiving energy, either from a long-lived central engine, or by
slower ejecta shells that catch up
with the leading shell.
In other words, the mild break should be interpreted as cessation of energy injection.
Following the relations in \cite{Zhang2006c}, where $q$ is the energy injection index,
we have (for ISM and $p=2.2$): $\alpha_{\rm opt} = ((2p-6)+(p+3)q)/4$, from which follows $q=0.84$ and $\alpha=0.69$ for $\nu<\nu_c$; for X-rays we obtain $\alpha_{\rm X} = ((2p-4)+(p+2)q)/4$, so $q=0.84$, and $\alpha=0.98$ for $\nu>\nu_c$.
Using a stratified shell model with ejected mass
$M(>\gamma) \propto \gamma^{-s}$, where $\gamma$ is the Lorentz factor of the shell \citep[][]{Rees1998a}
and the relation between $s$ and $q$ parameters \citep[$s=(10-7q)/(2+q)$,][]{Zhang2006c}, we find that a value of $s\sim1.45$ fits the pre-break behaviour.
Equally, a magnetar central engine model that continuously injects energy as $L(t) \propto t^{-q}$ \citep[e.g.][]{DaiLu1998a}, can model the early decay with $q=0.84$.
Therefore, we cannot discard one model over the other, specifically stratified shell versus magnetar. However, as discussed in \S\ref{sec:break}, the energy constraints
likely limit the viability of a new-born magnetar as the power source of the energy injection.
We note also that the theoretical energy-injected $\alpha_{X,1}\sim0.98$ is larger than the value observed (0.74). However, the theoretical value assumes $\nu>\nu_c$ but in Table \ref{tab:sed} we see that $\nu_c$ is well within the X-ray band in the first day after the burst trigger.
The energy injection changes the way the cooling frequency evolves, that is $\nu_c \propto t^{(q-2)/2}=t^{-0.58}$ for $q=0.84$, and thus the cooling frequency evolves slightly faster whilst energy injection is happening: if the cooling frequency is at $\sim2$ keV at 0.1 d, then at 1 d it would have been at $0.5$ keV, and consistent with what we observe.
Moreover, one should also consider that the $\nu_c$ break is likely smooth and covers a relatively large interval \citep[e.g.][]{GranotSari2002a}.
Therefore, the observed temporal decay index may well be somewhere between the values predicted for the $\nu<\nu_c$ and the $\nu>\nu_c$ cases, i.e., $0.69<\alpha_{X,1}<0.98$, in agreement with the observed value before $\sim$70 ks.
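The energy-injection bookkeeping used in this subsection reduces to a few closed-form relations; the sketch below simply evaluates them for the values adopted in the text (ISM, slow cooling, $p=2.2$, $q=0.84$):
\begin{verbatim}
p, q = 2.2, 0.84
alpha_opt = ((2 * p - 6) + (p + 3) * q) / 4   # nu < nu_c  -> 0.69
alpha_x   = ((2 * p - 4) + (p + 2) * q) / 4   # nu > nu_c  -> 0.98
s = (10 - 7 * q) / (2 + q)                    # stratified-ejecta index -> 1.45
nu_c_slope = (q - 2) / 2                      # nu_c ~ t^(-0.58) while injecting
print(alpha_opt, alpha_x, round(s, 2), nu_c_slope)
\end{verbatim}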
\subsection{The nature of the $H$-band flattening at late times}
\label{sec:constant}
The likely identification of the host of GRB 210905A is a rare achievement, given that up to July 2022 only three hosts (those of GRBs 050904, 130606A, and 140515A) had been confirmed at $z>6$ \citep{McGuire2016a}, and four if we consider the
possible detection of the GRB 060522 host \citep{Tanvir2012a}.
The observed brightness of the source detected with \textit{HST} in the $F140W$ band corresponds to a rest-frame absolute magnitude $M_{1900\AA}\sim-21$ mag, which is consistent with the characteristic magnitude at 1600 {\AA} of $z=6-7$ galaxies \citep[e.g.][]{Bouwens2021a}. Therefore, such a galaxy is not unusual, although it is more luminous in the UV than the galaxies that contribute the most to the star formation at these redshifts. In the following, we make use of the brightness of $H_{AB}=25.8$ mag resulting from the light-curve fitting.
A host galaxy at $z=6.3$
with such a brightness
and thus a rest-frame UV luminosity of $L_{\nu}=1.47\times10^{29}\,\mathrm{erg\,s}^{-1}\,\mathrm{Hz}^{-1}$, would have a SFR $\sim16\,M_\odot\, \mathrm{yr}^{-1}$ using equation 1 in \cite{Kennicutt1998a}.
This is certainly an acceptable value \citep[see also the discussion in][]{Saccardi2022a}, and in fact \cite{McGuire2016a} find that the $z\gtrsim6$ GRB hosts known to date likely have similar SFR, assuming a short-lived burst of star formation \citep[see also][]{Tanvir2012a}.
If its brightness is confirmed with further observations, the host of GRB 210905A would also be the brightest. We caution, however, that at this stage it is not possible to separate a possible contamination from the foreground source
discussed in \cite{Saccardi2022a}, and thus the host could be fainter and the inferred SFR lower.
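The UV luminosity quoted above can be reproduced to within tens of percent from the host magnitude alone; the sketch below assumes the \texttt{astropy} Planck 2018 cosmology and neglects the (small) K-correction, both of which affect the exact value:
\begin{verbatim}
import numpy as np
import astropy.units as u
from astropy.cosmology import Planck18

z, m_ab = 6.312, 25.8
f_nu = 3631e-23 * 10 ** (-m_ab / 2.5)             # erg s^-1 cm^-2 Hz^-1
d_L = Planck18.luminosity_distance(z).to(u.cm).value
L_nu = 4 * np.pi * d_L**2 * f_nu / (1 + z)        # ~1e29 erg s^-1 Hz^-1
print(f"L_nu ~ {L_nu:.2e} erg/s/Hz")
\end{verbatim}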
One could also speculate whether a SN could contribute to the final observation. However, note that such a SN should reach an absolute magnitude of $M_\mathrm{2200}\sim-21$ mag in the far UV ($H$-band in the observer frame).
This is four times more luminous than the most luminous GRB-SN confirmed spectroscopically, SN 2011kl, associated with GRB 111209A \citep[$M_\mathrm{2735}\sim-19.6$ mag at peak,][]{Greiner2015Nature,Kann2019AA}, although \cite{Kann2021a} have recently claimed the existence of an even more luminous SN associated with GRB 140506A with $M_{g^\prime}\approx-20.5$ mag.
As we find no evidence that GRB 210905A is more than just a very energetic but otherwise typical long GRB, there is no reason to claim the GRB would be accompanied by an extremely UV-luminous SN of a type not seen associated with GRBs before.
In the following we explore possible alternatives to the above interpretation of a jet break well visible in X-rays but hidden in the NIR by a constant source that becomes dominant. Thus, we consider the possibility that the afterglow still contributes substantially and the $H$ band can be modelled with a single power-law, with a chromatic break in the X-rays. One could speculate that the late light curve is the consequence of a spectral break moving between the optical and the X-rays. However, this is contradicted not only by the elongated, extended, and offset nature of the \textit{HST} detection but also, as shown in \S\ref{sec:xo}, by our SED analysis, which places $\nu_c$ between the optical and X-ray bands already after $0.7$ d.
Therefore, it is not possible to invoke the presence of an additional break in the slow-cooling regime moving into the band after this time. Moreover, the change in the temporal index is inconsistent with the passage of the cooling break, which should be $\Delta\alpha=0.25$ in the slow-cooling and uniform-medium environment, and additionally incompatible with other regimes, for example fast cooling or a wind-blown environment.
Another possibility is to consider a bright reverse shock, but this requires either a large energy gradient or a big difference in shell velocity, both of which are inconsistent with the gradual energy injection scenario which explains the early light curve until 0.7 d.
We could invoke a second, discrete shell with energy that is less than or comparable to that of the first (post initial energy injection) shell but much faster, i.e. with a delayed launch. However, the shell would have to conveniently collide with the leading shell at about the jet-break time, and the optical excess would be the contribution from the reverse shock. This is not only an implausible coincidence, but would also approximately double the total energy requirements to explain a second shell, making this event even more extreme.
We cannot exclude that a milder shock could, however, explain the X-ray data at $10-20$ d, which lie just above the analytical modelling of the light curve.
Nevertheless, we cannot confirm this possibility with the few data points available,
which are anyway within $2\sigma$ of the analytical model.
In conclusion,
we consider the detection of the host and/or an intervening galaxy (or a mix of the two) as the strongest and most plausible explanation for the flattening of the $H$-band light curve.
\subsection{The X-ray afterglow in context}
\begin{figure}
\centering
\includegraphics[width=1\columnwidth]{XLC_210905A_rest_v3.pdf}
\caption{
X-ray afterglow of GRB 210905A (blue line) in the context of other high-redshift GRBs (green and red) and the world-sample of \textit{Swift}{} GRBs with known redshifts (grey density plot). The afterglow of GRB 210905A is the most luminous after 10 ks among all $z>5$ GRBs and one of the most luminous in general. The colour table on the right side translates a grey shade at a given luminosity and time into a fraction of bursts.
}
\label{fig:schulze}
\end{figure}
To put the X-ray emission in the context of other GRB afterglows, in particular high-redshift GRBs, we retrieved from the \textit{Swift}{} Burst Analyser
\citep{Evans2010a} the X-ray light curves of 421 long-duration \textit{Swift}{} GRBs with detected X-ray afterglows (detected in at least two epochs) and known spectroscopic redshifts, which were discovered before the end of July 2022. We processed the data and moved them to their rest-frames following \citet{Schulze2014a}.
Figure \ref{fig:schulze} shows the parameter space occupied by long-duration GRBs as a density plot and the X-ray light curve of GRB 210905A in blue.
We have also included the X-ray light curves of the high-redshift GRBs 090423, 090429B and 100905A that have only a photometric redshift. The uncertainty in luminosity for these three bursts is indicated by red-shaded regions around the light curves at their redshifts.
GRB 210905A's X-ray afterglow is among the most luminous at all times. Even compared to other GRBs at $5<z<6$ (green; GRBs 060522, 060927, 130606A, 131227A, 140304A, 201221A, 220521A) and $z>6$ (red; 050904, 080913, 090423, 090429B, 100905A, 120521C, 120923A, 140515A), GRB 210905A has an exceptionally high luminosity. Furthermore, its X-ray afterglow is fading slower than those of most GRBs, at least until the jet break at $\sim5\times10^5$ s in the rest frame (\S\ref{sec:break}).
Here we note that some of the other bursts at high-$z$
do not show a clear light-curve break in X-rays (GRBs 050904, 080913, 090423, 130606A), although some of them show a break in the optical (GRBs 050904, 090423, 090429B, 120521C), and GRB 140515A has just one single detection that suggests a possible break similar to GRB 210905A. This is because of the low observed flux of these very high-$z$ afterglows, as only the most luminous events are bright enough for \textit{Swift}/XRT.
\subsection{The optical/NIR afterglow in context}
Following the method devised by \cite{Kann2006ApJ}, we are able to put the NIR afterglow into the context of the (optical/NIR) total afterglow sample. We derive the observer-frame $R_C$ magnitude by shifting all data to the $H$ band, then extrapolating the spectral slope into the observer-frame $R_C$ band, which is completely suppressed at the redshift of the GRB (assuming that there would be no Lyman absorption).
The spectral slope, redshift, and the lack of extinction are then used to derive the magnitude shift $dRc=-5.12^{+0.20}_{-0.21}$ mag to $z=1$.
The derived $R_C$-band light curve still represents an observed magnitude: it is as if the GRB were at $z=1$ in a completely transparent universe.
We then compare the afterglow with the GRB afterglow light curve samples of \cite{Kann2006ApJ,Kann2010a,Kann2011a} as well as samples from upcoming publications (Kann et al., 2022a,b,c, in prep.). The result is shown in Figure \ref{fig:kann}, with GRB 210905A highlighted in red. The sample of Kann et al. (2022a), in prep. focuses on $z\gtrsim6$ GRBs, and these light curves are highlighted as thick black curves. At early times, the afterglow of GRB 210905A is seen to be among the most luminous known, albeit still fainter than the early afterglows of high-$z$ GRBs 130606A and especially 050904 \citep{Kann2007AJ}. Interestingly, the early flash of GRB 210905A aligns well in rest-frame time (between 70 and 110 s) with those seen in GRB 050904 \citep{Boer2006a}, GRB 160625B \cite[][an extremely energetic lower-redshift GRB, highlighted in blue]{Troja2017Nature}, and, with less contrast, in GRB 130606A \citep{Castro-Tirado2013a}. On the other hand, several bright prompt-associated flashes happen significantly earlier, such as the cases of GRB 080319B \citep{Racusin2008Nature,Bloom2009ApJ} and GRB 120711A \citep[][Kann et al. 2022a, in prep.]{Martin-Carrillo2014a}. Therefore, this similarity in time is likely just a chance coincidence.
An interesting result is found towards the end of the light curve. After removing the potential constant component, the combination of a late break and an early shallow decay makes the afterglow of this burst the most luminous ever detected for a certain time span, before the shallower post-break decay of the afterglow of GRB 160625B (which itself had a very late jet break, \citealt{Kangas2020ApJ}) makes the latter the most luminous known at very late times again (Kann et al. 2022c, in prep.). This provides further evidence for the extremely energetic nature of GRB 210905A.
\begin{figure}
\begin{center}
\includegraphics[width=\columnwidth,angle=0]{resKann_Plot_210905A_160625B.pdf}
\caption{The optical afterglow of GRB 210905A (red line) compared to a sample of extinction-corrected afterglows which have all been shifted to $z=1$, from \citet[][2022a,b,c, in prep.]{Kann2006ApJ,Kann2010a,Kann2011a}. Hereby, time and magnitudes are given in the observer frame, but assuming all GRBs are at $z=1$ in a perfectly transparent universe.
Light grey are LGRBs, thicker black lines GRBs with redshifts $z\gtrsim6$. All magnitudes are in the Vega system. The afterglow of GRB 210905A is the most luminous afterglow ever detected at moderately late times, before finally decaying faster than that of GRB 160625B (blue line). For this light curve, the potential constant source has been subtracted, see \S\ref{sec:constant} for more details. The late-time break in the light curve is clearly visible.}
\label{fig:kann}%
\end{center}
\end{figure}
\subsection{Dust absorption and equivalent hydrogen column densities}\label{sec:avnh}
As in other high-$z$ bursts \citep[see e.g.][]{Zafar2010a,Zafar2011a,Zafar2018a, Melandri2015a}, GRB 210905A is characterised by negligible absorption in the optical/NIR, in agreement with those expected for high-$z$ galaxies populating the faint end of the luminosity function \citep[e.g.][]{Salvaterra2011a}. In particular,
\cite{McGuire2016a} studied three $z>5.9$ GRB hosts and noted that afterglow analyses in each case pointed to low line-of-sight dust extinction.
Although a low $A_V$ is expected to correlate with a low $N_{H.X}$, the high $N_{H.X}$ value of $7.7^{+3.6}_{-3.2}\times10^{22}\, \textnormal{cm}^{-2}$ is also not exceptional. It is also observed in other environments, for example in AGNs, and can be naturally explained by the absorption of intervening metals along the line-of-sight \citep[][]{Starling2013a,Campana2015a}, which reside almost entirely in the neutral gas at $z>4.5$ \citep[e.g. ][]{PerouxHowk2020a}, although one cannot exclude the contribution of increasing gas density in the vicinity of the GRB \citep{Heintz2018e}.
The high $N_{H.X}$ is also in contrast with the $N_{\rm H\,{\sc I}} \simeq 1.35\times10^{21}\; \textrm{cm}^{-2}$
measured via the Lyman-$\alpha$ absorption-line
by \cite{Fausey2022a}. The difference can be explained by the very high number of ionising photons produced by the GRB, which could ionise the gas along the line-of-sight up to several hundred pc \citep{Saccardi2022a}.
We discuss the IGM contribution in more detail in \cite{Fausey2022a}.
\subsection{X-ray afterglow luminosity versus prompt energy}\label{sec:lxeiso}
The X-ray luminosity and the isotropic gamma-ray energy release seem to broadly follow a linear relation as already shown by \cite{dePasquale2006a} \citep[see also ][]{Nysewander2009ApJ}, suggesting a roughly universal efficiency for converting a fraction of the initial kinetic energy\footnote{Not to be confused with $E_\mathrm{kin}$, which is the energy left after the prompt phase.} into gamma-ray photons. This was later further confirmed by \cite{Davanzo2012a}. GRBs at $z>6$ also follow this relation.
We test here whether GRB 210905A follows this relation despite its luminosity. We estimate the afterglow X-ray integral flux in the $2-10$ keV rest-frame common energy band and compute the corresponding rest-frame X-ray luminosity at different rest-frame times. The $2-10$ keV rest-frame flux was computed from the observed integral $0.3-10$ keV unabsorbed fluxes and the measured photon index, $\Gamma$ (which we retrieved from the online \textit{Swift}{} Burst Analyser,
\citealt{Evans2009a,Evans2010a}) in the following way \citep[see][]{Gehrels2008ApJ,Davanzo2012a}:
\begin{equation}
f_{X,rf}(2-10 \, {\rm{keV}}) = f_X(0.3-10 \, \rm{keV})\frac{\left({\frac{10}{1+z}}\right)^{2-\Gamma}-\left({\frac{2}{1+z}}\right)^{2-\Gamma}}{{10}^{2-\Gamma}-{0.3}^{2-\Gamma}} \,.
\label{kcorr_eq}
\end{equation}
\noindent The obtained X-ray light curve was then fitted with a multiply broken power-law, after removing the time intervals showing significant flaring, and then the fits were interpolated or extrapolated to the rest-frame times $t_{rf}= 5$ min, $t_{rf}= 1$ hr, $t_{rf}= 11$ hr, $t_{rf}= 24$ hr.
As shown in Figure~\ref{fig:lxeiso} the properties of GRB\,210905A are fully consistent with the $E_\textrm{iso} - L_{X}$ correlations found for long GRBs by \cite{Davanzo2012a}.
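A minimal implementation of Eq. \ref{kcorr_eq} and of the resulting luminosity is sketched below; the photon index and flux are placeholders, and the cosmology (here \texttt{astropy}'s Planck 2018) is an assumption:
\begin{verbatim}
import numpy as np
import astropy.units as u
from astropy.cosmology import Planck18

def f_rest_2_10(f_obs_03_10, gamma, z):
    # k-correction from observed 0.3-10 keV to rest-frame 2-10 keV;
    # gamma = 2 exactly would need the logarithmic limit instead
    num = (10 / (1 + z)) ** (2 - gamma) - (2 / (1 + z)) ** (2 - gamma)
    den = 10 ** (2 - gamma) - 0.3 ** (2 - gamma)
    return f_obs_03_10 * num / den

z, gamma = 6.312, 1.9                 # placeholder photon index
f_rf = f_rest_2_10(1e-12, gamma, z)   # placeholder flux, erg cm^-2 s^-1
d_L = Planck18.luminosity_distance(z).to(u.cm).value
L_X = 4 * np.pi * d_L**2 * f_rf       # rest-frame 2-10 keV luminosity
\end{verbatim}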
\begin{figure}
\centering
\includegraphics[width=\columnwidth,angle=0]{correle_prompt_ag_long_GRB210905A_test_color_v2.pdf}
\caption{The $L_X-E_{\mathrm{iso}}$ correlation presented in \cite{Davanzo2012a} for the long GRBs of the BAT6 sample (diamonds) at different rest-frame times. The shaded area represents the $3\sigma$ scatter of the correlations. GRB 210905A is marked with a filled orange circle.}
\label{fig:lxeiso}
\end{figure}
\subsection{Long-GRB progenitors at high redshift}
At high redshift the Universe is expected to be populated by
pop-III stars, the first stars that formed out of gas clouds of pristine composition.
Chemical feedback from the supernova explosions of these very massive stars produces metal enrichment within star-forming clouds, raising the metallicity beyond a critical threshold above which we expect a slow transition of star formation from massive pop-III to solar-mass pop-II and pop-I stars \citep[e.g.][]{Schneider2006a,Maio2010a}.
Determining how this transition takes place is one of the main missing ingredients to understand galaxy formation in the early Universe.
All models \citep[e.g.][]{MeszarosRees2010a,Toma2011a,Piro2014a} predict pop-III GRBs to be very energetic events, with very long intrinsic durations of $\sim10^4$ s, making their detection possible even at the highest redshifts. In particular, \cite{Toma2011a} suggested that they can release an equivalent isotropic energy up to $\sim10^{56-57}$ erg.
In Figure \ref{fig:collim} we compare $\theta_{\rm jet}$ and the collimated energy $E_{\rm \gamma}$ of GRB 210905A with the KW sample of 43 long GRBs with reliable jet-break time estimates \citep{Tsvetkova2017,Tsvetkova2021}.
Considering the uncertainty on the collimation-corrected energy, GRB 210905A
lies just outside the $1\sigma$ confidence level of the
$E_{\mathrm{peak,}z}-E_{\rm \gamma}$
relation \citep[`Ghirlanda' relation, see ][]{Ghirlanda2004a,Ghirlanda2007a} and is thus well compatible with it.
The energy values involved in GRB 210905A, both isotropic and collimated, are large but do not significantly differ from those at low redshift (see Figs.~\ref{fig:amati} and \ref{fig:collim}). At lower $z$, other events have produced $E\gtrsim10^{54}$~erg isotropically and $E_{\gamma} \simeq 10^{52}$~erg collimation-corrected \citep[see also][]{Cenko2011a}. The most outstanding example is GRB 130427A at $z=0.3399$ (see \S\ref{sec:ori-prompt}), the most powerful GRB at $z<0.9$ \citep[e.g.][]{Maselli2014a,dePasquale2016a}.
GRB 210905A has the highest $E_\gamma$ in the Konus-\textit{Wind}{} catalogue. This and the large $E_{\mathrm{p,}z}$
suggest a large bulk Lorentz factor $\Gamma_0$ of the jet. The afterglow light curve, as reported in Figure \ref{fig:lcoptx},
decays as a power-law in both the optical and X-ray band from $\gtrsim5000$~s onwards (observer frame). This suggests that the afterglow deceleration occurred before this epoch. Following the method in \cite{Molinari2007AA}, an upper limit on this peak time provides a lower limit to the maximum bulk Lorentz factor of the jet\footnote{To derive the peak time, we assumed smoothness $k=1$ and decay indices $\alpha_\mathrm{rise}=-0.7$ and $\alpha_\mathrm{decay}=\alpha_\mathrm{1,opt}=0.69$, before and after the peak, respectively.}, namely $\Gamma_0 \gtrsim 200$, assuming a constant-density medium with $n_0=1 \;\mathrm{cm}^{-3}$ and the isotropic energy of GRB 210905A (\S\ref{sec:con-prompt}). With this estimate, and the inferred half-opening angle, the burst is consistent with the $\theta_\textrm{jet} - \Gamma_0$ broad anti-correlation reported in \citet[][see their Figure 4]{Ghirlanda2012a}.
Therefore, GRB 210905A, although extremely bright, is not separated markedly from other classical GRBs at low redshift.
In summary, no features of this event point to a pop-III origin.
\begin{figure*}
\centering
\includegraphics[width=0.4\textwidth,angle=0]{FigThZ_errs_220607ed.pdf}
\includegraphics[width=0.41\textwidth,angle=0]{FigEgEpz_220607ed.pdf}
\caption{Collimated parameters of GRB 210905A (red symbols) compared to a KW sample of 43 long GRBs from \cite{Tsvetkova2017,Tsvetkova2021}.
We assumed $\eta=0.2$ and $n=1\,\rm cm^{-3}$ for all bursts.
\textit{Left}: Half-opening angle $\theta_\textrm{jet}$ versus redshift. The dashed line within the grey area shows the relation found in \cite{Lloyd-Ronning2020a} with its error. \textit{Right}: $E_\gamma-E_{\mathrm{peak,}z}$ diagram.
As in Figure~\ref{fig:amati}, the colour of each data point represents the burst's redshift. The `Ghirlanda' relation is plotted together with its 68\% and 90\% PIs.
}
\label{fig:collim}
\end{figure*}
\subsection{Star-formation rate at very high redshift}
The rate of GRBs can be used to estimate the SFR in the remote Universe (see \S\ref{sec:intro}).
Recently, \citet[and references therein]{Lloyd-Ronning2019a,Lloyd-Ronning2020a,Lloyd-Ronning2020b} have argued
that at high redshift, the
GRB jets were,
on average, narrower than those of closer GRBs \citep[see also][]{Laskar2014a,Laskar2018a}. This would imply that more stars formed at high redshift than previously estimated, unless the GRB properties, and thus their rate, are extremely environment-sensitive \citep{Kistler2008a,Kistler2009a,Robertson2012a,Jakobsson2012a,Tanvir2012a,Japeli2016a,Palmerio2019a}.
In the left panel of Figure~\ref{fig:collim}, we report the relation found by the above authors in the $\theta_\mathrm{jet}-(1+z)$ plane. GRB 210905A is an outlier event located at $2-3\sigma$ above this relation. We observe that the half-opening angle of this GRB at $z = 6.312$ ($\theta_{\rm jet}\sim 8$ deg) is
consistent with the median value of $\theta_{\rm jet}=7.4_{-6.6}^{+11}$ deg for GRBs at $z\sim1$ but larger than the mean of $\theta\sim3.6\pm0.7$ deg found using three $z>6$ bursts \citep[GRBs 050904, 090423, 120521C,][]{Laskar2014a,Laskar2018a}. When we include GRB 210905A, the mean for the $z\gtrsim6$ bursts is $\theta\sim4.8\pm0.6$ deg, closer to the best value for $z\sim1$ events.
These findings would argue against a putative inverse correlation between $z$ and $\theta_{\rm jet}$.
\section{Conclusions}\label{sec:con}
GRB 210905A was a long burst at redshift $z=6.312$.
Our extensive and prompt follow-up observations from optical/NIR to X-ray and gamma-ray bands, starting in the first seconds, have allowed us to study in detail both the prompt and the afterglow phases.
We carried out a joint time-resolved analysis of the last of the three pulses of the prompt emission, which is shown to be in agreement with synchrotron emission, similar to other bursts at lower redshifts.
Among the sample of ten $z\gtrsim6$ GRBs known to date, GRB 210905A
stands out (together with GRB 050904), having the highest isotropic energy release and one of the highest afterglow luminosities at late times, while still being consistent with the range of values found for other long GRBs.
The temporal evolution of the afterglow can be interpreted as due to energy injection followed by a decay well in agreement with the slow-cooling scenario and a constant-density (`ISM') circumburst medium profile within the standard fireball theory.
However, the optical and X-ray afterglows are among the most luminous ever detected, in particular in the optical range at $t\gtrsim0.5$ d in the rest frame, due to very slow fading and a late jet break.
In late \textit{HST} imaging, we find evidence for an
underlying host with UV luminosity slightly larger
than that of galaxies contributing the most to star formation at $z=6-7$.
If confirmed with further observations,
the host of GRB 210905A would be the fourth and the brightest GRB host at $z>6$ detected to date. It would also be bright enough to be characterised via spectroscopy with \textit{JWST} \citep[e.g.][]{McGuire2016a}, providing one of the first and best estimates of the SFR, metallicity, and dust content of a GRB host at very high redshift.
The jet break at $\sim50$ d (observer frame) results in a half-opening angle that is larger than that of other $z>6$ bursts, thus putting into question the putative inverse dependence of the half-opening angle on redshift.
The large total energy budget of $E_{\rm total}>10^{52}$ erg associated with this GRB likely excludes all but the most extreme magnetar models as a central engine of this GRB. Therefore, our analysis leaves the Kerr black hole as the preferred scenario for the central engine of GRB 210905A.
Finally, the shallow evolution before 1 day suggests that the black hole injected energy via stratified mass ejecta with different Lorentz factors.
In summary, this burst is consistent with the `Amati', `Ghirlanda', and `Yonetoku' relations. This fact, and the agreement with the $E_{\rm iso} - L_{X}$ relation, show that GRB 210905A is a very energetic event, yet still within the upper tail of the prompt-energy and X-ray luminosity distributions of long GRBs. It is not unexpected that our view of the high-$z$ GRB Universe is biased towards the most luminous events, simply because our instruments are limited in sensitivity. In other words, despite its outstanding luminosity, it is unlikely that this GRB has an origin, such as a pop-III progenitor, different from that of low-redshift GRBs.
Gamma-ray bursts at $z\gtrsim6$ are rare events from the perspective of today's follow-up capabilities, but they are just a small part of a larger population that future proposed missions promise to uncover (e.g. \textit{THESEUS}, \citealt{Amati2018a}; \textit{Gamow}, \citealt{White2021a}) and, in synergy with the largest ground- and space-based telescopes (such as the James Webb Space Telescope), to answer open questions in modern astrophysics such as
the identification of the sources responsible for cosmic reionisation, and the evolution of SFR and metallicity across the transition from pop-III stars to pop-II and pop-I stars.
\begin{acknowledgements}
We thank the anonymous referee for providing thoughtful comments.
We acknowledge useful discussion with L. Nicastro and A. MacFadyen.
A. Rossi acknowledges support from the INAF project Premiale Supporto Arizona \& Italia.
D.D.F. and A.E.T. acknowledge support from RSF grant 21-12-00250.
D.A.K. acknowledges support from Spanish National Research Project RTI2018-098104-J-I00 (GRBPhot).
A.R., E.Pal., P.D.A., L.A., E.Pi., G.S., S.C., V.D.E., M.D.V., and A.M. acknowledge support from PRIN-MIUR 2017 (grant 20179ZF5KS).
P.D.A., A.M. acknowledge support from the Italian Space Agency, contract ASI/INAF n. I/004/11/5.
L.I. was supported by grants from VILLUM FONDEN (project number 16599 and 25501).
D.B.M. and A.J.L. acknowledge the European Research Council (ERC) under the European Union's Seventh Framework programme (FP7-2007-2013) (Grant agreement No. 725246). The Cosmic Dawn Center (DAWN) is funded by the Danish National Research Foundation under grant No. 140.
K.E.H. acknowledges support by a Postdoctoral Fellowship Grant (217690--051) from The Icelandic Research Fund.
C.G.M. acknowledges financial support from Hiroko and Jim Sherwin.
Part of the funding for GROND (both hardware as well as personnel) was generously granted from the Leibniz-Prize to Prof. G. Hasinger (DFG grant HA 1850/28-1).
This work made use of data supplied by the UK \textit{Swift}{} Science Data Centre at the University of Leicester.
\end{acknowledgements}
\bibliographystyle{aa}
Shape changes (morphodynamics) are one of the principal mechanisms through which individual cells interact with their environment \cite{yin2014cells, bodor2020cell}. These dynamics arise from the interplay between a multitude of molecules and complex signalling pathways that often organise with emergent simplicity to carry out critical cellular functions, including division and migration. T cells, specialised cells of the adaptive immune system, are highly dependent on global morphodynamics to squeeze through gaps in the extracellular matrix (ECM), in contrast to the ECM-degrading strategies other cells use (e.g. tumour cells). Despite plasticity for adjusting the mode of migration to environmental conditions, the migration of T cells is often characterised as amoeboid: fast (up to 25 \textmu m min\textsuperscript{-1} \cite{friedl1994locomotor}), with low adhesion and polarised morphologies arising due to the segregation of different cytoskeletal networks to specific subcellular compartments \cite{weninger2014leukocyte}. In this mode of locomotion, dynamic F-actin forms pseudopods at the leading edge and an actomyosin-rich uropod at the rear generates contractile forces \cite{dupre2015t} (see Fig. \ref{fig:figure1}a for a schematic). However, this canonical migration mechanism is not fixed and T cells adapt their motility to their immediate environment.
T cells are thought to toggle between exploration and exploitation states, balancing surface receptor cues for interacting with antigen-presenting or target cells (`stop') with chemokine-driven or purely exploratory searches (`run') \cite{krummel2016t}. The specific morphodynamics and force-generating mechanisms behind these states are not well understood, in part due to their large variety and adaptability in different environments \cite{fowell2021spatio}. Proposed methods for propulsion include leading edge extension and intercalation with the ECM (using either low-adhesion integrin connections or surface texture for friction), followed by contraction of the uropod for small pore sizes \cite{soriano2011vivo, fowell2021spatio}. In addition to creating friction for moving forward, the rearward flow of actin waves from the leading edge may connect with the ECM like a paddle \cite{reversat2020cellular, abercrombie1970locomotion}. However, the extent to which these methods are used in complex 3D ECM environments, and their precise organisation, are far from well-characterised.
Accurate characterisation is important as dysregulation of T cell migration processes can be highly deleterious. T cells differentiate into different effector states. For instance, antigen-specific CD4\textsuperscript{+} `helper' T cells amplify the immune response, while CD8\textsuperscript{+} `cytotoxic' T cells seek out and neutralise infected or cancerous cells \cite{nino2020cytotoxic}. Inadequate migration leaves infected and cancerous cells free to proliferate, while over-stimulation can cause inflammation-based diseases like asthma and arthritis \cite{xia2009recent}. While there are exciting immunotherapeutic avenues manipulating the migration process, these have so far disappointed \cite{rafiq2020engineering}. With quantitative representations of T cell morphodynamics, their statistics can be interpreted with high-precision and compared across conditions for potentially improved immunotherapeutic development, and mechanistic models can be developed \cite{keren2008mechanism,tweedy2013distinct,tweedy2019screening}.
One of the main challenges for analysing morphodynamics is that cells do not have obvious landmarks (e.g. legs, eyes, wings of animals), and so the important degrees of freedom must be inferred from the data itself. Where there is important landmark-like information (e.g. polarisation that can manifest as subtle morphological features), this is typically diffuse rather than precisely-locatable, which further complicates quantification. Current methods therefore do not explicitly include this information. 2D cell morphologies are often quantified using Fourier descriptors. This method decomposes the cell outline coordinates as functions of rotation around the centroid in terms of Fourier coefficients, which then represent the morphology. This approach has revealed that amoeboid migrating cells in 2D, including epithelial keratocytes and \textit{Dictyostelium} amoebae \cite{keren2008mechanism, tweedy2013distinct}, explore only a small subspace of the shapes that might be thought possible from qualitative inspection (i.e. low-dimensionality of morphology). Furthermore, the morphodynamics within this space are composed primarily of frequently-used, or `stereotyped', motifs (i.e. low-dimensionality of morphodynamics).
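To make this concrete, a minimal sketch of invariant Fourier descriptors for a 2D outline is shown below; this is our illustration of the general idea, not the exact parameterisation used in \cite{keren2008mechanism, tweedy2013distinct}, and the function name and mode count are arbitrary:
\begin{verbatim}
import numpy as np

def fourier_power(outline, n_modes=20):
    # outline: (N, 2) boundary points, evenly spaced along the contour
    z = outline[:, 0] + 1j * outline[:, 1]       # complex contour
    coeffs = np.fft.fft(z - z.mean()) / len(z)   # drop centroid: translation invariance
    power = np.abs(coeffs) ** 2                  # drop phase: rotation invariance
    return power[1:n_modes + 1] / power[1]       # normalise: scale invariance
\end{verbatim}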
Imaging of 3D cell dynamics at sufficiently high spatio-temporal resolution has only recently become available through lattice-light sheet microscopy \cite{chen2014lattice}. Whether T cells navigating complex 3D ECM environments similarly have low-dimensional morphology and morphodynamics remains to be understood. Such questions in 3D necessitate automated analysis even more than in 2D, both because such datasets are inherently harder to visualise and interpret, and because 3D environments typically induce a richer variety of morphodynamics \cite{driscoll2015quantifying}. Spherical harmonic descriptors (SPHARM), a 3D analogue of Fourier descriptors, are a promising method for quantifying 3D cell shape and connecting with motion \cite{heryanto2021integrated}. However, the representations are typically too uninterpretable for exploring morphodynamics with high precision, and use so far is primarily limited to classification or the detection of established shape changes \cite{ducroz2012characterization, medyukhina2020dynamic}. It therefore remains an open question how best to quantify 3D cell shapes without clear landmarks and interpret high spatiotemporal dynamics.
Here, we sought to combine lattice light-sheet microscopy \cite{chen2014lattice} with quantitative image analysis to explore the 3D morphodynamics of cytotoxic T cells migrating in the absence of chemoattractant cues through 3D collagen matrices \cite{galeano2016antigen}. We first created a new compact shape descriptor, based on SPHARM, but better connected to key polarisation information than current approaches. We found that T cells explore a low-dimensional morphological space, and that run-and-stop migration emerges at long timescales. We explored the morphodynamic compositions of these two modes using multiscale wavelet analysis, previously used to explore the structure of fruit fly behaviour \cite{berman2014mapping, berman2016predictability}, uncovering a global set of largely discrete stereotyped motifs. Focusing ultimately on the run mode, due to its key role in active translocation and polarised morphologies that are well-suited for analysis with our descriptor, we found that periodically oscillating morphodynamics (every $\sim$100 s) sustain forward motion. These can be understood as a biphasic process integrating previously hypothesised propulsion mechanisms \cite{reversat2020cellular, abercrombie1970locomotion}, namely: front-widening and retraction of the uropod (rear moves forward), and rearward surface motion with forward extension (front moves forward).
\section{Results}
\subsection{T Cell Shape is Low-Dimensional}
\begin{figure}[!htb]
\center{\includegraphics[]
{figures/figure1.pdf}}
\caption{\label{fig:figure1} \textbf{T cell shape can be quantified by spherical harmonic descriptors in 3D.} \textbf{(a)} Schematic of a T cell employing an amoeboid migration strategy to navigate through the extracellular matrix (ECM) in 3D. Actin polymerisation at the front results in the formation of pseudopods, and a complex of actomyosin at the rear forms the uropod, important for stability and generating contractile forces. \textbf{(b)} Complex spherical harmonic functions, $Y_{l}^{m}(\theta, \phi)$ (real parts shown for $m\geq0$), form a basis on the surface of a sphere. \textbf{(c)} Cartesian coordinates of the cell surface, $\{x, y, z\}$, are mapped to the surface of a sphere, as parameterised by polar coordinates $\{\theta, \phi\}$. The three resulting functions $\{x(\theta, \phi), y(\theta, \phi), z(\theta, \phi)\}$ are decomposed in terms of the spherical harmonic functions and transformed to be translation, scale and rotation invariant. This yields the final shape representation, $D_{l}$, based on the harmonics at each energy level, $l$, with the exclusion of $l=0$ giving the translation invariance. \textbf{(d)} Truncation of the representation at different degrees of $l$ leads to different levels of smoothing, with $l=1$ describing the ellipsoid part of the shape. \textbf{(e)} An additional descriptor, $D_{0}$, to account for cell polarisation, with the landmark-like smooth uropod at the rear and dynamic protrusions at the leading edge. Without this additional variable, the two cells shown have very similar descriptors. The standard deviation of $D_{0}$ across all datasets is 0.31, and the standard deviations of the remaining $D_{l}$ are all lower. \textbf{(f)} For cases where the uropod vanishes, the landmark-like rear can still be identified by its smoothness and stationarity, compared with the dynamic leading edge, as shown in the example. For simplicity, we refer to this region at the rear as the uropod for all frames.}
\end{figure}
We imaged primary mouse effector CD8\textsuperscript{+} cytotoxic T cells in 3D collagen matrices without chemical cues, with a lattice light-sheet microscope (LLSM) \cite{chen2014lattice} at spatial resolution of 0.145, 0.145, 0.4 \textmu m and temporal resolution of $\sim$2-5 s (see Methods for details on the imaging and pre-processing and Supplementary Fig. 1 for a representative snapshot and 3D trajectories). Spherical harmonics (Fig. \ref{fig:figure1}b) can be used to quantify 3D cell shapes, as shown in Fig. \ref{fig:figure1}c \cite{ducroz2012characterization, medyukhina2020dynamic, brechbuhler1995parametrization, kazhdan2003rotation}. The spherical harmonic functions, $Y_{l}^{m}(\theta, \phi)$, form a basis over the sphere, where $l$ is the function degree (related to frequency) and $m$ is the order (rotations at each degree). The full approach is detailed in Methods and summarised here. The Cartesian coordinates describing the cell surface are each mapped to a sphere, so as polar coordinates $\{\theta, \phi\}$ move over the sphere surface, the cell surface is traced out in object space. Analogous to a Fourier decomposition, the functions describing the cell surface can be decomposed into a set of spherical harmonic coefficients, $c_{l, i}^{m}$ with $i\in \{x, y, z\}$. The $l=0$ coefficients describe the centroid location, the $l=1$ coefficients describe the ellipsoid part of the shapes, and so on, with increasing levels of detail. Truncation of the representation at a certain $l_{max}$ therefore leads to a representation of a smoothed version of the original morphology, where higher-frequency features are filtered out (Fig. \ref{fig:figure1}d). Translation invariance is achieved by omitting the $l=0$ coefficients, scale invariance is achieved by dividing all coefficients by $V^{\frac{1}{3}}$ where $V$ is the volume \cite{zhao2017application}, and rotational invariance is achieved by transforming to a new representation, $\{D_{l}\}_{l>0}$, with
\begin{equation}
D_{l} = \sum_{i\in \{x,y,z\}}\sum^{l}_{m=0}c_{l,i}^{m}c_{l,i}^{m*},
\label{eq:rotinv}
\end{equation}
analogous to how rotational invariance can be achieved by extracting the power spectrum from Fourier descriptors of 2D cell shapes \cite{tweedy2013distinct}. There are two key problems with the descriptor in its current form, and we made two modifications to remedy these.
\begin{figure}
\center{\includegraphics[]
{figures/figure2.pdf}}
\caption{\label{fig:figure2} \textbf{T cell shape is low-dimensional as quantified with 3 principal components.} \textbf{(a)} Principal components (PCs) 1, 2 and 3 capture 74\%, 12\% and 9.8\% (total of 96\%) of the variance in $D_{l}$, respectively. \textbf{(b)} Shape changes associated with each PC ($l_{max}=3$ reconstructions), found by splitting the PCA space into 7 equal-length bins along each axis and plotting the T cell within each bin with the lowest value for the other PCs. An increasing PC 1 represents elongation and front-widening, a decreasing PC 2 represents contraction with front-widening, and an increasing PC 3 represents elongation (forward or sideways) with the centroid moving towards the uropod. \textbf{(c)} Correspondence between the principal components (PCs) and $D_{l}$ is found by inverting the minimum, mean and maximum of each PC, with the other two PCs set to zero. Red and blue indicate decreasing and increasing descriptors, respectively, as the PCs are increased. $D_{0}$ represents the closeness between the uropod and centroid, $D_{1}$ the ellipsoidal aspects, and higher descriptors represent higher-frequency shape features. \textbf{(d)} Cell reconstructions with $l_{max}=3$ at their positions in PCA space. Darkness of colour indicates increasing PC 2.}
\end{figure}
First, the coefficients are not linearly related to the spatial extent of different features. We therefore took the square root of each element, i.e. $\{D_{l}\} \to \{D_{l}^\frac{1}{2}\}$, which yields a descriptor more representative than the power spectrum \cite{shen2009modeling}. Without this operation, almost all variance is contained in the first (ellipsoid) coefficient. Second, we added an element to the shape representation to capture key polarisation information lost in a purely global shape representation. At the cell rear is the uropod, a smooth round appendage that stabilises the cell and generates contractile forces, and at the leading edge emerge dynamic, higher-frequency protrusions. The cells in frames A and B in Fig. \ref{fig:figure1}e have very similar descriptors under a regular spherical harmonic representation, reflecting the similarity of their ellipsoid components, but this misses the polarisation conveyed in subtler features. We therefore added an extra descriptor, linearly related to the distance between the uropod and centroid, $D_{0}$ (see Methods for the full expression). The standard deviation of $D_{0}$ across all datasets is 0.31, and the standard deviations of the remaining $D_{l}$ are all lower, showing that frames A and B in Fig. \ref{fig:figure1}e are approximately two standard deviations apart along the $D_{0}$ dimension. While most cells have a well-defined uropod that can be readily identified (e.g. frames A and B in Fig. \ref{fig:figure1}e), some can exhibit more spherical shapes, as shown in Fig. \ref{fig:figure1}f. However, even for these cells there is still an identifiable smooth rear opposite a dynamic leading edge, and temporal information can reveal where the uropod transiently forms. For simplicity, we refer to this region at the rear as the uropod for all frames. The ultimate representation of T cell shape is therefore $\{D_{l}\}_{l=0}^{l_{max}=15}$ with $D_{0}$ as described above and $D_{l}$ for $l>0$ the square root of the expression in Eq. \ref{eq:rotinv}.
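As an illustration, a minimal sketch of the final descriptor computation is given below, assuming the coefficients are stored in a nested structure \texttt{c[i][l][m]} (for $i\in\{x,y,z\}$) and have already been normalised by $V^{\frac{1}{3}}$; the data layout and names are ours:
\begin{verbatim}
import numpy as np

def shape_descriptor(c, uropod, centroid, volume, l_max=15):
    # c[i][l][m]: complex SPHARM coefficients for i in (x, y, z)
    D = np.zeros(l_max + 1)
    # D_0: polarisation term from the uropod-centroid distance (see Methods)
    D[0] = 1.5 * np.linalg.norm(np.asarray(uropod) - np.asarray(centroid)) \
           / volume ** (1 / 3)
    for l in range(1, l_max + 1):
        s = sum(abs(c[i][l][m]) ** 2
                for i in range(3) for m in range(l + 1))
        D[l] = np.sqrt(s)  # square root: linear in spatial extent
    return D
\end{verbatim}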
We used principal component analysis (PCA) to identify a set of uncorrelated linear features, or principal components (PCs), from the initial high-dimensional shape representation, $\{D_{l}\}$. Despite the lack of obvious constraints from manual inspection, Fig. \ref{fig:figure2}a shows only three PCs are required to capture $\sim$96\% of the variance in the data (74\%, 12\% and 9.8\% for PCs 1, 2 and 3, respectively). The rotational invariance means that the PCA coordinates are not invertible to unique shapes. To better isolate what features each PC describes, we therefore split the PCA space into 7 equal-length bins along each axis and plotted the T cell within each bin with the lowest value for the other PCs, shown in Fig. \ref{fig:figure2}b for $l_{max}=3$ reconstructions and Supplementary Fig. 2a for full cells (and Supplementary Fig. 2b shows the PC values of these plotted cells). Fig. \ref{fig:figure2}c shows which $D_{l}$ transitions these PCs correspond to, with the minimum, mean and maximum inverted for each PC (with the other PCs set to zero), and Supplementary Fig. 2c shows the vector composition of each PC. An increasing PC 1 represents elongation and front-widening, a decreasing PC 2 represents contraction with front-widening, and an increasing PC 3 represents elongation (forward or sideways) with the centroid moving towards the uropod. Fig. \ref{fig:figure2}d shows a sample of cells (with $l_{max}=3$) at their locations in the PC space. Supplementary Fig. 2d shows that along the main axis of variation (PC 1), dimensionality is relatively constant, and Supplementary Fig. 2e shows only modest differences in the spherical harmonic spectra of the low and high PC 1 populations.
Uncertainty in the uropod label, a diffuse region rather than a precisely-locatable point, can be quantified and propagated to downstream variables of interest (Supplementary Fig. 3 and Methods). Uropod uncertainty was found using the curvature around the labelled point (Supplementary Fig. 3a-b), and then PC uncertainties were calculated by re-computing $D_{0}$ using each point on the cell rear within this uncertainty (Supplementary Fig. 3c). The mean percent uncertainty in $D_{0}$ is 1.5\%, which is lower than the uropod uncertainty since cell rears are typically perpendicular to the axis defined by the centroid and uropod. The percentage uncertainties of the PCs (relative to their standard deviations) are 4.4\%, 0.30\% and 9.2\% for PCs 1-3, respectively.
\subsection{Run-and-stop migration emerges over long timescale}
To connect morphodynamics with migration strategies, variables describing cell motion are required. There are two landmark-like features of the cell that move through the ECM, the uropod and the centroid, and we calculated velocity vectors for both, invariant to cell scale (i.e. units of s\textsuperscript{-1}; see Methods). To ensure uropod velocities have adequate signal-to-noise ratio (SNR), where noise arises from uropod labelling uncertainty, we found for each dataset the mean time taken for the uropod to move a significant distance, $\tau_{sig}$, and then computed velocities using running means over position with a time window of $\tau_{sig}$ (see Methods and Supplementary Fig. 3d for details). We then calculated speeds, which are 1D and rotationally-invariant, unlike velocities. The uropod and centroid speeds alone cannot separate distinct behaviours at small timescales, like translation and rotation (Supplementary Fig. 4a), and so we searched for a biologically-meaningful reference frame. We found that long-timescale migration is typically along the axis defined by the uropod and centroid (the UC axis), rather than the ellipsoid major axis (Supplementary Fig. 4b). The speeds of the uropod and centroid along this axis then better differentiate distinct motifs (Supplementary Fig. 4c), and Supplementary Fig. 4d shows these describe largely irreversible motion, with the uropod speed having lower variance and fewer reversals. Fig. \ref{fig:figure1}f and Supplementary video 9 show dynamics that we observed in some datasets, where the cell appears to test routes with multiple extensions and retractions but a relatively static uropod, before committing with the uropod. We therefore selected uropod speed along the UC axis as the variable for cell motion (Fig. \ref{fig:figure3}a), and henceforth refer to it simply as speed.
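A minimal sketch of this speed computation (assuming a single cell volume per dataset, with \texttt{win} the running-mean window in frames, $\approx\tau_{sig}$; names are illustrative):
\begin{verbatim}
import numpy as np

def uc_speed(uropod, centroid, volume, dt, win):
    # uropod, centroid: (T, 3) position tracks
    k = np.ones(win) / win
    smooth = lambda p: np.stack(
        [np.convolve(p[:, i], k, mode='valid') for i in range(3)], axis=1)
    u, c = smooth(uropod), smooth(centroid)
    v = np.diff(u, axis=0) / dt                      # uropod velocity
    uc = c[:-1] - u[:-1]
    uc /= np.linalg.norm(uc, axis=1, keepdims=True)  # unit UC axis
    # project onto the UC axis; divide by cell length scale (units: 1/s)
    return np.einsum('ij,ij->i', v, uc) / volume ** (1 / 3)
\end{verbatim}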
\begin{figure}
\center{\includegraphics[width = 0.7\linewidth]
{figures/figure3.pdf}}
\caption{\label{fig:figure3} \textbf{Run-and-stop migration emerges over long timescales.} \textbf{(a)} Speed is defined as the uropod speed along the uropod-centroid (UC) axis, $\| (\Delta\textnormal{uropod}/\Delta t) \|\cos{\varphi}$, with smoothed uropod and centroid locations, and a further operation for invariance to cell scale (see Methods). \textbf{(b)} Cumulative speed plots show some cells have repeated phases of high speed (e.g. Cell A) while others have much lower speeds (e.g. Cell B). Lines are coloured by the maximum distance traveled divided by total dataset duration. \textbf{(c)} This is despite significant uropod motion in some cases. Meshes are shown every $\sim$104 s and 103 s for Cells A and B, respectively. \textbf{(d)} Histograms of speed with different running mean windows ($t_{win}$). At small timescales, differences in speed between cells can be indistinguishable because most exhibit phases of low speed, highlighted for Cells A and B. However, bimodality into two modes (run-and-stop) emerges at around 150 s.}
\end{figure}
Two migration modes separate out at long timescales, as shown in plots of cumulative speed (Fig. \ref{fig:figure3}b): repeated phases of high speed, making significant progress forward, e.g. Cell A; and lower speeds, yielding little progress, despite significant uropod motion in some cases, e.g. Cell B (Fig. \ref{fig:figure3}c). Fig. \ref{fig:figure3}d shows that, while at small times the dynamics can be indistinguishable (both modes have phases of near-zero speed), run-and-stop bimodality emerges at approximately 150 s. This bimodality is consistent with conclusions from lower-resolution experiments, where long-timescale trajectories of single cells have been modelled with Lévy-type random walks (characteristic of switching between stop and run modes) \cite{harris2012generalized}. Interestingly, another study suggested more complex statistics, with cells divided into sub-populations described by distinct random walk models \cite{banigan2015heterogeneous}. PCs 1 and 2 have a stronger correlation with the run-and-stop mode than with instantaneous speed, indicating that shape is specialised more for migration mode than instantaneous speed, with cells in the run mode longer and thinner than those in the stop mode (Supplementary Fig. 5a). We next explored the morphodynamics behind these migration modes.
\subsection{Stereotyped morphodynamics underlie migration modes}
We analysed longer duration datasets for each of the run and stop modes to investigate how they differ (Supplementary videos 1-4 and 5-8 for the run and stop modes, respectively). We first computed the autocorrelation functions of the shape (PCs 1-3) and speed dynamics (using high SNR timeseries; see Methods for details). The autocorrelation function (ACF) is the correlation of a timeseries with a lagged version of itself, as a function of the lag, which can reveal the presence of latent variables preserving information across time. We found an autocorrelation decay time, $\tau\textsubscript{ACF}$, by fitting an exponential decay model to the peaks of the oscillating ACFs (Supplementary Fig. 5b), and these decay times are indicative of the timescales over which processes are likely guided more by internal cytoskeletal machinery than stochastic external cues. For the stop mode, PC 3 is more autocorrelated than the other variables (mean $\tau\textsubscript{ACF} \sim$250 s compared with $\sim$150 s for the other variables; Supplementary Fig. 5b). PC 3 dynamics are suggestive of sensing: forward extension with a tentative rearward centroid, and reaching sideways. See Supplementary videos 5-9, with the three included in the PC 3 ACF analysis coloured by PC 3. For the run mode, the main differences are a decrease in the PC 3 autocorrelation (to $\sim$150 s) and an increase in the speed and PC 2 (contraction with front-widening) autocorrelations (to $\sim$225 s). The power spectra in Supplementary Fig. 5c show the run mode has larger oscillations in speed and PC 2, particularly for 0.005-0.01 Hz. The run mode is therefore associated with faster oscillations in speed and PC 2 that typically remain autocorrelated for longer than those of the stop mode. These ACFs give a global perspective on morphodynamics, and the presence of long timescales suggests that the morphodynamics are, as with the morphologies, low-dimensional. We therefore next zoomed in on the PC timeseries to interpret the organisation of local morphodynamics, or `behaviours', that underlies this low-dimensionality.
\begin{figure}
\center{\includegraphics[]
{figures/figure4.pdf}} \caption{\label{fig:figure4}\textbf{Morphodynamics are organised in stereotyped motifs.} \textbf{(a)} 11 stereotyped motifs are local peaks in a probability density function (PDF) over the spectrogram embeddings in morphodynamic space, where different locations represent different local morphodynamics. These form more of a discrete set than a continuum, and some examples of the local PC series are shown (red, blue and green lines for PCs 1-3, respectively). \textbf{(b)} Example stereotyped motifs, with frames evenly-spaced across 150 s. \textbf{(c)} Utilisation of motifs in the stop and run modes. Red and blue indicate the speed running mean is above 0.005 s$^{-1}$ (run mode) and below 0.0025 s$^{-1}$ (stop mode), respectively (selected from the bimodal distribution in Fig. \ref{fig:figure3}d), and grey indicates it is in between these values (e.g. transitions). \textbf{(d)} The transition probability matrix reveals how cells move around the morphodynamic space, counting transitions only once the cell moves to a different motif.}
\end{figure}
The continuous wavelet transform is a method for finding local morphodynamics (behaviours) from a timeseries of morphologies, and has been used to map stereotyped behaviour in fruit flies \cite{berman2014mapping, berman2016predictability}. See Methods for the full pipeline and Supplementary Fig. 6a for a schematic. Wavelets are used to transform the timeseries into a spectrogram with multiscale dynamic information. Dimensionality reduction with t-SNE \cite{van2008visualizing} can then be performed to map the spectrogram to an interpretable 2D morphodynamic space, where different locations represent different local morphodynamics (Fig. \ref{fig:figure4}a), and Supplementary Fig. 6b shows the dimensionality reduction is robust across different hyperparameters. Stereotyped motifs are those that are frequently performed, and so correspond to peaks in the probability density function (PDF) of spectrogram embeddings in this space. We used wavelets with a maximum width of influence of 150 s, the approximate timescale of organisation found from the autocorrelation analysis.
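A sketch of this pipeline using off-the-shelf tools (PyWavelets and Scikit-learn) is given below; the scale values passed in would be chosen per wavelet as described in Methods:
\begin{verbatim}
import numpy as np
import pywt
from sklearn.manifold import TSNE

def morphodynamic_space(pcs, dt, scales):
    # pcs: (T, 3) timeseries of shape PCs
    rows = []
    for i in range(pcs.shape[1]):
        for wav in ('mexh', 'gaus1'):  # symmetric + antisymmetric wavelets
            coeffs, _ = pywt.cwt(pcs[:, i], scales, wav, sampling_period=dt)
            rows.append(coeffs)
    spec = np.vstack(rows)             # spectrogram: (n_features, T)
    return TSNE(n_components=2, perplexity=30,
                learning_rate=200, n_iter=1000).fit_transform(spec.T)
\end{verbatim}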
We found that behaviours are organised into more of a discrete set rather than a continuum (Fig. \ref{fig:figure4}a): `islands' between which cells jump, and we could therefore categorise and interpret these individually. Fig. \ref{fig:figure4}b shows key examples, with frames evenly-spaced over a 150 s interval (with the remainder and further examples in Supplementary Fig. 7), and Supplementary Fig. 8 shows the PC dynamics of three examples from each motif. Fig. \ref{fig:figure4}c shows how these are utilised differently in the run and stop modes. Red and blue indicate the speed running mean is above 0.005 s$^{-1}$ (run mode) and below 0.0025 s$^{-1}$ (stop mode), respectively (selected from the bimodal distribution in Fig. \ref{fig:figure3}d), and grey indicates it is in between these values (e.g. transitions). In the stop mode, stereotyped motifs include static shape or minor reaching with centroid towards rear (4); forward lengthen with centroid towards front (5); and edge centroid forward (9) (Fig. \ref{fig:figure4}b). In the run mode, stereotyped motifs include front-widen then streamline and extend (8); and retract and front-widen then extend (2). A probability matrix for transitions between the stereotyped motifs is shown in Fig. \ref{fig:figure4}d, with rows and columns corresponding to the start and end motifs, respectively. We assigned points to the closest stereotyped motif and counted transitions only once the cell moves to a different motif (i.e. diagonal entries are zero). Frequent transitions include from 3 to 7 (retract to reach to one side) and from 8 to 1 (front-widen then streamline and extend to front-widening).
\begin{figure}
\center{\includegraphics[]
{figures/figure5.pdf}} \caption{\label{fig:figure5}\textbf{Periodic oscillations in PC 2 underlie the run mode.} \textbf{(a)} Entropy of the run mode marginal dynamics for each PC (5 repeats for each) shows a minimum for PC 2, and therefore that these dynamics are the most stereotyped, consistent with the autocorrelation results of Supplementary Fig. 5b. Markov chain entropies were calculated for transitions on grids over morphodynamic spaces for each PC, found by repeating the wavelet analysis with each PC on its own. \textbf{(b)} Dynamics in the PC 2 morphodynamic space for the run mode, where different locations represent different local PC 2 morphodynamics (and decreasing PC 2 represents contraction and front-widening). Tracking the trajectories of the longer duration datasets reveals periodic oscillations of varying amplitude. Local PC 2 dynamics in 150 s windows are shown inset at key points in the morphodynamic space, showing outer rings represent higher-amplitude oscillations, with a region for particularly large PC 2 decreases, bottom right. The top left corner represents rearward surface motion and the bottom right corner represents contraction and front-widening. Regions in between represent transitions between these motifs. Maximum uropod speeds correspond to contraction and front-widening. \textbf{(c)} These results are suggestive of the following cyclic morphodynamic propulsion mechanism: the leading edge widens, likely intercalating with the ECM and contracting the uropod; the leading edge then extends forward, as the previously-widened leading edge regions undergo a rearward motion that may connect with the ECM like a paddle. This cycle repeats every $\sim$100 s. \textbf{(d)} An example showing these oscillations in Cell 1, coloured by PC 2.}
\end{figure}
We next looked in detail at the run mode, of particular interest as this is when cells use global morphodynamics for active translocation through the ECM, and because it is in all cases defined by polarised morphologies, for which our descriptor was designed. First, we repeated the wavelet analysis with the longer duration datasets of the run mode, finding more of a continuum than the global morphodynamic space, but for which stereotyped motifs can still be categorised (Supplementary Fig. 9a-b, with PC timeseries of three examples from each motif in Supplementary Fig. 10). Supplementary Fig. 9c-d shows the speeds and transition probability matrix. Aside from a turning motif, all fall into two categories: compression, and a rearward surface motion with extension forward (rearward with respect to the cell frame of reference, and relatively static in the lab frame). The precise motifs are then variants on these base behaviours, e.g. whether there is also widening. These underlying morphodynamics, omitting the distracting variations, are most characteristic of PC 2 dynamics, and this connection between PC 2 dynamics and migration is consistent with the increased autocorrelation timescales and power of PC 2 relative to the stop mode (Supplementary Fig. 5b-c).
To test this theory, we calculated the entropy of each PC's morphodynamics. We did this by repeating the wavelet analysis for each PC on its own and calculating the Markov chain entropy for transitions on a grid over the resulting morphodynamic space (see Methods and Supplementary Fig. 9e). We used grids since these dynamics formed continuums rather than discrete, categorisable morphodynamic spaces. We found an entropy minimum for PC 2 (Fig. \ref{fig:figure5}a), confirming that PC 2 dynamics are the most stereotyped. Fig. \ref{fig:figure5}b shows how all four cells follow the same circular oscillations of varying radius in the space of PC 2 morphodynamics. Supplementary videos 1-4 are labelled as Cells 1-4. Outer and inner rings represent high and low amplitude oscillations, respectively, and there is a region for particularly large decreases in PC 2 (but not for large increases). These results, in conjunction with Supplementary videos 1-4, coloured by PC 2, suggest the following morphodynamic propulsion mechanism (sketched in Fig. \ref{fig:figure5}b-c): the leading edge widens, likely intercalating with the ECM to contract the uropod (PC 2 decreases); the leading edge then extends forward, as the previously-widened leading edge regions undergo a rearward flow that may connect with the ECM like a paddle, ultimately streamlining (PC 2 increases). This cycle is repeated every $\sim$100 s, and explains the oscillations in (uropod) speed observed in Fig. \ref{fig:figure3}d. Fig. \ref{fig:figure5}d shows an example section of Supplementary video 1, coloured by PC 2. These results suggest T cells utilise a highly periodic internal machinery to generate a sustained migration effort, alternating between two previously proposed propulsion mechanisms to move the uropod then leading edge forward \cite{reversat2020cellular, fowell2021spatio, abercrombie1970locomotion}. A plausible mechanistic basis for the rearward morphodynamic flow is retrograde cortical actin flow, a process that has been implicated in amoeboid migration in a number of cells, including T cells \cite{abercrombie1970locomotion, reversat2020cellular}. However, further investigations of internal actin dynamics are needed to explore this connection.
\section{Discussion}
T cells are a key part of the adaptive immune system, migrating through the extracellular matrix (ECM) to neutralise infected and cancerous cells. However, their morphodynamics have not yet been completely quantitatively mapped in 3D. Here, we used lattice light-sheet microscopy (LLSM) to acquire datasets of primary mouse cytotoxic T cells migrating through a collagen matrix with high spatiotemporal resolution. Using a novel shape descriptor that incorporates key polarisation information with a uropod label, we found that shape was low-dimensional. Run-and-stop migration emerges at long timescales ($\sim$150 s), and global morphodynamics are stereotyped, forming a discrete set rather than a continuum. Stop mode morphodynamics primarily involve oscillations in centroid movement towards the uropod, with extension forwards or sideways (PC 3 dynamics), and these remain autocorrelated for long timescales (decay time, $\tau\textsubscript{ACF} \sim$250 s). The run mode (i.e. active translocation) arises from periodic oscillations in PC 2, with a period of $\sim$100 s and $\tau\textsubscript{ACF} \sim$225 s: the leading edge widens, likely using intercalation with the ECM to contract the uropod (PC 2 decreases); the leading edge then extends forward, as the previously-widened leading edge regions undergo a rearward motion that may connect with the ECM like a paddle, ultimately streamlining (PC 2 increases). These results indicate that periodicity in the cellular machinery helps sustain forward motion during active translocation.
Uropod tracking proved vital for differentiating key morphological and morphodynamic states. Uropod uncertainties were then required to ensure analysis was at sufficient signal-to-noise ratio (SNR), because the uropod is a diffuse region rather than a precisely-locatable point. In analogy to the role of the Hessian matrix in parameter fitting, we found this could be achieved relatively simply by quantifying uropod uncertainty through the curvature of the cell rear, then propagating this to downstream variables of interest. The inclusion of landmark-like but diffuse features will likely become more important as methods for tracking intracellular structures at high spatiotemporal resolution continue to improve, meaning spatial regions can be associated with specific internal organisation and activity \cite{mckayed2013actin}. In a small number of cases (e.g. Supplementary video 7), thin fluid-like protrusions extend out of the uropod, which cause dynamics in $D_{0}$ that are unlikely to be important for migration. To reduce these effects, in future work we will explore labelling uropods based on smoothed reconstructions (with e.g. $l_{max}=15$). We found that uropod definition was reduced for some cells in a long-lived stop mode; these cells therefore had high uncertainties for some PCs and were omitted from analyses. This may be indicative of loss of polarisation, so for these modes alternative shape descriptors may be more appropriate.
Internal retrograde actin flow has been a hallmark of cell migration models for decades, since Abercrombie first observed centripetal flow of particles on fibroblast surfaces \cite{dupre2015t, abercrombie1970locomotion}. However, Abercrombie also proposed a second propulsion mechanism, where rearward flows of surface deformation might push the cell forward like a paddle. Such morphodynamic flows (or `waves') have recently been observed in 2D migrating \textit{Dictyostelium} cells \cite{driscoll2012cell}, and in T cells embedded in microfluidic channels where they can enable migration without any adhesion \cite{reversat2020cellular}. To our knowledge, however, they have not been characterised in 3D ECM environments. Through inhibition at obstacles and activation on the opposite side, flows may also aid turning as has been described in neutrophils \cite{weiner2007actin}, and the lateral protrusions likely serve as an anchor in confined geometries \cite{tozluouglu2013matrix, mandeville1997dynamic}. Analysis of actomyosin dynamics, as well as tracking of the ECM fibres (perhaps with a contact map over the cell surfaces), would help test the connection between the rearward surface motion and internal actin dynamics, and the specific nature of how these interact with the ECM for anchoring and propulsion. The analysis would also reveal the extent to which decreasing PC 2 (contraction with front-widening) is driven by contact with fibres, although the periodic PC 2 dynamics across all run mode cells suggest this may predominantly be internally regulated.
Exciting areas for future work include the extension of the analysis to the timescale of hours, where the statistics and morphodynamics of switching between run and stop modes could be interpreted at the single-cell level, and the hierarchical organisation of the stereotyped motifs could be mapped. There are technical challenges, however: individual cells would have to be followed and migration distances would exceed the scales of current LLSM fields of view. Dataset sizes might also become problematic, given that a 20 min video corresponds to 1 TB of data (with one colour). Furthermore, non-stationarities such as ageing, differentiation and activation may come into effect \cite{metzner2015superstat}. It would also be interesting to build statistical models of T cell morphodynamics \cite{tweedy2019screening}, which may then enable the development of mechanistic models \cite{zhu2016comp}, connecting morphodynamics to both extra- and intra-cellular processes.
Ultimately, we hope quantitative morphodynamic analyses of T cells navigating the complex ECM environment will aid comparison of migration across different conditions (e.g. tissues, drugs and cell mutants). In particular, the prevalence of, and switching between, human-labelled modes of migration such as chimneying, mesenchymal, amoeboid (blebbing), finger-like, and rear-squeezing could be put on firm objective grounds \cite{zhu2016comp, yamada2019mechanisms}. These advanced morphodynamic analyses will in turn help the development of mechanistic models, with a view to enhanced understanding and more effective immunotherapeutics.
\section{Methods}
\begin{small}
\subsection{LLSM imaging and pre-processing}
\subsubsection{Lattice light-sheet microscopy (LLSM)}
LLSM experiments were either performed on a custom-built system described in \cite{geoghegan20214d}, or on a Zeiss Lattice Light Sheet 7 microscope (Zeiss, Oberkochen, Germany). OT1-Lifeact-GFP T cells \cite{galeano2020lifeact} labelled with CellTracker Deep Red dye were excited at 642 nm, OT1-mT/mG at 561 nm and plain OT1-Lifeact-GFP T cells at 488 nm. For all experiments performed on the home-built system, Point Spread Functions were measured using 200 nm Tetraspeck beads. The acquired datasets were deskewed and deconvolved using LLSpy, a Python interface for processing of LLSM data. Deconvolution was performed using a Richardson-Lucy algorithm with the PSFs generated for each excitation wavelength. Datasets acquired on the Zeiss system were deskewed using the Zeiss Zen (blue edition) software. All data were acquired at 37$^{\circ}$C and 5\% humidified CO\textsubscript{2}. The voxel size was 0.1x0.1x0.2 \textmu m\textsuperscript{3} for the home-built system and 0.145x0.145x0.4 \textmu m\textsuperscript{3} for the Zeiss system. The temporal resolution was 2.5 s per frame for the OT1-mT/mG datasets, 5.6 s per frame for the OT1-Lifeact-GFP-CellTracker Deep Red datasets (both imaged on the home-built system) and 4.17 s per frame for the plain OT1-Lifeact-GFP datasets imaged on the Zeiss system. We collected 29 datasets with 2,850 frames altogether, with a mean and standard deviation across datasets of 98 and 78 frames, respectively.
\subsubsection{Image segmentation}
Before further processing, membrane Tomato signal was denoised using a deep-learning approach based on Content-Aware Image Restoration (CARE) \cite{weigert2018content}. CellTracker Deep Red and membrane Tomato signal were bleach-corrected using FIJI \cite{schindelin9714}. Cell surfaces were segmented using Imaris 8.4.1 (Bitplane, Zurich, Switzerland). To minimise the occurrence of holes in the surfaces, smoothing factors between 0.35 \textmu m and 0.8 \textmu m were applied, depending on the signal-to-noise ratio. Cell surface triangulations were exported using custom Matlab code and again analysed for surface holes. If required, surface holes were eliminated by custom Matlab code based on closing operations.
\subsubsection{Sample preparation}
Primary murine OT1-Lifeact-GFP and OT1-mT/mG cytotoxic T cells were isolated and cultured as previously described \cite{galeano2020lifeact}. All imaging was done with T cells cultured over 6 or 7 days. For imaging on the home-built system, OT1-Lifeact-GFP T cells were labelled with 100 nM CellTracker Deep Red dye (ThermoFisher Scientific, Waltham, USA).
Keeping all components on ice, collagen matrix solution was prepared by adding 10 \textmu l of 10x PBS, 1.15 \textmu l 1N NaOH and 39 \textmu l T cell medium (TCM), consisting of phenol-free RPMI 1640, 10\% foetal calf serum, 1 mM sodium pyruvate, 10 mM HEPES, 100 U/ml penicillin, 100 \textmu g/ml streptomycin, 2 mM L-glutamine and 50 \textmu M $\beta$2-mercaptoethanol (all from Gibco, ThermoFisher Scientific, Waltham, USA), to 50 \textmu l liquid-phase rat-tail collagen I ($\sim$3 mg/ml; Corning, New York, USA). Coverslip and imaging dish glass surfaces were treated with 2\% (3-aminopropyl) triethoxysilane in ethanol and 6\% glutaraldehyde to facilitate firm attachment of collagen gels. For imaging on the home-built LLSM, 6 \textmu l of collagen mix were placed onto surface-treated round 5 mm coverslips (Warner Instruments, Hamden, USA) and polymerised at 37$^{\circ}$C for 15 min. After polymerisation, 10\textsuperscript{5} T cells in phenol-free TCM were seeded on top of the gel and allowed to infiltrate over 3 h before imaging. For imaging on the Zeiss LLS system, 10\textsuperscript{5} T cells were added to TCM during collagen matrix mix preparation. 70 \textmu l of collagen mix were added to wells of 35 mm imaging dishes (Mattek, Ashland, USA) and polymerised at 37$^{\circ}$C for 30 min. After polymerisation, 1 ml of pre-warmed phenol-free TCM was added to the dish and cells were allowed to recover for 1 h before imaging.
\subsection{Quantifying 3D cell morphology}
Cell morphologies were quantified using SPHARM. First, the cell surface, described with 3 Cartesian coordinates, $\{x, y, z\}$, is mapped to the unit sphere, described with polar coordinates $\{\theta, \phi\}$, such that the three Cartesian coordinates are functions of the polar coordinates: $\{x(\theta, \phi), y(\theta, \phi), z(\theta, \phi)\}$. $\{x(\theta, \phi), y(\theta, \phi), z(\theta, \phi)\}$ are then decomposed in terms of the spherical harmonics, $Y_{l}^{m}(\theta, \phi)$, and only $m\geq0$ functions are required \cite{styner2006framework}. For $x$, for example,
\begin{equation}
x(\theta, \phi) = \sum_{l=0}^{\infty}\sum_{m=0}^{l}c_{l, x}^{m}Y_{l}^{m}(\theta, \phi),
\label{eq:decomposition}
\end{equation}
and the (in general complex) coefficients, $c_{l, i}^{m}$ with $i\in \{x, y, z\}$, represent the morphology. We used the SPHARM-PDM software package \cite{styner2006framework} to find the coefficients for the T cells with $l_{max}=15$ and cell meshes converted to voxel grids with a spatial resolution of 0.5 \textmu m for computational speed. The additional variable for capturing polarisation information was $D_{0}$. This was the distance between the uropod and centroid multiplied by $\frac{3}{2}$, with the numerator reflecting the fact that the harmonics are summed over 3 spatial coordinates and the denominator accounting for the fact that the coefficients have a spatial extent double their magnitude. The uropod was manually selected (aiming for its centre) in alternating frames and linearly interpolated. PCA is a dimensionality reduction method that finds a set of uncorrelated linear features (the principal components, PCs), which are the eigenvectors of the data covariance matrix (which for $D_{l}$ has dimensions 16 $\times$ 16) \cite{wold1987principal}. Supplementary Fig. 2c shows the vector composition of each PC. As explored through the main text, PC 1 is largely associated with transitions between run and stop mode morphologies, PC 2 is largely associated with morphological transitions in the run mode, and PC 3 is largely associated with morphological transitions in the stop mode. For implementing PCA, we used the Scikit-learn Python package.
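For reference, a minimal PCA sketch (random placeholder data standing in for the real frames-by-16 descriptor matrix):
\begin{verbatim}
import numpy as np
from sklearn.decomposition import PCA

D = np.random.rand(1000, 16)  # placeholder for the real descriptor matrix
pca = PCA(n_components=3).fit(D)
print(pca.explained_variance_ratio_)  # ~[0.74, 0.12, 0.098] on the real data
pcs = pca.transform(D)                # (n_frames, 3) morphology coordinates
\end{verbatim}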
\subsection{Uncertainty quantification}
The uncertainty in the uropod label depends on the curvature of the cell rear, which we quantified using the mean curvature averaged across the 15 closest mesh vertices to the labelled point (with a sub-sampled mesh for computational speed). We then defined the positional uncertainty as the chord length associated with a 20$^{\circ}$ rotational perturbation. To convert this to PC uncertainties, we found the set of possible $D_{0}$ values using mesh vertices within this uncertainty (i.e. within one chord length of the uropod label), calculated the standard deviation, and converted to PC uncertainties by multiplying by the cosine of the angle between the $D_{0}$ and PC vectors in $\{D_{l}\}$ space. This process was repeated for every 10 frames in each dataset to get a single characteristic uncertainty for each PC (the mean) for each dataset. $\tau_{sig}$ was calculated as the mean time taken for the uropod to move twice the chord length. Some cells have uropods that are near-stationary, and therefore have a $\tau_{sig}$ comparable with the full dataset duration. To account for such cases, we used a maximum $\tau_{sig}$ of 100 s, in order that these could be plotted for comparison with dynamic cells, but we excluded them from quantitative analysis.
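A sketch of this propagation (assuming rotation about the centroid, a unit-norm PC loading vector over $\{D_{l}\}$ space with $D_{0}$ as its first component, and descriptors already normalised for scale; names are illustrative):
\begin{verbatim}
import numpy as np

def pc_uncertainty(verts, uropod, centroid, volume, pc_vec,
                   theta=np.deg2rad(20)):
    r = np.linalg.norm(uropod - centroid)
    chord = 2 * r * np.sin(theta / 2)  # chord of a 20 degree perturbation
    near = verts[np.linalg.norm(verts - uropod, axis=1) < chord]
    # recompute D_0 for every candidate uropod position on the cell rear
    d0 = 1.5 * np.linalg.norm(near - centroid, axis=1) / volume ** (1 / 3)
    # project the D_0 spread onto the PC direction (D_0 is component 0)
    return d0.std() * abs(pc_vec[0])
\end{verbatim}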
\subsection{Finding a motion variable for small timescales}
We calculated uropod and centroid velocities by finding the displacements between consecutive positions (smoothed with running means over $\tau_{sig}$ for both, for consistency) and dividing by the time step and cube root of cell volume (for invariance to cell scale). The ellipsoid major axis was calculated as the eigenvector with the largest eigenvalue of $A^{T}A$ where $A$ is a matrix of the $l=1$ spherical harmonic coefficients \cite{brechbuhler1995parametrization}: $\frac{\sqrt{3}}{3\sqrt{2\pi}}(\mathbf{c}_{1}^{-1}-\mathbf{c}_{1}^{1}, i(\mathbf{c}_{1}^{-1}+\mathbf{c}_{1}^{1}), \sqrt{2}\mathbf{c}_{1}^{0})$. For comparing the uropod-centroid (UC) and ellipsoid axes, we used running means for uropod and centroid with a time window of 100 s for long-timescale behaviour. We compared cells where the uropod speed was above 0.0025 s$^{-1}$, i.e. moving more than a quarter cell length in 100 s, and the distance between the uropod and centroid velocity vectors was within half the uropod speed, i.e. they were aligned.
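A sketch of the major-axis computation (assuming the $l=1$ coefficient vectors are supplied as a dictionary keyed by $m$, and taking the coefficient combinations as rows of $A$; for a real surface these combinations are real up to rounding):
\begin{verbatim}
import numpy as np

def ellipsoid_major_axis(c1):
    # c1: dict mapping m in {-1, 0, 1} to 3-vectors of l=1 coefficients
    k = np.sqrt(3) / (3 * np.sqrt(2 * np.pi))
    A = k * np.stack([c1[-1] - c1[1],
                      1j * (c1[-1] + c1[1]),
                      np.sqrt(2) * c1[0]]).real
    w, v = np.linalg.eigh(A.T @ A)
    return v[:, np.argmax(w)]  # eigenvector with the largest eigenvalue
\end{verbatim}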
\subsection{Timeseries autocorrelation functions and power spectra}
Autocorrelation functions and power spectra were computed for longer duration datasets for each of the run and stop modes. We removed timeseries with low SNR: PC timeseries where the ratio of the signal standard deviation to the PC uncertainty was below 2.5 and speed timeseries where $\tau_{sig}$ was of a similar scale to the full dataset duration. There was one removal for each of PC 1, 3 and speed, across different cells. We calculated the autocorrelation on de-trended timeseries, in order to only capture statistically significant correlations, removing trends with frequencies lower than 0.0025 Hz (corresponding to a period of approximately half the total dataset duration) with a Butterworth highpass filter \cite{butterworth1930theory}. We then found a decay time, $\tau\textsubscript{ACF}$, by fitting an exponential decay model, $y=e^{-\frac{x}{\tau\textsubscript{ACF}}}$, to the peaks of the ACF (rather than the full ACF, which is more appropriate for non-oscillatory patterns).
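A sketch of this procedure (the Butterworth filter order is not stated above and is assumed to be 2 here):
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit
from scipy.signal import butter, filtfilt, find_peaks

def acf_decay_time(series, dt, f_cut=0.0025):
    b, a = butter(2, f_cut, btype='highpass', fs=1 / dt)
    x = filtfilt(b, a, series - np.mean(series))  # de-trended timeseries
    acf = np.correlate(x, x, mode='full')[len(x) - 1:]
    acf /= acf[0]
    peaks, _ = find_peaks(acf)                    # peaks of the oscillating ACF
    lags = np.concatenate(([0], peaks)) * dt
    vals = np.concatenate(([1.0], acf[peaks]))
    popt, _ = curve_fit(lambda t, tau: np.exp(-t / tau), lags, vals, p0=[100.0])
    return popt[0]
\end{verbatim}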
\subsection{Continuous wavelet transform}
The continuous wavelet transform was used to find local morphodynamics (or `behaviours') from the PC timeseries. A wavelet that decays to zero either side of a central peak is convolved with the timeseries, which produces a new timeseries where each element now represents local morphodynamics. Repeating this process with dilated versions of the wavelet and stacking the resulting set of timeseries yields a spectrogram with multiscale dynamic information, where high-frequency components are analysed close in time, but lower frequency information bleeds in from afar. This spectrogram is then mapped to an interpretable 2D space using t-SNE \cite{van2008visualizing}, and a PDF can be computed with kernel density estimation \cite{davis2011remarks}. t-SNE (t-distributed stochastic neighbour embedding) is a non-linear dimensionality reduction method. A similarity metric between datapoints is defined for each of the two representations: the initial (high-dimensional) representation and the target (lower-dimensional) representation. The difference between the distributions of these similarities across all data pairs is minimised. For implementing t-SNE, we used the Scikit-learn Python package with default parameters: perplexity (analogous to the number of neighbours in other algorithms) of 30, learning rate of 200, and 1000 iterations.
We identified stereotyped motifs (PDF peaks) using adaptive binarisation, a method that thresholds pixels in an image with a threshold value that depends on the local statistics: the mean over a surrounding square of pixels with an added bias (we used square dimensions of 7 and a bias of 20, found with a grid search). We used adaptive rather than pure binarisation so that regions with high-density peaks and high PDF between them (`superhighways' representing common transitions) could be separated, while lower peaks in absolute terms could also be captured. We used two simple wavelets, the `Mexican hat' wavelet and Gaussian 1\textsuperscript{st} derivative wavelet, with the combination of the two required to capture symmetric and antisymmetric features. For organisms where the morphodynamics of interest are organised in repeating bouts, e.g. high-frequency wing-beating of fruit flies, complex wavelets that enable the removal of phase information can be useful. However, over the timescales analysed here, T cell morphodynamics are slower-changing, and phase information is important. We used six equally-spaced frequencies for each wavelet from double the Nyquist limit up to the (wavelet-specific) frequency with width of influence corresponding to 150 s, the approximate timescale of organisation found from the autocorrelation analysis. The width of influence was found by convolving each wavelet with a square pulse to find where edge effects begin. When repeating this method for only the four run mode datasets, we used for the adaptive binarisation parameters square dimensions of 15 and a bias of 50, again found with a grid search.
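A sketch of the adaptive binarisation step (assuming the PDF has been rescaled to a 0-255 image so that a bias of 20 is meaningful):
\begin{verbatim}
import numpy as np
from scipy.ndimage import label, uniform_filter

def stereotyped_motifs(pdf, square=7, bias=20):
    # threshold each pixel against the local mean plus a bias
    local_mean = uniform_filter(pdf.astype(float), size=square)
    mask = pdf > local_mean + bias
    regions, n = label(mask)  # connected regions = candidate motifs
    return regions, n
\end{verbatim}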
\subsection{Comparing marginal morphodynamics of the run mode}
The marginal morphodynamics form continuums, and so transition matrices over stereotyped PDF peaks cannot be defined. Instead, we defined transition matrices over points on a grid. We then quantified the entropy for the transition dynamics of each PC (and compared with that of their combined dynamics). The entropy is $-\sum_{i,j}\pi_{i}p_{ij}\log_{2}p_{ij}$, where $\pi_{i}$ is the equilibrium distribution and $p_{ij}$ is the probability that the next motif to be visited after $i$ will be $j$. For plotting the PC 2 dynamics of the four cells, we perturbed the wavelets slightly to further improve the interpretability. This was done by searching locally across options for the maximum wavelet width (keeping 150 s as an upper bound) and finding combinations with reduced entropy. Reduced entropy was associated with reducing the Gaussian wavelet maximum width to 100 s, but with the same Mexican hat wavelets as before.
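A sketch of the entropy computation (assuming the trajectory is given as a sequence of integer grid-bin labels and that every bin is visited at least once):
\begin{verbatim}
import numpy as np

def markov_entropy(labels, n_states):
    counts = np.zeros((n_states, n_states))
    for a, b in zip(labels[:-1], labels[1:]):
        counts[a, b] += 1
    P = counts / counts.sum(axis=1, keepdims=True)  # row-stochastic matrix
    w, v = np.linalg.eig(P.T)                       # stationary distribution
    pi = np.real(v[:, np.argmax(np.real(w))])
    pi /= pi.sum()
    with np.errstate(divide='ignore', invalid='ignore'):
        return -np.nansum(pi[:, None] * P * np.log2(P))
\end{verbatim}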
\end{small}
\vspace{5mm}
\noindent \textbf{Author Contributions} \\
DK, JM and MB acquired and segmented the cell surface data; HC and RE performed the downstream morphological and morphodynamic analysis. \\
\noindent \textbf{Data accessibility} \\
The cell surface segmentation data that support the findings of this study have been deposited on Dryad\\
(DOI: \url{https://doi.org/10.5061/dryad.tdz08kq1r}). The code used is available at \url{https://github.com/hcbiophys/tcells_paper_code}. \\
\noindent \textbf{Funding Statement} \\
This work was funded by the Biotechnology and Biological Sciences Research Council (grant number BB/M011178/1) to RE and the Australian Research Council (Discovery Project grant DP180102458) to MB.\\
\noindent \textbf{Acknowledgements} \\
The authors thank ND Geoghegan, KL Rogers, N Tubau and LW Whitehead (WEHI, Melbourne, Australia) for technical assistance with LLS microscopy and image denoising. MB acknowledges Bitplane AG for an Imaris Developer licence, and HC and RE thank Suhail Islam for invaluable computational support. \\
\noindent \textbf{Competing Interests} \\
The authors declare no competing interests. \\
\printbibliography
\end{document}
\section{Supplementary Figures}
\pagenumbering{arabic}
\subsection{Supplementary Figure 1: 3D T Cell Migration}
\begin{figure}[H]
\center{\includegraphics[]
{figures/3d.pdf}}
\caption{\label{fig:3d} \textbf{3D T Cell Migration.} \textbf{(a)} Representative snapshot of a T cell migrating in a 3D collagen gel. Scale bar: 5 \textmu m. \textbf{(b)} Migration tracks of T cells embedded in a 3D collagen gel. Left: xy view. Right: xz view. Starting points of tracks were translated to the origin of the coordinate system for visualisation purposes.}
\end{figure}
\subsection{Supplementary Figure 2: Full meshes and principal components of the sampled frames}
\begin{figure}[H]
\center{\includegraphics[]
{figures/SI_pca.pdf}}
\caption{\label{fig:SI_pca} \textbf{Full meshes and principal components of the sampled frames.} \textbf{(a)} Shape changes associated with each PC are shown, found by splitting the PCA space into 7 equal-length bins along each axis and plotting the T cell within each bin with the lowest value for the other PCs ($l_{max}=3$ reconstructions in the main text, full cells here). An increasing PC 1 represents elongation and front-widening, a decreasing PC 2 represents contraction with front-widening, and an increasing PC 3 represents elongation (forward or sideways), with the centroid moving towards the uropod. \textbf{(b)} Normalised PC values of the displayed cells, where black colouring indicates which PC is being sampled. \textbf{(c)} Vector composition of each PC, in terms of the spherical harmonic descriptors, $D_{l}$. \textbf{(d)} Dimensionality across the main mode of variation (PC 1) is relatively constant. For data both below the mean along PC 1 (red) and above it (green), the explained variance ratios by a new set of PCs, PC$'$, are plotted, and these decay at similar rates. \textbf{(e)} Difference in the spherical harmonic spectra (expressed through the descriptors, $D_{l}$) between the low and high PC 1 (below and above the PC 1 mean, respectively) populations.}
\end{figure}
\subsection{Supplementary Figure 3: Uropod uncertainty quantification and propagation to downstream variables of interest}
\begin{figure}[H]
\center{\includegraphics[]
{figures/SI_uncertainty.pdf}}
\caption{\label{fig:SI_uncertainty} \textbf{Uropod uncertainty quantification and propagation to downstream variables of interest.} \textbf{(a)} Mean curvature across the closest 15 mesh vertices to the uropod label was calculated (with a sub-sampled mesh for computational speed). \textbf{(b)} Uropod uncertainty is defined as the chord length associated with a 20$^{\circ}$ rotational perturbation. \textbf{(c)} PC uncertainties for each video were found by recalculating $D_{0}$ with all mesh vertices within this uropod uncertainty, calculating the standard deviation, and multiplying this by the cosine of the angle between the $D_{0}$ and PC vectors in $\{D_{l}\}$ space for every 10 frames in each video, then calculating the mean. \textbf{(d)} Uropods were also tracked for connecting morphodynamics to motion. To ensure these are at sufficient signal-to-noise ratio, we found for each video the mean time taken for the uropod to move a significant distance (defined as twice the chord length), $\tau_{sig}$, and then computed velocities using uropod running means with a time window of $\tau_{sig}$. We used a maximum $\tau_{sig}$ of 100 s to ensure near-stationary cells were included in visualisations, but these insignificant velocities were omitted from quantitative analysis.}
\end{figure}
\subsection{Supplementary Figure 4: Finding a motion variable for small timescales}
\begin{figure}[H]
\center{\includegraphics[]
{figures/SI_speed.pdf}}
\caption{\label{fig:SI_speed} \textbf{Finding a motion variable for small timescales.} \textbf{(a)} We tracked two variables that can be used to link morphodynamics with motion: the uropod and the centroid. Their velocities (dividing by the cube root of volume for scale invariance) are 3D and not rotationally invariant, and a simple description in terms of the speed of either does not adequately separate distinct motifs, like turning (motif 1) and moving forward (motif 2). An internal reference frame is needed. \textbf{(b)} There are two options: the centroid-uropod (UC) axis and ellipsoid axis. A histogram shows that at times when the whole cell moves in unison, this happens more along the UC axis than the ellipsoid axis, with an example cell where the two differ shown. Running means for uropods and centroids with a time window of 100s were used for long-timescale behaviour. Motion in unison was taken to be when uropod speed was above 0.0025 s$^{-1}$, i.e. moving more than a quarter cell length in 100s, and the distance between the uropod and centroid velocity vectors was within half the uropod speed, i.e. they were aligned. \textbf{(c)} Descriptors in terms of the speed of the uropod and centroid along the UC axis (speed\textsubscript{uropod,UC} and speed\textsubscript{centroid,UC}) can then differentiate motifs 1 and 2. \textbf{(d)} These describe largely irreversible motion (now with running means using the time windows from the uncertainty analysis, to include shorter-timescale behaviour). speed\textsubscript{uropod,UC} has lower variance and fewer reversals, and Supplementary video 9 and Fig. 1f show examples where speed\textsubscript{centroid,UC} is much more oscillatory. We therefore selected speed\textsubscript{uropod,UC} as the cell motion variable, and referred to it simply as speed.}
\end{figure}
\subsection{Supplementary Figure 5: Link between run-and-stop modes and shape}
\begin{figure}[H]
\center{\includegraphics[]
{figures/SI_correlations.pdf}}
\caption{\label{fig:SI_correlations} \textbf{Link between run-and-stop modes and shape.} \textbf{(a)} The link between cell shape and speed is shown through correlations between the shape PCs, raw speed, and the running mean of speed with a window size of 150s. PCs 1 and 2 have a stronger correlation with long rather than short-timescale speed, indicating that shape is specialised more for migration mode than instantaneous speed, with cells in the run mode longer and thinner than those in the stop mode. $p$-values and $r$-values (Pearson correlation coefficients) are shown. \textbf{(b)} The autocorrelations (ACFs) were calculated for four long videos from each of the stop and run modes. Decay timescales, $\tau_{\mathrm{ACF}}$, were found using exponential decay models fitted to the peaks of the oscillating ACFs. For cells in the stop mode, PC 3 is the most strongly autocorrelated, followed by PC 2. For cells in the run mode, the main differences are a large drop in the autocorrelation of PC 3, meaning PC 2 becomes the most autocorrelated shape variable, and an increase in the autocorrelation of speed. \textbf{(c)} The power spectra of the PC and speed time series. The run mode is also associated with larger oscillations in PC 2 and speed. Only powers above the mean variance associated with PC uncertainty from the uropod labelling are shown.}
\end{figure}
\subsection{Supplementary Figure 6: Continuous wavelet transform used to map stereotyped motifs}
\begin{figure}[H]
\center{\includegraphics[]
{figures/SI_wavelets_method.pdf}}
\caption{\label{fig:SI_wavelets_method} \textbf{Continuous wavelet transform used to map stereotyped motifs.} \textbf{(a)} The continuous wavelet transform was used to acquire a spectrogram capturing local multi-scale dynamic information from the PC time series, with two wavelet types to ensure both symmetric and antisymmetric features are captured (`mexican hat' and Gaussian 1\textsuperscript{st} derivative). We used 6 frequencies per wavelet, from double the Nyquist limit up to the frequencies associated with widths of influence of approximately 150s, as found from the autocorrelation analysis to be the timescale of morphodynamic organisation. The spectrogram, which represents morphodynamics at each time point, was embedded in an interpretable 2D morphodynamic space using t-SNE. \textbf{(b)} Robustness of the morphodynamic space found with t-SNE. The perplexity parameter is analogous to the number of neighbours in alternative dimensionality reduction methods, and the value used was 30 (the default for the Scikit-learn Python package). Colouring four of the motifs (top) shows that the embeddings are similar across the perplexity range suggested in the original t-SNE paper \cite{van2008visualizing}. Re-running the algorithm with a perplexity of 30 coloured by cell (bottom) also shows the embeddings are robust across random initialisations. }
\end{figure}
\subsection{Supplementary Figure 7: Global morphodynamic space with further examples}
\begin{figure}[H]
\center{\includegraphics[width = 0.9\linewidth]
{figures/SI_2_motifs.pdf}}
\caption{\label{fig:SI_2_motifs} \textbf{Two examples for each stereotyped motif from Fig. 4a.} }
\end{figure}
\subsection{Supplementary Figure 8: PC dynamics of the stereotyped motifs}
\begin{figure}[H]
\center{\includegraphics[width = \linewidth]
{figures/SI_local_PCs_combined.pdf}}
\caption{\label{fig:SI_local_PCs_combined} \textbf{PC dynamics of the stereotyped motifs.} Three principal component (PC) time series for each of the stereotyped motifs are shown, with a 150s time window, and aligned in the $y$ direction so the middle times coincide. Colours indicate the PC and different PC series have different line styles.}
\end{figure}
\subsection{Supplementary Figure 9: Run mode morphodynamics}
\begin{figure}[H]
\center{\includegraphics[]
{figures/SI_wavelets_run.pdf}}
\caption{\label{fig:SI_wavelets_run} \textbf{Run mode morphodynamics.} \textbf{(a)} A morphodynamic space now formed exclusively from the four long videos of cells in the run mode shows the morphodynamic composition with higher resolution. \textbf{(b)} Examples of the stereotyped motifs, each over a 150s period. \textbf{(c)} Colouring by speed shows that motif 3 (compress and front-widen) is associated with the highest speeds. \textbf{(d)} The transition probability matrix (sequential stereotyped peaks). \textbf{(e)} Marginal dynamics of each PC. These form continua, and so transition matrices were defined over points on a grid. We then quantified the entropy for the dynamics of each PC (and compared with that of their combined dynamics) and found an entropy minimum in PC 2 (consistent with the autocorrelation results). The entropy is $-\sum_{i,j}{\pi_{i}p_{ij}\log_{2}p_{ij}}$, where $\pi_{i}$ is the equilibrium distribution and $p_{ij}$ is the probability that the next motif to be visited after $i$ will be $j$. A minimal numerical sketch of this entropy computation is given below the figure.}
\end{figure}
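As referenced in the caption above, the entropy computation admits a compact implementation. The following minimal Python sketch (using a small hypothetical transition matrix for illustration, not the measured data) obtains the equilibrium distribution $\pi$ as the left eigenvector of $p_{ij}$ with eigenvalue 1 and evaluates $-\sum_{i,j}\pi_{i}p_{ij}\log_{2}p_{ij}$.
\begin{verbatim}
import numpy as np

# Hypothetical 3-state transition matrix (rows sum to 1); not measured data.
P = np.array([[0.6, 0.3, 0.1],
              [0.2, 0.5, 0.3],
              [0.3, 0.3, 0.4]])

# Equilibrium distribution: left eigenvector of P with eigenvalue 1.
evals, evecs = np.linalg.eig(P.T)
pi = np.real(evecs[:, np.argmin(np.abs(evals - 1.0))])
pi /= pi.sum()

# Entropy: -sum_ij pi_i p_ij log2 p_ij, with the convention 0 log 0 = 0.
terms = np.where(P > 0, P * np.log2(np.where(P > 0, P, 1.0)), 0.0)
H = -np.sum(pi[:, None] * terms)
print(f"entropy = {H:.3f} bits per transition")
\end{verbatim}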
\subsection{Supplementary Figure 10: PC dynamics of the stereotyped motifs in the run mode}
\begin{figure}[H]
\center{\includegraphics[]
{figures/SI_local_PCs_split.pdf}}
\caption{\label{fig:SI_local_PCs_split} \textbf{PC dynamics of the stereotyped motifs in the run mode.} Three principal component (PC) time series for each of the stereotyped motifs are shown, with a 150s time window, and aligned in the $y$ direction so the middle times coincide. Colours indicate the PC and different PC series have different line styles.}
\end{figure}
\printbibliography
\end{document}
\section{Introduction} \label{Intro}
The equivalence principle is a cornerstone in the foundations of Einstein's general relativity, and it is indeed more fundamental than the symmetry of general covariance, in the sense that there are generally covariant spacetimes that do not obey the principle of equivalence \cite{R_Wald_GR}. Essential to the search for a quantum theory of gravity is a deep understanding of exactly where quantum mechanics and general relativity conflict. This motivates the search for tests of the equivalence principle of general relativity using quantum systems.
Different statements of the equivalence principle can be found in the literature. They correspond to propositions about (i) the equality between inertial and gravitational masses, (ii) the universality of free fall, and (iii) the equivalence between homogeneous gravitational fields and uniform accelerated motion.
Several tests have been performed with pendula or torsion balances leading to extremely accurate confirmation of the equality of gravitational and inertial masses at the classical level \cite{C_Will}. It has also been proved with quantum-mechanical particles by using gravity-induced interference experiments \cite{PhysRevLett.34.1472, Peters1999}. Therefore, on the basis of these experimental demonstrations, in this paper we take for granted the validity of the statement (i).
The universality of free fall, often referred to as the weak equivalence principle, asserts that all test bodies fall in a gravitational field with the same acceleration regardless of their mass or internal composition, provided they are small enough that one can neglect the effects of gravity gradients. This is exactly true in classical mechanics, as it is equivalent to the statement (i). However, in quantum systems, the universal character of gravity entails much more than just the equality between inertial and gravitational masses. Indeed, quantum objects do not satisfy the essence of the weak equivalence principle since their behavior in external gravitational fields (and even for free particles) is mass-dependent. This is clearly seen from the fact that while masses cancel out in Newton's law for a particle in a homogeneous gravitational field (e.g. along the $z$-axis), $m \ddot{z} = m g$, they do not cancel in the Schr\"{o}dinger equation,
\begin{align}
i \hbar \frac{\partial \psi}{\partial t} = - \frac{\hbar ^{2}}{2m} \frac{\partial ^{2} \psi}{\partial z ^{2}} + m gz \psi , \label{Schro-GravField}
\end{align}
thus implying that different inertial masses may produce different spreading of wave packets. For a sharply peaked wave packet, by invoking Ehrenfest's theorem it is clear that the mean position $\braket{z}$ follows a geodesic, with, however, mass-dependent quantum fluctuations around it (proportional to the ratio $\hbar / m$), thus signaling the nonuniversality of quantum free fall \cite{PhysRevD.55.455, Ali_2006}. The compatibility between the weak equivalence principle and quantum mechanics is an interesting issue that is yet to be completely settled. It has been extensively investigated with quantum particles that undergo free-fall in a homogeneous gravitational field. For example, in Ref. \cite{PhysRevD.55.455}, Viola and Onofrio studied free-falling Schr\"{o}dinger cat states and determined the average time of flight by means of Ehrenfest's theorem, which unsurprisingly is mass-dependent. Also, violations to the weak equivalence principle have been investigated with Gaussian \cite{Ali_2006} and non-Gaussian \cite{Chowdhury_2011} wave packets evolving in the presence of the gravitational field, and quantified through the mean arrival time at an arbitrary detector location from a probability current approach \cite{PhysRevA.47.85, PhysRevA.51.2748, PhysRevA.59.1010, PhysRevA.58.840, LEAVENS199327, MUGA1995351}. Within a different framework, in Refs. \cite{Davies_2004, Davies2_2004}, Davies used a model quantum clock \cite{PhysRev.109.571, doi:10.1119/1.12061} to compute the transit time of a free falling quantum particle in a background gravitational field. Recently, it has been proposed that violations to the weak equivalence principle can also be studied through the dephasing and phase shift of free-falling composite systems \cite{Anastopoulos_2018}. In these scenarios, a precise definition of the time of flight is required, and as discussed by Finkelstein in Ref. \cite{PhysRevA.59.3218}, there exists no unique or unambiguous definition that is universally applicable and empirically well tested \cite{MUGA2000353}.
As a sequel to the works mentioned above, in this paper we study the issue of violations to the weak equivalence principle from a different perspective which allows us to avoid the phenomenological definitions of the time of flight, namely, the cut-the-wave procedure. This method consists in cutting the wave function abruptly and evaluating the flux afterwards. By means of the peculiar time-dependence of the flux at a given position, i.e., diffraction in time, we quantify the degree of violation to the equivalence principle when the system evolves in the presence of the gravitational field. This may be analyzed in terms of the Moshinsky shutter \cite{PhysRev.88.625}, which concerns the time-evolution of a cut-off plane wave in free space. Since its experimental verification \cite{PhysRevLett.77.4}, the diffraction-in-time phenomenon has found many applications (see Ref. \cite{DELCAMPO20091} and references therein), and it has even been proposed as a probe for Planck scale physics \cite{PhysRevLett.101.221301, PhysRevD.90.125027}.
The cancellation of masses in Newton's law for a particle in a homogeneous gravitational field allows us to introduce a coordinate system in which the accelerated motion with respect to an inertial reference frame is replaced by free motion in an accelerated frame, thus implying the statement (iii), often referred to as the strong equivalence principle. In short, performing the transformations $z^{\prime} = z - vt - \frac{1}{2} gt ^{2}$ and $t^{\prime}=t$, the equation of motion $m \ddot{z} = m g$ for a particle in a homogeneous gravitational field reduces to the equation of motion for a free particle, i.e., $m \ddot{z} ^{\prime} = 0$. In this way, Einstein postulated that all the physical laws in a homogeneous gravitational field should be locally equivalent to the physics in a uniformly accelerated frame. The question naturally arises as to whether or not the strong equivalence principle is valid for quantum systems. The answer is in the affirmative, and it can be demonstrated by the fact that Schr\"{o}dinger's equation for a particle in a homogeneous gravitational field (\ref{Schro-GravField}) gets transformed, via the above coordinate transformation to an accelerated frame, to the free-particle Schr\"{o}dinger's equation:
\begin{align}
i \hbar \frac{\partial \psi ^{\prime}}{\partial t^{\prime}} = - \frac{\hbar ^{2}}{2m} \frac{\partial ^{2} \psi ^{\prime}}{\partial z ^{\prime \, 2}} , \label{Schro-Free}
\end{align}
with $\psi ^{\prime} = e ^{i \gamma (z,t) } \psi $. Therefore, there is a formal correspondence between a uniform gravitational field and a uniform acceleration in the underlying quantum kinematics, just as there is in classical kinematics. However, the relation between quantum states and their dynamical evolution is much more subtle than in the classical case. In fact, the equation of motion may transform correctly, but the energy eigenstates do not, i.e., while the energy eigenstates for a particle moving freely have the form $e ^{ikz-iEt / \hbar}$, the stationary states of the Schr\"{o}dinger equation (\ref{Schro-GravField}) have the form $\mbox{Ai} (\frac{z - b}{a}) e ^{-iEt / \hbar}$, and they do not transform into each other under any coordinate transformation. In Ref. \cite{Longhi:18}, using entangled photons, violation of the strong equivalence principle was found, thus suggesting that we cannot take for granted its validity.
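For concreteness, the phase $\gamma$ can be written down explicitly. As a worked example (in the sign convention of Eq.~(\ref{Schro-GravField}), for which the classical fall is $z=-\frac{1}{2}gt^{2}$, and taking a vanishing initial velocity for simplicity), a direct substitution shows that
\begin{align*}
\psi (z,t) = e ^{-\frac{i}{\hbar} \left( m g t z + \frac{1}{6} m g ^{2} t ^{3} \right)} \, \psi ^{\prime} \! \left( z + \tfrac{1}{2} g t ^{2} , t \right)
\end{align*}
solves Eq.~(\ref{Schro-GravField}) whenever $\psi ^{\prime}$ solves the free equation (\ref{Schro-Free}); a nonzero initial velocity merely adds the familiar Galilei-boost phase.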
In this paper we examine the equivalence principle of gravity at the quantum level by using the diffraction in time of matter waves in two ways. In Sec. \ref{DIT} we study a quasi-monochromatic beam of particles incident on a shutter that is suddenly removed at $t=0$, letting the beam fall due to the gravitational field. The oscillatory behavior around the classical distribution is mass-dependent, and we interpret it as a signature of the violation of the weak equivalence principle. The width of the diffraction in time serves as a measure of the degree of violation. We also show the validity of the strong equivalence principle in this case. Next, using the recent advances in the manipulation of ultracold atoms and neutrons, as well as the experimental observation of quantum states of ultracold neutrons in the gravitational field above a flat mirror, in Sec. \ref{GravityQuantumStates} we investigate the diffraction in time of a suddenly released beam of particles initially prepared in gravitational quantum bound states. In this case, since the quantum state is normalizable (as compared with the quasi-monochromatic beam), we quantify the degree of violation in a different manner. Indeed, we compare the time of flight from the mean position of the initial distribution with the classical time of flight. We estimate the degree of violation for thermal and ultracold neutrons, as well as cesium atoms and large molecules. In this case, we find that the strong equivalence principle is violated. In the last Sec. \ref{DiscussionConclusions} we discuss our results and present a brief summary.
\begin{figure}
\includegraphics[scale=0.5]{figure1}
\caption{The shutter problem.} \label{figure1}
\end{figure}
\section{Diffraction in time of free-falling particles} \label{DIT}
Let us consider a monochromatic beam of independent particles (of mass $m$ and momentum $p > 0$) impinging on a totally absorbing shutter located at the origin $z=0$, such that at $t=0$, the shutter is opened and the beam is suddenly released in the presence of the gravitational field, as shown in Fig. \ref{figure1}. We will refer to this setup as scenario A. This is the closest possible quantum version of Galileo's leaning tower experiment. The problem implies the following initial quantum state:
\begin{align}
\psi _{0} (z,t=0)=e ^{-i \frac{pz}{\hbar}} H (z) , \label{InitialStateDIT}
\end{align}
where $H(z)$ is the Heaviside step function. It is worth mentioning that the initial wave function (\ref{InitialStateDIT}) is idealized, since particles are not affected by gravity until the shutter is opened and they fall down. However, this scenario can be achieved by applying a uniform electric field pointing upwards in the region $z>0$, compensating in this way the effects of gravity and making the preparation of such an initial state feasible. In Section \ref{GravityQuantumStates} we shall consider a more realistic scenario where particles are trapped by the effect of gravity in bound states before they fall down.
The quantum state $\psi (z,t)$ at time $t>0$ is related with the initial quantum state (\ref{InitialStateDIT}) by
\begin{align}
\psi (z,t) = \int _{- \infty} ^{\infty} K (z,t ; z ^{\prime} , t ^{\prime}) \psi _{0} (z ^{\prime} , t ^{\prime}) d z ^{\prime} , \label{PropagatedStateDIT}
\end{align}
where $K \left( z ,t ; z ^{\prime} , t ^{\prime} \right)$ is the propagator, which solves the time-dependent Schr\"{o}dinger equation \cite{Feynman_Hibbs}. For a uniform gravitational potential $V _{\mbox{\scriptsize Grav}} (z) = mgz$, the propagator is found to be
\begin{align}
\hspace{-0.2cm} K \left( z ,t ; z ^{\prime} , t ^{\prime} \right) = \sqrt{\frac{m}{2 \pi i \hbar \left( t - t ^{\prime} \right)}} \exp \left\lbrace \frac{i}{\hbar} \left[ \frac{m \left( z - z ^{\prime} \right) ^{2}}{2 \left( t - t ^{\prime} \right)} \right. \right. \notag \\ \left. \left. - \frac{mg}{2} \left( t - t ^{\prime} \right) \left( z + z ^{\prime} \right) - \frac{mg ^{2}}{24} \left( t - t ^{\prime} \right) ^{3} \right] \right\rbrace . \label{FreeFallPropagator}
\end{align}
Leaving aside the technical details for evaluating the integral in Eq. (\ref{PropagatedStateDIT}), the time-evolved quantum state we obtain is
\begin{align}
\hspace{-0.2cm} \psi (z,t) &= \sqrt{\frac{1}{2}} e ^{i \phi (z,t)} \left\lbrace \left[ \frac{1}{2} + C (\xi)\right] + i \left[ \frac{1}{2} + S (\xi)\right] \right\rbrace , \label{PropagatedStateDIT2}
\end{align}
where $\phi (z,t)$ is a phase factor irrelevant for the purposes of this paper, $C(\xi)$ and $S(\xi)$ are the Fresnel integrals \cite{Gradshteyn_Ryzhik}, and the Fresnel integral's argument is a function of position and time
\begin{align}
\xi = \sqrt{\frac{m}{\pi \hbar t}} \left( z + vt + \frac{gt ^{2}}{2} \right) , \label{Xi}
\end{align}
where $v=p/m$ is the particle's velocity. The quantum probability density $\vert \psi (z,t) \vert ^{2}$ admits a simple geometric interpretation in terms of the Cornu spiral (see for example Refs. \cite{PhysRev.88.625, DELCAMPO20091, PhysRevD.90.125027}). Figure~\ref{DIT_Plot} shows the classical and quantum probability densities as a function of time recorded by a detector located at $z<0$. While the classical distribution jumps suddenly from $0$ to the stationary value $1$ at $t=T$, where $T= - (v/g) + \sqrt{ (v/g) ^{2} + 2 \vert z \vert / g } >0$ is the (mass-independent) classical time of flight, the quantum distribution exhibits a diffraction effect in time: it increases monotonically from zero to $1/4$ for $t<T$ and it behaves as a damped oscillation around the classical value for $t>T$, tending to the exact classical value when $t \gg T$.
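The omitted technical details amount to a standard Fresnel (Gaussian) computation, which we sketch here. Completing the square in the exponent of Eqs.~(\ref{PropagatedStateDIT}) and (\ref{FreeFallPropagator}), and absorbing all $z ^{\prime}$-independent phases into $\phi (z,t)$, one is left with
\begin{align*}
\psi (z,t) = \sqrt{\frac{m}{2 \pi i \hbar t}} \, e ^{i \phi (z,t)} \int _{0} ^{\infty} \exp \left[ \frac{i m}{2 \hbar t} \left( z ^{\prime} - z - vt - \frac{g t ^{2}}{2} \right) ^{2} \right] dz ^{\prime} ,
\end{align*}
and the substitution $s = \sqrt{m / (\pi \hbar t)} \, ( z ^{\prime} - z - vt - g t ^{2}/2 )$, together with $C(\infty)=S(\infty)=1/2$ and the oddness of the Fresnel integrals, leads directly to Eqs.~(\ref{PropagatedStateDIT2}) and (\ref{Xi}) (the residual constant phase coming from $\sqrt{1/i}$ is also absorbed into $\phi$).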
\begin{figure}
\includegraphics[scale=0.45]{DIT_Plot}
\caption{Classical (red line) and quantum (blue line) density profiles as a function of time $t$ at a fixed distance $z<0$.} \label{DIT_Plot}
\end{figure}
One of the most salient features of this setup is the validity of the strong equivalence principle. As we can see, the solution (\ref{PropagatedStateDIT2}) corresponds to that of the diffraction in time result in free-space \cite{PhysRev.88.625, DELCAMPO20091, PhysRevD.90.125027} via the coordinate transformation $z^{\prime} = z - vt - \frac{1}{2} gt ^{2}$ and $t^{\prime}=t$ to an accelerated frame. On the other hand, as discussed in Section \ref{Intro}, the possibility of a violation of the weak equivalence principle in quantum mechanics has attracted considerable attention. For example, the mass dependence of both the probability density and the mean arrival time is taken as convincing evidence of this. In quantum mechanics, the physical meaning of the mean arrival time $\tau$ has remained rather obscure. It is usually defined in terms of the quantum probability current density $J(z,t)$ by $\tau (z) = \frac{ \int _{0} ^{\infty} t \, J(z,t) \, dt }{ \int _{0} ^{\infty} J(z,t) \, dt }$, which is explicitly mass-dependent \cite{PhysRevA.47.85, PhysRevA.51.2748, PhysRevA.59.1010, PhysRevA.58.840, LEAVENS199327, MUGA1995351}. Another definition, due to Peres, is in terms of the expectation value of the clock-time operator $\hat{T}$, i.e., $\tau (z) = \bra{\psi (z,t)} \hat{T} \ket{\psi (z,t)}$, or equivalently, as a change in the phase of the wave function \cite{PhysRev.109.571, doi:10.1119/1.12061}.
To avoid referring
to these phenomenological definitions of the mean arrival time, and taking advantage of the diffraction-in-time effect
of our problem, here we quantify the degree of violation of the weak equivalence principle in terms of the width of the diffraction-in-time effect.
It can be estimated from the difference $\delta t = t _{2} - t _{1}$ between the first two times at which the probability
density takes the classical value, as shown in Fig.~\ref{DIT_Plot}. Such times can be estimated from the Cornu spiral (see for
example Refs. \cite{PhysRev.88.625, DELCAMPO20091, PhysRevD.90.125027}), with the result $\delta \xi \approx 0.85$. For $p \vert z \vert \gg \hbar$ we obtain, from Eq. (\ref{Xi}),
\begin{align}
\delta t \simeq \delta \xi \, \sqrt{ \frac{\pi v T}{k \left( 2 \vert z \vert - vT \right) ^{2}} } \; T . \label{DeltaT}
\end{align}
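One way to obtain this estimate is to linearize $\xi$ around $t=T$. Since $\xi (T) = 0$ and $\vert z \vert = vT + g T ^{2} / 2$,
\begin{align*}
\delta t \simeq \frac{\delta \xi}{\partial _{t} \xi \vert _{t=T}} , \qquad \partial _{t} \xi \Big\vert _{t=T} = \sqrt{\frac{m}{\pi \hbar T}} \left( v + gT \right) = \sqrt{\frac{m}{\pi \hbar T}} \, \frac{2 \vert z \vert - vT}{T} ,
\end{align*}
which reproduces Eq.~(\ref{DeltaT}).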
In Eq.~(\ref{DeltaT}), $k = p/\hbar = mv/\hbar$ denotes the wave number of the beam. Clearly, $\delta t$ tends to zero in the large-mass limit; diffraction-in-time effects therefore vanish for macroscopic objects. As an example we estimate $\delta t$ for different probes. We first consider a neutron beam at a
thermal energy of
$0.0253$eV ($v \simeq 2200$m/sec) \cite{EMRICH201655} and the detector placed at $\vert z \vert =1$m. The diffraction width (\ref{DeltaT}) becomes
\begin{align}
\delta t = 0.37 \times 10 ^{-8} \mbox{sec} , \qquad \mbox{thermal neutrons} ,
\end{align}
which is very small. If instead we consider ultracold neutrons (UCNs), for which $v \simeq 2$cm/sec \cite{Nesvizhevsky2002}, the diffraction width results
\begin{align}
\delta t = 6 \times 10 ^{-5} \mbox{sec} , \qquad \mbox{ultracold neutrons} ,
\end{align}
which is four orders of magnitude larger. A similar order of magnitude is obtained by using cesium atoms \cite{PhysRevLett.71.3083}, for which $m \simeq 2.2 \times 10 ^{-25}$kg. The result is
\begin{align}
\delta t = 0.5 \times 10 ^{-5} \mbox{sec} , \qquad \mbox{Cesium atoms} .
\end{align}
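These estimates are straightforward to reproduce numerically. The following minimal Python sketch evaluates Eq.~(\ref{DeltaT}) for the three probes above; the values $g=9.81$ m/s$^{2}$, $\delta \xi \approx 0.85$, the detector distance $\vert z \vert = 1$ m, and the quoted masses are the inputs assumed here.
\begin{verbatim}
import numpy as np

hbar, g, dxi, z = 1.0546e-34, 9.81, 0.85, 1.0   # SI units; |z| = 1 m

def delta_t(m, v):
    """Diffraction width delta t of the text; valid for p|z| >> hbar."""
    T = -v / g + np.sqrt((v / g)**2 + 2 * z / g)  # classical time of flight
    k = m * v / hbar                              # wave number k = p / hbar
    return dxi * np.sqrt(np.pi * v * T / (k * (2 * z - v * T)**2)) * T

probes = [("thermal neutron", 1.675e-27, 2200.0),
          ("ultracold neutron", 1.675e-27, 0.02),
          ("cesium atom", 2.2e-25, 0.02)]
for name, m, v in probes:
    print(f"{name}: delta_t = {delta_t(m, v):.1e} s")
# Prints ~ 3.7e-09 s, 5.7e-05 s, 5.0e-06 s, matching the quoted orders.
\end{verbatim}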
Therefore, these results make UCNs
and cesium atoms potential candidates to test violations to the weak equivalence principle, as well as the validity of the strong equivalence principle, through diffraction in time experiments.
Large molecules, such as the fullerenes C$_{60}$ and C$_{176}$ and large organic molecules, have been proposed as promising candidates for indirect probes of quantum gravity in a laboratory setting. Time-diffraction effects are expected to be considerably larger than the effects predicted by candidate theories of quantum gravity, so they constitute a promising alternative to test the equivalence principle according to our predictions. For example, C$_{60}$ \cite{Arndt1999} and C$_{176}$ \cite{Goel2004} buckyball molecules have masses $1.19668 \times 10 ^{-24}$kg and $3.50706 \times 10 ^{-24}$kg, respectively, and taking $v \simeq 2$cm/sec, we find
\begin{align}
\delta t = 0.4 \times 10 ^{-6} \mbox{sec} , \qquad \mbox{C$_{60}$ molecule} \\ \delta t = 0.18 \times 10 ^{-6} \mbox{sec} , \qquad \mbox{C$_{176}$ molecule} .
\end{align}
We close this section by commenting on the formal emergence of the classical limit in this system. As discussed before, as the mass increases,
the time width (\ref{DeltaT}) decreases, thus indicating the convergence to the classical time of flight. Therefore, we expect the quantum probability distribution to approach its
classical counterpart in the same fashion. It is widely accepted that classical and quantum probability density functions approach each other for periodic systems in a locally averaged sense when the principal quantum number is large (i.e.,
in the high-energy limit) \cite{doi:10.1119/1.17807, doi:10.1119/1.2173280, Rowe_1987}. This has been successfully confirmed for the simplest spatially confined quantum systems: the infinite square well potential, the harmonic oscillator, the Kepler problem \cite{ClassLim1, ClassLim2, ClassLim3}, and more recently the quantum bouncer \cite{Nuevo}. However, the application of this prescription to systems with continuous spectra is rather unclear. In the present case, the oscillatory behavior of the quantum distribution due to time diffraction, together with the energy level controlled by the mass, allows us to extend
the idea of local averages to the time domain as follows: the local average in the time domain of the quantum probability density follows the classical distribution for large masses. In short:
\begin{align}
\rho _{\mbox{\scriptsize C}} (z,t) = \lim _{m \gg 1} \frac{1}{2 \epsilon _{m}} \int _{t - \epsilon _{m}} ^{t + \epsilon _{m}} \vert \psi (z,t ^{\prime}) \vert ^{2} dt ^{\prime} , \label{LocalAverage}
\end{align}
where the interval $\epsilon _{m}$ decreases as the mass $m$ increases. In the large-mass limit, we use the asymptotic form of the Fresnel integrals \cite{Gradshteyn_Ryzhik} to evaluate this expression, thus confirming the emergence of the classical distribution.
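Explicitly, using the standard large-argument asymptotics of the Fresnel integrals, $C(\xi) \sim \frac{1}{2} + \frac{\sin (\pi \xi ^{2}/2)}{\pi \xi}$ and $S(\xi) \sim \frac{1}{2} - \frac{\cos (\pi \xi ^{2}/2)}{\pi \xi}$ as $\xi \to \infty$, Eq.~(\ref{PropagatedStateDIT2}) gives
\begin{align*}
\vert \psi (z,t) \vert ^{2} \simeq 1 + \frac{\sqrt{2}}{\pi \xi} \sin \left( \frac{\pi \xi ^{2}}{2} - \frac{\pi}{4} \right) + \mathcal{O} (\xi ^{-2}) ,
\end{align*}
so the oscillatory term is averaged away by the window in Eq.~(\ref{LocalAverage}); since $\xi \propto \sqrt{m}$ at fixed $z$ and $t$, the window $\epsilon _{m}$ may indeed be taken smaller and smaller as the mass grows.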
\section{Suddenly released gravitational quantum states} \label{GravityQuantumStates}
The observation of gravitational quantum states of ultracold neutrons \cite{Nesvizhevsky2002} has opened a new arena in which new fundamental short-range interactions \cite{BAELER2009149} and physics beyond the Standard Model \cite{PhysRevD.97.095039, PhysRevD.99.075032} can be tested. Inspired by the experiments performed with UCNs at the Institut Laue-Langevin, in this section we suggest that a time-diffracted beam of UCNs, initially prepared in gravitational quantum states, serves also as a probe for the validity of both the weak and the strong equivalence principles in the quantum regime. In order to gain a clear insight into the idea, we briefly recall how the GRANIT experiment works.
In the experiment sketched in Fig. \ref{figure2}, an intense horizontal beam of UCNs is allowed to fall onto a horizontal mirror. By using a neutron absorber right above the mirror and counting the number of particles that move up to the absorber and down to the mirror, they found that UCNs do not move continuously but jump from one height to another, as quantum theory predicts. In this situation, the vertical motion is quantized, while the horizontal one is driven by classical laws. Here, we suggest that if the mirror is suddenly removed, or equivalently when UCNs reach the end of the chamber and freely-fall in the presence of the gravitational field, as depicted in Fig. \ref{figure2}, the time-diffracted neutrons can be used as a probe to test the equivalence principle in an analogous way as the system considered in the previous section. Of course, the initial quantum state will be prepared inside the chamber, and hence, one can further explore the validity of the various forms of the equivalence principle in the quantum regime (for neutrons in low-energy states) as well as the transition to the classical regime (for neutrons in high-energy states). In principle, a similar experiment can be carried out using ultracold atoms. We will refer to this setup as scenario B.
\begin{figure}
\includegraphics[scale=0.5]{figure2}
\caption{Schematic of the setup. Arrows correspond to neutron classical trajectories between the source and the entrance to the slit between the mirror (at $z=0$) and a neutron absorber. The oscillatory gray curve illustrates the
squared moduli of the neutron wave function above the mirror, whose quantum state can be selected by varying the height of the neutron absorber. In this way, neutrons are prepared in low-energy quantum states
and then fall
to the detector at $z<0$.} \label{figure2}
\end{figure}
The neutron wave function $\psi (z)$ in the Earth's
gravitational field above a mirror is governed by the Schr\"{o}dinger equation
\begin{align}
\left( - \frac{\hbar ^{2}}{2m} \frac{d ^{2} }{dz ^{2}} +mgz \right) \psi (z) = E \psi (z) , \label{Schro-GravUn}
\end{align}
subject to the following boundary conditions: $\psi (z)$ must vanish asymptotically as $z \to \infty$, and $\psi (z = 0) = 0$ because of the presence of a mirror at $z=0$. Altogether, the normalized solution can be written in terms of the Airy function \cite{doi:10.1142/p345}
\begin{align}
\psi _{n} (z) = \frac{1}{\sqrt{l _{g}}} \frac{\mbox{Ai} (a _{n} + z/l _{g} )}{\mbox{Ai} ^{\prime} (a _{n})} H (z) , \label{UCNsWaveFunc}
\end{align}
where $a _{n}$ is the $n$-th zero of the Airy function $\mbox{Ai}$ and $l _{g} = \sqrt[3]{\hbar ^{2} / (2m ^{2}g ) }$ is the gravitational length. The boundary condition at $z=0$ defines the quantum state energies \cite{NESVIZHEVSKY2000754}
\begin{align}
E _{n} = - mgl_{g} a _{n} . \label{UnpEnergy}
\end{align}
Within the classical description, a neutron with energy $E _{n}$ can rise in the gravitational field up to the height $h _{n} = E _{n} / mg = - a _{n} l _{g}$. These idealized conditions are precisely the ones realized in the GRANIT experiment with ultracold neutrons in the Earth's gravity field \cite{Nesvizhevsky2002}.
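To fix the relevant orders of magnitude, the following minimal sketch evaluates $l _{g}$, $h _{n}$ and $E _{n}$ for ultracold neutrons; the inputs $g = 9.81$ m/s$^{2}$ and $m = 1.675 \times 10 ^{-27}$ kg are assumed, and the Airy zeros are standard numerical constants.
\begin{verbatim}
import numpy as np

hbar, g, m = 1.0546e-34, 9.81, 1.675e-27   # SI units; neutron mass
a = [-2.33811, -4.08795]                    # first two zeros of Ai

lg = (hbar**2 / (2 * m**2 * g))**(1 / 3)    # gravitational length l_g
print(f"l_g = {lg * 1e6:.2f} micrometers")  # ~ 5.87 micrometers
for n, an in enumerate(a, start=1):
    h = -an * lg                            # classical turning point h_n
    E_peV = m * g * h / 1.602e-19 * 1e12    # E_n = m g h_n, in peV
    print(f"n = {n}: h_n = {h * 1e6:.1f} um, E_n = {E_peV:.2f} peV")
# n = 1: h_n ~ 13.7 um, E_n ~ 1.41 peV; n = 2: h_n ~ 24.0 um, E_n ~ 2.46 peV
\end{verbatim}
These values are consistent with the quantized heights measured in the GRANIT experiment \cite{Nesvizhevsky2002}.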
Once the quantum states of the UCNs are prepared, they are allowed to freely fall due to gravity. Note that unlike the idealized initial quantum state considered in Section \ref{DIT}, the one considered here is perfectly feasible from the experimental side. As before, the time evolution of the state is governed by Eq. (\ref{PropagatedStateDIT}), with the propagator given by Eq. (\ref{FreeFallPropagator}) and the initial quantum state of Eq. (\ref{UCNsWaveFunc}), i.e. $\psi _{0} (z,t=0) = \psi _{n} (z)$. After simple manipulations, the time-evolved state can be expressed, up to an irrelevant global phase factor, in the integral form
\begin{align}
\psi _{n} (z,t) &= \sqrt{ \frac{m l _{g}}{2 \pi \hbar t}} \int _{a_{n}} ^{\infty} d \chi \; \frac{\mbox{Ai} ( \chi )}{\mbox{Ai} ^{\prime} (a _{n})} \notag \\ & \hspace{0.2cm} \times \exp \left\lbrace i \frac{m}{2 \hbar t} \left( z - h _{n} + \frac{1}{2} g t ^{2} - l _{g} \chi \right) ^{2} \right\rbrace , \label{Time-Evolved-UCNsWaveFunc}
\end{align}
which cannot be expressed in closed analytical form. We therefore appeal to numerical calculations in order to visualize the probability density and draw some conclusions.
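For instance, a brute-force quadrature along the following lines suffices for moderate parameters; the grid size, the truncation $\chi_{\max}$ of the integral, and the observation point chosen below are illustrative assumptions that should be checked for convergence in any actual computation.
\begin{verbatim}
import numpy as np
from scipy.special import airy

hbar, g, m = 1.0546e-34, 9.81, 1.675e-27    # SI units; neutron mass
a1 = -2.33811                                # first zero of Ai
lg = (hbar**2 / (2 * m**2 * g))**(1 / 3)
h1 = -a1 * lg

def psi1(z, t, chi_max=15.0, N=200_000):
    """|psi_1(z, t)| by direct quadrature of the integral above."""
    chi = np.linspace(a1, chi_max, N)        # Ai decays fast for chi > 0
    Ai = airy(chi)[0]
    Aip_a1 = airy(a1)[1]                     # Ai'(a_1)
    phase = m / (2 * hbar * t) * (z - h1 + 0.5 * g * t**2 - lg * chi)**2
    integrand = (Ai / Aip_a1) * np.exp(1j * phase)
    pref = np.sqrt(m * lg / (2 * np.pi * hbar * t))
    return abs(pref * np.trapz(integrand, chi))

# Probability density 1 mm below the mirror, near the classical fall time:
t_free = np.sqrt(2 * 1e-3 / g)
print(psi1(-1e-3, t_free)**2)
\end{verbatim}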
In Fig. \ref{DensityProfiles} we show the probability density $\vert \psi _{n} (z,t) \vert ^{2}$ as a function of time $t$ at a fixed distance $z<0$ for different initially prepared quantum states (\ref{UCNsWaveFunc}) and different masses. In the upper panel, we show the ground state ($n=1$) for a light mass (at left), a neutron for instance, and for a heavier object (at right) such as large molecules. The lower panel shows the first excited state ($n=2$), and as before, at left (right) we take a light (heavy) mass.
There are some important differences with respect to the case presented in the previous section that deserve to be discussed in detail. The most salient feature is perhaps the normalizability of the quantum states. As we can see in Fig. \ref{DIT_Plot}, the probability of finding time-diffracted particles after the shutter has been removed, i.e., the area under the curve on the interval $[0, \infty )$, is infinite. This is due to the fact that the initial state (\ref{InitialStateDIT}) is not truly monochromatic, because of the spatial truncation, and is clearly not normalized. This is what makes the probability density oscillate about the classical value for any time $t$ considerably larger than the time of flight $T$. On the other hand, as evinced in Fig. \ref{DensityProfiles}, the probability of finding a particle initially prepared in a gravitational quantum state (\ref{UCNsWaveFunc}) is finite. This is due to probability conservation, since the initial quantum state (\ref{UCNsWaveFunc}) is properly normalized. We now turn to the interpretation of our results.
It is well-known that the gravitational quantum state (\ref{UCNsWaveFunc}) exhibits oscillations from the ground level at $z =0$ up to infinity, and the number of nodes is determined by the quantum number $n$. In the GRANIT experiment, a series of quantized heights $h _{n} = - a _{n} l _{g}$ are measured \cite{Nesvizhevsky2002}, which correspond to the classical turning points, i.e., they measure the population of the $n$th quantum state of the UCNs. In Fig. \ref{DensityProfiles} we observe that the time-evolved probability density exhibits the same nodes as the initial quantum state, independently of the mass. From Eq. (\ref{Time-Evolved-UCNsWaveFunc}) we read $z(t)= h _{n} - \frac{1}{2} gt ^{2}$ as a possible equation of motion in the large-mass limit, wherefrom we determine the time of flight $\tau = \sqrt{ \frac{2}{g} (\vert z \vert + h _{n}) }$. This corresponds to the classical free fall time from the classical turning point $h _{n}$.
\begin{figure}
\subfloat[\label{SmallMGround}]{\includegraphics[width = 1.7in]{Ground1}}
\subfloat[\label{LargeMGround}]{\includegraphics[width = 1.7in]{Ground2}} \\
\subfloat[\label{SmallMExcited}]{\includegraphics[width = 1.7in]{FirstExcited1}}
\subfloat[\label{LargeMExcited}]{\includegraphics[width = 1.7in]{FirstExcited2}}
\caption{Plots of the quantum probability densities for ground state (upper panel) and the first excited state (lower panel), for small-mass (at left) and large-mass (at right).} \label{DensityProfiles}
\end{figure}
In the ground state, which is expected to exhibit nonclassical behavior, it is clear that the center of the quantum distribution is not peaked at the time $\tau$ (black-dotted vertical line) defined above, but at an earlier time $T$ (red-dashed vertical line), as shown in Figs. \ref{SmallMGround} and \ref{LargeMGround}. This occurs because from a classical point of view, we have to consider that the particle freely-falls not from the turning point $h _{n}$, but from the mean position of the initial distribution, which is $\braket{z} = \frac{2}{3} h _{n}$. Following this idea we introduce the time scale
\begin{align}
T = \sqrt{ \frac{2}{g} \left( \vert z \vert + \frac{2}{3} h _{n} \right) } < \tau ,
\end{align}
which is in clear agreement with the center of the time-evolved quantum distributions shown in Figs. \ref{SmallMGround} and \ref{LargeMGround}. Due to the mass-dependence of the gravitational length $l _{g} \propto m ^{-2/3}$, it is clear that $T \sim \tau$ in the large-mass limit, as we can confirm in the plots since the vertical dotted and dashed lines approach each other as the mass increases. Moreover, in the limit $m \to \infty$, these time scales converge to the Newtonian time of flight $t _{\mbox{\scriptsize class}} = \sqrt{ 2 \vert z \vert / g }$, since the initial quantum distribution becomes strongly peaked at $h _{n} \to 0$, i.e., $ \lim _{m \to \infty} \vert \psi _{n} (z) \vert ^{2} \propto \delta (z)$ and $\braket{p} = 0$ (vanishing initial velocity). Interestingly, this case corresponds to a beam of particles moving horizontally on the mirror and then reflected into vertical motion. This implies that the time-evolved quantum state is given by the propagator (\ref{FreeFallPropagator}) with $z^{\prime}=0$, and hence, no transient effect takes place. Our result supports the view that the validity of the equivalence principle emerges in the large-mass limit (or equivalently in the large-energy limit, recall that $E _{n}=mgh _{n}$), which is commonly taken for granted. The same qualitative behavior is displayed by excited states when the mass is increased, as we can see in Figs. \ref{SmallMExcited} and \ref{LargeMExcited}.
In the previous section we introduced the diffraction width (\ref{DeltaT}) to quantify the degree of violation of the weak equivalence principle. In the present case, the quantum distribution does not increase up to the classical value due to its finite width and spreading, so we cannot use the same quantity to measure the degree of violation of the weak equivalence principle. However, the quantum distribution is strongly peaked at a certain value of time $T$ at a fixed distance $z<0$, which is different from the Newtonian time of flight $t _{\mbox{\scriptsize class}}$. Therefore, we can use this time delay to quantify the departures from the classical behavior: for a detector at a fixed position $z<0$, the transit time of particles deviates from the classical result by
\begin{align}
\delta t = \frac{T - t _{\mbox{\scriptsize class}}}{t _{\mbox{\scriptsize class}}} = \frac{h _{n}}{3 \vert z \vert} , \label{TimeDelay}
\end{align}
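The last equality in Eq.~(\ref{TimeDelay}) holds to first order in the small ratio $h _{n} / \vert z \vert$:
\begin{align*}
\frac{T - t _{\mbox{\scriptsize class}}}{t _{\mbox{\scriptsize class}}} = \sqrt{1 + \frac{2 h _{n}}{3 \vert z \vert}} - 1 = \frac{h _{n}}{3 \vert z \vert} + \mathcal{O} \left( \frac{h _{n} ^{2}}{\vert z \vert ^{2}} \right) .
\end{align*}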
which tends to zero when the mass goes to infinity. This ratio can be estimated in a simple fashion for different quantum systems. They are summarized in Table \ref{table}. As expected, for very light particles, the transit times (\ref{TimeDelay}) deviate strongly from the Newtonian value, but for heavier objects (such as a cesium atom or large molecules such as C$_{60}$ and C$_{176}$) they approach, reassuringly, the expected mass-independent classical result.
\begin{table}[h!]
\centering
\begin{tabular}{|c|c|c|c|c|}
\hline
$n$ & Neutrons & Cesium & C$_{60}$ & C$_{176}$ \\ [0.5ex]
\hline
1 & $4.6 \times 10 ^{-6}$ & $1.77 \times 10 ^{-7}$ & $5.72 \times 10 ^{-8}$ & $2.06 \times 10 ^{-8}$ \\
2 & $8 \times 10 ^{-6}$ & $3.1 \times 10 ^{-7}$ & $1 \times 10 ^{-7}$ & $3.61 \times 10 ^{-8}$ \\ [1ex]
\hline
\end{tabular}
\caption{Time delay in the free fall of quantum systems initially prepared in gravitational quantum states.}
\label{table}
\end{table}
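These entries follow directly from Eq.~(\ref{TimeDelay}). As a check, the following sketch evaluates $\delta t = h _{n} / (3 \vert z \vert)$ for three of the probes, assuming a detector distance $\vert z \vert = 1$ m (an inferred input, consistent with the neutron entries) and the masses quoted in the text.
\begin{verbatim}
import numpy as np

hbar, g, z = 1.0546e-34, 9.81, 1.0          # SI units; detector at |z| = 1 m
a = [-2.33811, -4.08795]                     # first two zeros of Ai
masses = {"neutron": 1.675e-27, "cesium": 2.2e-25, "C60": 1.19668e-24}

for name, m in masses.items():
    lg = (hbar**2 / (2 * m**2 * g))**(1 / 3)  # gravitational length
    dts = [-an * lg / (3 * z) for an in a]    # delta_t = h_n / (3 |z|)
    print(name, [f"{dt:.2e}" for dt in dts])
\end{verbatim}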
From the numerical analysis of the time of flight, it is clear that the classical behavior, together with the validity of the weak equivalence principle, emerges in the large-mass (high-energy) limit, in agreement with the local averaging procedure (\ref{LocalAverage}) discussed in the previous section. From the integral expression for the time-evolved quantum state (\ref{Time-Evolved-UCNsWaveFunc}), we observe by inspection that the large-mass limit makes the exponential a rapidly oscillatory function, while the Airy function in front smoothly oscillates since it is mass-independent. Since both functions are analytic in the complex plane, we can obtain an approximation of the integral in the large-mass limit by means of the steepest descent method. We observe that the exponential has a unique stationary point (i.e., where the phase is stationary) at
\begin{align}
\chi _{0} (z,t) = \frac{1}{l _{g}} \left( z - h _{n} + \frac{1}{2} g t ^{2} \right) . \label{Chi0}
\end{align}
We now expand the Airy function in Eq. (\ref{Time-Evolved-UCNsWaveFunc}) to zeroth order around $\chi _{0}$ (higher-order terms are neglected); the function in the exponential is exactly quadratic in $\chi$, so keeping it up to second order involves no further approximation. Thus, the quantum state may be written as
\begin{align}
\psi _{n} (z,t) &= \sqrt{ \frac{m l _{g}}{2 \pi \hbar t}} \frac{\mbox{Ai} ( \chi _{0} )}{\mbox{Ai} ^{\prime} (a _{n})} \int _{a_{n}} ^{\infty} d \chi \exp \left[ i \frac{m l _{g} ^{2}}{2 \hbar t} \left(\chi _{0} - \chi \right) ^{2} \right] . \label{Time-Evolved-UCNsWaveFuncApp}
\end{align}
This integral is quite simple. It can be expressed in terms of the Fresnel integrals as:
\begin{align}
\psi _{n} (z,t) &= \sqrt{ \frac{1}{2 l _{g}}} \frac{\mbox{Ai} ( \chi _{0} )}{\mbox{Ai} ^{\prime} (a _{n})} \left\lbrace \left[ \frac{1}{2} + C (\xi)\right] + i \left[ \frac{1}{2} + S (\xi)\right] \right\rbrace , \label{Time-Evolved-UCNsWaveFuncApp2}
\end{align}
where $\chi _{0}$ is given by Eq. (\ref{Chi0}) and
\begin{align}
\xi (z ,t ) = \sqrt{ \frac{m}{\pi \hbar t} } \left( z + \frac{1}{2} g t ^{2} \right) .
\end{align}
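Note that the Fresnel argument $\xi$ is independent of the quantum number $n$. Indeed, the lower limit of the integral in Eq.~(\ref{Time-Evolved-UCNsWaveFuncApp}) contributes
\begin{align*}
\left( a _{n} - \chi _{0} \right) \sqrt{\frac{m l _{g} ^{2}}{\pi \hbar t}} = - \frac{z + g t ^{2}/2}{l _{g}} \sqrt{\frac{m l _{g} ^{2}}{\pi \hbar t}} = - \xi ,
\end{align*}
since $a _{n} l _{g} = - h _{n}$ cancels the $h _{n}$ contained in $\chi _{0}$; all the $n$-dependence of Eq.~(\ref{Time-Evolved-UCNsWaveFuncApp2}) therefore resides in the Airy prefactor.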
In Fig. \ref{DensityProfilesApp} we plot the quantum probability density $\vert \psi _{n} (z,t) \vert ^{2} $, approximated by the steepest descent method in the large-mass limit (\ref{Time-Evolved-UCNsWaveFuncApp2}), for the ground and first-excited states. In both cases we observe a sharp peak which coincides very well with the time of flight $T$ defined above, and a series of secondary small peaks.
\begin{figure}
\subfloat[\label{LargeMGroundApp}]{\includegraphics[width = 1.7in]{Ground2App}}
\subfloat[\label{LargeMExcitedApp}]{\includegraphics[width = 1.7in]{FirstExcited2App}}
\caption{Plots of the quantum probability densities, approximated by the steepest descent method in the large-mass limit, for the ground state (at left) and the first excited state (at right).} \label{DensityProfilesApp}
\end{figure}
We finally discuss both the emergence of the classical behavior and the validity of the strong equivalence principle. The former is a simple task in this case. Using the limiting form of the Airy function $\delta (x) = \lim _{\epsilon \to 0 ^{+}} \frac{1}{ \epsilon } \mbox{Ai} (x/ \epsilon )$, in the large-mass limit, the wave function (\ref{Time-Evolved-UCNsWaveFuncApp2}) becomes
\begin{align}
\psi _{n} (z,t) \approx \delta \left( z + \frac{1}{2} g t ^{2} \right) , \label{ClassLim}
\end{align}
which is mass-independent and nonvanishing only along the exact classical trajectory. Of course, quantum corrections (proportional to the ratio $\hbar / m$) may arise to account for the subdominant quantum behavior of the freely falling
particle in the classical limit. The result of Eq. (\ref{ClassLim}) can be understood in a simple fashion. In the large-mass limit, the gravitational length tends to zero, and hence, for low-energy quantum states the mean position of the initial wave packet is at the ground level $z = 0$, and the mean velocity is zero because of the parity of the eigenstates. Therefore, in the classical limit, the particle freely-falls from the mirror and the corresponding equation of motion is $z = - gt^{2}/2$, as suggested by the wave function (\ref{ClassLim}).
There is just one question left to be answered: is the strong equivalence principle valid or not for this quantum system? Our answer is in the negative, as we shall discuss. Changing coordinates to an accelerated frame, i.e. $z^{\prime} = z - \frac{1}{2} gt ^{2}$ (with vanishing initial velocity) and $t^{\prime}=t$, the Schr\"{o}dinger equation (\ref{Schro-GravUn}) transforms into the free-particle Schr\"{o}dinger equation, with however the time-dependent boundary condition $\psi ( - \frac{1}{2} gt ^{2})=0$ (i.e., the ground level freely-falls at the same rate as the particle does). Nevertheless, the solution to this problem is quite different from the exact time-evolved state (\ref{Time-Evolved-UCNsWaveFunc}), thus indicating that the strong equivalence principle is explicitly violated. However, there is a limit in which this principle works again: the large-mass limit. Using the steepest descent method, the exact time-evolved state (\ref{Time-Evolved-UCNsWaveFunc}) can be approximated by Eq. (\ref{Time-Evolved-UCNsWaveFuncApp2}), which we understand as composed of the product of two parts (leaving aside possible phase factors). The first one corresponds to the initial quantum state (\ref{UCNsWaveFunc}) as seen from an accelerated frame, i.e.
\begin{align}
\mbox{Ai} (a _{n} + z/l _{g} ) \quad \to \quad \mbox{Ai} \left( a _{n} + \frac{z + gt^{2}/2 }{l _{g}} \right) .
\end{align}
So, this part indicates that the initial quantum state evolves in time without distortion. However, the second part in Eq. (\ref{Time-Evolved-UCNsWaveFuncApp2}) carries the information concerning the diffraction-in-time effect due to gravity, thus explicitly violating statement (iii), the strong equivalence principle. This result confirms the finding of Ref. \cite{Longhi:18} regarding its violation in a quantum simulation. However, in the large-mass limit, the Fresnel integrals approach $1/2$, thus converting this part into a phase factor. Therefore,
\begin{align}
\psi _{n} (z,t) \quad \to \quad \psi _{n} (z + gt^{2}/2 ) e ^{i \pi /4} ,
\end{align}
thus recovering the validity of the strong equivalence principle in this system. This suggests that both the weak and strong versions of the equivalence principle are profoundly related in the quantum realm, just as they are in classical physics. The former is always violated due to the finite extension of wave packets, thus activating the violation of the latter (even when the Schr\"{o}dinger equation may transform correctly), as our results show. Therefore, in the large-mass limit, both statements work again. As evinced by Eq. (\ref{TimeDelay}), large masses imply $\delta t \to 0$, thus recovering the classical time of flight. Besides, the same limit leads to the validity of the strong version because the time-evolved state is obtained by transforming to an accelerated frame, in which the time-diffraction effects result in a mere irrelevant phase factor.
\section{Discussion and Conclusions}\label{DiscussionConclusions}
The universality of the ratio between the gravitational and inertial masses was established with Galileo's famous gedanken experiment of dropping bodies of different mass from a great height. The general theory of relativity, and its plausible variants, are founded on the equivalence principle. One of its many faces is precisely the equality between the gravitational and inertial masses, and it has been confirmed experimentally both from the classical and quantum sides. A different statement of the equivalence principle refers to the universality of free-fall (also referred to as the weak equivalence principle). In classical mechanics, it is perfectly equivalent to the equality between the gravitational and inertial masses, since they cancel out in Newton's equation of motion, thus confirming Galileo's conclusions. However, if Galileo's experiment is performed with quantum probes, the universality of free fall ceases to be valid since the mass does not cancel out from the Schr\"{o}dinger equation, both for a free particle and for a particle in a uniform gravitational field. The compatibility between the weak equivalence principle and quantum mechanics has been investigated with wave packets and quantified by means of the time of flight to a detector arbitrarily located. Unsurprisingly, the time of flight is mass-dependent, and converges to the classical result in the large-mass limit. In quantum mechanics, the definition and the physical meaning of the time of flight are rather obscure. Some works use a probability current approach to define a mean time, and others use a model quantum clock to define a transit time in terms of the variation of the phase of the wave function between two given positions.
In this paper we have considered a physical phenomenon in which time naturally arises: the diffraction-in-time effect. It consists of a beam of particles suddenly released in the presence of the gravitational field, as shown in Fig. \ref{figure1}. So, measuring the flux afterwards, we can determine the transit time at a fixed detector. We refer to this setup as scenario A. It is worth mentioning that diffraction in time (i.e. the quantum distribution as a function of time) has been experimentally detected, so the configuration we propose is feasible to realize in the laboratory, and it can be used to test both the weak and strong versions of the equivalence principle. However, as pointed out in Section \ref{DIT}, the initial wave function we consider is rather idealized, since it assumes that gravity is absent above the shutter. This scenario can be achieved, however, by imposing a homogeneous electric field above the shutter to offset the effect of gravity, so that the plane-wave initial state can be prepared. In scenario A we find that the weak equivalence principle is violated (since the transit time depends on the mass) and the strong equivalence principle is valid (the time-evolved state is obtained by a coordinate transformation of the free case to an accelerated reference frame). Using the width of the diffraction-in-time effect, we quantify the degree of violation of the weak equivalence principle, and give some numerical estimates for thermal neutrons, ultracold neutrons, cesium atoms and large molecules, such as C$_{60}$ and C$_{176}$. We find that ultracold neutrons and cesium atoms are the best probes to test our predictions, since the width of the diffraction effect in time is of the order $10^{-5}$s, which is within the current experimental precision.
Motivated by the recent high-sensitivity GRANIT experiments with ultracold neutrons, in this paper we have also considered the physics of ultracold neutrons as a test bed for studying violations to the equivalence principle. This configuration, sketched in Fig. \ref{figure2} and termed scenario B, is quite different from the previous one, since the initial quantum state is normalizable, and hence the time-evolved quantum state does not exhibit the same time profile in the quantum distribution. Accordingly, we adopt a different mechanism to quantify the mass-dependence of the transit time. We showed that the quantum time of flight, in the large-mass (high-energy) limit, corresponds to the classical free fall time when the particle is suddenly released from the mean position, as computed with the initial quantum state. As expected, we find that in the large-mass limit, the quantum transit-time converges to the classical time of flight. We showed that this system violates both the weak and the strong equivalence principles. The former is expected due to the finite extent of the quantum distribution; the latter, however, is subtle since we know that the Schr\"{o}dinger equation for a particle in a gravitational field correctly transforms into the Schr\"{o}dinger equation for a free particle. Nevertheless, this system shows that this is not the case for the energy eigenstates.
Finally, our conclusions (summarized in Table \ref{table2}) do not attempt to completely settle the problem of the compatibility between the equivalence principle and quantum mechanics. We have considered two particular configurations in which time plays a prominent role, which serve as quantum probes for both the universality of free fall and the equivalence between homogeneous gravitational fields and uniformly accelerated motion. Both systems violate the weak equivalence principle, as expected. However, when using a beam of quasi-monochromatic particles, the strong equivalence principle holds, while for quantum states bound by the gravitational field it is violated. This confirms that although the Schr\"{o}dinger equation may transform correctly, the energy eigenstates do not.
\begin{table}
\centering
\begin{tabular}{|c|c|c|c|c|}
\hline
& \quad Scenario A \quad & \quad Scenario B \quad \\ [0.5ex]
\hline
Weak E.P. & $\times$ & $\times$ \\ \hline
Strong E.P. & $\surd$ & $\times$ \\ [1ex]
\hline
\end{tabular}
\caption{Comparison of the validity of the weak and strong equivalence principles for both scenario A (time-diffracted free-falling particles) and scenario B (suddenly released gravitational quantum states).}
\label{table2}
\end{table}
\section*{Acknowledgements}
J.A.C. was supported by the CONACyT master fellowship No. 725033. A.M.-R. has been partially supported by DGAPA-UNAM Project No. IA102722 and by Project CONACyT (M\'{e}xico) No. 428214. We thank M. Cambiaso, M. J. Everitt and C. Escobar for their careful reading of the manuscript.\\
\section*{Data Availability Statement \; } No datasets were generated or analyzed during the current study.
\section{Introduction}
Networks of nonlinear oscillators have attracted interest in several scientific domains such as theoretical physics, mathematical biology, power-grid systems, and many more. In our investigation of oscillator networks (see \cite{[KO3]}, \cite{[CM1]}, \cite{[KO4]}), networks of several communities joined together often appear and exhibit interesting phenomena (see for example \cite[Proposition 23]{[CM1]} and \cite[Proposition 12]{[KO4]}). In \cite{[CM1]}, we study the case where the connection within a community follows a simple rule, namely, each community is a circulant network. In this case, the main theorem in \cite{[CM1]}, which generalizes the Circulant Diagonalization Theorem (CDT), explicitly describes the spectrum of the joined network. In this article, we generalize this theorem to the case where each community forms a regular graph. This relaxation will allow us to investigate a broader class of networks. In particular, we are able to apply our generalized theorems to study several interesting problems in spectral graph theory. Furthermore, in a work in preparation, we have used the techniques of this article to study some broadcasting mechanisms on multi-layer networks of oscillators.
The structure of this article is as follows. In the second section, we study some basic spectral properties of normal matrices with constant row sums. In the third section, we define the joins of these matrices and study their spectral properties. We then apply the main results from this section to give new proofs of several results in \cite{[CRS]} for the join of regular graphs. This is done in Section 4 of the article. Section 5 explains a simple method to construct Ramanujan graphs using the join construction. We then discuss the joined union of graphs in Section 6. In the final section, we apply the results from the previous sections to study some questions in graph energy. In particular, we propose a question on the relation between the energy of several regular graphs and their joined union.
\begin{rem}
We remark that some form of Theorem \ref{thm:join_spectrum} has been discussed previously in \cite{[joined_union2]} and \cite{[joined_union]}. We refer the reader to Remark \ref{rem:difference} and Remark \ref{rem:semimagic}
regarding our approach.
\end{rem}
\section{Normal matrices with constant row sums}
We start with a definition.
\begin{definition}
Let $A=(a_{ij})_{i,j}$ be an $n \times n$ matrix with complex coefficients. We say that $A$ is $r_A$-row regular if the sum of all entries in each row of $A$ is equal to $r_A$, namely
\[ \forall 1 \leq i \leq n,\;\sum_{j=1}^n a_{ij}=r_A. \]
Similarly, we say that $A$ is $c_A$-column regular if the sum of all entries in each column of $A$ is equal to $c_A$.
\end{definition}
\begin{rem} \label{rem:magic_square}
Some authors use the term ``semimagic squares'' for matrices which are both $r_A$-regular and $c_A$-regular with $r_A=c_A$ (see, for example, \cite{[semimagic]}).
\end{rem}
We note that if $A$ is both $r_A$-row regular and $c_A$-column regular then $r_A=c_A$. This can be seen by observing that the sum of all entries in $A$ is equal to both $nr_A$ and $nc_A$, and therefore $r_A=c_A$. Here is a simple criterion for row and column regularity.
\begin{lem} \label{lem:criterior}
Let $v=\mathbbold{1}_n=(1, 1, \ldots, 1)^t \in \C^n$. Then $A$ is $r_A$-row regular if and only if $v$ is an eigenvector of $A$ associated with the eigenvalue $r_A$.
Similarly, $A$ is $c_A$-column regular if and only if $v^{t}$ is a left eigenvector of $A$ associated with the eigenvalue $c_A.$
\end{lem}
\begin{proof}
Obvious from the definition.
\end{proof}
\begin{definition}
Let $A \in M_{n}(\C)$ be a matrix of size $n \times n$. We say that $A$
is normal if $AA^{*}=A^{*} A.$ Here $A^{*}$ is the conjugate transpose of $A$.
\end{definition}
A special property of normal matrices is that they are always diagonalizable by an orthonormal basis of eigenvectors.
\begin{thm} (see \cite[Theorem 2.5.3]{[HJ]})
Suppose $A$ is a normal matrix. Then its eigenspaces span $\C^n$ and are pairwise orthogonal with respect to the standard inner product on $\C^n.$
\end{thm}
A direct corollary of this theorem is the following.
\begin{cor} \label{cor:orthogonal}
Suppose that $A$ is both normal and $r_A$-row regular. Then there exists an orthonormal basis $\{v_1^{A}, v_2^{A}, \ldots, v_n^{A} \} $ of eigenvectors of $A$ associated with the eigenvalues $\{\lambda_{1}^A, \lambda_2^A, \ldots, \lambda_n^A \}$ such that
\(v_1^{A}= \frac{1}{\sqrt{n}}\mathbbold{1}_n=\frac{1}{\sqrt{n}}(1, \ldots, 1)^t \in \C^n\).
In particular, $r_{A}=\lambda_{1}^A$ and, for $2 \leq k \leq n$, the standard inner product
\(\langle v_{1}^A, v_k^{A} \rangle =0 .\)
\end{cor}
Another corollary is the following.
\begin{cor}
If $A$ is both normal and $r_A$-row regular, then $A$ is also $r_A$-column regular. In particular, $A$ is a semimagic square matrix.
\end{cor}
\begin{proof}
Let $\{v_1^{A}, v_2^{A}, \ldots, v_n^{A} \} $ be the system of orthonormal eigenvectors of $A$ associated with the eigenvalues $\{r_A, \lambda_2^A, \ldots, \lambda_n^A \}$ as described in Corollary \ref{cor:orthogonal}. Let $V=(v_1^{A}, v_2^{A}, \ldots, v_n^A)$ be the $n \times n$ matrix formed by this system of eigenvectors and let $D=\text{diag}(r_{A}, \ldots, \lambda_n^A)$ be the diagonal matrix of corresponding eigenvalues. We then have $A V = VD$. Since $\{v_1^A, v_2^A, \ldots, v_n^A \}$ is an orthonormal basis, we have $VV^{*}=V^{*}V=I_n$, and hence $V^{*}=V^{-1}$. Therefore, we can rewrite the equation $AV=VD$ as
\[ (V^{*}) A = D V^{*} .\]
This shows that the rows of $V^{*}$, namely $\{(v_1^{A})^*, (v_2^A)^{*}, \ldots, (v_n^A)^{*} \}$, form a system of orthonormal \textit{left} eigenvectors for $A$ associated with the eigenvalues $\{r_A, \lambda_2^A, \ldots, \lambda_n^A \}$. We conclude that the column sums of $A$ are equal to $\lambda_1^{A}=r_A$ as well.
\end{proof}
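For concreteness, the following short \texttt{NumPy} sketch, included only as a numerical illustration, checks this on a circulant matrix (circulant matrices being a convenient source of normal row regular matrices):
\begin{verbatim}
import numpy as np

c = np.array([1.0, 2.5, -0.5, 3.0])              # first row
A = np.array([np.roll(c, k) for k in range(4)])  # 4x4 circulant

assert np.allclose(A @ A.T, A.T @ A)  # A is normal (real case)
print(A.sum(axis=1))  # row sums: all equal to c.sum() = 6.0
print(A.sum(axis=0))  # column sums: also all equal to 6.0
\end{verbatim}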
\section{Joins of normal matrices with constant row sums}
Let $d, k_1, k_2, \ldots, k_d \in \N\setminus\{0\}$, and set $n=k_1+k_2+\ldots+k_d$.
Thus $\mathbf{k}_d=(k_1,\dots,k_d)$ is a partition of $n$ into $d$ non-zero summands. Following \cite{[CM1]}, we shall consider $n \times n$ matrices of the following form
\begin{equation}\label{eq:def of join}\tag{$\ast$}
A=\left(\begin{array}{c|c|c|c}
A_1 & a_{12}\bm{1}}%{\mathbbold{1} & \cdots & a_{1d}\bm{1}}%{\mathbbold{1} \\
\hline
a_{21}\bm{1}}%{\mathbbold{1} & A_2 & \cdots & a_{2d}\bm{1}}%{\mathbbold{1} \\
\hline
\vdots & \vdots & \ddots & \vdots \\
\hline
a_{d1}\bm{1}}%{\mathbbold{1} & a_{d2}\bm{1}}%{\mathbbold{1} & \cdots & A_d
\end{array}\right),
\end{equation}
where, for each $1 \leq i,j \leq d$, $A_i$ is a normal, $r_{A_i}$-row regular matrix of size $k_i \times k_i$ with complex entries, and $a_{i,j}\bm{1}}%{\mathbbold{1}$ is a $k_i \times k_j$ matrix with all entries equal to a constant $a_{i,j}\in\mathbb{C}$.
These matrices will be called \textit{$\mathbf{k}_d$-joins of normal row regular} (\textit{NRR} for short) \textit{matrices}.
For each $1 \leq i \leq d$, let $\{v_1^{A_i}, v_2^{A_i}, \ldots, v_{k_i}^{A_i} \}$ and $\{\lambda_1^{A_i}, \lambda_2^{A_i}, \ldots, \lambda_{k_i}^{A_i} \}$ be the set of eigenvectors and eigenvalues of $A_i$ as described in Corollary \ref{cor:orthogonal}. The next proposition is a direct generalization of \cite[Proposition 7]{[CM1]}. Before stating it, let us introduce the convenient notation
\[
(x_1,\dots,x_m)^T \conc (y_1,\dots,y_n)^T = (x_1,\dots,x_m, y_1,\dots,y_n)^T.
\]
For more vectors, we can define $\conc$ inductively.
\begin{prop}\label{prop:circulant eigenvectors}
For each $1 \leq i \leq d$ and $2 \leq j \leq k_i$ let
\[ \begin{aligned}
w_{i,j}&=\vec{0}_{k_1} \conc \ldots \conc\vec{0}_{k_{i-1}}\conc v_{j}^{A_i} \conc \vec{0}_{k_{i+1}} \conc \ldots \conc \vec{0}_{k_d}\\
\end{aligned}
\]
Then $w_{i,j}$ is an eigenvector of $A$ associated with the eigenvalue $\lambda_{j}^{A_i}.$
\end{prop}
\begin{proof}
This follows by direct inspection; the key property is that, for $1\leq\ell\leq d$ with $\ell\neq i$ and for $2\leq j\leq k_i$, we have $\langle a_{\ell,i}\mathbbold{1}_{k_i},v^{A_i}_j\rangle=0$ by Corollary \ref{cor:orthogonal}.
\end{proof}
We will refer to the $w_{i,j}$'s and to the associated eigenvalues $\lambda_{j}^{A_i}$ as the old NRR eigenvectors and eigenvalues of $A$. Let $\lambda_1, \lambda_2, \ldots, \lambda_d$ be the (not necessarily distinct) remaining eigenvalues of $A$.
\begin{definition}
The reduced characteristic polynomial of $A$ is
\[ \overline{p}_{A}(t)=\prod_{i=1}^d (t-\lambda_i)=\dfrac{p_{A}(t)}{\prod_{\substack{1 \leq i \leq d, \\ 2 \leq j \leq k_i}} (t-\lambda_{j}^{A_i})} =\frac{p_A(t)}{\prod_{i=1}^d \frac{p_{A_i}(t)}{t-r_{A_i}}} .\]
\end{definition}
We will now describe $\overline{p}_{A}(t)$ as the characteristic polynomial of the matrix
\[ \overline A=
\begin{pmatrix}
r_{A_1} & a_{12}k_2 & \cdots & a_{1d}k_d \\
a_{21}k_1 & r_{A_2} & \cdots & a_{2d}k_d \\
\vdots & \vdots & \ddots & \vdots \\
a_{d1}k_1 & a_{d2}k_2 & \cdots & r_{A_d}
\end{pmatrix}.\]
For a vector $w=(x_1,\dots,x_d)\in\C^d$, we define
\[
w^\otimes=(\underbrace{x_1, \ldots, x_1}_{\text{$k_1$ terms}},\dots,\underbrace{x_d, \ldots, x_d}_{\text{$k_d$ terms}})^{t}\in \C^n
\]
\begin{thm} \label{thm:join_spectrum}
The reduced characteristic polynomial of $A$ coincides with the characteristic polynomial of $\overline A$, namely
\[ \overline{p}_{A}(t)=p_{\overline A}(t) .\]
In other words
\[ p_{A}(t)= p_{\overline{A}}(t) {\prod_{\substack{1 \leq i \leq d, \\ 2 \leq j \leq k_i}} (t-\lambda_{j}^{A_i})} .\]
\end{thm}
\begin{proof}
Firstly, we note that, by construction, for any $v\in\C^d$ and any $\lambda\in\C$
\begin{equation}\label{eq:tensor expansion}
\left[(\overline A-\lambda I)v\right]^\otimes = (A-\lambda I)v^\otimes.
\end{equation}
Let ${\lambda}$ be an eigenvalue of $\overline{A}$, and let ${w}=(x_1,\dots,x_d)$ be an associated generalized eigenvector, satisfying $(\overline{A}-{\lambda}I_d)^m{w}=0$ for a suitable $m$.
We will show, by induction on $m$, that $(A-\lambda I_n)^mw^\otimes=0$.
If $m=1$, the assertion is a consequence of Equation \eqref{eq:tensor expansion}. If $m>1$, consider the vector $w'=(\overline{A}-\lambda I_d)w$, which satisfies $(\overline{A}-{\lambda}I_d)^{m-1}{w'}=0$. By induction hypothesis, $({A}-{\lambda}I_n)^{m-1}{(w')}^\otimes=0$, therefore, thanks to Equation \eqref{eq:tensor expansion},
\[
(A-\lambda I_n)^mw^\otimes=(A-\lambda I_n)^{m-1}\left((A-\lambda I_n)w^\otimes\right)=(A-\lambda I_n)^{m-1}(w')^\otimes=0.
\]
In other words, the generalized eigenspaces of $\overline{A}$ lift to (direct summands of) generalized eigenspaces of $A$.
Now we observe that the NRR eigenvectors of $A$, together with the generalized eigenvectors of $A$ of the shape $w^\otimes$, $w\in\C^d$, form a linearly independent set thanks to Corollary \ref{cor:orthogonal}. Hence, by dimension counting, the eigenvalues of $\overline{A}$ are precisely the eigenvalues $\lambda_1,\dots,\lambda_d$ of $A$, with the correct multiplicity. Equivalently, $\overline{p}_A(t)=p_{\overline{A}}(t)$.
\end{proof}
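As a numerical sanity check (included only for illustration), the following \texttt{NumPy} sketch assembles a random join of circulant blocks, which are normal and row regular, and compares $\Spec(A)$ with $\Spec(\overline{A})$ together with the old NRR eigenvalues:
\begin{verbatim}
import numpy as np
rng = np.random.default_rng(0)

def circulant(c):  # circulant matrices are normal and row regular
    return np.array([np.roll(c, k) for k in range(len(c))])

ks = [3, 4, 5]
blocks = [circulant(rng.normal(size=k)) for k in ks]
a = rng.normal(size=(3, 3))  # the constants a_{ij}

n, offs = sum(ks), np.cumsum([0] + ks)
A = np.zeros((n, n))
Abar = np.zeros((3, 3))
for i in range(3):
    for j in range(3):
        if i == j:
            A[offs[i]:offs[i+1], offs[i]:offs[i+1]] = blocks[i]
            Abar[i, i] = blocks[i][0].sum()        # row sum r_{A_i}
        else:
            A[offs[i]:offs[i+1], offs[j]:offs[j+1]] = a[i, j]
            Abar[i, j] = a[i, j] * ks[j]

old = []
for B in blocks:
    ev = list(np.linalg.eigvals(B))
    ev.pop(int(np.argmin(np.abs(np.array(ev) - B[0].sum()))))
    old.extend(ev)  # eigenvalues other than the row sum

pred = np.concatenate([np.linalg.eigvals(Abar), np.array(old)])
assert np.allclose(np.sort_complex(np.linalg.eigvals(A)),
                   np.sort_complex(pred))
\end{verbatim}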
\begin{rem} \label{rem:difference}
After proving Theorem \ref{thm:join_spectrum}, we learned from ResearchGate that a special form of this theorem has been proved in \cite[Theorem 2.1]{[joined_union]} and \cite[Theorem 3]{[joined_union2]}. To the best of our understanding, our method is quite different. Most importantly, our method works even in the cases where either $\overline{A}$ is not diagonalizable or $A$ is not a symmetric matrix.
\end{rem}
\begin{rem} \label{rem:semimagic}
We discuss a slight generalization of Theorem \ref{thm:join_spectrum}. First, we recall from Remark \ref{rem:magic_square} that a $k_1 \times k_1$ matrix $A_1$ with entries in a field $F$ is called a semimagic square if $A_1$ is both $r_{A_1}$-regular and $c_{A_1}$-regular and $c_{A_1}=r_{A_1}.$ If $k_1$ is invertible in $F$, then $F^{k_1}$ can be decomposed into
\begin{equation} \label{eq:decomposition}
F^{k_1}= F \mathbbold{1}_{k_1} \oplus W_1 .
\end{equation}
Here $F\mathbbold{1}_{k_1}$ is the one-dimensional vector space generated by $\mathbbold{1}_{k_1}$ and $W_1$ is the set of all vectors $(x_1, x_2, \ldots, x_{k_1}) \in F^{k_1}$ such that $ \sum_{i=1}^{k_1} x_i =0$. We can check that each component of this decomposition is stable under $A_1$ for any semimagic square $A_1$. Now suppose that $A$ is the join of $d$ semimagic squares $A_i$ of sizes $k_i \times k_i$ as defined in Equation \eqref{eq:def of join}. We assume further that each $k_i$ is invertible in the field $F$, and define $W_i$ via the decomposition
\[ F^{k_i}=F \mathbbold{1}_{k_i} \oplus W_i .\]
We see that for $1 \leq i \leq d$
\[ \widehat{W_i}= \{\vec{0}_{k_1} \conc \ldots \conc\vec{0}_{k_{i-1}}\conc v_{i} \conc \vec{0}_{k_{i+1}} \conc \ldots \conc \vec{0}_{k_d} \mid v_i \in W_i \}, \]
is an $A$-stable subspace of $F^{n}$. By the same proof as explained in Theorem \ref{thm:join_spectrum}, we can see that
\begin{equation} \label{eq:generalization}
\overline{p}_{A}(t)=p_{\overline A}(t) .
\end{equation}
We also note that the set of all such $A$ with coefficients in any ring $R$ has the structure of a ring (the case $d=1$ was considered in \cite{[semimagic]}). In a separate paper in preparation (see \cite{[joint_group_ring]}), we will describe the structure of this ring and derive Equation \ref{eq:generalization} as a direct consequence. We show, in particular, that the map $A \mapsto \bar{A}$ is a ring homomorphism.
\end{rem}
\section{Spectrum of the join of regular graphs} \label{section:graph_join}
In this section, we apply Theorem \ref{thm:join_spectrum} to give new proofs for Theorem 2.1.8 and Theorem 2.1.9 in \cite{[CRS]}. Let $G_1, G_2, \ldots, G_d$ be undirected regular graphs such that $G_i$ has degree $r_i$ and $k_i$ vertices. Let $G$ be the join graph of $G_1, G_2, \ldots, G_d$, which we will denote by $G=G_1 + G_2 + \ldots +G_d$. We recall that $G$ is obtained from the disjoint union of $G_1, G_2, \ldots, G_d$ by joining each vertex of $G_i$ with each vertex of $G_j$ for all $j \neq i$ (see \cite[Section 4]{[CM1]} and the references therein for further details). Let $A_i$ be the adjacency matrix of $G_i$ for $1 \leq i \leq d$ and $A$ be the adjacency matrix of $G$. By definition of the join of graphs, the adjacency matrix $A$ of $G$ has the following form
\[
A=\left(\begin{array}{c|c|c|c}
A_1 & \bm{1}}%{\mathbbold{1} & \cdots & \bm{1}}%{\mathbbold{1} \\
\hline
\bm{1}}%{\mathbbold{1} & A_2 & \cdots & \bm{1}}%{\mathbbold{1} \\
\hline
\vdots & \vdots & \ddots & \vdots \\
\hline
\bm{1}}%{\mathbbold{1} & \bm{1}}%{\mathbbold{1} & \cdots & A_d
\end{array}\right).
\]
Since $G_i$ is an undirected graph, $A_i$ is real and symmetric, hence normal. Furthermore, since $G_i$ is regular of degree $r_i$, $A_i$ is $r_i$-row regular. By Theorem \ref{thm:join_spectrum}, the reduced characteristic polynomial of $A$ is given by
\[ \overline{p}_{A}(t)=p_{\overline A}(t) ,\]
where
\[ \overline A=
\begin{pmatrix}
r_{1} & k_2 & \cdots & k_d \\
k_1 & r_{2} & \cdots & k_d \\
\vdots & \vdots & \ddots & \vdots \\
k_1 & k_2 & \cdots & r_{d}
\end{pmatrix} .\]
In summary, we have
\begin{prop} \label{prop:join_spetrum}
The characteristic polynomial of $A$ is given by
\[ p_{A}(t)=p_{\overline{A}}(t) \dfrac{\prod_{i=1}^d p_{A_i}(t)}{\prod_{i=1}^d (t-r_i)}.\]
\end{prop}
Let us consider some special cases of this proposition.
\begin{cor}(See \cite[Theorem 2.1.8]{[CRS]})
If $G_1$ is $r_1$-regular with $k_1$ vertices and $G_2$ is $r_2$-regular with $k_2$ vertices then the characteristic polynomial of the join $G_1+G_2$ is given by
\[ p_{G_1+G_2}(t)=\frac{p_{G_1}(t) p_{G_2}(t)}{(t-r_1)(t-r_2)} \left((t-r_1)(t-r_2)-k_1k_2 \right) .\]
\end{cor}
\begin{proof}
Let $A_1, A_2$ be the adjacency matrix of $G_1, G_2$ respectively. Then, the adjacency matrix of $G_1+G_2$ is
\[ A= \begin{pmatrix}
A_1 & \bm{1}}%{\mathbbold{1} \\ \bm{1}}%{\mathbbold{1} & A_2
\end{pmatrix} .\]
We have
\[ \overline{A}= \begin{pmatrix}
r_1 & k_2 \\ k_1 & r_2
\end{pmatrix} .\]
Hence
\[ p_{\overline{A}}(t)=(t-r_1)(t-r_2)-k_1k_2 .\]
By Proposition \ref{prop:join_spetrum}, we conclude that
\[ p_{G_1+G_2}(t)=\frac{p_{G_1}(t) p_{G_2}(t)}{(t-r_1)(t-r_2)} \left((t-r_1)(t-r_2)-k_1k_2 \right) .\]
\end{proof}
\begin{cor}(See \cite[Theorem 2.1.9]{[CRS]}) \label{cor:equal_difference}
Let $G_i$ be $r_i$-regular with $k_i$ vertices. Assume further that
\[ k_1-r_1=k_2-r_2=\ldots=k_d-r_d=s .\]
Let $G$ be the join graph of $G_1, G_2, \ldots, G_d$. Let \[ n=k_1+k_2+\ldots+k_d ,\]
and
\[ r=n-s .\]
Then
\begin{enumerate}
\item $G$ is $r$-regular with $n$ vertices.
\item The characteristic polynomial of $G$ is given by
\[ p_{G}(t)=(t-r)(t+s)^{d-1} \dfrac{\prod_{i=1}^d p_{G_i}(t)}{\prod_{i=1}^d (t-r_i)}. \]
\end{enumerate}
\begin{proof}
Let $v_i$ be a vertex in $G_i$. By definition, the degree of $v_i$ in $G$ is given by
\[ \deg_{G_i}(v_i)+(n-k_i)= n-(k_i-r_i)=n-s=r.\]
We conclude that $G$ is $r$-regular. This proves part $(1)$. For part $(2)$, we note that if $A$ is the adjacency matrix of $G$ then $\overline{A}$ is given by
\[ \overline A=
\begin{pmatrix}
r_1 & k_2 & \cdots & k_d \\
k_1 & r_2 & \cdots & k_d \\
\vdots & \vdots & \ddots & \vdots \\
k_1 & k_2 & \cdots & r_d
\end{pmatrix}.\]
We observe that
\[ \overline{A}+sI_{d}= \begin{pmatrix}
k_1 & k_2 & \cdots & k_d \\
k_1 & k_2 & \cdots & k_d \\
\vdots & \vdots & \ddots & \vdots \\
k_1 & k_2 & \cdots & k_d
\end{pmatrix}\]
has rank $1$. Consequently, $-s$ is an eigenvalue of $\overline{A}$ with multiplicity at least $d-1$. Additionally, by part $(1)$, $G$ is $r$-regular, hence $\lambda=r$ is the remaining eigenvalue of $\overline{A}$. Consequently,
\[ p_{\overline{A}}(t)=(t-r)(t+s)^{d-1} .\]
By Proposition \ref{prop:join_spetrum}, we conclude that
\[ p_{G}(t)=(t-r)(t+s)^{d-1} \dfrac{\prod_{i=1}^d p_{G_i}(t)}{\prod_{i=1}^d (t-r_i)}. \]
\end{proof}
\end{cor}
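As an illustration, take $G_1=C_5$ ($2$-regular on $5$ vertices) and $G_2=K_{3,3}$ ($3$-regular on $6$ vertices), so that $s=3$, $n=11$ and $r=8$. The following \texttt{NumPy} sketch verifies the predicted spectrum numerically:
\begin{verbatim}
import numpy as np

C5 = np.array([np.roll([0, 1, 0, 0, 1], k) for k in range(5)])
K33 = np.block([[np.zeros((3, 3)), np.ones((3, 3))],
                [np.ones((3, 3)), np.zeros((3, 3))]])
A = np.block([[C5, np.ones((5, 6))], [np.ones((6, 5)), K33]])

spec = np.sort(np.linalg.eigvalsh(A))
old1 = np.sort(np.linalg.eigvalsh(C5))[:-1]    # drop r_1 = 2
old2 = np.sort(np.linalg.eigvalsh(K33))[:-1]   # drop r_2 = 3
pred = np.sort(np.concatenate([[8.0, -3.0], old1, old2]))
assert np.allclose(spec, pred)
\end{verbatim}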
\section{A simple construction of Ramanujan graphs}
We discuss some applications of Corollary \ref{cor:equal_difference} to the construction of Ramanujan graphs. We first recall the definition of these graphs (see \cite[Chapter 3]{[CRS]} and \cite{[Murty]} for further details). We also recommend \cite{hoory2006expander} for a beautiful survey of some surprising applications and occurrences of Ramanujan graphs in various parts of mathematics, physics, communication networks and computer science.
\begin{definition} (see \cite[Definition 3.5.4]{[CRS]})
Let $G$ be a connected $r$-regular graph with $k$ vertices, and
let $r=\displaystyle \lambda _{1}\geq \lambda_{2}\geq \cdots \geq \lambda_{k}$ be the eigenvalues of the adjacency matrix of $G$. Since $G$ is connected and $r$-regular, its eigenvalues satisfy
\( |\lambda_i| \leq r, 1 \leq i \leq k.\)
Let
\[\lambda (G)=\max_{|\lambda_i|<r}|\lambda_{i}| .\]
The graph $G$ is a \textit{Ramanujan graph} if
\[ \lambda (G)\leq 2{\sqrt {r -1}} .\]
\end{definition}
The following proposition provides a construction of Ramanujan graphs.
\begin{prop} \label{prop:rm1}
Let $d\geq 2$ and, for $1\leq i\leq d$, let $G_i$ be $r_i$-regular Ramanujan graphs with $k_i$ vertices. Suppose further that the $G_i$'s satisfy the same conditions as in Corollary \ref{cor:equal_difference}, namely
\[ k_1-r_1=k_2-r_2=\ldots=k_d-r_d=s .\]
Let $G$ be the join graph of $G_1, G_2, \ldots, G_d$ and
\( n=k_1+ k_2+ \ldots+k_d\).
Then $G$ is a Ramanujan graph if and only if
\[ s \leq 2 (\sqrt{n}-1) .\]
\end{prop}
\begin{proof}
Corollary \ref{cor:equal_difference} describes the eigenvalues of $G$. Taking into account that the valency $r$ of $G$ is greater than the valency $r_i$ of each $G_i$, and that each $G_i$ is Ramanujan, $G$ is Ramanujan if and only if $s\leq 2\sqrt{r-1}=2\sqrt{n-s-1}$, if and only if $s^2+4s-4n+4\leq 0$, if and only if $s\leq 2\sqrt{n}-2$.
\end{proof}
Here is a special case of this construction.
\begin{cor} \label{prop:Ramanujan_1}
Let $G$ be an $r$-regular graph with $k$ vertices. Let $G^{d}$ be the join graph of $d$ identical copies of $G$. Then there exists a natural number $d_0$ such that for all $d \geq d_0$, $G^d$ is a Ramanujan graph.
\end{cor}
\begin{proof}
Note first that the old eigenvalues of $G^d$ are bounded in absolute value by $r$, and hence satisfy the Ramanujan bound for all sufficiently large $d$ (automatically so when $G$ itself is Ramanujan). Granting this, the argument of Proposition \ref{prop:rm1} shows that $G^d$ is a Ramanujan graph if and only if
\[ k-r \leq 2(\sqrt{dk}-1) .\]
This is equivalent to
\[ d \geq \frac{1}{k} \left(\frac{k-r}{2}+1 \right)^2 .\]
We therefore can take
\[ d_0= \left\lceil \frac{1}{k} \left(\frac{k-r}{2}+1 \right)^2 \right\rceil .\]
\end{proof}
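For instance, for $G=C_5$ (so $k=5$ and $r=2$) the formula gives $d_0=2$. The following \texttt{NumPy} sketch, included only as an illustration, checks the Ramanujan condition for a few values of $d \geq d_0$:
\begin{verbatim}
import numpy as np

k, r = 5, 2
d0 = int(np.ceil(((k - r) / 2 + 1) ** 2 / k))
print(d0)                       # 2

C5 = np.array([np.roll([0, 1, 0, 0, 1], j) for j in range(5)])
for d in range(d0, d0 + 4):     # the join of d copies of C5
    A = np.kron(1 - np.eye(d), np.ones((k, k))) + np.kron(np.eye(d), C5)
    ev = np.sort(np.linalg.eigvalsh(A))
    deg = ev[-1]                # = d*k - (k - r)
    lam = max(abs(ev[0]), abs(ev[-2]))
    print(d, lam <= 2 * np.sqrt(deg - 1))   # True for every d >= d0
\end{verbatim}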
\section{Spectrum of the joined union of graphs}
Let $G$ be a (weighted) digraph with $d$ vertices $\{v_1, v_2, \ldots, v_d\}$. Let $G_1, G_2, \ldots, G_d$ be (weighted) digraphs. The joined union $G[G_1, G_2, \ldots, G_d]$ is obtained from the union of $G_1, \ldots, G_d$ by joining with an edge each pair of a vertex from $G_i$ and a vertex from $G_j$ whenever $v_i$ and $v_j$ are adjacent in $G$ (see \cite{[joined_union]} for further details). Let $A_{G} =(a_{ij})$ be the adjacency matrix of $G$ and $A_{1}, A_{2}, \ldots, A_{d}$ be the adjacency matrices of $G_1, G_2, \ldots, G_d$ respectively. The adjacency matrix of $G[G_1, G_2, \ldots, G_d]$ has the following form
\[
A=\left(\begin{array}{c|c|c|c}
A_1 & a_{12}\bm{1}}%{\mathbbold{1} & \cdots & a_{1d}\bm{1}}%{\mathbbold{1} \\
\hline
a_{21}\bm{1}}%{\mathbbold{1} & A_2 & \cdots & a_{2d}\bm{1}}%{\mathbbold{1} \\
\hline
\vdots & \vdots & \ddots & \vdots \\
\hline
a_{d1}\bm{1}}%{\mathbbold{1} & a_{d2}\bm{1}}%{\mathbbold{1} & \cdots & A_d
\end{array}\right).
\]
\begin{rem} \label{rem:special_case}
When $G=K_d$, the complete graph on $d$ vertices, $G[G_1, G_2, \ldots, G_d]$ is exactly the join graph of $G_1, G_2, \ldots, G_d$ discussed in Section \ref{section:graph_join}.
\end{rem}
By Theorem \ref{thm:join_spectrum}, we have the following proposition.
\begin{prop} \label{prop:joined_union}
Assume that for each $1 \leq i \leq d$, $G_i$ is an $r_i$-regular graph with $k_i$ nodes. Let $G[G_1, G_2, \ldots, G_d]$ be the joined union graph. Let $\{\lambda_{1}^{G_i}=r_i, \ldots, \lambda_{k_i}^{G_i} \}$ be the spectrum of $G_i$ as described in Corollary \ref{cor:orthogonal}. Then the spectrum of $A$ is the union of $\Spec(\overline{A})$ and the following multiset
\[ \{\lambda_j^{A_i} \}_{1 \leq i \leq d, 2 \leq j \leq k_i} .\]
Here $\overline{A}$ is the following $d \times d$ matrix, whose entries are the row sums of the blocks in the matrix $A$
\[ \overline A=
\begin{pmatrix}
r_{A_1} & a_{12}k_2 & \cdots & a_{1d}k_d \\
a_{21}k_1 & r_{A_2} & \cdots & a_{2d}k_d \\
\vdots & \vdots & \ddots & \vdots \\
a_{d1}k_1 & a_{d2}k_2 & \cdots & r_{A_d}
\end{pmatrix} .\]
\end{prop}
\begin{expl} \label{expl:acyclic}
Let us consider the case where $G$ is a directed acyclic graph. Using the topological ordering induced by $G$, we can assume that (up to a permutation) the adjacency matrix of $G$ is upper triangular. In this case, the matrix $A$ is upper triangular. Consequently, $\overline{A}$ is also upper triangular, so $\Spec(\overline{A})$ is the multiset of its diagonal entries $\{r_{A_1}, \ldots, r_{A_d}\}$. In other words, $\Spec(G[G_1, G_2, \ldots, G_d])$ is precisely $\cup_{i=1}^{d} \Spec(G_i)$.
\end{expl}
Let us consider another special case, where the $G_i$ are all $r$-regular graphs with $k$ vertices. In this case, we have the following proposition.
\begin{prop} \label{prop:joined_union_equal_case}
Assume that for each $1 \leq i \leq d$, $G_i$ is an $r$-regular graph with $k$ vertices. Let $G[G_1, G_2, \ldots, G_d]$ be the joined union graph. Let $\{\lambda_{1}^{G_i}=r, \ldots, \lambda_{k}^{G_i} \}$ be the spectrum of $G_i$ as described in Corollary \ref{cor:orthogonal}. Then the spectrum of $A$ is the union of the multiset
\[ \{\lambda_j^{A_i} \}_{1 \leq i \leq d, 2 \leq j \leq k} ,\]
and the following multiset
\[ \{r +k \sigma| \sigma \in \Spec(A_G) \}.\]
\end{prop}
\begin{proof}
In this case, the matrix $\overline{A}$ is of the following form
\[ \overline{A}= r I_d +k A_{G}, \]
where $A_G$ is the adjacency matrix of $G$. Therefore, the spectrum of $\overline{A}$ is given by $r+k \Spec(A_G)$.
\end{proof}
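As a small numerical check (illustration only), the sketch below builds the joined union $C_4[K_3, K_3, K_3, K_3]$ and compares its spectrum with the prediction of the proposition:
\begin{verbatim}
import numpy as np

# G = C4 (outer graph), every G_i = K3 (r = 2, k = 3), so the new
# eigenvalues are r + k*sigma = 2 + 3*sigma for sigma in Spec(C4).
K3 = np.ones((3, 3)) - np.eye(3)
C4 = np.array([np.roll([0, 1, 0, 1], j) for j in range(4)])
A = np.kron(C4, np.ones((3, 3))) + np.kron(np.eye(4), K3)

new = 2 + 3 * np.linalg.eigvalsh(C4)                      # {-4, 2, 2, 8}
old = np.tile(np.sort(np.linalg.eigvalsh(K3))[:-1], 4)    # eight copies of -1
pred = np.sort(np.concatenate([new, old]))
assert np.allclose(np.sort(np.linalg.eigvalsh(A)), pred)
\end{verbatim}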
Let us keep the same assumptions as in Proposition \ref{prop:joined_union_equal_case}. Assume further that $G$ is a $\Delta$-regular graph with $d$ vertices. Then the joined union $G[G_1, G_2, \ldots, G_d]$ is a $(r+\Delta d)$-regular graph with $kd$ vertices. We have the following simple corollary that generalizes Corollary \ref{prop:Ramanujan_1}, since graph joins are particular cases of joined unions (see Remark \ref{rem:special_case}).
\begin{cor} \label{cor:rm2}
Assume that $r,k, \Delta$ are fixed. Assume further that $G$ is a connected, non-bipartite Ramanujan graph. Then there exists a number $d_0$ such that if $d \geq d_0$ then $G[G_1, G_2, \ldots, G_d]$ is a Ramanujan graph.
\end{cor}
\begin{proof}
Since $G$ is a connected, non-bipartite, and Ramanujan graph, the only eigenvalue $\sigma$ of $A_{G}$ such that $|\sigma|=\Delta$ is $\sigma=\Delta.$ For the other eigenvalues $\sigma \in \Spec(A_{G})$, we have
\[ |\sigma| \leq 2 \sqrt{\Delta-1} .\]
By triangle inequality, we have
\[ |r+ k \sigma| \leq r+k|\sigma| \leq r + 2k \sqrt{\Delta-1} .\]
By Proposition \ref{prop:joined_union_equal_case}, to guarantee that $G[G_1, G_2, \ldots, G_d]$ is Ramanujan, we only need to make sure that the following inequalities hold
\[ |\lambda_{j}^{A_i}| \leq 2 \sqrt{r+\Delta d -1}, \quad \text{for all} \quad 2 \leq j \leq k, 1 \leq i \leq d ,\]
and
\[ r+2k \sqrt{\Delta-1} \leq 2 \sqrt{r+\Delta d -1} .\]
Since $r, k, \Delta$ are fixed, there exists an integer $d_0$ such that, if $d \geq d_0$, the above inequalities hold.
\end{proof}
\section{Applications to graph energy}
In this section, we apply our main theorems to study some questions on graph energy.
\begin{definition}
Let $G$ be a graph with $d$ nodes. Suppose that
\[ \Spec(G)=\{\lambda_1, \lambda_2, \ldots, \lambda_d \} .\]
The energy of $G$ is defined to be the following sum (see \cite[Section 9.2.2]{[CRS]} for further discussions.)
\[ E(G)=\sum_{i=1}^d |\lambda_i| .\]
\end{definition}
\begin{expl}
Let $G=K_d$ be the complete graph with $d$ vertices. Then
\[ \Spec(G)=\{[-1]_{d-1}, [d-1]_{1} \}, \]
where $[a]_m$ means that $a$ has multiplicity $m.$ We conclude that the energy of $K_d$ is $2(d-1).$
\end{expl}
Let $G_i$ and $G$ be as at the beginning of Section $4$, namely \[ G=G_1+G_2+ \ldots+G_d=K_d[G_1, G_2, \ldots, G_d]. \]
We have the following inequality.
\begin{prop} \label{prop:first_estimate}
The energy of $G$ is strictly larger than the sum of the energies of the $G_i$:
\[ E(G)> \sum_{i=1}^d E(G_i) .\]
\end{prop}
\begin{proof}
Let $\{\lambda_1, \lambda_2, \ldots, \lambda_d \}$ be the eigenvalues of $\overline{A}$ where $A$ and $\overline{A}$ are the matrices defined at the beginning of Section $4$, namely
\[ \overline A=
\begin{pmatrix}
r_1 & k_2 & \cdots & k_d \\
k_1 & r_2 & \cdots & k_d \\
\vdots & \vdots & \ddots & \vdots \\
k_1 & k_2 & \cdots & r_d
\end{pmatrix} .\]
Note that $\lambda_i \in \R$ as they are also eigenvalues of $A$, which is real and symmetric. By Proposition \ref{prop:join_spetrum}, we have
\[ E(G)-\sum_{i=1}^{d} E(G_i)=\sum_{i=1}^{d} |\lambda_i|- \sum_{i=1}^{d} r_i .\]
We also note that $\sum_{i=1}^d \lambda_i=\Tr(\overline{A})=\sum_{i=1}^d r_i.$ Therefore, we have
\[ E(G)-\sum_{i=1}^{d} E(G_i)= \sum_{i=1}^{d} (|\lambda_i|-\lambda_i)=2 \sum_{ \lambda_i<0} |\lambda_i| .\]
Hence, to show that $E(G)>\sum_{i=1}^d E(G_i)$, we only need to show that for some $i$, $\lambda_i<0.$
Let $s_i=k_i-r_i>0$. Without loss of generality, we can assume that
\[ k_1-r_1 \leq k_2-r_2 \leq \ldots \leq k_d-r_d .\]
Let us consider
\begin{align*} p_{\overline{A}}(-s_1)&= p_{\overline{A}}(r_1-k_1)= \det((r_1-k_1)I_d-\overline{A}) \\
&=(-1)^d \det \begin{pmatrix}
k_1 & k_2 & \cdots & k_d \\
k_1 & r_2+k_1-r_1 & \cdots & k_d \\
\vdots & \vdots & \ddots & \vdots \\
k_1 & k_2 & \cdots & r_d+k_1-r_1
\end{pmatrix} \\
&= (-1)^d k_1 \det \begin{pmatrix}
1 & k_2 & \cdots & k_d \\
1 & r_2+k_1-r_1 & \cdots & k_d \\
\vdots & \vdots & \ddots & \vdots \\
1 & k_2 & \cdots & r_d+k_1-r_1
\end{pmatrix}.
\end{align*}
By adding $-k_i$ times the first column to the $i$-th column, we see that the latter determinant is also equal to
\[ \det \begin{pmatrix}
1 & 0 & \cdots & 0 \\
1 & (k_1-r_1)-(k_2-r_2) & \cdots & 0 \\
\vdots & \vdots & \ddots & \vdots \\
1 & 0 & \cdots & (k_1-r_1)-(k_d-r_d)
\end{pmatrix}=(s_1-s_2)(s_1-s_3) \ldots (s_1-s_d) .\]
We conclude that
\[ p_{\overline{A}}(-s_1)=(-1)^d k_1 \prod_{j \neq 1} (s_1-s_j)=-k_1 \prod_{j \neq 1}(s_j-s_1) \leq 0 .\]
By the same argument, we see that
\[ p_{\overline{A}}(-s_2)=-k_2 \prod_{j \neq 2} (s_j-s_2)=k_2(s_2-s_1) \prod_{j>2} (s_j-s_2) \geq 0 .\]
By the intermediate value theorem, $p_{\overline{A}}(t)$ has a real root on the interval $[-s_2, -s_1]$. In particular, at least one eigenvalue of $\overline{A}$ must be negative. This completes the proof.
\end{proof}
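The strict inequality is easy to observe numerically; for instance, for the join of $C_5$ and $K_4$ (an illustrative choice):
\begin{verbatim}
import numpy as np

def energy(M):
    return np.abs(np.linalg.eigvalsh(M)).sum()

C5 = np.array([np.roll([0, 1, 0, 0, 1], j) for j in range(5)])
K4 = np.ones((4, 4)) - np.eye(4)
A = np.block([[C5, np.ones((5, 4))], [np.ones((4, 5)), K4]])

assert energy(A) > energy(C5) + energy(K4)   # strict, as predicted
\end{verbatim}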
\begin{definition}
A graph $G$ with $d$ nodes is called hyperenergetic if $E(G) \geq 2(d-1).$
\end{definition}
\begin{prop}
Assume that $G_i$ are all $r$-regular with $k$ vertices. Assume further that $G$ is hyperenergetic. Then
\[ E(G[G_1, G_2, \ldots, G_d]) \geq E(G)+\sum_{i=1}^d E(G_i) .\]
The equality can happen, for example when $G$ and $G_i$ are all complete graphs.
\end{prop}
\begin{proof}
Let $A$ be the adjacency matrix of $G[G_1, G_2, \ldots, G_d]$. Then the matrix $\overline{A}$ in Proposition \ref{prop:joined_union} has the following form
\[ \overline A=
\begin{pmatrix}
r & a_{12}k & \cdots & a_{1d}k \\
a_{21}k & r & \cdots & a_{2d}k \\
\vdots & \vdots & \ddots & \vdots \\
a_{d1}k & a_{d2}k & \cdots & r
\end{pmatrix}=rI_d+ k A_{G} .\]
Let $\Spec(A_{G})=\{\lambda_1, \lambda_2, \ldots, \lambda_d \}$ then
\[ \Spec(\overline{A})=\{r+k \lambda_1, r+k \lambda_2, \ldots, r+k \lambda_d \}. \]
By Proposition \ref{prop:joined_union}, we have
\begin{align*}
E(G[G_1, G_2, \ldots, G_d])- E(G)-\sum_{i=1}^d E(G_i) &=\sum_{i=1}^{d}|r+k \lambda_i|- \sum_{i=1}^{d}|\lambda_i|- dr.
\end{align*}
We note that by the Perron-Frobenius Theorem, one of the eigenvalues of $A_{G}$ must be real and non-negative. Let us assume $\lambda_1 \geq 0.$ We then have
\begin{align*}
\sum_{i=1}^d |r+k \lambda_i|&=r+k \lambda_1+ \sum_{i=2}^d |r+k \lambda_i|\\
& \geq r+k \lambda_1+\sum_{i=2}^d (k|\lambda_i|-r) \\
&\geq k\sum_{i=1}^d |\lambda_i| -(d-2)r.
\end{align*}
Consequently, we have
\begin{align*}
E(G[G_1, G_2, \ldots, G_d])- E(G)-\sum_{i=1}^d E(G_i) & \geq (k-1) \sum_{i=1}^d |\lambda_i|-2(d-1)r \\
& \geq r (\sum_{i=1}^d |\lambda_i|-2(d-1))\\
& \geq 0.
\end{align*}
Note that the second inequality follows from $k \geq r+1$ and the last inequality follows from the assumption that $G$ is hyperenergetic.
\end{proof}
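The equality case can be checked directly: with $G=K_3$ and all $G_i=K_4$, the joined union is $K_{12}$ and both sides of the inequality equal $22$. A minimal \texttt{NumPy} sketch:
\begin{verbatim}
import numpy as np

def energy(M):
    return np.abs(np.linalg.eigvalsh(M)).sum()

d, k = 3, 4
Kd = np.ones((d, d)) - np.eye(d)           # G = K_3, E = 2(d-1) = 4
Kk = np.ones((k, k)) - np.eye(k)           # G_i = K_4, E = 2(k-1) = 6
A = np.kron(Kd, np.ones((k, k))) + np.kron(np.eye(d), Kk)  # = K_{12}

print(energy(A), energy(Kd) + d * energy(Kk))   # 22.0  22.0
\end{verbatim}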
\begin{rem}
The above proof can be slightly generalized as follows. Suppose that $G$ is an undirected graph and the spectrum of $G$ consists of $n$ negative eigenvalues and $p$ non-negative eigenvalues. Suppose that the energy of $G$ satisfies
\begin{equation} \label{eq:inequality1}
E(G) \geq d+n-p =2(d-p).
\end{equation}
Then we have
\[ E(G[G_1, G_2, \ldots, G_d]) \geq E(G)+\sum_{i=1}^d E(G_i) .\]
We checked that all undirected graphs with at most $3$ nodes satisfy Inequality \ref{eq:inequality1}.
\end{rem}
\begin{question} \label{question:inequality}
Suppose that $G_i$ are all regular graphs. Does the following inequality hold in general?
\begin{equation} \label{eq:inequality}
E(G[G_1, G_2, \ldots, G_d]) \geq E(G)+\sum_{i=1}^d E(G_i)?
\end{equation}
\end{question}
We provide an answer to this question in a special case, namely for $d=2$.
\begin{prop}
Let $G_1, G_2$ be two regular graphs and $G$ be a graph with $2$ nodes. Then \[ E(G[G_1, G_2]) \geq E(G)+ E(G_1)+E(G_2) .\]
\end{prop}
\begin{proof}
If $G$ is acyclic, $E(G)=0$ and by Example \ref{expl:acyclic}, we have
\[ E(G[G_1, G_2]) \geq E(G)+ E(G_1)+E(G_2) .\]
If $G$ is not acyclic then $G=K_2$. The energy of $G$ is $E(G)=2.$ Suppose that $G_i$ is $r_i$-regular with $k_i$ vertices for $i \in \{1, 2 \}$. Let $\lambda_1, \lambda_2$ be the eigenvalues of $\overline{A}$ where
\[ \overline{A}= \begin{pmatrix} r_1 & k_2 \\ k_1 & r_2 \end{pmatrix} .\]
By Proposition \ref{prop:join_spetrum} we have
\[ E(G[G_1, G_2])-E(G_1)-E(G_2)=|\lambda_1|+|\lambda_2|-(r_1+r_2) .\]
The eigenvalues of $\overline{A}$ are
\[ \lambda_1, \lambda_2= \frac{(r_1+r_2) \pm \sqrt{(r_1-r_2)^2+4k_1 k_2}}{2} .\]
We have $\det(\overline{A})=r_1r_2-k_1 k_2<0$, so one eigenvalue of $\overline{A}$ is negative and the other is positive. Consequently
\begin{align*}
|\lambda_1|+|\lambda_2|-r_1-r_2 &=\sqrt{(r_1-r_2)^2+4k_1k_2}-(r_1+r_2) \\ &\geq \sqrt{(r_1-r_2)^2+4(r_1+1)(r_2+1)}-(r_1+r_2) \\ & \geq (r_1+r_2+2)-(r_1+r_2)=2 .
\end{align*}
In other words, we have
\[ E(G[G_1, G_2]) \geq E(G)+ E(G_1)+E(G_2) .\]
\end{proof}
Another situation where we can verify Inequality \ref{eq:inequality} is the following.
\begin{prop}
Let $G_i$ be $r_i$-regular with $k_i$ vertices. Assume further that
\[ k_1-r_1=k_2-r_2=\ldots=k_d-r_d=s .\]
Let $G$ be the joined union graph $K_d[G_1, G_2, \ldots, G_d]$. Then
\[ E(K_d[G_1, G_2, \ldots, G_d]) \geq E(K_d)+\sum_{i=1}^d E(G_i). \]
\end{prop}
\begin{proof}
Let $k=\sum_{i=1}^d k_i$. By Corollary \ref{cor:equal_difference}, we have
\begin{align*}
& E(K_d[G_1, G_2, \ldots, G_d])- E(K_d)-\sum_{i=1}^d E(G_i) \\ &=(k-s)+(d-1)s-2(d-1)-\sum_{i=1}^d r_i \\
&=\sum_{i=1}^d (k_i-r_i)-s+(d-1)(s-2) \\
&= ds-s+(d-1)(s-2) = 2(d-1)(s-1) \geq 0.
\end{align*}
Consequently
\[ E(K_d[G_1, G_2, \ldots, G_d]) \geq E(K_d)+\sum_{i=1}^d E(G_i). \]
\end{proof}
\begin{prop}
Let $G_i$ be $r_i$-regular with $k_i$ vertices. Let $s_i=k_i-r_i$. Assume further that
\[ s_1 <s_2<\ldots <s_d .\]
Let $G$ be the joined union graph $K_d[G_1, G_2, \ldots, G_d]$. Then
\[ E(K_d[G_1, G_2, \ldots, G_d]) \geq 2 \sum_{i=1}^{d-1} s_i +\sum_{i=1}^d E(G_i). \]
In particular, if $d \geq 2$ then
\[ E(K_d[G_1, G_2, \ldots, G_d]) > E(K_d) +\sum_{i=1}^d E(G_i). \]
\end{prop}
\begin{proof}
Let $\{\lambda_1, \lambda_2, \ldots, \lambda_d \}$ be the eigenvalues of $\overline{A}$ where $A$ and $\overline{A}$ are the matrices in Proposition \ref{prop:join_spetrum}, namely
\[ \overline A=
\begin{pmatrix}
r_1 & k_2 & \cdots & k_d \\
k_1 & r_2 & \cdots & k_d \\
\vdots & \vdots & \ddots & \vdots \\
k_1 & k_2 & \cdots & r_d
\end{pmatrix} .\]
By the same argument as in Proposition \ref{prop:first_estimate}, we have
\[ p_{\overline{A}}(-s_i)=-k_i \prod_{j \neq i} (s_j-s_i) .\]
Because of the strict ordering $s_1<s_2<\ldots<s_d$, we see that $p_{\overline{A}}(-s_i)p_{\overline{A}}(-s_{i+1})<0$ for $1 \leq i \leq d-1.$ By the intermediate value theorem, $p_{\overline{A}}(t)$ has a real root, say $\lambda_i$, in the open interval $(-s_{i+1}, -s_i).$ In particular, $\lambda_i<0$ and $|\lambda_i| > s_i$ for $1 \leq i \leq d-1$. We also note that
\[ \sum_{i=1}^d \lambda_i=\Tr(\overline{A})=\sum_{i=1}^d r_i .\]
Hence
\[ \lambda_d=\sum_{i=1}^d r_i - \sum_{i=1}^{d-1} \lambda_i >0 .\]
We then have
\begin{align*}
E(K_d[G_1, G_2, \ldots, G_d])-\sum_{i=1}^d E(G_i) &= \sum_{i=1}^{d} |\lambda_i| -\sum_{i=1}^d r_i \\
&= \sum_{i=1}^{d-1} |\lambda_i|+ \left(\sum_{i=1}^d r_i - \sum_{i=1}^{d-1} \lambda_i \right)-\sum_{i=1}^d r_i \\
&=2 \sum_{i=1}^{d-1} |\lambda_i| \geq 2 \sum_{i=1}^{d-1} s_i.
\end{align*}
Since the $s_i$ are positive integers with $s_1<s_2<\ldots<s_d$, we have $\sum_{i=1}^{d-1} s_i \geq d-1$. Moreover, each $\lambda_i$ lies in the open interval $(-s_{i+1},-s_i)$, so that $|\lambda_i|>s_i$ and the above inequality is strict. Hence
\[E(K_d[G_1, G_2, \ldots, G_d])-\sum_{i=1}^d E(G_i)> 2(d-1)=E(K_d) .\]
\end{proof}
\section*{Acknowledgments}
This work was supported by BrainsCAN at Western University through the Canada First Research Excellence Fund (CFREF), the NSF through a NeuroNex award (\#2015276), the Natural Sciences and Engineering Research Council of Canada (NSERC) grant R0370A01, and SPIRITS 2020 of Kyoto University. J.M.~gratefully acknowledges the Western University Faculty of Science Distinguished Professorship in 2020-2021. Parts of this article were written during the workshop ``Spectral graph and hypergraph theory: connections and applications" organized by the American Institute of Mathematics from December 6th to December 12th, 2021. T.T.N. would like to thank the organizers of this conference and the American Institute of Mathematics for the stimulating working environment and kind hospitality.
\bibliographystyle{plain}
High-dimensional quantum systems feature a number of interesting phenomena, beyond what is possible for qubit systems. For example, the effect of entanglement is known to become increasingly robust to noise when higher dimensions are considered, the robustness becoming even arbitrary large \cite{zhu2021high, ecker2019overcoming}. In turn, the nonlocal correlations obtained from measurements on high-dimensional systems also feature significantly increased robustness. Indeed, these effects offer interesting perspectives for quantum information processing, allowing, e.g., for quantum communications over very noisy channels.
In this work, we consider the effect of genuine high-dimensional steering (GHDS), which has been introduced recently \cite{designolle2021genuine}. The steering scenario can be viewed as the certification of entanglement between an untrusted party (Alice) and a trusted one (Bob). Hence steering is usually referred to as being one-sided device-independent (1-SDI). The key point of GHDS is to certify not only the presence of entanglement, but a minimal dimensionality of entanglement (specifically the Schmidt number) from observed correlations in a 1-SDI scenario. More formally, this approach introduces the notion of $n$-preparable assemblages, i.e., those assemblages being preparable based on any possible entangled state of Schmidt rank at most $n$; 1-preparable assemblages being then simply those assemblages that cannot lead to steering. Next, one can construct a steering inequality for $n$-preparable assemblages, the violation of which implies the presence of genuine $n+1$-dimensional steering. This was demonstrated in a quantum optics experiment (based on photon-pairs entangled in orbital angular momentum) reporting the 1-SDI certification of 14-dimensional entanglement.
A natural question at this point is to understand what are the resources required in terms of measurements for demonstrating GHDS. Indeed, the effect of steering uses not only an entangled state as a resource, but also a well-chosen set of local measurements for Alice. The latter must be incompatible (in the sense of being non-jointly measurable), but it turns out that steering has a direct connection to measurement incompatibility.
The present work explores this question, and establishes a general connection between GHDS and the notion of $n$-simulability of high-dimensional measurements which has been recently introduced in Ref.~\cite{ioannou2022simulability}. This notion generalises the concept of joint measurability and provides a quantification of measurement incompatibility in terms of a dimension. The connection we uncover generalises the well-known relations between quantum steering and joint measurability. Moreover, we also extend the connection to quantum channels, in particular the characterisation of their high-dimensional properties. These general tripartite connections between high-dimensional steering, measurements and channels, allow for results of one area to be directly translated in others, which we illustrate with several examples.
\begin{figure*}[t]
\centering
\includegraphics[width=0.99\textwidth]{steering,n-sim,schmidt}
\caption{Concepts and connections that appear in this work. (a) Quantum Steering scenario. (b) A set of measurements is $n$-simulable if they can be replaced by an $n$-partially entanglement breaking channel ($n$-PEB) followed by some measurements. (c) Illustration of the Schmidt number (SN) of a bipartite state: the state of two $5$ level systems is a combination of states with only qubit entanglement, hence the overall state has SN at most $2$. }
\label{fig:fig1}
\end{figure*}
\section{Summary of results}
We start by identifying the resources for GHDS. In particular, we show that an assemblage is $n$-preparable if it can be prepared via an entangled state of Schmidt number at most $n$ or if the set of Alice's local measurements is $n$-simulable. Hence the observation of genuine $n+1$-dimensional steering implies the presence of both (i) an entangled state of Schmidt number (at least) $n+1$, and (ii) a set of measurements for Alice that is not $n$-simulable. In this sense, GHDS provides a dimensional certification of both the entangled state and the local measurements. Moreover, we show that there is a one-to-one mapping between any $n$-preparable assemblage and a set of measurements that is $n$-simulable, generalising the existing connection between steering and joint measurability (corresponding here to the case $n=1$).
This connection allows us to import results from one area to the other. For example, we can construct optimal models for simulating the correlations of high $d$-dimensional entangled states (so-called isotropic states) based on lower $n$-dimensional entanglement (and classical shared randomness). These simulation models hold for all possible local measurements on Alice's and Bob's side. In this sense, these models can be considered as a generalisation of the well-known local hidden state model of Werner, where classical shared randomness is augmented with low-dimensional entanglement. Moreover, we can translate steering inequalities for GHDS into criteria for testing the non-$n$-simulability of measurements.
Finally, we obtain a dimensional characterisation of quantum channels via channel-state duality. In particular, we consider channels that map the set of all measurements to $n$-simulable ones, and describe the corresponding Choi states.
We conclude with a number of open questions.
\section{Basic concepts and questions}
A central notion for us will be \textit{quantum steering}, see e.g. \cite{cavalcanti2016quantum, uola2020quantum} for recent reviews. Here, one party (Alice) performs local measurements $\{M_{a|x}\}$ on a state $\rho_{AB}$, a unit-trace positive semi-definite matrix acting on a finite-dimensional Hilbert space, that she shares with another distant party (Bob). The measurements are collections of matrices for which $M_{a|x} \geq 0$ $\forall a,x$ and $\sum_a M_{a|x} = \mathds{1} $ for each $x$. Here $x$ indexes the measurement and $a$ indexes the outcome. For each $x$, the collection $\{M_{a|x}\}_a$ is called a positive operator-valued measure (POVM for short). By performing her measurements, Alice remotely prepares the system of Bob in different possible states denoted by
\begin{equation}
\sigma_{a|x}:=\text{Tr}_A \bigg (M_{a|x}\otimes \mathds{1} ~ [\rho_{AB}] \bigg), \label{eq:steeringassem}
\end{equation}
usually termed an \textit{assemblage}, see Figure (\ref{fig:fig1}a). Such an assemblage demonstrates quantum steering when it does not admit a \textit{local hidden state} (LHS) model, i.e. a decomposition of the form
\begin{equation} \label{LHS}
\sigma_{a|x} = p(a|x) \sum_\lambda ~ p(\lambda|a,x) ~ \sigma_\lambda \,,
\end{equation}
where $p(a|x)$ is a normalisation factor and $\{p(\lambda|a,x), \sigma_\lambda\}$ is an ensemble of states whose priors get updated upon Bob asking Alice to perform the measurement $x$ and her reporting back the outcome $a$.
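To make Eq.~\eqref{eq:steeringassem} concrete, the following minimal \texttt{NumPy} sketch computes the assemblage generated by Pauli $X$ and $Z$ measurements on a two-qubit isotropic state, and checks that the marginal $\sum_a \sigma_{a|x}$ is the same for both settings; this particular state and these measurements are merely an illustrative choice.
\begin{verbatim}
import numpy as np

eta = 0.6                                   # visibility (illustrative)
phi = np.zeros((4, 1)); phi[0] = phi[3] = 1 / np.sqrt(2)
rho = eta * (phi @ phi.T) + (1 - eta) * np.eye(4) / 4

Z = np.diag([1.0, -1.0])
X = np.array([[0.0, 1.0], [1.0, 0.0]])
povms = {('Z', a): (np.eye(2) + a * Z) / 2 for a in (+1, -1)}
povms.update({('X', a): (np.eye(2) + a * X) / 2 for a in (+1, -1)})

def steer(M, rho):  # sigma_{a|x} = Tr_A( (M_{a|x} (x) 1) rho )
    big = np.kron(M, np.eye(2)) @ rho
    return np.trace(big.reshape(2, 2, 2, 2), axis1=0, axis2=2)

assemblage = {key: steer(M, rho) for key, M in povms.items()}
for x in ('Z', 'X'):  # the marginal is rho_B = I/2 for every setting
    assert np.allclose(assemblage[(x, 1)] + assemblage[(x, -1)],
                       np.eye(2) / 2)
\end{verbatim}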
Steering represents a form of quantum correlations that is intermediate between entanglement and Bell nonlocality \cite{wiseman2007steering,quintino2015inequivalence}. Specifically, there exist entangled states that cannot lead to steering, and there exist some steerable states that cannot lead to Bell inequality violation (nonlocality). Also, the steering scenario is commonly referred to as \textit{one-sided device-independent} (1-SDI), as Alice's device is untrusted but Bob's device is fully characterised. Since steering requires the presence of entanglement---separable states always admitting an LHS model---it also represents a 1-SDI method for certifying entanglement. Moreover, steering is an asymmetric phenomenon, as there exist states $\rho_{AB}$ for which steering is only possible in one direction (e.g., from $A$ to $B$) \cite{bowles2014one}.
One can take the concept of quantum steering a step further in terms of bipartite entanglement detection. Instead of only certifying the presence of entanglement, it is possible to use steering to characterise the entanglement dimensionality, as quantified via the Schmidt number \cite{terhal2000schmidt}. For a pure state $\ket{\psi}$, this corresponds to the \textit{Schmidt rank} (SR), i.e., the minimum number of terms needed to express $\ket{\psi}$ as a linear combination of product states. The \textit{Schmidt number} (SN) \cite{terhal2000schmidt} is a generalisation to mixed states, formally defined as
\begin{align}
\text{SN}(\rho) := \underset{\{ p_k, ~\ket{\psi_k} \}}{\min} \max_k \quad&\text{SR}(\ket{\psi_k}) \\\nonumber
\quad \text{s.t} \quad &\rho = \sum_k p_k \ketbra{\psi_k}{\psi_k}.
\end{align}
The Schmidt number thus quantifies the entanglement dimensionality, in that it tells the minimum number of degrees of freedom that one needs to be able to entangle in order to produce the state, see Figure (\ref{fig:fig1}c). As an example, witnessing a Schmidt number of three implies that qubit entanglement, even when mixed between different subspaces, is not enough to produce the state.
In \cite{designolle2021genuine}, the concept of \textit{genuine high-dimensional steering} (GHDS) was introduced, where one asks whether a given assemblage $\sigma_{a|x}$ can be produced using a bipartite state $\rho_{AB}$ of Schmidt number at most $n$, in which case we term the assemblage \textit{$n$-preparable}. In this framework, an assemblage is LHS if and only if it is $1$-preparable, as any LHS assemblage can be prepared using only separable states \cite{Kogias15,Moroder16}. Hence if an assemblage is not $n$-preparable, this guarantees that the underlying state $\rho_{AB}$ is of Schmidt number at least $n+1$. This represents a 1-SDI certification of entanglement dimensionality, illustrated in a recent quantum optics experiment certifying up to 14-dimensional entanglement \cite{designolle2021genuine}.
So far, the focus of GHDS is on the dimensionality of the shared entangled state. There is however another resource that is crucial for observing quantum steering, namely the set of measurements performed by Alice, which must be incompatible. More generally, there exists in fact a deep connection between measurement incompatibility (in the sense of being not jointly measurable) and quantum steering \cite{quintino14,uola14,uola15}. In particular, this implies that any set of incompatible measurements for Alice can be combined with an appropriate state $\rho_{AB}$ for demonstrating steering.
This naturally raises the question of what are the necessary resources in terms of measurements for demonstrating GHDS. Intuitively, the latter should also require a minimal ``dimensionality'' for the set of measurements. Below we will make this intuition precise, by using the concept of $n$-simulability of a set of measurements. More generally, we will establish a deep connection between GHDS (more precisely the notion of $n$-preparability of an assemblage) and the $n$-simulability of sets of measurements. This generalises the previously known connection between steering and measurement incompatibility.
A set of measurements $\{M_{a|x}\}$, defined on a Hilbert space of dimension $d$, is said to be $n$-simulable when the statistics of this set of measurements on any possible quantum state can be exactly recovered using a form of compression of quantum information to a lower $n$-dimensional space. Consider for example Alice (on the moon) sending an arbitrary state $\rho$ to a distant party Bob (on earth), who will perform a set of POVMs $\{M_{a|x}\}$ (see Fig. 1). Which POVM Bob performs depends on some input $x$. The expected (target) data is given by $p(a|x,\rho) = \Tr(M_{a|x} \rho)$. As a resource, we consider here the dimensionality of the quantum channel between Alice and Bob, while a classical channel is always available for free. The goal is then to compress as much as possible the initial state of Alice, in order to use a quantum channel with minimal dimension, while still recovering exactly the target data. More formally, we demand that
\begin{equation} \label{n-simulable}
M_{a|x} = \sum_{\lambda} \Lambda_{\lambda}^*( N_{a|x,\lambda})
\end{equation}
where $\Lambda = \{\Lambda_{\lambda}\}_{\lambda}$ denotes the instrument (compressing from dimension $d$ to $n$), with classical output $\lambda$, and $N_{a|x,\lambda}$ is a set of $n$-dimensional POVMs performed by Bob upon receiving the input $x$ and the classical information $\lambda$ communicated by Alice. Here $\Lambda_\lambda^*$ refers to the Heisenberg picture of $\Lambda_\lambda$. A set of measurements is termed $n$\textit{-simulable} whenever a decomposition of the form \eqref{n-simulable} can be found.
An important case is $1$-simulability, i.e., when the full quantum information can be compressed to purely classical one. This is possible if and only if the set of POVMs is jointly measurable, i.e.,
$M_{a|x} = \sum_\lambda ~ p(a|x,\lambda) ~ G_\lambda$, for some probability distribution $p(a|x,\lambda)$ and a ``parent'' measurement $G_\lambda$, see \cite{JMinvitation,JMreview} for reviews on the topic. A set of POVMs that is not jointly measurable (hence called \textit{incompatible}), can nevertheless be $n$-simulable, for some $n$ with $2 \leq n \leq d$.
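As a concrete (and standard) example, the pair of noisy Pauli observables $\frac{1}{2}(\mathds{1}\pm\eta X)$ and $\frac{1}{2}(\mathds{1}\pm\eta Z)$ is jointly measurable at $\eta=1/\sqrt{2}$. The following \texttt{NumPy} sketch, included only as a numerical illustration, exhibits a parent measurement:
\begin{verbatim}
import numpy as np

X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.diag([1.0, -1.0])
eta = 1 / np.sqrt(2)

# candidate parent POVM G_{ab} = (I + (a*X + b*Z)/sqrt(2)) / 4
G = {(a, b): (np.eye(2) + (a * X + b * Z) / np.sqrt(2)) / 4
     for a in (+1, -1) for b in (+1, -1)}

for g in G.values():                       # G is a valid POVM ...
    assert np.all(np.linalg.eigvalsh(g) >= -1e-12)

for a in (+1, -1):                         # ... with the right marginals
    assert np.allclose(sum(G[(a, b)] for b in (+1, -1)),
                       (np.eye(2) + eta * a * X) / 2)
for b in (+1, -1):
    assert np.allclose(sum(G[(a, b)] for a in (+1, -1)),
                       (np.eye(2) + eta * b * Z) / 2)
\end{verbatim}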
The notion of $n$-simulability can also be connected to quantum channels, and their dimensional properties. This requires the use of a property of channels that is analogous to the Schmidt number of bipartite states. Namely, one says that a channel $\Lambda$ is \textit{$n$-partially entanglement breaking} ($n$-PEB) if $\text{SN}(\Lambda \otimes \mathds{1} \rho) \leq n$ for all $\rho$ \cite{chruscinski2006partially}. Clearly, for the case $n=1$ this concept corresponds to entanglement breaking channels.
This leads to an alternative formulation of \textit{$n$-simulability}, which we will primarily use in the following sections: a measurement assemblage $M_{a|x}$ is \textit{$n$-simulable} if and only if there exists an $n$-PEB quantum channel $\Lambda$ and a measurement assemblage $N_{a|x}$ such that $M_{a|x} = \Lambda^* \big ( N_{a|x} \big )$.
In the rest of the paper, we will first establish precisely the connection between $n$-preparability and $n$-simulability. In turn, we will discuss simulation models for the correlations of entangled states (of Schmidt number $d$) using as resource lower-dimensional entanglement (of Schmidt number $n<d$), considering all possible measurements. This idea can be seen as a generalisation of the problem of simulating the correlations of entangled state via local hidden variables (or local hidden state models). Finally, in the last section of the paper, we will also extend the connection to quantum channels and their characterisation in terms of dimension. This will provide a full tripartite connection, for characterising dimension in steering assemblages, incompatibility of sets of measurements, and quantum channels.
\section{High-dimensional steering and simulability of measurements}
In this section, we present in detail the structural connection between $n$-preparability of steering assemblages and $n$-simulability of sets of measurements.
We start with a first result clearly identifying the resource for GHDS. More precisely, the following Theorem implies that observing GHDS, i.e., an assemblage which is not $n$-preparable, implies that (i) the shared entangled state $\rho_{AB}$ has at least Schmidt number $n+1$, and (ii) the set of measurements $\{M_{a|x}\}$ performed by Alice is not $n$-simulable. In other words, one really needs both high-dimensional entanglement and high-dimensional measurement incompatibility to witness genuine high-dimensional steering.
More formally we can prove the following.
\begin{theorem} \label{theorem:compat->prep}
If $M_{a|x}$ is $n$-simulable or $\rho_{AB}$ has Schmidt number at most $n$, then the assemblage
\begin{equation}
\sigma_{a|x}:=\textup{Tr}_A \bigg (M_{a|x}\otimes \mathds{1} ~ [\rho_{AB}] \bigg)
\end{equation}
is $n$-preparable.
\end{theorem}
\begin{proof}
If $\rho_{AB}$ has SN at most $n$, this simply follows from the definition of $n$-preparability. Now suppose that $M_{a|x}$ is $n$-simulable. Then there exists a $n$-PEB channel $\Lambda$ and measurements $N_{a|x}$ such that $M_{a|x} = \Lambda^* (N_{a|x})$. By the definition of the dual, we can hence write
\begin{align}
\sigma_{a|x} &= \Tr_A \Big ( \Lambda^* (N_{a|x}) \otimes \mathds{1} [ \rho_{AB} ] \bigg ) \\
& = \Tr_A \Big ( \big(N_{a|x} \otimes \mathds{1}\big) \big( \Lambda \otimes \mathds{1} \big) [\rho_{AB}] \Big )
\end{align}
and as $\Lambda$ is $n$-PEB, then $\Lambda \otimes \mathds{1} [\rho_{AB}]$ has SN at most $n$, so $\sigma_{a|x}$ is $n$-preparable.
\end{proof}
It is worth noting that, for the simplest case of $n=1$, the above Theorem corresponds to the well-known fact that an assemblage constructed from a separable state or via a jointly measurable set of POVMs always admits an LHS model. In other words, the observation of steering proves the presence of an entangled state and an incompatible set of POVMs for Alice.
Our next result establishes a general equivalence between any $n$-preparable assemblage and a set of POVMs that is $n$-simulable, and vice versa. The main idea is that a set of quantum measurements $M_{a|x}$ and a steering assemblage $\sigma_{a|x}$ are very similar types of mathematical objects: both are composed of positive semi-definite matrices, and $\sum_a M_{a|x}=\mathds{1}\quad \forall x$ whereas $\sum_a \sigma_{a|x}$ will be equal to some fixed state $\rho_B = \text{Tr}_A (\rho_{AB})$ for all $x$. A direct connection can be established, namely that $\sigma_{a|x}$ is LHS if and only if $\rho_B^{-\frac{1}{2}} \sigma_{a|x} \rho_B^{-\frac{1}{2}}$ is jointly measurable (when interpreted as a set of measurements) \cite{uola15}. The Theorem below can be considered a generalisation of this result, in the sense that the proof of Ref. \cite{uola15} corresponds to the case $n=1$.
\begin{theorem} \label{thm: n-sim = n-prep}
Consider a steering assemblage $\sigma_{a|x}$ and measurements $M_{a|x}$ such that $M_{a|x}=\rho_B^{-\frac{1}{2}} ~ \sigma_{a|x} ~ \rho_B^{-\frac{1}{2}}$, where $\rho_{B} := \sum_a \sigma_{a|x}$ is of full rank. Then $M_{a|x}$ is $n$-simulable if and only if $\sigma_{a|x}$ is $n$-preparable.
\end{theorem}
\begin{proof} Let $N_{a|x}$ be a measurement assemblage and $\rho_{AB}$ be a state such that $\Tr_A(\rho_{AB}) = \rho_B$. Let $(\cdot)^T$ denote the transpose with respect to an eigenbasis of $\rho_B$. We then have the following equivalences
\begin{align}
\sigma_{a|x} &= \Tr_A (N_{a|x} \otimes \mathds{1}~\rho_{AB} ) \\
\iff M_{a|x}&= ~ \rho_B^{ -\frac{1}{2}} ~\Tr_A (N_{a|x} \otimes \mathds{1}~\rho_{AB} ) ~ \rho_B^{ -\frac{1}{2}} \\
\iff M_{a|x}^T&= ~ \rho_B^{ -\frac{1}{2}} ~\Tr_A (N_{a|x} \otimes \mathds{1}~\rho_{AB} )^T ~ \rho_B^{ -\frac{1}{2}} \\
\iff M_{a|x}^T &= \Lambda_{\rho_{AB}}^* \Big ( N_{a|x} \Big ),
\end{align}
where in the third line we used the fact that $(\rho_B^{-\frac{1}{2}})^T=\rho_B^{-\frac{1}{2}}$, as the transpose is taken in an eigenbasis of $\rho_B$, and in the last line we have invoked the form of channel-state duality from Ref.~\cite{kiukas2017continuous}.
Now observe that the existence of a state $\rho_{AB}$ in the above with Schmidt number at most $n$ is equivalent to $\sigma_{a|x}$ being $n$-preparable. We can also see that there exists $\rho_{AB}$ with SN$(\rho_{AB})\leq n$ if and only if $M_{a|x}^T$ is $n$-simulable, as such a state corresponds to $\Lambda_{\rho_{AB}}$ being $n$-PEB, see Appendix A for details. To finalize the proof we must show that $M_{a|x}$ is $n$-simulable if and only if $M_{a|x}^T$ is $n$-simulable. This can be seen as follows. First note that $M_{a|x}^T$ defines a valid collection of measurements. Suppose that $M_{a|x} = \Lambda^*(N_{a|x})$ with $\Lambda$ $n$-PEB and $N_{a|x}$ arbitrary measurements. Then letting $\mathcal{T}$ denote the transpose map, we have that $M_{a|x}^T = (\mathcal{T} \circ \Lambda^*)(N_{a|x}) = ( \Lambda \circ \mathcal{T}^*)^*(N_{a|x})$. As $\Lambda$ is $n$-PEB, $\Lambda \circ \mathcal{T}^*$ is also $n$-PEB. Hence $M_{a|x}^T$ is $n$-simulable. The converse direction follows from $(M_{a|x}^T)^T = M_{a|x}$.
\end{proof}
As a technical remark, note that as for any $a$ and $x$ the support of $\sigma_{a|x}$ is contained within the support of $\rho_B = \sum_a \sigma_{a|x}$ (this follows as $\sigma_{a|x}$ are all positive semi-definite), we can still invoke the above theorem in the case where $\rho_B$ is not full rank, by restricting $\sigma_{a|x}$ to the support of $\rho_B$.
Theorem \ref{thm: n-sim = n-prep} also allows to prove the following result, which complements Theorem \ref{theorem:compat->prep}. This shows that for any set of POVMs that is not $n$-simulable, one can always find an entangled state such that the resulting assemblage is not $n$-preparable. Again, this generalizes some previous results stating that any incompatible set of POVMs can lead to steering \cite{quintino14,uola14}, which corresponds to the case $n=1$ of the proposition below.
\begin{proposition}
If $M_{a|x}$ is not $n$-simulable, then the assemblage
\begin{equation}
\sigma_{a|x}:=\textup{Tr}_A \bigg (M_{a|x}\otimes \mathds{1} ~ \ketbra{\Phi^+} \bigg)
\end{equation}
is not $n$-preparable, where $\ket{\Phi^+}=\frac{1}{\sqrt{d}}\sum_i \ket{ii}$.
\end{proposition}
\begin{proof}
We have that
\begin{align}
\sigma_{a|x}=\text{Tr}_A \bigg (M_{a|x}\otimes \mathds{1} ~ \ketbra{\Phi^+} \bigg) = \frac{1}{d}~ M_{a|x}^T.
\end{align}
By the proof of Theorem~\ref{thm: n-sim = n-prep}, if $M_{a|x}$ is not $n$-simulable, then $M_{a|x}^T$ is not $n$-simulable. Then invoking Theorem \ref{thm: n-sim = n-prep} with $\rho_B = \frac{\mathds{1}}{d}$, we have that that $\sigma_{a|x}$ is not $n$-preparable.
\end{proof}
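The identity $\textup{Tr}_A (M_{a|x}\otimes \mathds{1} ~\ketbra{\Phi^+}) = \frac{1}{d} M_{a|x}^T$ used above is easy to confirm numerically; here is a minimal sketch with a generic positive operator in dimension $d=3$ (illustration only):
\begin{verbatim}
import numpy as np
rng = np.random.default_rng(1)

d = 3
G = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
M = G @ G.conj().T                  # a generic positive operator

phi = np.eye(d).reshape(d * d, 1) / np.sqrt(d)  # |Phi+> = sum_i |ii>/sqrt(d)
rho = phi @ phi.conj().T

big = np.kron(M, np.eye(d)) @ rho
sigma = np.trace(big.reshape(d, d, d, d), axis1=0, axis2=2)  # Tr_A
assert np.allclose(sigma, M.T / d)
\end{verbatim}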
In the final part of this section, we show that the trade-off between high-dimensional entanglement, high-dimensional measurement incompatibility, and high-dimensional steering can be made quantitative. For this, we use a specific resource quantifier known as the convex weight \cite{Steeringweight}. Consider for example the quantification of entanglement via the weight. For any entangled state $\rho$, we can measure its entanglement through its weight, given by the following quantity
\begin{equation}
\label{eq: WeightDef}
\begin{split}
\mathcal{W}_F(\rho) := &\min \lambda\\
&\mathrm{s.t.}\ \rho= (1-\lambda) \rho_{sep} + \lambda\sigma,
\end{split}
\end{equation}
where the minimisation runs over any state $\rho_{sep}$ that is separable, and $\sigma$ an arbitrary state. As expected, $\mathcal{W}_F (\rho) =0$ when $\rho$ is separable. More generally, this quantifier can apply to objects such as states, measurements or steering assemblages, with respective free sets $E_n$: the set of states with Schmidt number at most $n$, $S_n$: the set of of $n$-simulable measurements assemblages, and $P_n$: the set of $n$-preparable steering assemblages. We can now state our next result, which quantitatively illustrates the necessity of high-dimensional measurement incompatibility and entanglement for GHDS:
\begin{restatable}{theorem}{weighttheorem} \label{thm:weight} Given an assemblage $\sigma_{a|x}=\textup{Tr}_A (M_{a|x}\otimes \mathds{1} ~ [\rho_{AB}])$, we have the following inequality:
\[
\mathcal{W}_{P_n}(\sigma_{a|x}) \leq \mathcal{W}_{S_n}(M_{a|x}) \mathcal{W}_{E_n}(\rho_{AB}).
\]
For the case $n=1$ we get a quantitative connection among steering, measurement incompatibility and entanglement.
\end{restatable}
We defer the proof of this theorem to the appendix.
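To illustrate how such weight quantifiers are evaluated in practice, the sketch below computes $\mathcal{W}_{P_1}$ (the steering weight, i.e. the $n=1$ case) of the assemblage generated by two noisy qubit MUB measurements on an isotropic state, via the standard SDP formulation of \cite{Steeringweight}. It assumes the availability of the \texttt{cvxpy} package together with a conic solver; all concrete numbers are illustrative.
\begin{verbatim}
import numpy as np
import cvxpy as cp
from itertools import product

eta = 0.85                              # visibility, above 1/sqrt(2)
Z = [np.diag([1, 0]).astype(complex), np.diag([0, 1]).astype(complex)]
h = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
X = [np.outer(h[:, a], h[:, a].conj()) for a in range(2)]
meas = [Z, X]                           # two inputs, two outputs

phi = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)
rho = eta * np.outer(phi, phi.conj()) + (1 - eta) * np.eye(4) / 4

def ptrA(O):                            # partial trace over Alice
    return np.einsum('ijil->jl', O.reshape(2, 2, 2, 2))

sig = [[ptrA(np.kron(M, np.eye(2)) @ rho) for M in Mx] for Mx in meas]

# SW = 1 - max sum_l Tr(w_l)  s.t.  sum_{l: l(x)=a} w_l <= sig[x][a]
strat = list(product(range(2), repeat=2))  # deterministic maps x -> a
w = [cp.Variable((2, 2), hermitian=True) for _ in strat]
cons = [wl >> 0 for wl in w]
for x in range(2):
    for a in range(2):
        cons.append(sig[x][a] - sum(wl for wl, l in zip(w, strat)
                                    if l[x] == a) >> 0)
obj = cp.Maximize(cp.real(sum(cp.trace(wl) for wl in w)))
print("steering weight:", 1 - cp.Problem(obj, cons).solve())
\end{verbatim}
For the chosen visibility one expects a strictly positive weight, since $\eta$ exceeds the two-MUB steering threshold $1/\sqrt{2}$ of the qubit isotropic state.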
\section{Simulating the correlations of high-dimensional entangled states using low-dimensional entanglement}
Strong demonstrations of the non-classical correlations of entangled states come from the observation of Bell inequality violation, or from quantum steering. A long-standing topic of research is to understand the link between entanglement and these stronger forms of quantum correlations, see e.g. \cite{brunner2014bell,Augusiakreview}. In a seminal paper, Werner showed that certain entangled states, referred to as Werner states, cannot lead to Bell inequality violation \cite{Werner1989}. This result is based on the construction of an explicit local hidden variable model that reproduces exactly the correlations expected from any possible local projective measurements on the Werner state. Moreover, it turns out that the model constructed by Werner is in fact of the form of an LHS model (as in Eq. \eqref{LHS}, see also Fig~\ref{fig:LHSmodel}), hence these Werner states can also never lead to quantum steering \cite{wiseman2007steering}. Note that these results can be extended to general POVMs using the model of Ref. \cite{barrett2002nonsequential}, which can be shown to be of LHS form \cite{quintino2015inequivalence}.
Here we revisit the above questions and propose a new perspective, based on the ideas developed in the previous sections of the paper. Instead of considering simulation models that involve only classical resources (classical shared randomness), we consider now simulation models assisted by entanglement, see Fig.~\ref{fig:EALHSmodel}. Of course, for this problem to be non-trivial, we must demand that the entanglement used in the simulation model is somehow weaker than the entanglement
of the original state to be simulated. The dimensionality of entanglement (as given by the Schmidt number) provides a good measure for this problem.
Consider an entangled state $\rho_{AB}$ of Schmidt number $d$ and arbitrary local measurements (possibly infinitely many) for both Alice and Bob. We now ask if we can simulate the resulting correlations with a model involving lower-dimensional entangled states (of Schmidt number $n<d$) and classical shared randomness. Of course, building such models can be challenging, as the model should reproduce exactly all correlations for any possible choice of local measurements. Nevertheless, we will see that using the ideas developed above, we can come up with such entanglement-assisted simulation models, and moreover prove their optimality.
The main idea to construct these simulation models is to apply Theorem~\ref{theorem:compat->prep} to a result obtained recently in \cite{ioannou2022simulability}. The latter consists in obtaining bounds (in terms of noise robustness) for the $n$-simulability of the (continuous) set of all projective measurements (in dimension $d$) under white noise. From Theorem~\ref{theorem:compat->prep}, we obtain an equivalent assemblage (with a continuous input $x$) that is $n$-preparable. The last point is to notice that this assemblage corresponds in fact to the one obtained from performing arbitrary local projective measurements on a shared entangled state $\rho_{AB}$, which takes the form of an isotropic state, i.e.
\begin{equation} \label{iso}
\rho(\eta'):= \eta' \ketbra{\Phi^+} + (1-\eta') \frac{\mathds{1}}{d^2}
\end{equation}
where $\ket{\Phi^+}=\frac{1}{\sqrt{d}}\sum_i \ket{ii}$ and $0 \leq \eta' \leq 1$. Hence we obtain a simulation model using only entanglement with Schmidt number $n$ which reproduces exactly the correlations of some isotropic state of dimension $d\times d$. Interestingly, it appears that this isotropic state can have a Schmidt number that is larger than $n$.
More formally, consider the set of all projective measurements (PVMs) subject to white noise
\begin{equation}\label{noisyPVM}
\mathcal{M}_{PVM}^\eta:=\bigg \{\eta M_{a|U} + (1-\eta)\frac{\mathds{1}}{d} ~ : ~ U\in U(d) \bigg \} \,,
\end{equation}
where $U(d)$ is the unitary matrix group, $M_{a|U} = U\ketbra{a}U^\dagger$ and $\ket{a}$ denotes the computational basis.
It was shown in \cite{ioannou2022simulability} that the set $\mathcal{M}_{PVM}^\eta$ is $n$-simulable if $\eta \leq (d \sqrt{\frac{n+1}{d+1}}-1)(d-1)^{-1}$. Then by passing the noise from the measurements onto the state (see for example \cite{uola14}), we have that:
\begin{align}
&\text{Tr}_A \bigg (\bigg [\eta M_{a|U} + (1-\eta)\frac{\mathds{1}}{d} \bigg ]\otimes \mathds{1} ~ \ketbra{\Phi^+} \bigg)\\
=&\text{Tr}_A \bigg (M_{a|U}\otimes \mathds{1} ~ \rho (\eta) \bigg ).
\end{align}
Hence we reproduce exactly the assemblage expected from arbitrary projective measurements on an isotropic state with $\eta'= \eta$. Moreover, it is known that $\text{SN}(\rho(\eta)) \geq n+1 \quad \text{if} \quad \eta > \frac{dn-1}{d^2-1}$ \cite{terhal2000schmidt}. Hence for
\begin{equation}
\frac{dn-1}{d^2-1} < \eta \leq \frac{d \sqrt{\frac{n+1}{d+1}}-1}{d-1}
\end{equation}
the resulting assemblage can be reproduced via a simulation model involving only entangled states of Schmidt number $n$, despite the state possessing a Schmidt number of at least $n+1$. More generally, one can deduce a general bound on the noise parameter $\eta$ for guaranteeing $n$-preparability. We have illustrated these bounds in Fig.~\ref{fig:my_label} for the case of dimension four. Remarkably, as the construction for PVMs in Ref. \cite{ioannou2022simulability} is optimal, the simulation models we obtain are also optimal (considering all possible PVMs). An interesting question is to understand how to extend these bounds considering all POVMs, but this is a challenging question, still open for the simplest case of $n=1$.
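For concreteness, the two thresholds delimiting this regime are easily tabulated. The short Python sketch below (purely illustrative) prints, for $d=4$, the interval of visibilities for which the isotropic state has Schmidt number at least $n+1$ while all PVM correlations still admit an $n$-dimensional simulation model:
\begin{verbatim}
import numpy as np

d = 4
for n in range(1, d):
    sn = (d * n - 1) / (d**2 - 1)     # SN(rho(eta)) >= n+1 above this
    sim = (d * np.sqrt((n + 1) / (d + 1)) - 1) / (d - 1)
    print(f"n={n}: {sn:.4f} < eta <= {sim:.4f} (non-empty: {sn < sim})")
\end{verbatim}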
\begin{figure}
\centering
\begin{subfigure}[b]{0.5\textwidth}
\centering
\includegraphics[width=\textwidth]{steering_scen_2}\vspace{-10pt}
\caption{}
\label{fig:LHSmodel}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.5\textwidth}
\centering
\vspace{5pt}\includegraphics[width=\textwidth]{steering_scen_3} \vspace{-10pt}
\caption{}
\label{fig:EALHSmodel}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.5\textwidth}
\centering
\hspace*{-5pt}\includegraphics[width=\textwidth]{SDI-SN_plot} \vspace*{-15pt}
\caption{}
\label{fig:my_label}
\end{subfigure}
\caption{(a) Local hidden state model: one aims at simulating the assemblage with a separable state.\\
(b) $n$-preparability: simulation of an assemblage using states with low-dimensional entanglement.\\
(c) High-dimensional entanglement and steering properties of the isotropic state with local dimension $d=4$. The Schmidt number (SN) bounds can be found in \cite{terhal2000schmidt}, and in this work we translate known values on the $n$-simulability of all PVMs from \cite{ioannou2022simulability} into thresholds below which a state can only lead to $n$-preparable assemblages under all projective measurements. In the figure this is referred to as the one-sided semi-device independent Schmidt number (1-SDI-SN) under all PVMs. The bound for LHS models for all POVMs is from \cite{barrett2002nonsequential, almeida2007noise}.}
\end{figure}
\section{Criteria for $n$-simulability}
The connections established in Section III also allow us to translate $n$-preparability inequalities into criteria for $n$-simulability. As an example, we take the set of $n$-preparability witnesses presented in Ref. \cite{designolle2021genuine}. Such witnesses state that for an $n$-preparable state assemblage $\{\sigma_{a|x}\}$ with $2$ inputs and $d$ outputs, one has that
\begin{equation} \label{witness}
\sum_{a,x}\text{Tr}[\sigma_{a|x}W_{a|x}]\leq N\Big(\frac{\sqrt{n}-1}{\sqrt{n}+1}+1\Big) \,,
\end{equation}
where $N=1+1/\sqrt{d}$. The witness $W_{a|x}$ consists of a pair of mutually unbiased bases (MUBs for short) transposed in the computational basis, i.e., $W_{a|1}=|a\rangle\langle a|$ and $W_{b|2}=|\varphi_b\rangle\langle\varphi_b|^T$, where $\{|a\rangle\}$ is the computational basis and $\{|\varphi_b\rangle\}$ is an orthonormal basis with the property $|\langle a|\varphi_b\rangle|^2=1/d$ for each $a$ and $b$.
As an $n$-simulable set of measurements leads to an $n$-preparable state assemblage by Theorem \ref{theorem:compat->prep}, violation of a witness of this type in a steering scenario verifies that Alice's measurements are not $n$-simulable. As an example, we take a pair of MUBs subjected to white noise with visibility $\eta$ (similarly to Eq. \eqref{noisyPVM}) on Alice's side and the isotropic state \eqref{iso}. Plugging the resulting assemblage into the witness \eqref{witness}, we get that
\begin{align}
\eta\leq\frac{(d+\sqrt{d}-1)\sqrt{n}-1}{(d-1)(\sqrt{n}+1)}.
\end{align}
Hence, for a visibility larger than this bound, a pair of MUBs is provably not $n$-simulable. We note that for the case $n=1$ we retrieve the known tight joint measurability threshold of two MUBs subjected to white noise \cite{Carmeli12,Haa15,Uola16}. Obtaining similar bounds for complete sets of MUBs, known for $n=1$ \cite{Designolle2019}, would be interesting.
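The resulting threshold is straightforward to evaluate. The following illustrative snippet tabulates it and checks that the $n=1$ value in dimension $d=2$ reproduces the familiar two-MUB joint-measurability threshold $1/\sqrt{2}$:
\begin{verbatim}
import numpy as np

def eta_bound(d, n):  # visibility above which two MUBs are not n-simulable
    return ((d + np.sqrt(d) - 1) * np.sqrt(n) - 1) \
           / ((d - 1) * (np.sqrt(n) + 1))

for d in (2, 3, 4, 8):
    print(d, [round(eta_bound(d, n), 4) for n in (1, 2, 3, 4)])
assert np.isclose(eta_bound(2, 1), 1 / np.sqrt(2))
\end{verbatim}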
\section{Quantum channels}
An important superset of entanglement breaking channels is that of incompatibility breaking channels \cite{heinosaari2015incompatibility}, which are channels $\Lambda$ such that $\Lambda^*(M_{a|x})$ is jointly measurable for any $M_{a|x}$. Via channel-state duality these channels correspond respectively to separable and unsteerable states (where the direction of unsteerability corresponds to whether the channel is applied on the first or second system in the definition of channel-state duality). The connections between high-dimensional steering, $n$-simulability and $n$-PEB channels motivate the following definition:
\begin{definition} \label{def: PSB}
A channel $\Lambda$ is \textbf{$n$-partially incompatibility breaking} ($n$-PIB) if for any measurement assemblage $N_{a|x}$ the resulting measurement assemblage $\Lambda^*(N_{a|x})$ is $n$-simulable\footnote{We note that our definition here is different to the notion of $n$-incompatibility breaking channels defined in \cite{heinosaari2015incompatibility}, which denotes channels that break the incompatibility of any $n$ observables.}.
\end{definition}
Hence, just as $\Lambda \otimes \mathds{1}$ maps all bipartite states to states with Schmidt number at most $n$ for $\Lambda$ an $n$-PEB channel, an $n$-PIB channel maps any measurement assemblage to an $n$-simulable one (in the Heisenberg picture). We can also gain insight from considering the structure of $n$-PIB channels and their relation to $n$-PEB channels. Elaborating upon Def.~\ref{def: PSB}, for $\Lambda$ to be $n$-PIB we require that for all measurement assemblages $N_{a|x}$, there exists an $n$-PEB channel $\Omega$ and a set of measurements $M_{a|x}$ such that
\begin{equation}
\Lambda^*(N_{a|x})=\Omega^*(M_{a|x}). \label{eq:explicit-n-PIB}
\end{equation}
Therefore, by simply taking $\Omega:=\Lambda$ and $M_{a|x}:=N_{a|x}$ in Eq.~\eqref{eq:explicit-n-PIB}, we immediately arrive at the following result:
\begin{proposition}
Every $n$-PEB channel is $n$-PIB.
\end{proposition}
It is illuminating to consider the corresponding Choi states. For $n$-PEB channels, the Choi states are exactly the states with Schmidt number at most $n$ \cite{chruscinski2006partially}. For $n$-PIB channels, we have the following result:
\begin{theorem} \label{thm:pib = sdi-sn}
$\Lambda$ is $n$-PIB if and only if $\rho_\Lambda$ only leads to $n$-preparable assemblages.
\end{theorem}
\begin{proof}
Let $\sigma=\text{Tr}_A(\rho_\Lambda)$ fix the channel-state correspondence.
Suppose $\Lambda$ is $n$-PIB, that is, for all measurements $N_{a|x}$, we have that $\Lambda^*(N_{a|x})$ is $n$-simulable. By Theorem \ref{thm: n-sim = n-prep}, this is equivalent to $\sigma^{\frac{1}{2}}\Lambda^*(N_{a|x})^T\sigma^{\frac{1}{2}}$ being $n$-preparable for all $N_{a|x}$. Via channel-state duality, this is equivalent to
\begin{equation}
\text{Tr}_A\big( (N_{a|x} \otimes \mathds{1})\, \rho_\Lambda \big)
\end{equation}
being $n$-preparable for all $N_{a|x}$.
\end{proof}
The result of the above Theorem is put into the context of other connections of this type between a channel and its Choi state in Table~\ref{tab:cs-duality}. We note that our results on bounding entanglement-assisted simulation models for the isotropic state translate directly into bounds on the identity channel under depolarising noise being $n$-PIB on the restricted class of projective measurements. This also shows that, when only projective measurements are considered, there are channels that are $n$-PIB without being $n$-PEB.
\setlength{\tabcolsep}{5pt}
\renewcommand{\arraystretch}{2}
\begin{table}[]
\centering
\begin{tabular}{|c|c|c|}\hline
Channel & State & Reference \\\hline
Entanglement breaking & Separable & \cite{horodecki2003entanglement} \\
Incompatibility breaking & Unsteerable & \cite{heinosaari2015incompatibility, kiukas2017continuous}\\
$n$-PEB & SN $n$ & \cite{terhal2000schmidt, chruscinski2006partially} \\
$n$-PIB & SDI-SN $n$ & Theorem \ref{thm:pib = sdi-sn} \\\hline
\end{tabular}
\caption{Connections between channels and their Choi states. Our work naturally extends this picture by generalising both incompatibility breaking channels and unsteerable states in terms of dimension, and proving that they directly correspond to each other through generalised channel-state duality.}
\label{tab:cs-duality}
\end{table}
\section{Conclusions}
We have uncovered deep connections between high-dimensional versions of quantum steering, measurement incompatibility, and quantum channels, and demonstrated how a rich transfer of information is possible between these areas. In particular, we showed that the concept of $n$-simulability for sets of POVMs is equivalent to $n$-preparability for state assemblages in steering. This generalises the well-known connection between steering and joint measurability, which simply corresponds here to the case $n=1$.
We identified the resources required for observing GHDS, in particular that both high-dimensional measurements and high-dimensional entanglement are necessary. In the light of these results, we conclude that the experiment of Ref.~\cite{designolle2021genuine} also demonstrates measurements in pairs of MUBs that are highly incompatible, in the sense that they are not $14$-simulable.
Another direction is the idea of quantifying the degree of steering of an entangled state via a dimension. We obtained optimal models for isotropic entangled states, considering all projective measurements. This can be seen as a generalisation of the well-known type of local (hidden state) models by Werner, now allowing for low-dimensional entanglement as a resource. In turn, this leads to a characterisation of channels that map any set of projective measurements into $n$-simulable ones.
There are many exciting notions to explore that would extend this research direction. It would be useful to have better bounds on both $n$-preparability and $n$-simulability, and our work demonstrates that any progress here can be readily applied to both notions, providing a practical bridge between the two scenarios. Of particular interest would be to find bounds on the isotropic state being of SDI-SN $n$ under all POVMs, which would directly translate into the $n$-simulability of all POVMs. This follows analogous lines to the $n=1$ case (finding LHS bounds under projective/POVM measurements) \cite{barrett2002nonsequential}.
A natural further question would be to explore these questions in the context of nonlocality \cite{brunner2014bell}, which can be thought of as a fully-device independent (FDI) regime. Analogously to the steering case, one could define a behaviour $p(a,b|x,y)$ to be $n$-preparable if it could have arisen from a shared state of Schmidt number at most $n$, and define a state to have fully-device independent Schmidt number $n$ (FDI-SN $n$) if it can only lead to $n$-preparable behaviours. This is related to \cite{brunner2008testing}, where the authors introduce the concept of dimension witnesses to lower bound the dimension of the underlying state. One can quickly see in this scenario that if either of the two parties uses $n$-simulable measurements, then the resulting behaviour will be $n$-preparable. Similarly, uncharacterised measurements on an $n$-preparable assemblage can only result in an $n$-preparable behaviour. However, it is less clear how one could characterise the corresponding channels whose Choi states have FDI-SN $n$. In the steering case we were able to exploit and generalise known connections with measurement incompatibility, but it seems that new tools may be needed to attack this problem in the fully device independent regime.
\textit{Acknowledgments.---} We acknowledge financial support from the Swiss National Science Foundation (projects 192244, Ambizione PZ00P2-202179, and NCCR SwissMAP). BDMJ acknowledges support
from UK EPSRC (EP/SO23607/1). T.C. would like to acknowledge funding from the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's
Excellence Strategy EXC-2123 QuantumFrontiers 390837967, as well as the support of the Quantum Valley Lower Saxony and the DFG through SFB 1227 (DQ-mat).
\bibliographystyle{unsrt}
\section*{Introduction}
As all market makers, FX dealers are naturally portfolio managers. By providing liquidity to clients in multiple currency pairs, they build inventory and have to manage the ensuing inventory risk which is a subtle combination of uncertainty of the client flow, market liquidity and market price risk at the portfolio level.\\
The management of inventory risk in financial markets has been a topic of recent interest in the academic field of quantitative finance, starting with the seminal paper \cite{avellaneda2008high} by Avellaneda\footnote{Marco Avellaneda passed away while we were finishing this paper. This paper is therefore a natural opportunity to pay him a tribute for his major contributions to the field and beyond.} and Stoikov who revived an old economic literature on the topic that dated back to the 1980s (see, for instance, Ho and Stoll \cite{ho1981optimal}).
Avellaneda and Stoikov paved the way to a long list of contributions. In a nutshell,\footnote{See the books \cite{cartea2015algorithmic} and \cite{gueant2016financial} for a detailed bibliography.} Guéant et al. provided in \cite{gueant2013dealing} a detailed analysis of the stochastic optimal control problem introduced in \cite{avellaneda2008high} and proposed closed-form approximations of the optimal quotes. New features were then progressively added to get closer and closer to reality: several trade sizes, client tiering, risk externalization through hedging, etc.
Cartea and Jaimungal, along with various coauthors, replaced in \cite{cartea2014buy} the original expected utility framework of \cite{avellaneda2008high} by a more intuitive one closer to mean-variance optimization and added new features while studying the impact of parameter ambiguity in a series of papers.
\\
To be used in practice, market making models need to take into account the dependence structure between assets since market makers or market making algorithms typically cover dozens or hundreds of assets.
A mathematical framework for multi-asset market making has been proposed in \cite{gueant2016financial, gueant2017optimal}, but computing the optimal quotes almost always requires solving differential equations in very high dimension (the number of equations typically grows exponentially with the number of assets).
Several techniques have been proposed to tackle the curse of dimensionality: the use of a small number of risk factors, the use of neural networks and reinforcement learning techniques, and the use of a reduction technique toward a linear-quadratic control problem (see \cite{bergault2021closed}) that provides surprisingly good approximations of the optimal market making strategy.\\
Existing multi-asset market making models consider assets labelled in the same currency. This is typically consistent with the problem faced by market makers in most asset classes (e.g. corporate bonds in Europe or in the US). However, the problem faced by FX dealers is different: each currency pair provides indeed a valuation of one currency in terms of the other. The inventory of a given currency is usually managed in the most liquid, so-called direct currency pair, typically against USD.
However, non-USD pairs, so-called crosses, are also very important. The presence of crosses introduces both complexity and opportunities since there are several ways to achieve the same result (e.g. buying EURGBP is equivalent to buying EURUSD and selling GBPUSD). Our model is, to our knowledge, the first to address the problem faced by an FX dealer who quotes a wide variety of currency pairs, including crosses, and can mitigate inventory risk by hedging on dealer-to-dealer (D2D) and/or all-to-all platforms. In particular it answers subtle questions where liquidity and correlation issues are intertwined, e.g. for positively correlated EURUSD and GBPUSD legs, what should be the spread of EURGBP? How should it compare to the sum of leg spreads? How to attract offsetting client flow or deter risky client flow in an optimal way when the marker maker is active in both direct pairs and crosses? How to optimally hedge the portfolio when trading platforms exist for direct pairs and some crosses?\\
As a consequence of FX market specific characteristics, our model differs from classical market making models in many ways, ranging from price dynamics to the definition of trade sizes and the choice of the mathematically relevant state variables.
To obtain optimal quotes and optimal hedging rates, one needs to solve a high-dimensional differential equation, but the techniques proposed in \cite{bergault2021closed} can be adapted so that approximations of the optimal strategies can be obtained by solving a low-dimensional matrix Riccati-like differential equation (the dimensionality of the differential equations to solve only grows quadratically with the number of currencies, hence linearly with the maximal number of currency pairs).\\
We start by presenting our multi-currency market making model and show that, through a smart choice of the state variables,
the problem boils down to solving a partial differential equation.
We then demonstrate how the ideas developed in \cite{bergault2021closed} can be adapted to approximate the true value function in closed form up to the computation of the solution of a matrix Riccati-like differential equation. We hereby claim that our approach is sufficiently scalable for practical use since the size of the (square) matrix mentioned above corresponds to the number of currencies. We further discuss the relevance of our closed-form approximations and illustrate numerically the resulting market making strategy in a market with 5 major currencies: USD, EUR, JPY, GBP and CHF.
In particular, we demonstrate how correlated currency pairs are engaged through pricing and hedging when managing the risk in a single pair and show the impact of the presence of several currency pairs on the internalization vs. externalization dilemma faced by FX dealers \cite{butz2019internalisation}.
With the help of Monte Carlo simulations, we confirm that the inventory probability distribution is shaped by the portfolio risk profile and that the observed risk autocorrelation time is much shorter than any of the currency pair position autocorrelation times, leading to cost savings.
\\
\section*{Multi-currency market making model}
We consider a market with $d$ currencies over a time interval of length $T$. We regard currency $1$ as the reference currency, typically USD, and consider $d$ price processes $(S^1_t)_{t\in [0,T]}, \ldots, (S^d_t)_{t\in [0,T]}$ modelling the evolution of the market price (exchange rate) of each of the $d$ currencies in terms of the reference currency.\footnote{Of course $(S^1_t)_{t\in [0,T]}$ is constant with $S^1_t = 1, \forall t\in [0,T]$.}\\
We consider an FX dealer in this market. The dealer has inventories in $d$ currencies modelled by $d$ processes $(q^1_t)_{t\in [0,T]}, \ldots, (q^d_t)_{t\in [0,T]}$. We assume they divide their clients into $N$ tiers and stream them pricing ladders at the bid and at the ask for each currency pair. For each tier $n \in \{1, \ldots, N\}$ and each couple $(i,j) \in \{1, \ldots, d\}^2$ with $i\neq j$, we introduce a $\mathbb R_+^* - $marked point process $J^{n,i,j}(dt,dz)$ modelling transactions with clients from tier $n$ regarding the currency pair $(i,j)$, where $z$ is the size variable measured in reference currency. Formally, if $J^{n,i,j}(dt,dz)$ has a jump corresponding to size $z$ at time $t$, it means that the market maker sells to the client a ``quantity'' $z/S^j_t$ of currency $j$ and receives in exchange a payment in currency $i$ in line with the corresponding streamed pricing ladder. To build our model, we assume that this payment is decomposed into a ``quantity'' $z/S^i_t$ of currency $i$ and fees, denoted by $z\delta^{n,i,j}(t,z)$, that are accumulated on a separate account labeled in reference currency -- hence $\delta^{n,i,j}$ represents the markup (possibly negative) in percentage or basis points.\\
For each $n \in \{1, \ldots, N\}$ and each couple $(i,j) \in \{1, \ldots, d\}^2$ with $i\neq j$, the process $J^{n,i,j}(dt,dz)$ has an intensity kernel $(\nu^{n,i,j}_t(dz))_{t\in [0,T]}$ verifying
$$\nu^{n,i,j}_t(dz) = \Lambda^{n,i,j}\left(z, \delta^{n,i,j} (t,z)\right)dz,$$
where $\Lambda^{n,i,j}$ is called the intensity function of the process $J^{n,i,j}(dt,dz)$. Following \cite{barzykin2021market}, we assume that the function $\Lambda^{n,i,j}$ is of the logistic type:\footnote{Generalizations are of course straightforward.}
$$\Lambda^{n,i,j}(z,\delta)=\lambda^{n,i,j}(z) f^{n,i,j}(z,\delta) \quad \text{with} \quad f^{n,i,j}(z,\delta) = \frac{1}{1+e^{\alpha^{n,i,j}(z) + \beta^{n,i,j}(z) \delta}}.$$
In addition to skewing quotes (internalization) to attract or divert client flow, the FX dealer can trade currency $i$ against currency $j$ on the D2D segment of the market (externalization). To model this form of hedging we introduce for each couple $(i,j) \in \{1, \ldots, d\}^2$ with $i<j$ a process $\left(\xi^{i,j}_t \right)_{t\in [0,T]}$ which models the amount (expressed in reference currency) per unit of time of currency $i$ bought by the dealer and paid in currency $j$. Unlike what happened for pricing, we only consider couples $(i,j)$ with $i<j$: $\xi^{i,j}_t$ can be negative if the dealer buys currency $j$ and pays in currency $i$. When trading at rate $\xi^{i,j}_t$, we assume that the dealer incurs execution costs\footnote{In reality, not all pairs are available for trading on D2D platforms. This would correspond to very high execution cost in our model.} modeled by a term $L^{i,j}(\xi^{i,j}_t)$ (accounted in reference currency like the above fees), where the function $L^{i,j}$ is chosen of the form $L^{i,j}(\xi) = \psi^{i,j} |\xi| + \eta^{i,j} |\xi|^{1+\phi^{i,j}}$ ($\phi^{i,j} = 1$ throughout this paper).\\
Wrapping up, we get that the dynamics of inventories is given by
$$\forall i\in \{1, \ldots, d\}, \quad dq^i_t = \sum_{n=1}^N \underset{j\neq i}{\sum_{ j=1}^d} \int_{z \in \mathbb R_+^*} \frac z{S^i_t} \left(J^{n,i,j}(dt,dz) - J^{n,j,i}(dt,dz) \right)+ \left( \sum_{j=i+1}^d \frac{\xi^{i,j}_t}{S^i_t} - \sum_{j=1}^{i-1}\frac{\xi^{j,i}_t}{S^i_t} \right)dt$$
and the dynamics of the account where fees and execution costs are accounted for is
$$dX_t = \sum_{n=1}^N \sum_{1\le i\neq j \le d} \int_{z \in \mathbb R_+^*} z\delta^{n,i,j} (t,z) J^{n,i,j}(dt,dz) - \sum_{1\le i<j\le d} L^{i,j}\left(\xi^{i,j}_t \right)dt.$$
Let us now come to the dynamics of exchange rates with respect to the reference currency. We assume that
$$\forall i\in \{1, \ldots, d\}, \quad dS^i_t = \mu^i_t S^i_t dt + \sigma^i S^i_t dW^i_t + k^i \left( \sum_{j=i+1}^d \xi^{i,j}_t - \sum_{j=1}^{i-1}\xi^{j,i}_t \right) S^{i}_t dt,$$
where $(\mu^i_t)_{t\in [0,T]}$ is a deterministic drift, $\sigma^i \ge 0$ is the volatility of currency $i$ with respect to the reference currency, $k^i$ is a linear permanent market impact parameter, and $\left(W^1_t,\ldots,W^d_t \right)_{t\in [0,T]}$ is a $d$-dimensional correlated Brownian motion. Of course, $\mu^1 = \sigma^1 = k^1 = 0$.\\
It is convenient for what follows to use vector and matrix notations:
$$S_t = \left(S^1_t, \ldots, S^d_t \right)^\intercal \in \mathbb R^d, \quad \mu(t) = \mu_t = \left(\mu^1_t, \mu^2_t, \ldots, \mu^d_t \right)^\intercal \in \mathbb R^d \quad \text{and} \quad \Sigma = (\rho^{i,j} \sigma^i \sigma^j)_{1\le i, j \le d} \in \mathcal S^+_{d}(\mathbb R),$$ where $\rho^{i,j} = \frac{d\langle W^i,W^j\rangle}{dt}$.\\
Given this dynamics for prices, we conclude that for all $i\in \{1, \ldots, d\}$, the process $\left(Y^i_t \right)_{t\in [0,T]}$ = $\left(q^i_t S^i_t \right)_{t\in [0,T]}$ corresponding to the inventory of currency $i$ measured in reference currency has the following Markovian dynamics:
\begin{align}
dY^i_t &= \mu^i_t Y^i_{t-} dt + \sigma^i Y^i_{t-} dW^i_t + k^i \left( \sum_{j=i+1}^d \xi^{i,j}_t - \sum_{j=1}^{i-1}\xi^{j,i}_t \right) Y^{i}_{t-} dt\nonumber\\
&\quad + \sum_{n=1}^N \underset{j\neq i}{\sum_{ j=1}^d} \int_{z \in \mathbb R_+^*} z \left(J^{n,i,j}(dt,dz) - J^{n,j,i}(dt,dz) \right) + \left( \sum_{j=i+1}^d \xi^{i,j}_t - \sum_{j=1}^{i-1}\xi^{j,i}_t \right)dt.\nonumber
\end{align}
In what follows, we denote by $(Y_t)_{t\in [0,T]}$ the vector of inventories measured in reference currency, i.e. $Y_t= \left(Y^1_t, \ldots, Y^d_t \right)^\intercal \in \mathbb R^d$.\\
The FX dealer wants to maximize the Mark-to-Market value of their portfolio at time $T$, while mitigating inventory risk. Mathematically, we assume that they want to maximize
$$\mathbb E \left[ X_T + \sum_{i=1}^d Y^i_T- \frac{\gamma}{2} \int_0^T Y_t^\intercal \Sigma Y_t dt - \ell \left(Y_T \right) \right]$$
over the admissible controls $(\delta^{n,i,j})_{1\le n \le N, 1\le i\neq j \le d}$ and $(\xi^{i,j})_{1\le i<j \le d}$, where $\gamma >0$ represents the risk aversion of the market maker and $\ell$ is a penalty for the remaining inventory at time $T$.\footnote{$\ell$ could account for the market impact when unwinding.} Applying Ito's formula to the process $\left(X_t + \sum_{i=1}^d Y^i_t \right)_{t \in [0, T]}$ allows us to see that this problem is equivalent to maximizing
\begin{align*}
\mathbb{E}&\Bigg[\int\limits_{0}^{T} \Bigg\lbrace \sum_{n=1}^N \sum_{1\le i\neq j \le d} \int_{z \in \mathbb R_+^*} \Big(z\delta^{n,i,j}(t,z) \Lambda^{n,i,j}\big(z,\delta^{n,i,j}(t,z)\big) \Big)dz + \sum_{i=1}^d \bigg(\mu^i_t + k^i \Big( \sum_{j=i+1}^d \xi^{i,j}_t - \sum_{j=1}^{i-1}\xi^{j,i}_t \Big) \bigg)Y^{i}_t\\
&\qquad \qquad - \sum_{1\le i<j\le d} L^{i,j}\left(\xi^{i,j}_t \right) - \frac{\gamma}{2} Y_t^\intercal \Sigma Y_t \Bigg\rbrace dt - \ell(Y_T) \Bigg].\nonumber
\end{align*}
We denote by $\theta:[0,T]\times \mathbb R^d \rightarrow \mathbb{R}$ the value function of this stochastic control problem. The associated Hamilton-Jacobi-Bellman equation is
\begin{equation}
\begin{cases}
\!&0 = \partial_t \theta(t,y) + y^\intercal \mu(t) - \frac{\gamma}{2}y^\intercal \Sigma y + \frac 12\text{Tr}\left(\mathcal D (y)\Sigma \mathcal D (y) D^2_{yy} \theta(t,y)\right)\\
\!& \qquad + \text{\scalebox{0.6}[1]{$\bigint$}}_{\!\!\mathbb{R}_{+}^{*}} \underset{n=1}{\overset{N}{\mathlarger \sum}} \underset{1\le i\neq j \le d}{{\mathlarger \sum}} zH^{n,i,j} \left(z,\frac{\theta(t,y) - \theta(t,y+ze^i - z e^j) }{z}\right)\lambda^{n,i,j}(z) dz\\
\!& \qquad + \underset{1\le i<j \le d}{{\mathlarger \sum}} \mathcal H ^{i,j} \left(\partial_{y^i}\theta(t,y) - \partial_{y^j}\theta(t,y) + k^i y^i \left(1 + \partial_{y^i}\theta(t,y) \right) - k^j y^j \left(1 + \partial_{y^j}\theta(t,y) \right)\right),\\
\!&\theta(T,y) = -\ell(y),
\end{cases}
\label{eqn:HJB}
\end{equation}
where
\begin{equation}
H^{n,i,j}:(z,p)\in\mathbb R_+^* \times \mathbb{R} \mapsto \underset{\delta }{\sup}\ f^{n,i,j}(z,\delta)(\delta-p),\nonumber
\end{equation}
\begin{equation}
\mathcal H^{i,j}:p\in\mathbb{R} \mapsto \underset{\xi}{\sup}\ p\xi - L^{i,j}(\xi),\nonumber
\end{equation}
and $\mathcal D (y)$ denotes the diagonal $d\times d$ matrix such that $\mathcal D (y)_{i,i} = y_i$ for all $i\in\{1, \ldots, d\}.$\\
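It is useful to note that, since $\phi^{i,j}=1$, the execution Hamiltonians admit a simple closed form: a direct computation with $L^{i,j}(\xi) = \psi^{i,j} |\xi| + \eta^{i,j} \xi^{2}$ yields
\begin{equation*}
\mathcal H^{i,j}(p) = \frac{\left( |p| - \psi^{i,j} \right)_+^{2}}{4\eta^{i,j}}, \qquad {\mathcal H^{i,j}}'(p) = \mathrm{sign}(p)\, \frac{\left( |p| - \psi^{i,j} \right)_+}{2\eta^{i,j}},
\end{equation*}
so that the optimal hedging rate vanishes whenever the argument of ${\mathcal H^{i,j}}'$ in \eqref{optrates} does not exceed $\psi^{i,j}$ in absolute value. This is the origin of the pure internalization areas encountered in the numerical section.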
It is proved in \cite{gueant2017optimal} that for all $(n,i,j)$, the supremum in the definition of $H^{n,i,j}(z,p)$ is reached at a unique $\bar \delta^{n,i,j}(z,p) = (f^{n,i,j})^{-1} \left(-\partial_p{H^{n,i,j}} (z,p) \right)$ and this function can easily be computed numerically in the logistic case we consider. If $\theta$ is known, we therefore obtain the optimal quotes in the following form
\begin{align}\label{optquotes}
\delta^{n,i,j*}(t,z) = \bar \delta^{n,i,j} \left(z, \frac{\theta(t,Y_{t-}) - \theta(t,Y_{t-}+ze^i - z e^j) }{z} \right).
\end{align}
Similarly, the optimal trading rates are given by
\begin{align}\label{optrates}
\xi^{i,j*}_t = {\mathcal H^{i,j}}' \left(\partial_{y^i}\theta(t,Y_{t-}) - \partial_{y^j}\theta(t,Y_{t-}) + k^i Y^i_{t-} \left(1 + \partial_{y^i}\theta(t,Y_{t-}) \right) - k^j Y^j_{t-} \left(1 + \partial_{y^j}\theta(t,Y_{t-}) \right) \right).
\end{align}
\section*{Approximation of the value function and the optimal strategy}
Following the same ideas as in \cite{bergault2021closed}, we now approximate for $n \in \{1, \ldots, N\}$ and for each couple $(i,j) \in \{1, \ldots, d\}^2$ with $i\neq j$ the Hamiltonian function $H^{n,i,j}$ by a quadratic function $\check H^{n,i,j}$:
$$\check H^{n,i,j}(z,p) = \alpha_0^{n,i,j}(z) + \alpha_1^{n,i,j}(z) p + \frac 12 \alpha_2^{n,i,j}(z) p^2,$$
where a natural choice is of course
$$\alpha_0^{n,i,j}(z) = H^{n,i,j}(z,0),\quad \alpha_1^{n,i,j}(z) = \partial_p{H^{n,i,j}}(z,0),\quad \text{and} \quad \alpha_2^{n,i,j}(z) = \partial^2_{pp}{H^{n,i,j}}(z,0).$$
The structure of the problem leads us to approximate the Hamiltonian terms associated with $\mathcal H^{i,j}$ by $0$ since $\mathcal H^{i,j}$ is typically flat around $0$ when $\psi^{i,j} > 0$.\\
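In practice, the coefficients can be extracted numerically from the logistic intensity. The sketch below (a pure illustration for one fixed $(n,i,j,z)$, with parameter values in the spirit of those used later) evaluates $H(z,\cdot)$ on a grid and recovers $\alpha_0$, $\alpha_1$ and $\alpha_2$ by central finite differences at $p=0$:
\begin{verbatim}
import numpy as np

a_par, b_par = -1.9, 11.0      # logistic alpha, beta (illustrative, bps)
f = lambda dlt: 1.0 / (1.0 + np.exp(a_par + b_par * dlt))
grid = np.linspace(-5.0, 5.0, 200001)

def H(p):                      # H(p) = sup_delta f(delta) (delta - p)
    return np.max(f(grid) * (grid - p))

h = 1e-2                       # step for the finite differences (bps)
a0 = H(0.0)
a1 = (H(h) - H(-h)) / (2 * h)  # = H'(0) <= 0 (envelope theorem)
a2 = (H(h) - 2 * a0 + H(-h)) / h**2
print(a0, a1, a2)
\end{verbatim}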
We then consider the new equation
\begin{equation}
\begin{cases}
\!&0 = \partial_t \check \theta(t,y) + y^\intercal \mu(t) - \frac{\gamma}{2}y^\intercal \Sigma y + \frac 12\text{Tr}\left(\mathcal D (y)\Sigma \mathcal D (y) D^2_{yy} \check \theta(t,y)\right)\\
\!& \qquad + \text{\scalebox{0.6}[1]{$\bigint$}}_{\!\!\mathbb{R}_{+}^{*}} \underset{n=1}{\overset{N}{\mathlarger \sum}} \underset{1\le i\neq j \le d}{{\mathlarger \sum}} z\check H^{n,i,j} \left(z,\frac{\check\theta(t,y) - \check \theta(t,y+ze^i - z e^j) }{z}\right)\lambda^{n,i,j}(z) dz,\\
\!&\check \theta(T,y) = -\ell(y)
\end{cases}
\label{eqn:HJBap0}
\end{equation}
which can be written as
\begin{equation}
\begin{cases}
\!&0 = \partial_t \check \theta(t,y) + y^\intercal \mu(t) - \frac{\gamma}{2}y^\intercal \Sigma y + \frac 12\text{Tr}\left(\mathcal D (y)\Sigma \mathcal D (y) D^2_{yy} \check \theta(t,y)\right)\\
\!& \quad + \underset{n=1}{\overset{N}{\mathlarger \sum}} \underset{1\le i\neq j \le d}{{\mathlarger \sum}} \text{\scalebox{0.6}[1]{$\bigint$}}_{\!\!\mathbb{R}_{+}^{*}}\Big( z\alpha_0^{n,i,j}(z) + \alpha_1^{n,i,j}(z) \left(\check\theta(t,y) - \check \theta(t,y+ze^i - z e^j) \right)\\
\!& \quad + \frac 1{2z}\alpha_2^{n,i,j}(z)\left(\check\theta(t,y) - \check \theta(t,y+ze^i - z e^j) \right)^2 \Big) \lambda^{n,i,j}(z) dz,\\
\!&\check \theta(T,y) = -\ell(y).
\end{cases}
\label{eqn:HJBap1}
\end{equation}
If $\ell(y) = y^\intercal \kappa y$ with $\kappa$ a positive semi-definite symmetric matrix, then Eq. \eqref{eqn:HJBap1} has a solution of the form $\check \theta(t,y) = -y^\intercal A(t)y - y^\intercal B(t) - C(t)$ where $t \mapsto A(t) \in \mathcal{S}_d$, $t \mapsto B(t) \in \mathbb R^d$ and $t \mapsto C(t) \in \mathbb R$ solve differential equations. As the value of $C$ is irrelevant for what follows, we only report here the equations for $A$ and $B$:
\begin{align}\label{ODEsys}
\begin{cases}
A'(t) &= 2A(t) M A(t) - \Sigma \odot A(t) - \frac{\gamma}{2}\Sigma\\
B'(t) &= \mu(t) + 2A(t) V + 2A(t) \tilde V\left(A(t) \right) + 2A(t) M B(t),\\
A(T) &=\kappa ,\quad B(T) = 0,
\end{cases}
\end{align}
where $\odot$ denotes the Hadamard product,
$$M = \mathcal D \left(\left(\overline M + \overline M^\intercal \right) U \right) - \left(\overline M + \overline M^\intercal \right),$$
$$V = \left(\underline M - \underline M^\intercal \right)U,$$
and
$$\tilde V (A) = \left(\overline V(A) - \overline V(A)^\intercal \right)U $$
with $U = (1,\ldots, 1) ^\intercal \in \mathbb R^d$, $\overline M$ a $d\times d$ matrix such that
$$\overline M_{i,j} = \sum_{n=1}^N \int_{\mathbb R_+^*}\alpha_2^{n,i,j}(z) z \lambda^{n,i,j}(z)dz,$$
$\underline M$ a $d\times d$ matrix such that
$$\underline M_{i,j} = \sum_{n=1}^N \int_{\mathbb R_+^*}\alpha_1^{n,i,j}(z) z \lambda^{n,i,j}(z)dz,$$
and
$$\overline V(A)= \overline{\mathcal D}(A) P + P \overline{\mathcal D}(A) -2 P \odot A,$$
where $\overline{\mathcal D}(A)$ is a $d\times d$ diagonal matrix with the same diagonal as $A$, and $P$ is a $d\times d$ matrix such that
$$P_{i,j} = \sum_{n=1}^N \int_{\mathbb R_+^*}\alpha_2^{n,i,j}(z) z^2 \lambda^{n,i,j}(z)dz.$$
The ODE system \eqref{ODEsys} involves a matrix Riccati-like differential equation whose solution can be approximated very easily using an Euler scheme. Once $A$ and $B$ are obtained, approximations of the optimal strategies can be obtained by replacing $\theta$ by $\check \theta$ in Eqs. \eqref{optquotes} and \eqref{optrates}. We thereby obtain
$$\check \delta^{n,i,j}(t,z) = \bar \delta^{n,i,j} \bigg(z,\Big(\big(2Y_{t-} + z (e^i-e^j)\big)^\intercal A(t) + B(t)^\intercal \Big) (e^i-e^j) \bigg),$$
and
\begin{eqnarray*}
\check \xi^{i,j}_t = {\mathcal H^{i,j}}' \Big(-\big(A(t) Y_{t-} + B(t)\big)^\intercal(e^i-e^j)&\!\!+\!\!& k^i Y^i_{t-} \left(1 -\big(A(t) Y_{t-} + B(t)\big)^\intercal e^i \right)\\ &\!\!-\!\!& k^j Y^j_{t-} \left(1 -\big(A(t) Y_{t-} + B(t)\big)^\intercal e^j \right) \Big).
\end{eqnarray*}
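For completeness, a minimal NumPy sketch of the backward Euler integration of \eqref{ODEsys} is given below. The model matrices $\Sigma$, $M$, $V$, $P$ and the parameters $\mu$, $\kappa$, $\gamma$ are assumed to have been assembled beforehand from the definitions above; the two-pair toy values are illustrative only (volatilities and correlation loosely inspired by the EURUSD/GBPUSD figures of the numerical section).
\begin{verbatim}
import numpy as np

def v_tilde(A, P):             # \tilde V(A) = (Vbar(A) - Vbar(A)^T) U
    Dbar = np.diag(np.diag(A))
    Vbar = Dbar @ P + P @ Dbar - 2 * P * A   # '*' = Hadamard product
    return (Vbar - Vbar.T) @ np.ones(len(A))

def solve_AB(Sigma, M, V, P, mu, kappa, gamma, T, steps=20000):
    d, dt = len(Sigma), T / steps
    A, B = kappa.astype(float), np.zeros(d)
    for _ in range(steps):     # integrate backward from T to 0
        Adot = 2 * A @ M @ A - Sigma * A - 0.5 * gamma * Sigma
        Bdot = mu + 2 * A @ V + 2 * A @ v_tilde(A, P) + 2 * A @ M @ B
        A, B = A - dt * Adot, B - dt * Bdot
    return A, B                # approximations of A(0), B(0)

# toy 2-currency example: sigma ~ 80, 70 bps/sqrt(day), rho = 0.6
s = np.array([8e-3, 7e-3])
Sigma = np.array([[1.0, 0.6], [0.6, 1.0]]) * np.outer(s, s)
Mbar = np.array([[0.0, 40.0], [30.0, 0.0]])  # illustrative units
S2 = Mbar + Mbar.T
M = np.diag(S2 @ np.ones(2)) - S2            # M = D((Mbar+Mbar^T)U) - ...
V = np.zeros(2)                # symmetric client flow: B stays 0 here
P = np.zeros((2, 2))
A0, B0 = solve_AB(Sigma, M, V, P, np.zeros(2), np.zeros((2, 2)),
                  gamma=20.0, T=0.05)
print(A0, B0)                  # feed into the quote/rate formulas above
\end{verbatim}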
\section*{Numerical results and discussion}
Before illustrating the market making strategy proposed above, it is noteworthy that we have validated, for $d=2$, our approximations against the optimal strategy obtained by solving Eq. \eqref{eqn:HJB} with a monotone implicit Euler scheme on an inventory grid. Using parameters inspired by earlier work \cite{barzykin2021algorithmic, barzykin2021market}, we studied both the strategies and the corresponding efficient frontier. Only under extreme conditions which are not practically relevant, such as strong order flow asymmetry (as high as fivefold) and very high or very low risk aversion, have we detected significant deviations.\\
For our illustrations, we consider a market with 5 major currencies: USD, EUR, JPY, GBP and CHF. We consider two tiers and a discretization of trade sizes corresponding to $1$, $5$, $10$, $20$ and $50$ M\$ for all currency pairs. We use the parameters documented in Table~\ref{parameters_table} unless specified otherwise. These parameters have been selected by analysing a subset of the HSBC market making franchise, as previously described in~\cite{barzykin2021market}. However, they should not be considered as representative of HSBC but rather of a typical FX dealer. Standard currency pair naming convention is respected except that we used CHFUSD and JPYUSD instead of USDCHF and USDJPY to be consistent with our model (USD being the reference currency in our examples).\\
\begin{table}[!h]
\begin{center}
\vspace{10pt}
{\bf Direct pairs} \\
\begin{tabular}{|lccccccc|}
\hline
Pair & $\sigma \left(\frac{\text{bps}}{\sqrt{\text{day}}}\right)$
& $\lambda(z) \left(\frac{1}{\text{day}}\right)$
& $\alpha$ & $\beta \left(\frac{1}{\text{bps}}\right)$
& $\psi$ (bps)
& $\eta \left(\frac{\text{bps}\cdot\text{day}}{\text{M\$}}\right)$
& $k \left(\frac{\text{bps}}{\text{M\$}}\right)$ \\
\hline
EURUSD & 80 & 900, 540, 234, 90, 36 & -1.9, -0.3 & 11, 3.5 & 0.1 & $10^{-5}$ & $5\cdot10^{-3}$ \\
GBPUSD & 70 & 600, 200, 150, 40, 10 & -1.4, 0.0 & 5.5, 2.0 & 0.15 & $1.5\cdot10^{-5}$ & $7\cdot10^{-3}$ \\
CHFUSD & 60 & 420, 140, 105, 28, 7 & -1.2, 0.0 & 4.5, 1.9 & 0.25 & $2.5\cdot10^{-5}$ & $8\cdot10^{-3}$ \\
JPYUSD & 60 & 825, 375, 180, 105, 15 & -1.6, -0.1 & 9.0, 3.0 & 0.1 & $1.5\cdot10^{-5}$ & $6\cdot10^{-3}$ \\
\hline
\end{tabular}
\vspace{10pt}
{\bf Crosses} \\
\begin{tabular}{|lcccccc|}
\hline
Pair & $\rho$
& $\lambda(z) \left(\frac{1}{\text{day}}\right)$
& $\alpha$ & $\beta \left(\frac{1}{\text{bps}}\right)$
& $\psi$ (bps)
& $\eta \left(\frac{\text{bps}\cdot\text{day}}{\text{M\$}}\right)$ \\
\hline
EURGBP & 0.6 & 400, 50, 25, 20, 5 & -0.5, 0.5 & 3.5, 2.5 & 0.25 & $3\cdot10^{-5}$ \\
EURCHF & 0.5 & 400, 50, 25, 20, 5 & -0.5, 0.5 & 3.5, 2.5 & 0.25 & $3\cdot10^{-5}$ \\
EURJPY & 0.3 & 400, 50, 25, 20, 5 & -0.5, 0.5 & 3.5, 2.5 & 0.25 & $3\cdot10^{-5}$ \\
GBPCHF & 0.3 & 160, 20, 10, 8, 2 & -0.5, 0.5 & 3.5, 2.5 & 0.4 & $5\cdot10^{-5}$ \\
GBPJPY & 0.2 & 160, 20, 10, 8, 2 & -0.5, 0.5 & 3.5, 2.5 & 0.4 & $5\cdot10^{-5}$ \\
CHFJPY & 0.4 & 80, 10, 5, 4, 1 & -0.5, 0.5 & 3.5, 2.5 & 0.4 & $5\cdot10^{-5}$ \\
\hline
\end{tabular}
\caption{
Parameters for the currency pairs. The correlation coefficient $\rho$ provided for crosses describes correlation between the corresponding dollar-based legs.
Size ladders are the same for all pairs in reference currency, i.e. $z = 1, 5, 10, 20, 50$ M\$.
Two client tiers with different $\alpha$ and $\beta$ parameters (independent of $z$) are considered for each pair. Intensity amplitudes $\lambda(z)$ are taken to be the same for each tier.
}
\label{parameters_table}
\end{center}
\end{table}
In what follows, both the drift vector $\mu$ and the terminal penalty $\kappa$ are assumed to be $0$. The time horizon is set to $T = 0.05$ days (72 minutes), which ensures convergence towards stationary quotes and hedging strategy at time $t = 0$. In particular, to compute the market making strategy, we used $A(0)$ and $B(0)$ instead of $A(t)$ and $B(t)$ throughout to mimic what would happen in the stationary case.\footnote{The dynamics of $A(t)$ and $B(t)$ would matter if $\mu$ was not constant.}\\
Fig.~\ref{optimal_tob_majors} illustrates top of book pricing (i.e. for a size of $1$M\$) of EURUSD, GBPUSD and EURGBP as functions of GBP inventory, keeping the other inventories at $0$, for tier 1. GBPUSD pricing looks familiar, with a skew to attract risk-offsetting flow and divert risky flow. Without correlation and without the cross, EURUSD pricing would be unaffected by GBP inventory. Here, instead, positive correlation leads to a protective pricing strategy for EURUSD. The pricing of the cross pair EURGBP also attracts or diverts the flow as a function of the GBP inventory, and this in turn influences the pricing of EURUSD.\\
\begin{figure}[!h]
\centering
\includegraphics[width=0.78\textwidth]{optimal_tob_majors.pdf}
\caption{Optimal top of book pricing for the currency pairs EURUSD, GBPUSD and EURGBP as functions of GBP inventory with other inventories set to $0$ (tier 1). The curves represent respectively $\delta^{1,X,Y}$ and $-\delta^{1,Y,X}$ for each currency pair XY. Risk aversion: $\gamma = 20$ (M\$)$^{-1}$.
}
\label{optimal_tob_majors}
\end{figure}
Fig.~\ref{optimal_execution_majors} shows the optimal hedging strategy as a function of EUR inventory when other inventories are equal to zero. The EURUSD execution rate displays a familiar pattern with a pure internalization area in the middle and nearly linear growth for larger positions. Understandably, when the inventory becomes very large the dealer may want to offload part of the risk into other correlated direct currency pairs and crosses.
The main reason is that for large inventories the price skew has likely already been exploited and one cannot expect much different client flow when skewing further.\\
\begin{figure}[!h]
\centering
\includegraphics[width=0.78\textwidth]{optimal_execution_majors.pdf}\\
\caption{Optimal execution rates for the different currency pairs as functions of EUR inventory with other inventories set to $0$. Insert: correlation matrix.
Risk aversion: $\gamma = 20$ (M\$)$^{-1}$.
}
\label{optimal_execution_majors}
\end{figure}
\vspace{-7mm}
\begin{figure}[!htbp]
\centering
\includegraphics[width=0.75\textwidth]{thresholds_trio.pdf}\\
\caption{Pure internalization area thresholds for GBP as functions of EUR inventory in a market with USD, EUR and GBP. Different correlation levels are color coded as labeled. Dashed lines correspond to a market without the cross, i.e. only \mbox{EURUSD} and \mbox{GBPUSD}. Risk aversion: $\gamma = 20$ (M\$)$^{-1}$.
}
\label{thresholds_trio}
\end{figure}
Fig.~\ref{thresholds_trio} explores the effect of correlation and the influence of the cross pair on the pure internalization area in the case of the three currencies USD, EUR and GBP. In line with intuition, the pure internalization area is slanted because of correlation: it is not always optimal for a dealer to start hedging externally when positions in correlated currencies already mitigate part of the risk. The pure internalization area is even more slanted in the presence of the cross pair which can attract client flow on its own and limit the necessity to hedge externally. Interestingly, the presence of the cross influences the internalization area even without correlation: it is slanted and not horizontal even when $\rho = 0$.\\
Once the optimal strategy has been computed, one can follow \cite{barzykin2021market} and simulate the inventories resulting from the use of the market making strategy via a standard Monte Carlo procedure. Fig.~\ref{inventory_pdf_trio} illustrates the probability distribution of inventories in EUR and GBP in the case of a market with the three currencies USD, EUR and GBP. This empirical distribution is superimposed onto the risk contour plot and we clearly see that risk drives the inventory distribution.\\
\begin{figure}[!h]
\centering
\includegraphics[width=0.75\textwidth]{inventory_pdf_trio.pdf}\\
\caption{Inventory risk (contour plot) defined as $\frac{\gamma}{2} y^\intercal \Sigma y$ and inventory probability distribution associated with simulations of the market making strategy
(2d histogram on the basis of a $10^6$ second long Monte Carlo trajectory) in a market with EURUSD, GBPUSD and EURGBP.
Risk aversion: $\gamma = 20$ (M\$)$^{-1}$.
}
\label{inventory_pdf_trio}
\end{figure}
Fig.~\ref{inventory_acf_majors} returns to the case of a market with the 5 currencies and shows that the inventory autocorrelation functions of individual pairs decay much more slowly than the risk autocorrelation function. This means that the dealer is able to offload risk fast while essentially trading slower and thus saving on impact and transaction cost. The figure also provides information on volume share, P\&L share and internalization ratio.
The latter is consistent with previously reported typical internalization levels of about 80\% for G10 currencies by top-tier banks \cite{schrimpf2019fx}.\\
\begin{figure}[!h]
\centering
\includegraphics[width=0.9\textwidth]{inventory_acf_majors.pdf}\\
\caption{Component inventory and portfolio risk autocorrelation functions on the basis of a $5 \cdot 10^6$ second long Monte Carlo trajectory. Insert pie chart shows the corresponding volume share by currency pair (outer layer),
P\&L share by currency outside of USD (middle layer) and traded volume distribution among client tiers and hedging
(inner layer, $H$ denotes external hedging, $T_1$ and $T_2$ stand for the tiers). Risk aversion: $\gamma = 20$ (M\$)$^{-1}$.
}
\label{inventory_acf_majors}
\end{figure}
\section*{Concluding Remarks}
We have introduced and analyzed in detail numerically a multi-currency market making model incorporating fundamental risk controls and taking into account correlations, client tiering, pricing ladders and external hedging with transaction cost and market impact. Approximation techniques are proposed which make the framework scalable to any number of currency pairs, thus offering immediate practical application to the FX industry.
The results obtained demonstrate efficient risk reduction due to optimization at portfolio rather than individual currency pair level.
\section*{Statement and acknowledgment}
The results presented in this paper are part of the research works carried out within the HSBC FX Research Initiative.
The views expressed are those of the authors and do not necessarily reflect the views or the practices at HSBC.
The authors are grateful to Richard Anthony (HSBC) for helpful discussions and support throughout the project.
\bibliographystyle{plain}
\section{Introduction}\label{sec:Intro}
Efficient reliable multicasting/broadcasting techniques
have been
investigated during the past thirty years
\cite{Metzer84:retransmission} and especially during the past decade
\cite{byers02:fountain,luby02:LT,shokrollahi06:raptor,Medard08:ARQ,Liva10:fountain,liva2010carq,blasco2011concatenation,schotsch2011performance,Vary2011:Allerton}.
Perhaps, the most successful approach to reliable multicast deals
with the so-called fountain codes \cite{byers02:fountain}. Consider
the case where a sender (or source) needs to deliver a source block (e.g., a file) to a set
of $N$ receivers. Consider furthermore the case where receivers are
affected by packet losses. In this scenario, the usage of an
\ac{ARQ} protocol can result in large inefficiencies, since receivers
may lose different packets, and hence a large number of
retransmissions would crowd the downlink channel. When a fountain
code is used, the source block is split into a set of $k$ source
packets, which we will denote as source symbols. The sender computes linear
combinations (also referred to as fountain coded packets, or output symbols) of the $k$ source packets and broadcasts them through
the communication medium. After receiving $k$ fountain coded
packets, the receivers can try to recover the source packets. {In case of decoding failure, they will} try again to decode
{after receiving} additional packets.
The efficiency of a fountain code deals with the number of packets
that a receiver needs to collect for
recovering the source {block}. An \emph{idealized} fountain code would
allow the recovery with a failure probability $P_f=0$ from any
set of $k$ received packets. Actual fountain decoders need in
general to receive a larger number of packets, $m=k+\delta$, for
succeeding in the recovery. Commonly, $\delta$ is referred to as the (receiver)
\emph{overhead} of the fountain code, and is used to measure its
efficiency.
The first class of practical fountain codes is that of \ac{LT} codes
\cite{luby02:LT}. Among them, random \ac{LT} codes or \acp{LRFC}
{\cite{shokrollahi06:raptor,Medard08:ARQ}} deserve a particular attention due to
their excellent performance and to the {relatively simple} performance
model. Under \ac{ML} decoding, the failure probability of a binary
\ac{LRFC} {\cite{shokrollahi06:raptor,Medard08:ARQ}} can be accurately
modeled as $P_f\sim 2^{-\delta}$ for $\delta\geq2$. It can be proved
that $P_f$ is actually always upper bounded by $2^{-\delta}$
{\cite{berlekamp:bound,shokrollahi06:raptor,Medard08:ARQ}}. In
\cite{Liva10:fountain,schotsch2011performance} it was shown that
this expression is still accurate for fountain codes based on sparse
matrices (e.g., Raptor codes {\cite{shokrollahi06:raptor}})
under \ac{ML} decoding. In \cite{Liva10:fountain}, the
performance achievable by performing linear combinations of packets
on finite fields of order {larger} than $2$ ($\mathbb {F}_q$,
$q>2$) was analyzed. For a \ac{LRFC} over $\mathbb {F}_q$,
the failure probability under \ac{ML} decoding is bounded as
\cite{Liva10:fountain}
\begin{equation}\label{eq:tightbounds}
q^{-\delta-1}\leq P_f(\delta,q) < \frac{1}{q-1}q^{-\delta}
\end{equation}
where both bounds are tight already for $q=2$, and become tighter for
increasing $q$. The improvement in efficiency obtained by fountain
codes operating on fields of order larger than $2$ has been analyzed
in \cite{Liva10:fountain,Vary2011:Allerton} and has led to recent
standardization activities \cite{lubyraptorq}. In
\cite{Liva10:fountain,Vary2011:Allerton} it was also shown that
non-binary Raptor and \ac{LT} codes can in fact tightly approach the
bounds \eqref{eq:tightbounds} down to moderate error rates under
\ac{ML} decoding. Thus, \eqref{eq:tightbounds} can be successfully
used to model the performance of common classes of fountain codes.
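The bounds \eqref{eq:tightbounds} are straightforward to tabulate; the short Python sketch below (illustrative) prints them for a few field orders, making their tightening in $q$ (the two sides differ by a factor $q/(q-1)$) immediately visible:
\begin{verbatim}
for q in (2, 4, 16, 256):
    for delta in range(5):
        lo = q ** (-delta - 1)
        hi = q ** (-delta) / (q - 1)
        print(f"q={q:3d} delta={delta}: {lo:.3e} <= P_f < {hi:.3e}")
\end{verbatim}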
The result is remarkable considering that for Raptor codes, under \ac{BP} decoding, both the encoding and decoding costs\footnote{The
cost is defined as the number of arithmetic field operations divided by
the number of source symbols, $k$.} are
$\mathcal O(\log(1/\varepsilon) )$ {\cite[Theorem 5]{shokrollahi06:raptor}},
where $\varepsilon=\delta/k$ is the overhead (normalized to $k$) needed
to recover the source symbols with a high probability. For a
\ac{LRFC} the encoding cost is $\mathcal O(k)$ and the decoding cost
is $\mathcal O(k^2)$, and thus it does not scale favorably with the
source block size. However, \ac{BP} decoding is scarcely used in
practical Raptor decoder implementations \cite{MBMS05:raptor} due to
its poor performance with source block lengths of practical interest ($k$
up to a few thousand symbols). Efficient \ac{ML} decoding algorithms
based on \ac{GE} are usually adopted
\cite{lamacchia91:solving,studio3:RichardsonEncoding,miller04:bec,shokrollahi2005systems,MBMS05:raptor,paolini12:TCOM},
for which the decoding cost is $\mathcal O(k^2)$, though the
fraction of symbols that are recovered with quadratic cost can be
kept remarkably small. Similarly, in the short source block length
regime, the application of \acp{LRFC} under \ac{GE} decoding is
usually considered practical
\cite{Liva10:fountain,Vary2011:Allerton}.
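Since, under \ac{ML} erasure decoding, a binary \ac{LRFC} succeeds if and only if the $k\times m$ matrix of received combination coefficients has rank $k$ over $\mathbb F_2$, the behavior $P_f\sim 2^{-\delta}$ is easy to reproduce empirically. A minimal Python sketch (illustrative; $k$ kept small for speed):
\begin{verbatim}
import numpy as np

def rank_gf2(M):                     # Gaussian elimination over GF(2)
    M, r = M.copy(), 0
    for c in range(M.shape[1]):
        piv = next((i for i in range(r, M.shape[0]) if M[i, c]), None)
        if piv is None:
            continue
        M[[r, piv]] = M[[piv, r]]    # bring pivot row into place
        for i in range(M.shape[0]):
            if i != r and M[i, c]:
                M[i] ^= M[r]         # eliminate column c elsewhere
        r += 1
        if r == M.shape[0]:
            break
    return r

rng = np.random.default_rng(0)
k, trials = 64, 5000
for delta in range(5):
    fails = sum(rank_gf2(rng.integers(0, 2, (k, k + delta),
                                      dtype=np.uint8)) < k
                for _ in range(trials))
    print(f"delta={delta}: P_f ~ {fails / trials:.4f}"
          f"   (bound 2^-delta = {2.0 ** -delta:.4f})")
\end{verbatim}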
In this paper, we introduce and analyze a further improvement of
the approach proposed in \cite{Liva10:fountain,Vary2011:Allerton}
{to design} fountain codes with good performance for short block
lengths. More specifically, a $(n,k)$ \ac{MDS} code is introduced in
parallel concatenation with the \ac{LRFC}. By doing that, the first
$n$ output symbols {are the codeword symbols of the \ac{MDS} code.}\footnote{This represents a crucial difference with Raptor
codes, for which the output of the precode is further encoded by a
\ac{LT} Code. Hence the first $n$ output symbols of a Raptor encoder
do not coincide with the output of the precode.}
We will assume that the
\ac{MDS} linear block code is constructed on the same field $\mathbb
{F}_q$ {as} the fountain code.
{A related rate-less construction was proposed in \cite{kasai}, where a mother non-binary low-density parity-check code was modified by replicating the codeword symbols (after multiplication by a non-zero field element) and thus by (arbitrarily) lowering the code rate. In our work, the mother code is a \ac{MDS} code, while additional redundant symbols are produced by a linear random fountain encoder.}
For the proposed scheme, we illustrate how the performance of
\acp{LRFC} in terms of probability of decoding failure can be
remarkably improved thanks to the concatenation, especially for low to moderate
packet loss probabilities. Tight bounds on the decoding failure
probability vs. overhead {are} derived under the assumption of
\ac{ML} decoding. The accuracy of the bounds is confirmed through
simulations. An efficient \ac{ML} decoding algorithm is presented for
the case where a (generalized) \ac{RS} code is used in the concatenation.
An analysis for the general case where the \ac{MDS} code is replaced by any arbitrary
linear block code, in a finite rate regime, is provided in the Appendix.
The paper is organized as follows. In Section
\ref{sec:concatenation} the proposed concatenated scheme is
introduced. Section \ref{sec:eff_decoding} provides an efficient {\ac{ML}} decoding algorithm. In Section \ref{sec:bounds} the performance {is analyzed and tight bounds on the decoding failure probability are derived}, while numerical results are presented in Section
\ref{sec:results}. Conclusions follow in Section \ref{sec:conc}.
\section{Concatenation of Block Codes with {Linear Random} Fountain
Codes}\label{sec:concatenation}
We define the
source block $\mathbf{u}=(u_1, u_2, \ldots, u_k)$ as a vector of source
symbols belonging to a finite field of order $q$, i.e.,
$\mathbf{u}\in \mathbb {F}_q^k$. In the proposed approach, the
source block is first encoded via a $(n,k)$ linear block
code $\mathcal{C}'$ over $\mathbb {F}_q$ with generator matrix
$\mathbf{G}'$. The encoded block is hence
given by
$\mathbf{c}'=\mathbf{u}\mathbf{G}'=(c'_1,c'_2,\ldots,c'_n)$. Additional
redundancy symbols can be obtained by computing {linear random}
combinations of the $k$ source symbols as
\begin{equation}
c_i=c_{i-n}''=\sum_{j=1}^{k}g_{j,i}u_j, \qquad
i=n+1,\ldots, l
\label{eq:encoding}
\end{equation}
where the coefficients $g_{j,i}$ in \eqref{eq:encoding} are picked from $\mathbb {F}_q$
with a uniform probability.
{The encoded sequence is thus}
$\mathbf{c}=(\mathbf{c}'|\mathbf{c}'')$. The
generator matrix of the concatenated code has the form
\begin{equation}
\mathbf{G}=
\underbrace{\left(\begin{array}{cccc}
g_{1,1} & g_{1,2} & \ldots & g_{1,n} \\
g_{2,1} & g_{2,2} & \ldots & g_{2,n} \\
\vdots & \vdots & \ddots & \vdots \\
g_{k,1} & g_{k,2} & \ldots & g_{k,n}
\end{array}\right|}_{\mathbf{G}'} \underbrace{\left|\begin{array}{cccc}
g_{1,n+1} & g_{1,n+2} & \ldots & g_{1,l} \\
g_{2,n+1} & g_{2,n+2} & \ldots & g_{2,l} \\
\vdots & \vdots & \ddots & \vdots \\
g_{k,n+1} & g_{k,n+2} & \ldots & g_{k,l}
\end{array}\right)}_{\mathbf{G}''}
\end{equation}
where $\mathbf{G}''$ is the generator matrix of the \ac{LRFC}. Note
that, since the \ac{LRFC} is rateless, the number $l$ of columns of
$\mathbf{G}$ can grow indefinitely. The encoder can hence be
seen as a parallel concatenation of the linear block code
$\mathcal C '$ and of a \ac{LRFC} (Fig. \ref{fig:par}), and the encoded sequence can be written as
$\mathbf{c}=\mathbf{u}\mathbf{G}=(c_1,c_2,\ldots,c_l)$. The proposed construction allows generating infinitely many redundancy symbols. Thus, the encoder may be seen as a modified fountain encoder, whose first $n$ output symbols $(c_1,c_2,\ldots,c_n)$ correspond to the codeword output by the encoder of $\mathcal{C}'$, whereas the following $l-n$ symbols are the output of the \ac{LRFC} encoder.
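A minimal numerical sketch of this encoder may help to fix ideas. The snippet below is our own illustration and works over $\mathbb{F}_2$ for simplicity; the field order, the toy generator matrix of $\mathcal{C}'$ and all names are illustrative choices (in particular, the random $\mathbf{G}'$ below is not \ac{MDS}):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

k, n = 4, 7                      # toy (n,k) block code C' over GF(2)
Gp = rng.integers(0, 2, (k, n))  # illustrative G' (a real C' would be MDS)
u = rng.integers(0, 2, k)        # source block

def encode(num_symbols):
    # Return the first num_symbols output symbols c = (c'|c'').
    out = list(u @ Gp % 2)                # codeword of C'
    while len(out) < num_symbols:         # LRFC part: random combinations
        g = rng.integers(0, 2, k)         # uniformly random column of G''
        out.append(int(u @ g % 2))
    return out[:num_symbols]

print(encode(10))
\end{verbatim}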
\begin {figure}[h]
\begin{center}
{\small
\begin{tikzpicture}[auto, node distance=1.3cm, label distance=6mm, >=latex']
\node [input, name=input] {};
\node [r_input, right of=input, node distance = 1.5cm](r_input) {};
\node [left_of_block, above of =r_input, node distance = 1.5 cm] (left_of_block) {};
\node [left_of_LRFC, below of =r_input, node distance = 1.5 cm] (left_of_LRFC) {};
\node [block, right of=left_of_block, node distance = 2 cm] (blockcode) {Block Code $(n,k)$};
\node [block, right of=left_of_LRFC, , node distance = 2 cm] (LRFC) {LRFC};
\node [point, right of=blockcode, node distance = 1.8 cm] (blockcode_point){};
\node [point, right of=LRFC, node distance = 1.8 cm] (LRFC_point) {};
\node [point, right of=input, node distance = 6.5cm] (out_point) {};
\node [output, right of=out_point, node distance = 1.5 cm] (output) {};
\draw [-] (input) -- node [label=above:{$u_1,u_2...u_k$}] { } (r_input);
\draw [-] (r_input) -- node { } (left_of_block);
\draw [-] (r_input) -- node { } (left_of_LRFC);
\draw [->] (left_of_block) -- node {} (blockcode);
\draw [->] (left_of_LRFC) -- node {} (LRFC);
\draw [-] (blockcode) -- node
[label=above:{$c_1,c_2...c_n$}] {} (blockcode_point);
\draw [-] (LRFC) -- node
[ label=above:{$c_{n+1},c_{n+2}...$}] {} (LRFC_point);
\draw [->] (out_point) -- node
[label=above:{$c_1,c_2...c_n,c_{n+1}...$}] {} (output);
\draw [-] (blockcode_point) -- node{} (out_point);
\end{tikzpicture}}
\caption{Fountain coding scheme seen as a parallel concatenation of
a $(n,k)$ linear block code and a linear random fountain
code.}\label{fig:par}
\end{center}
\end {figure}
\section{Efficient Decoding}\label{sec:eff_decoding}
We consider a multicast setting, where a number of receivers try to retrieve the source block from the output symbols they respectively receive. In this context, the decoder behaves as a conventional fountain decoder. At each receiver, the correctly-received output symbols are forwarded to the decoder. As soon as $k$ output symbols are collected, a decoding attempt is performed. If the decoding is not successful, further output symbols are collected. Whenever an additional output symbol is received, another decoding attempt is performed. In case of successful decoding, the receiver acknowledges the correct reception. The overall number of symbols collected at a receiver is denoted by $m=k+\delta$ (recall that $\delta$ is referred to as the overhead). On the encoder side, as soon as a target success rate among the receivers is attained, encoding stops. Note that at each receiver, the $m$ output symbols that are collected may belong to
\begin{itemize}
\item[i)] the output of the $\mathcal C'$ encoder only,
\item[ii)] the output of the \ac{LRFC} encoder only,
\item[iii)] both the outputs of the $\mathcal C'$ encoder and the \ac{LRFC} encoder.
\end{itemize}
While in the third case there is no difference with respect to the classical \ac{LRFC} case, in the other two cases the structure of the $\mathcal C'$ generator matrix can be exploited to reduce the decoding complexity, as we will see next. Furthermore, when the channel erasure probability is sufficiently low, event i) may dominate, leading to a remarkable improvement in the decoding failure probability. In this sense, the proposed scheme provides the same performance as a (universal) \ac{LRFC} at high channel erasure probabilities, whereas it enjoys a boost in efficiency when the channel erasure probability is low.
We denote
by $\msr J=\{j_1, j_2, \ldots, j_{m}\}$ the set of indexes of
the symbols of $\mathbf{c}$ that have been collected by a specific receiver. The received
vector {$\mathbf{y}$} is hence given by
\[
\mathbf{y}=(y_1, y_2, \ldots,
y_{m})=(c_{j_1},c_{j_2},\ldots,c_{j_{m}})
\]
and it can be related to the source block $\mathbf{u}$ as
{$
\mathbf{y}=\mathbf{u}\tilde{\mathbf{G}}$.}
Here,
$\tilde{\mathbf{G}}$ denotes the $k\times m$ matrix made by the
columns of $\mathbf{G}$ with indexes in $\msr J$, i.e.,
\[
\tilde{\mathbf{G}}= \left(\begin{array}{cccc}
g_{1,j_1} & g_{1,j_2} & \ldots & g_{1,j_{m}} \\
g_{2,j_1} & g_{2,j_2} & \ldots & g_{2,j_{m}} \\
\vdots & \vdots & \ddots & \vdots \\
g_{k,j_1} & g_{k,j_2} & \ldots & g_{k,j_{m}}
\end{array}\right).
\]
The recovery of $\mathbf{u}$ reduces to solving the system of
$m=k+\delta$ linear equations in $k$ unknowns
\begin{equation}
\tilde{\mathbf{G}}^T\mathbf{u}^T=\mathbf{y}^T.\label{eq:solve}
\end{equation}
{The solution of \eqref{eq:solve} can be obtained (e.g., via Gaussian
elimination)} if and only if $\textrm{rank}
(\tilde{\mathbf{G}})=k$.
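As an illustration of this baseline \ac{ML} decoding step, the following sketch (our own; it works over a prime field $\mathbb F_p$ for simplicity, while the schemes above are stated over general $\mathbb F_q$) solves \eqref{eq:solve} by \ac{GE}:
\begin{verbatim}
import numpy as np

def solve_gf_p(Gt, y, p):
    # Solve Gt @ u = y over GF(p) by Gaussian elimination.
    # Gt is m x k with m >= k; returns u, or None if rank(Gt) < k.
    A = np.concatenate([Gt % p, (y % p).reshape(-1, 1)], axis=1)
    m, k = Gt.shape
    for col in range(k):
        piv = next((r for r in range(col, m) if A[r, col]), None)
        if piv is None:
            return None                   # rank deficiency: decoding failure
        A[[col, piv]] = A[[piv, col]]     # move pivot row into place
        A[col] = A[col] * pow(int(A[col, col]), -1, p) % p
        for r in range(m):
            if r != col and A[r, col]:
                A[r] = (A[r] - A[r, col] * A[col]) % p
    return A[:k, -1]
\end{verbatim}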
Assuming that $\mathcal C'$ is \ac{MDS}, the system
is solvable with probability $1$ if, among the $m$ received symbols,
at least $k$ have indexes in $\{1, 2, \ldots, n\}$, i.e., if at least
$m'\geq k$ symbols produced by the linear block encoder have been
received. Let us consider the less trivial case where only $m'<k$
of the $m$ received symbols have indexes in $\{1, 2, \ldots,
n\}$. We can partition $\tilde{\mathbf{G}}^T$ as
\begin{equation}
\tilde{\mathbf{G}}^T=\left(\begin{array}{c} \tilde{\mathbf{G}}'^T\\
\tilde{\mathbf{G}}''^T \end{array}\right)=\left(\begin{array}{cccc}
g_{1,j_1} & g_{2,j_1} & \ldots & g_{k,j_1} \\
g_{1,j_2} & g_{2,j_2} & \ldots & g_{k,j_2} \\
\vdots & \vdots & \ddots & \vdots \\
g_{1,j_{m'}} & g_{2,j_{m'}} & \ldots & g_{k,j_{m'}}\\ \hline
g_{1,j_{m'+1}} & g_{2,j_{m'+1}} & \ldots & g_{k,j_{m'+1}} \\
g_{1,j_{m'+2}} & g_{2,j_{m'+2}} & \ldots & g_{k,j_{m'+2}} \\
\vdots & \vdots & \ddots & \vdots \\
g_{1,j_{m}} & g_{2,j_{m}} & \ldots & g_{k,j_{m}}
\end{array}\right). \label{eq:G_partition}
\end{equation}
The \ac{MDS} property of $\mathcal{C}'$ assures that $\textrm{rank}
(\tilde{\mathbf{G}}')=m'$, i.e., the first $m'$ rows of
$\tilde{\mathbf{G}}^T$ are linearly independent. Note that the
$m''\times k$ matrix $\tilde{\mathbf{G}}''^T $ (with $m''=m-m'$) {can be modeled as
a random matrix whose elements are uniformly distributed in
$\mathbb F _q$.} It follows that the {matrix in}
\eqref{eq:G_partition} can be put (via column permutations over
$\tilde{\mathbf{G}}^T$ and row permutations/combinations over
$\tilde{\mathbf{G}}'^T$) in the form
\begin{equation}
\hat{\mathbf{G}}^T=\left(\begin{array}{ccc} \mathbf{I} & \vline &
\mathbf{A} \\\hline
\mathbf{0} & \vline & \mathbf{B}\\
\end{array}\right), \label{eq:G_partition_manipulation}
\end{equation}
where $\mathbf I$ is the $m' \times m'$ identity matrix,
$\mathbf{0}$ is an $m'' \times m'$ all-$0$ matrix, and $\mathbf{A}$,
$\mathbf{B}$ have respective sizes $m' \times (k-m')$ and $m''
\times (k-m')$. Note that the lower part of $\hat{\mathbf{G}}^T$
given by $\left(\mathbf{0} | \mathbf{B}\right)$ is obtained by
adding to each row of $\tilde{\mathbf{G}}''^T$ a linear combination
of rows from $\tilde{\mathbf{G}}'^T$, in a way that the $m'$
leftmost columns of $\tilde{\mathbf{G}}''^T$ are zeroed-out. It
follows that the statistical properties of $\tilde{\mathbf{G}}''^T$
are inherited by the $m'' \times (k-m')$ submatrix $\mathbf{B}$,
whose elements are hence uniformly distributed in $\mathbb
F_q$. Hence \eqref{eq:solve} is solvable if and only if $\mathbf{B}$ is full
rank, i.e., if and only if $\textrm{rank}(\mathbf{B})=k-m'$.
\subsection{An Efficient Decoding Algorithm}
We consider next the case where the \ac{MDS} code is an
$(n,k)$ \ac{GRS} code with transposed generator matrix in
Vandermonde form
\begin{equation}\label{eq:Transpose_generator_RS}
\mathbf{G}'^{T} = \left( {\begin{array}{*{20}c}
1 & \beta_1 & \cdots & \beta_1^{k-1} \\
1 & \beta_2 & \cdots & {\beta_2^{k - 1} } \\
\vdots & \vdots & \ddots & \vdots \\
1 & {\beta_{n}} & \cdots & \beta_{n}^{k - 1} \\
\end{array}} \right),
\end{equation}
where $\beta_i$, $i=1,\ldots ,n$, are $n$ distinct non-{zero} elements
of
$\mathbb{F}_q$. Efficient decoding can be achieved by {taking advantage of} the structure of
$\mathbf{G}'$.\footnote{In this
work we consider \ac{MDS} codes based on Vandermonde matrices, but
similar {arguments} hold for \ac{MDS} codes based on Cauchy
matrices.} {In fact, a Vandermonde matrix
can be inverted} with
quadratic complexity
\cite{Parker64:InverseVandermonde,Turner66:InverseVandermonde,Kaufman:InverseVandermonde,Wertz:InverseVandermonde,Gohberg:FastAlgorithmVandermonde}. {This property has been widely exploited} for efficient decoding
of \ac{GRS} over erasure channels
\cite{Forney66:concatenatedCodes,mceliece2002theory,brauchle2009systematic,brauchle2011efficient}.
In the following, we first review an efficient method for the
inversion of a Vandermonde matrix based on the LU factorization
\cite{Turner66:InverseVandermonde}. Then, we apply the algorithm of
\cite{Turner66:InverseVandermonde} to the decoding of the proposed concatenated
scheme.
\medskip
\subsubsection{Vandermonde Matrices and Their Inverse}\label{subsec:Vandermonde}
Let us consider a $\gamma \times \gamma$ Vandermonde matrix
\begin{equation}
\mathbf{V} = \left( {\begin{array}{*{20}c}
1 & {x_1 } & \cdots & { x_1^{\gamma - 1} } \\
1 & {x_2 } & \cdots & { x_2^{\gamma - 1} } \\
\vdots & \vdots & \ddots & \vdots \\
1 & {x_\gamma } & \cdots & { x_\gamma^{\gamma - 1} } \\
\end{array}} \right)\nonumber
\end{equation}
where $x_i$, $i=1,\dots,\gamma$, are $\gamma$
distinct non-zero elements of $\mathbb{F}_q$. In the following, $\gamma$ will be referred to as
the \emph{degree} of the Vandermonde matrix.
The inverse of $\mathbf V$ can be efficiently computed
according to \cite{Turner66:InverseVandermonde} by means of two
recursions. In particular, the inverse matrix $ \mathbf{V}^{- 1}$
can be obtained as
\[
\mathbf{V}^{- 1}= \mathbf{U}^{-1} \mathbf{L}^{ - 1}
\]
where
$\mathbf{U}$ is an upper triangular matrix whereas $\mathbf{L}$ is a lower
triangular matrix. The coefficients $l_{i,j}$ of $\mathbf{L}^{-1}$
are given by
\begin{equation}
l_{i,j}=
\prod\limits_{h = 1,h \ne j}^i {\frac{1}{{x_j - x_h }}} \qquad j \le i,\,\, i>1
\nonumber
\end{equation}
with $l_{1,1}=1$ and $l_{i,j}=0$ for $j>i$. Note that, for the
$j$-th column of $\mathbf{L}^{-1}$, the elements below the
main diagonal can be computed according to the recursion
\[
l_{i,j} =
\frac{l_{i - 1,j}}{{x_j - x_i }}
\]
for $i=j+1,\dots,\gamma$, after computing $l_{j,j}$. {
Similarly, the coefficients $u_{i,j}$ of $\mathbf{U}^{-1}$ are given by
\begin{equation}
u_{i,j}=\left\{\begin{array}{lll}
u_{i - 1,j - 1} - u_{i,j - 1} x_{j - 1} & \qquad & j > i >1\\
- u_{i,j - 1} x_{j - 1} & \qquad & j > i, i=1
\end{array}\right.
\nonumber
\end{equation}
with $u_{i,i}=1$ and $u_{i,j}=0$ for $j<i$.}
The complexity of
computing $\mathbf{L}^{-1}$ and $\mathbf{U}^{-1}$ is $\mathcal
O(\gamma^2)$.
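The two recursions above translate directly into code. The following sketch (our own illustration; it uses a prime field $\mathbb F_p$ for simplicity) builds $\mathbf{L}^{-1}$ and $\mathbf{U}^{-1}$ and verifies that $\mathbf{V}^{-1}=\mathbf{U}^{-1}\mathbf{L}^{-1}$:
\begin{verbatim}
import numpy as np

def vandermonde_inverse(x, p):
    # Inverse of V_{i,t} = x_i^t (gamma x gamma) over GF(p), via the
    # LU-based recursions above: V^{-1} = U^{-1} L^{-1}, cost O(gamma^2).
    g = len(x)
    Linv = np.zeros((g, g), dtype=object)
    Uinv = np.zeros((g, g), dtype=object)
    for j in range(g):
        d = 1                               # l_{j,j} = prod_{h<j} 1/(x_j - x_h)
        for h in range(j):
            d = d * pow((x[j] - x[h]) % p, -1, p) % p
        Linv[j, j] = d
        for i in range(j + 1, g):           # recursion down column j
            Linv[i, j] = Linv[i - 1, j] * pow((x[j] - x[i]) % p, -1, p) % p
        Uinv[j, j] = 1
        for i in range(j):                  # recursion for u_{i,j}, j > i
            prev = Uinv[i - 1, j - 1] if i > 0 else 0
            Uinv[i, j] = (prev - Uinv[i, j - 1] * x[j - 1]) % p
    return (Uinv @ Linv) % p

p, x = 101, [1, 2, 3, 4]
V = np.array([[pow(xi, t, p) for t in range(len(x))] for xi in x],
             dtype=object)
print((vandermonde_inverse(x, p) @ V) % p)  # expect the 4 x 4 identity
\end{verbatim}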
Let us denote by $ \msr J' =\left\{ { j_{1} ,j_{2} , \dots ,j_{
m'} } \right\} $ any set of $m' \le n$ indexes of rows of
$\mathbf{G}'^{T}$. Consider the square submatrix $\mathbf V$ of
$\mathbf{G}'^{T}$ composed of the $m'$ rows (shortened to their
first $m'$ elements) of $\mathbf{G}'^{T}$ with indexes in $\msr J'$,
\[
\mathbf V=\left( {\begin{array}{*{20}c}
1 & \beta_{j_{1}} & \cdots & \beta_{j_{1}}^{m'-1} \\
1 & \beta_{j_{2}} & \cdots & {\beta_{j_{2}}^{m'-1} } \\
\vdots & \vdots & \ddots & \vdots \\
1 & {\beta_{j_{m'}}} & \cdots & \beta_{j_{m'}}^{m'-1} \\
\end{array}} \right).
\]
Note that $\mathbf V$ is always a Vandermonde matrix
of degree $m'$, with elements $ x_i^{t - 1} =
\beta_{j_i} ^{t - 1}$, for $i,t=1,\dots,m'$. {This observation leads to the following decoding algorithm.}
\medskip
\subsubsection{Decoding Algorithm}\label{subsec:dec_algorithm}
Decoding can be performed with complexity $\mathcal O (k^2)$ (equivalently, with a cost of $\mathcal O (k)$ operations per source symbol) if $m' \ge k$
symbols from the \ac{MDS} code have been received. In fact, this is the complexity of inverting a Vandermonde matrix of degree $k$.
If $m'=0$, the decoding complexity is equivalent to that of a \ac{LRFC} decoder, thus cubic in $k$ (i.e., a cost of $\mathcal O (k^2)$ per symbol), which is the complexity of applying the \ac{GE} algorithm to solve a linear system of at least $k$ equations in $k$ unknowns.
Let us consider the case {where $0 < m' < k$ symbols of the
\ac{MDS} code have been collected, among the $m \ge k$ received symbols.}
{We can define $m'$ as a fraction of $k$, $m' = \xi k$, with $0 < \xi < 1$.} The matrix
$\tilde{\mathbf{G}}^T$ can be written as \[ \tilde{\mathbf{G}}^T=\left(\begin{array}{ccc}
\mathbf{V} & \vline & \mathbf{A} \\\hline
\mathbf{B} & \vline & \mathbf{C}\\
\end{array}\right)\] where $\mathbf{V}$ is a Vandermonde matrix of degree $m'$, whereas $\mathbf{A}$, $\mathbf{B}$, $\mathbf{C}$ have respective sizes $m' \times (k-m')$, $(m-m') \times m'$, $(m-m') \times (k-m')$. An efficient decoding algorithm can be derived by inverting $\mathbf{V}$ according to the algorithm presented in Section \ref{subsec:Vandermonde}. {Given the matrix $\mathbf{V}^{-1}$, $\tilde{\mathbf{G}}^T$ can be multiplied by a full-rank matrix $\mathbf{M}$, with
\[ \mathbf{M} = \left(\begin{array}{ccc} \mathbf{V}^{-1} & \vline & \mathbf{0} \\\hline
\mathbf{0} & \vline & \mathbf{I}\\
\end{array}\right),\]
$\mathbf{I}$ being an $(m-m')\times(m-m')$ identity matrix, leading to the matrix depicted in Fig.~\ref{fig:subfig1}.
Accordingly, \eqref{eq:solve} is modified as
\[
\mathbf{M} \cdot \tilde{\mathbf{G}}^T \cdot \mathbf{u}^T =
\mathbf{M} \cdot \mathbf{y}^T.
\]}
The complexity of
multiplying {the $m' \times m'$ matrix} $\mathbf{V}^{-1}$ with the {matrix} $\mathbf{A}$, leading to
the $m' \times (k-m')$ matrix $\mathbf{A}'$, is
{$\mathcal O({m'}^2(k-m'))$, which is the complexity of performing standard matrix multiplications}.
Referring to Fig.~\ref{fig:subfig1}, the $i$-th row of the matrix $\mathbf{B}$ (for $i=1,\dots,m-m'$) can be zeroed-out by adding to it a linear combination of the $m'$ rows of $\left(\mathbf{I} | \mathbf{A}'\right)$.
The complexity of zeroing-out $\mathbf{B}$ is $\mathcal O((m-m')m'(k-m'))$, and the resulting system matrix is depicted in Fig.~\ref{fig:subfig2}. In fact, $\mathbf{B}$ is a random matrix with entries uniformly distributed in $\mathbb F_q$. Due to the linear combinations performed to zero-out the matrix $\mathbf B$, the matrix $\mathbf{C}$ is transformed
into a new matrix $\mathbf{C}'$. Thus, a \ac{GE} step is performed on the matrix $\mathbf{C}'$ in order to recover the $k-m'$ symbols involved in the lower part of the system of equations, with complexity $\mathcal O((k-m')^3)$. Finally, back-substitution is applied in order to recover the $m'$ symbols involved in the upper part of the system of equations, with complexity $\mathcal O(m'(k-m'))$.
\begin{figure}[hb]
\begin{center}
\includegraphics[width=0.6\columnwidth,draft=false]{./Figure2-eps-converted-to.pdf}
\centering \caption{Matrix of the system of equations in \eqref{eq:G_partition} after the multiplication with $\mathbf{M}$.}\label{fig:subfig1}
\end{center}
\end{figure}
\begin{figure}[ht]
\begin{center}
\includegraphics[width=0.6\columnwidth,draft=false]{./Figure3-eps-converted-to.pdf}
\centering \caption{Matrix of the system of equations in \eqref{eq:G_partition} with $\mathbf{B}=\mathbf{0}$.}\label{fig:subfig2}
\end{center}
\end{figure}
{Since $m'$ is a fraction of $k$, the complexity of the proposed algorithm is $\mathcal O(k^3)$ (i.e., $\mathcal O(k^2)$ cost). However, the constant hidden by the $\mathcal O$-notation becomes smaller as $m'$ approaches $k$ (in the limit case where $m'=k$, the decoding complexity is actually quadratic in $k$).}
\section{Performance Analysis} \label{sec:bounds} Based on the bounds
\eqref{eq:tightbounds}, tight upper and lower bounds for
the decoding failure probability of the fountain coding scheme can
be derived in case of a memory-less erasure channel. The decoding failure
probability is defined as $P_f=\Pr\{F\}$, where $F$ denotes the decoding
failure event, i.e. the event that the source block
$\mathbf{u}$ cannot be recovered out of the set of received symbols.
We focus on the case where the linear block code
used in concatenation with the \ac{LRFC} is maximum distance
separable (MDS). When binary codes are used, we consider
$(k+1,k)$ \ac{SPC} codes. When operating
on higher-order finite fields, we consider \ac{GRS}
codes.
Suppose now that an encoded sequence $\mathbf{c}$ composed of $l \ge n$ symbols
is transmitted over an erasure channel with erasure
probability of $\epsilon$.\footnote{The case $l < n$ is not considered since it
is equivalent to shortening the linear block code.} The probability that at least $k$ symbols
out of the $n$ symbols produced by the linear block code encoder are
received is given by
\begin{equation}
Q(\epsilon)=\sum_{i=k}^n {n \choose i} (1-\epsilon)^i
\epsilon^{n-i}.
\nonumber
\end{equation}
Hence, with a probability $P(\epsilon)=1-Q(\epsilon)$ the
receiver would need to collect symbols encoded by the \ac{LRFC}
encoder to recover the source block. Assuming that the receiver collects
$m=k+\delta$ symbols, out of which only $m'<k$ have been produced by
the linear block encoder, the conditional decoding failure
probability can be expressed as
\begin{equation}
\Pr\{F|m',m'<k,\delta\}=\Pr\{\textrm{rank}(\mathbf{B})<k-m'\}.\label{eq:cond_F_prob_1}
\end{equation}
{Note that $\mathbf{B}$ is an $m'' \times (k-m') = (k+\delta-m')
\times (k-m')$ random matrix having $\delta$
rows in excess w.r.t. the number of columns. We can thus
apply \eqref{eq:tightbounds} to \eqref{eq:cond_F_prob_1}, obtaining
the bounds
\begin{equation}
q^{-\delta-1}\leq\Pr\{F|m',m'<k,\delta\}<\frac{1}{q-1}q^{-\delta}.\label{eq:cond_F_prob_2_bounds}
\end{equation}
{Observing that the bounds in \eqref{eq:tightbounds} are independent of the size of the matrix (i.e., they
depend only on the overhead), the conditioning
on $m'$ can be removed from \eqref{eq:cond_F_prob_2_bounds}, leaving}
\[
q^{-\delta-1}\leq\textrm{Pr}\{F|m'<k,\delta\}<\frac{1}{q-1}q^{-\delta}.
\]
The failure probability can be written as a function of $\delta$ and $\epsilon$ as
\begin{equation}
\begin{array}{cc}
P_f(\delta,\epsilon)=& \Pr\{F|m'<k,\delta\}\Pr\{m'<k\}\\
&+\Pr\{F|m'\geq k,\delta\}\Pr\{m'\geq k\}
\label{eq:general_bound}
\end{array}
\end{equation}
where $\Pr\{F|m'\geq k,\delta\}=0$ (since at least $k$ symbols output by the \ac{MDS} code encoder have been collected)} and
$\Pr\{m'<k\}=P(\epsilon)$. It follows that
\begin{equation}
P(\epsilon) q^{-\delta-1}\leq
P_f(\delta,\epsilon)<P(\epsilon)\frac{1}{q-1}q^{-\delta}.\label{eq:final_bounds}
\end{equation}
From an inspection of \eqref{eq:tightbounds} and
{\eqref{eq:final_bounds}}, one can note how the bounds on the failure
probability of the concatenated scheme are scaled down by a factor
$P(\epsilon)$, which is a monotonically increasing
function of $\epsilon$. It follows that, when the channel conditions
are \emph{bad} (i.e., large $\epsilon$) $P(\epsilon)\rightarrow
1$, and the bounds in {\eqref{eq:final_bounds}} tend to coincide with
the bounds in \eqref{eq:tightbounds}. When the channel conditions
are \emph{good} (i.e., small $\epsilon$), most of the time $m'\geq
k$ symbols produced by the linear block encoder are received,
leading to a decoding success (recall the assumption of \ac{MDS}
code). In these conditions, $P(\epsilon)\ll 1$, and according to
the bounds in {\eqref{eq:final_bounds}} the failure probability may
decrease by several orders of magnitude. {Since the probability of
decoding failure of the concatenated scheme is a function of the erasure probability, the scheme is
not universal anymore. More specifically, at low channel erasure probabilities the proposed scheme will outperform universal (random) \acp{LRFC}, whereas for large erasure probabilities it will perform as a universal \ac{LRFC}.}
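For a quick numerical illustration of these bounds (our own sketch, using the $(15,10)$ \ac{RS} code over $\mathbb F_{16}$ considered below):
\begin{verbatim}
from math import comb

def P_eps(n, k, eps):
    # P(eps) = 1 - Q(eps): probability that fewer than k of the n
    # MDS-encoded symbols survive the erasure channel.
    Q = sum(comb(n, i) * (1 - eps)**i * eps**(n - i)
            for i in range(k, n + 1))
    return 1 - Q

n, k, q = 15, 10, 16
for eps in (0.5, 0.1, 0.05):
    P = P_eps(n, k, eps)
    for delta in (0, 2, 4):
        lower = P * q**(-delta - 1)
        upper = P * q**(-delta) / (q - 1)
        print(f"eps={eps:.2f} delta={delta}: "
              f"{lower:.2e} <= P_f < {upper:.2e}")
\end{verbatim}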
Fig. \ref{GF_2} shows the probability of decoding failure as a
function of the number of overhead symbols for a concatenated code
built using an $(11,10)$ \ac{SPC} code over $\mathbb {F}_2$. It can be
observed how, for lower erasure probabilities, the gain in performance of the concatenated code with respect to a \ac{LRFC}
increases. For
$\epsilon=0.01$ the decoding failure probability is more than $2$
orders of magnitude lower than that of a \ac{LRFC}. Fig. \ref{GF_16} shows the probability of
decoding failure vs. the number of overhead symbols for the
concatenation of a $(15,10)$ \ac{RS} and a \ac{LRFC} over $\mathbb
{F}_{16}$. The performance of the concatenated code is compared with
that of the \ac{LRFC} built on the same field for different erasure
probabilities. In this case the decrease in terms of probability of
decoding failure is even more evident than in the binary case. For a channel with an erasure probability
$\epsilon=0.05$, the probability of decoding failure of the
concatenated scheme is $4$ orders of magnitude lower than {that of} the
\ac{LRFC}.
{The analysis provided in this section is also
valid if the \ac{LRFC} is replaced by a Raptor
code.\footnote{{As observed in \cite{Liva10:fountain}, short Raptor
codes over $\mathbb {F}_{q}$ show performance close to those of
\acp{LRFC} constructed over the same field, down to moderate-low
error rates. We therefore expect that the results attained by the
proposed concatenation could be closely approached by replacing the
non-binary \ac{LRFC} with a non-binary Raptor code.}} In order to
calculate the performance of such a concatenated code, one has to
replace the term $\Pr\{F|m'<k,\delta\}$ in \eqref{eq:general_bound}
by the probability of decoding failure
of the Raptor code. Also in this case, the failure probability of the
concatenated scheme is reduced by a factor $P(\epsilon)$ with respect to that of the Raptor code.}
\begin{figure}[h]
\begin{center}
\includegraphics[width=0.95\columnwidth,draft=false]{./Figure4-eps-converted-to.pdf}
\centering \caption{$P_f(\delta,\epsilon)$ vs. overhead for a concatenated code built using an $(11,10)$ \ac{SPC} code over $\mathbb {F}_{2}$ for different values of $\epsilon$.
Upper bounds are represented by solid lines and lower bounds are represented by dashed lines.} \label{GF_2}
\end{center}
\end{figure}
\begin{figure}[h]
\begin{center}
\includegraphics[width=0.95\columnwidth,draft=false]{./Figure5-eps-converted-to.pdf}
\centering \caption{$P_f(\delta,\epsilon)$ vs. overhead for a concatenated code built using a $(15,10)$ \ac{RS} over $\mathbb {F}_{16}$ for different values of $\epsilon$.
Upper bounds are represented by solid lines and lower bounds are represented by dashed lines.} \label{GF_16}
\end{center}
\end{figure}
\begin{figure}[ht]
\begin{center}
\includegraphics[width=0.95\columnwidth,draft=false]{./Figure6-eps-converted-to.pdf}
\centering \caption{$P_f(\delta,\epsilon)$ vs. overhead for
the concatenation of a $(15,10)$ \ac{RS} and a \ac{LRFC} over $\mathbb {F}_{16}$ and $\epsilon=0.1$. Upper and lower bounds are represented by solid and dashed lines, respectively. The markers '$\circ$' denote simulations.} \label{GF_16_sim}
\end{center}
\end{figure}
\begin{figure}[ht]
\begin{center}
\includegraphics[width=0.95\columnwidth,draft=false]{./Figure7-eps-converted-to.pdf}
\centering \caption{$P_f(\delta,\epsilon)$ vs. overhead for
the concatenation of an $(11,10)$ \ac{SPC} code and a \ac{LRFC} over $\mathbb {F}_{2}$ and $\epsilon=0.1$. Upper bounds are represented by solid lines and lower bounds are represented by dashed lines. The points marked with '$\circ$' denote actual simulations.} \label{GF_2_sim}
\end{center}
\end{figure}
\section{Numerical Results}\label{sec:results}
{Fig. \ref{GF_16_sim} shows the probability of decoding failure $P_f$, as a function of the overhead $\delta$, obtained via Monte Carlo simulations. } {The results refer to a concatenation of a
$(15,10)$ \ac{RS} code with a
\ac{LRFC} over $\mathbb {F}_{16}$, for a channel erasure
probability $\epsilon=0.1$. The results are compared with}
the bounds of {\eqref{eq:final_bounds}}. {As expected, the
simulation results tightly match the bounds. Fig. \ref{GF_2_sim}
shows the simulation results for a concatenated code using an
$(11,10)$ parity-check code over $\mathbb {F}_{2}$, and a channel with
an erasure probability $\epsilon=0.1$. Also in this case, the results are remarkably close to the bounds.}
The performance of the concatenated scheme in a system
with a large population of receivers has also been evaluated. The number of receivers is denoted by $N$.
We considered the erasure channels from the transmitter to the different
receivers to be independent, albeit with an identical erasure probability
$\epsilon$. Furthermore, we assumed that the receivers send an
acknowledgement to the transmitter whenever they successfully
decode the source block. Ideal (error- and delay-free) feedback channels have been
considered. After retrieving all the acknowledgments, the
transmitter stops encoding additional symbols from the source block.
{We denote next by $\Delta$ the number of symbols transmitted by the sender, in excess with respect to $k$. We refer to $\Delta$ as the transmission overhead. When $k+\Delta$
symbols have been transmitted, the probability that a specific
receiver gathers exactly $m$ symbols is}
\begin{equation}
\ S\left(\Delta,m\right) = \binom{k+\Delta}{m}(1-\epsilon)^{m}\epsilon^{k+\Delta-m}.
\label{system_prob m}
\end{equation}
The probability of decoding failure at the receiver given that the
transmitter has sent $k+\Delta$ symbols is hence
\begin{align*}
P_{e} = & \sum_{m=0}^{k-1} S\left(\Delta,m\right) \\
& + \sum_{m=k}^{k+\Delta} S\left(\Delta,m\right) P_{f}(\delta=m-k,\epsilon).
\end{align*}
The probability that at least one receiver is not able to decode the source block
is thus
\begin{equation}
P_E(N,\Delta,\epsilon) = 1-(1-P_{e})^{N}.
\label{system_failure_one user}
\end{equation}
{Observe that $P_E(N,\Delta,\epsilon)$ can be easily bounded by means of} {\eqref{eq:final_bounds}}. {Following this approach, we compare the performance of the proposed concatenation to that of \acp{LRFC} and to that of idealized fountain codes. We assume a system
with $N=10^{4}$ receivers and a channel with an erasure probability
$\epsilon=0.01$. The performance of \ac{LRFC} codes over $\mathbb
{F}_{2}$ and $\mathbb {F}_{16}$ is depicted in Fig. \ref{sim_sender_side} together with that of two
concatenated schemes: a concatenation of an $(11,10)$ \ac{SPC} code
with a \ac{LRFC} code over $\mathbb {F}_{2}$,} and a concatenation of a
$(15,10)$ \ac{RS} code and a \ac{LRFC} code over $\mathbb {F}_{16}$.
{It can be seen how the concatenated scheme in $\mathbb {F}_{2}$
outperforms the binary \ac{LRFC}. To achieve $P_E = 10^{-4}$ the concatenated scheme
needs only $\Delta=20$ overhead symbols whereas the
\ac{LRFC} requires a transmission overhead $\Delta=27$. In the case of
a field order $16$, the concatenated
code shows a performance very close to that of an idealized
fountain code.}
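These curves are straightforward to reproduce numerically; the sketch below (our own) evaluates $P_E$ with $P_f$ replaced by its upper bound from \eqref{eq:final_bounds}:
\begin{verbatim}
from math import comb

def P_eps(n, k, eps):
    return 1 - sum(comb(n, i) * (1 - eps)**i * eps**(n - i)
                   for i in range(k, n + 1))

def P_E(N, Delta, eps, n, k, q):
    # P_E(N, Delta, eps) with P_f bounded from above as in the text.
    S = lambda m: comb(k + Delta, m) * (1 - eps)**m * eps**(k + Delta - m)
    Pe = sum(S(m) for m in range(k))              # m < k: certain failure
    Pe += sum(S(m) * min(1.0, P_eps(n, k, eps) * q**(-(m - k)) / (q - 1))
              for m in range(k, k + Delta + 1))
    return 1 - (1 - Pe)**N

for Delta in (5, 10, 15, 20):
    print(Delta, P_E(N=10**4, Delta=Delta, eps=0.01, n=15, k=10, q=16))
\end{verbatim}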
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.95\columnwidth,draft=false]{./Figure8-eps-converted-to.pdf}
\centering \caption{ $P_E$ vs. overhead at the transmitter in a system with $N=10000$ receivers and $\epsilon=0.01$. Results are shown for different fountain codes:
\ac{LRFC} in $\mathbb {F}_{2}$, \ac{LRFC} in $\mathbb {F}_{16}$, concatenation of an $(11,10)$ \ac{SPC} code with a \ac{LRFC} code in $\mathbb {F}_{2}$,
and a concatenation of a {$(15,10)$} \ac{RS} code and a \ac{LRFC} code over $\mathbb {F}_{16}$.}\label{sim_sender_side}
\end{center}
\end{figure}
\section{Conclusions}\label{sec:conc}
A novel coding scheme has been introduced. The scheme
consists of a parallel concatenation of a \ac{MDS} block code with a
\ac{LRFC} code, both constructed over the same field. The performance of the concatenated coding scheme
has been analyzed through the derivation of tight bounds on the
probability of decoding failure as a function of the receiver overhead. It
has been shown that under \ac{ML} decoding the concatenated scheme performs as well as
\ac{LRFC} codes in channels characterized by high erasure
probabilities, whereas {it provides} failure probabilities {lower than those of \ac{LRFC} codes} by
several orders of magnitude at moderate/low erasure probabilities. An efficient decoding algorithm has been introduced for the case in which the generator
matrix of the \ac{MDS} block code is in Vandermonde form. Finally, the complexity of the proposed decoding algorithm has been analyzed, {showing remarkable complexity savings at moderate/low erasure probability regimes.}
\label{introduction}
Pulsars are among the most impressive celestial objects observable in the
sky. They are believed to be rotating neutron stars which emit radio signals.
Their importance follows from the fact that they are physical laboratories which
provide extreme conditions, such as strong magnetic fields, which cannot be
reproduced on Earth.
scalar function which satisfies the elliptic second-order partial
differential equation \cite{pulsar1,pulsar2}
\begin{equation}
\left( 1-x^{2}\right) \left( \Psi_{,xx}+\Psi_{,zz}\right) -\frac{1+x^{2}}{x}\Psi_{,x}+F\left( \Psi\right) =0, \label{eq.01}
\end{equation}
where $x$ is the radius coordinate; the singularity at $x=0$ represents the
centre of the star, while the surface of the pulsar is
located at $x=1$. Function $F\left( \Psi\right) $ is related to the profile of the
magnetic field for the polar coordinate \cite{pulsar1}. Equation (\ref{eq.01})
is also known as the relativistic force-free Grad-Shafranov equation
\cite{ppl1}.
In order to arrive at such a simple scalar equation, (\ref{eq.01}), for the
magnetic field, various Ans\"atze have been assumed for the physical state of
the star. In particular it has been assumed that \cite{pulsar1}: (a) the
system is axisymmetric and time-independent; (b) the electrons and the ions
have a well-defined velocity and density; (c) there are no gravitational or
particle collision effects; (d) inertial forces are neglected; and (e)
it is assumed that the surface of the uniformly rotating star is a perfect conductor.
Because of the nonlinearity and the existence of the two singular points, the
Pulsar equation, (\ref{eq.01}), cannot be integrated in general and only a few
solutions are known in the literature. Originally, an asymptotic analytical
solution which describes the magnetic field near to the surface of the star was
presented by Michel in \cite{pulsar2}. This was also the main inspiration for
the recent works of Uzdensky \cite{pl3} and Gruzinov \cite{pl4}. In
\cite{pl3} an interesting discussion of the physical state of the boundary
conditions is given. However, numerical solutions which describe the global
evolution of the Pulsar equation have been presented in the literature. One of
the first numerical force-free solutions was derived by Contopoulos et al.
\cite{pl5}, while other numerical solutions can be found in
\cite{pl6,pl7,pl8,pl9} and references therein.
In this work we are interested in applying the powerful method of Lie
symmetries~\cite{Stephani,Bluman} in order to study the existence of invariant
solutions for the Pulsar equation near to the singularity, $x=1$, and to find
analytical asymptotic solutions, the so-called similarity solutions. In
particular, we classify the source of the magnetic field, i.e. function
$F\left( \Psi\right) $, such that the Pulsar equation, near to the
singularity, $x=1$, is invariant under the action of one-parameter point
transformations. This kind of classification was first introduced by Ovsiannikov
\cite{ovsiannikov} and has been applied to various physical systems for the
determination of new analytical solutions; see
\cite{ref1,ref2,ref3,ref4,ref5,ref6,ref7,ref8,ref9,ref10,qm1,qm2,qm3,qm4} and
references therein for various applications of the Lie symmetry
classification in Physics.
The novelty of Lie symmetries is that symmetries can be used to define
invariant surfaces and to reduce the number of independent variables -- for
partial differential equations -- or to reduce the order of the differential
equation -- for ordinary differential equations. Hence new integrable systems
can be constructed and new analytical solutions can be determined. The plan of
the paper follows.
In Section \ref{sec2a} the basic properties and definitions for the Lie
(point) symmetries of differential equations are presented. In the same
Section we perform the classification of the Lie symmetries of the pulsar
equation near to the surface of the star and we find that there are six
different admitted groups of point transformations which leave the pulsar
equation invariant, corresponding to six different functional forms of the source, $F\left(
\Psi\right) $. The method of singularity analysis is also discussed; it is used
in subsequent Sections to prove the integrability of some of the reduced
differential equations. The application of the Lie symmetries and the
determination of the similarity inner solutions is performed in Section
\ref{sec3a}. New asymptotic analytic solutions near to the surface of the star
are presented. Finally in Section \ref{sec4a} we discuss our results and we
draw our conclusions.
\section{Lie symmetry analysis}
\label{sec2a}
For the convenience of the reader we present the
basic properties and definitions of Lie point symmetries of differential
equations; more specifically, we discuss the case of second-order
differential equations of the form $\mathbf{A}\equiv A\left( x^{k}
,\Psi,\Psi_{,i},\Psi_{,ij}\right) =0$, where $x^{k}$ are the independent
variables and $\Psi$ is the dependent variable, with first derivatives
$\frac{\partial\Psi}{\partial x^{i}}=\Psi_{,i}$.
Let
\begin{equation}
X=\xi^{i}\left( x^{k},\Psi\right) \partial_{i}+\eta\left( x^{k},\Psi\right)
\partial_{\Psi},\label{go.10}
\end{equation}
be the generator of the local infinitesimal one-parameter point transformation
\begin{align}
\bar{x}^{k} & =x^{k}+\varepsilon\xi^{k}\left( x^{k},\Psi\right) ,\\
\bar{\Psi} & =\Psi+\varepsilon\eta\left( x^{k},\Psi\right) .
\end{align}
Then $X$ is called a Lie symmetry of the differential equation $\mathbf{A}$
iff
\begin{equation}
X^{\left[ 2\right] }\mathbf{A}=\lambda \mathbf{A},\label{go.11}
\end{equation}
in which $X^{\left[ 2\right] }$ is the second prolongation/extension of $X$
in the jet-space, defined as
\begin{equation}
X^{\left[ 2\right] }=X+\left( D_{i}\eta-\Psi_{,k}D_{i}\xi^{k}\right)
\partial_{\Psi_{,i}}+\left( D_{j}\eta^{\left[ i\right] }-\Psi_{,ik}
D_{j}\xi^{k}\right) \partial_{\Psi_{,ij}},\label{go.13}
\end{equation}
where $\eta^{\left[ i\right] }=D_{i}\eta-\Psi_{,k}D_{i}\xi^{k}$ and $D_{i}$
denotes the total derivative operator.
The novelty of Lie symmetries is that they can be used to determine similarity
transformations, i.e. transformations under which the number of
independent variables is reduced \cite{Bluman}. The similarity transformation
is calculated with the use of the associated Lagrange system,
\begin{equation}
\frac{dx^{i}}{\xi^{i}}=\frac{d\Psi}{\eta}=\frac{d\Psi_{,i}}{\eta^{\left[
i\right] }}=...=\frac{d\Psi_{,ij...i_{n}}}{\eta^{\left[ ij...i_{n}\right] }}.
\end{equation}
Solutions of partial differential equations which are derived with the
application of Lie invariants are called similarity solutions. In this
specific work we use the Lie symmetries to reduce the Pulsar equation to a second-order
ordinary differential equation. For this equation we seek analytic
solutions by using the symmetry approach and, if we fail, we apply the
singularity analysis.
\subsection{Singularity analysis}
Singularity analysis is another powerful mathematical method which is
applied to study the integrability of differential equations and to present the
solutions of differential equations in algebraic form, in particular by using
Laurent expansions around a movable singularity.
Singularity analysis is also known as the Painlev\'{e} Test
\cite{Painleve1,Painleve2,Painleve3,Painleve4} and has been applied in various
problems for the study of integrability of given differential equations.
Ablowitz, Ramani and Segur \cite{Abl1,Abl2,Abl3} systematized the Painlev\'{e}
Test in a simple algorithm, also known as the ARS algorithm. The main feature of
the ARS\ algorithm is its simplicity. It consists of three main algebraic
steps: (a) determination of the leading-order behaviour; (b) determination of
the resonances; and (c) consistency of the Laurent expansion. For every step of the
algorithm there are various criteria which should be applied; these criteria
are summarized in the review of Ramani et al. \cite{buntis}.
If a given differential equation passes the three steps of the ARS algorithm,
then we conclude that the given differential equation is algebraically integrable.
However, should the differential equation fail the ARS
algorithm, we cannot make a conclusion about the integrability of the differential
equation. While the ARS algorithm is straightforward in its application, one
of the main disadvantages is that it depends upon the coordinates in which the
given equation is defined; for a recent discussion we refer the reader to
\cite{anleach}.
\subsection{Pulsar equation near to the singularity}
We define the new coordinate, $y=x-1$, in order to move the surface of the
star to $y=0$. It follows that $y>0$ when $x>1$, and $y<0$ when $x<1$. In the new
coordinates the Pulsar equation (\ref{eq.01}) becomes
\begin{equation}
y\left( 2+y\right) \left( \Psi_{,yy}+\Psi_{,zz}\right) +\frac{\left(
2+2y+y^{2}\right) }{1+y}\Psi_{,y}-F\left( \Psi\right) =0. \label{eq.02}
\end{equation}
Near to the surface, with $y\simeq0$, (\ref{eq.02}) is
approximated by the simpler form \cite{pulsar2}
\begin{equation}
2y\left( \Psi_{,yy}+\Psi_{,zz}\right) +2\Psi_{,y}-F\left( \Psi\right) =0.
\label{eq.02a}
\end{equation}
This is the equation which Michel \cite{pulsar2} used to find the first analytical
expression for the force-free magnetosphere and inspired the later works of
\cite{pl3,pl4}. Equation (\ref{eq.02a}) is the one that we use to perform the
symmetry classification.
Moreover, we follow \cite{pl3,pl4} and we work in the polar-like coordinates
\begin{equation}
y=r\sin\theta~,~z=r\cos\theta, \label{p.03}
\end{equation}
where equation (\ref{eq.02a}) takes the form
\begin{equation}
2r\sin\theta\left( \Psi_{,rr}+\frac{1}{r^{2}}\Psi_{,\theta\theta}\right)
+4\sin\theta\,\Psi_{,r}+\frac{2\cos\theta}{r}\Psi_{,\theta}-F\left(
\Psi\right) =0. \label{pl.04}
\end{equation}
Hence the surface corresponds to $r=0$ or $\theta=0$. We continue with the
classification of the sources, $F\left( \Psi\right) $, such that equation
(\ref{pl.04}) is invariant under one-parameter point transformations, i.e.
Lie symmetries exist; in the following Section we discuss the
application of the Lie symmetries by performing reductions of the equation with
the use of the Lie invariants.
\subsection{Symmetry classification}
For the second-order differential equation (\ref{pl.04}) the symmetry
condition (\ref{go.11}) provides that, for an arbitrary function $F_{A}\left(
\Psi\right) =F\left( \Psi\right) $, the differential equation admits the
unique symmetry vector
\[
Y=\cos\theta\partial_{r}-\frac{\sin\theta}{r}\partial_{\theta}.
\]
That vector field corresponds to the translation symmetry, $\partial_{z}$, in
the original coordinates, for equation (\ref{eq.02}) which is also a symmetry
of equation (\ref{eq.01}). Reduction with the use of the symmetry vector
$Y$ leads to solutions which are independent of the $z-$direction and are
not of special interest.
However, for specific functions, $F\left( \Psi\right) $, the differential
equation (\ref{pl.04}) can be invariant under a higher dimensional
Lie algebra. In particular we find five different cases:
\begin{itemize}
\item When the source is constant, i.e. $F_{B}\left( \Psi\right) =F_{0}$,
the differential equation (\ref{pl.04}) admits four plus infinity symmetries;
these are
\[
X_{1}=\partial_{r}~,~X_{2}=Y~,~X_{3}=r^{2}\cos\theta\partial_{r}+r\sin
\theta\partial_{\theta}-r\cos\theta~\Psi\partial_{\Psi},
\]
\begin{equation}
X_{4}=\Psi\partial_{\Psi}~,~X_{\infty}=b\left( r,\theta\right)
\partial_{\Psi},
\end{equation}
where $b\left( r,\theta\right) $ is a solution of equation (\ref{pl.04}).
The last two symmetries, i.e. $X_{4}$ and $X_{\infty},$ reflect the linearity
of equation (\ref{pl.04}). The Lie Brackets of the admitted algebra are given
in Table \ref{tac1}.
\begin{table}[tbp] \centering
\caption{Lie Brackets of the admitted Lie symmetries for the source-free pulsar
equation (\ref{pl.04})}
\begin{tabular}
[c]{c|cccc}
$\left[ \cdot,\cdot\right] $ & $X_{1}$ & $X_{2}$ & $X_{3}$ & $X_{4}$\\\hline
$X_{1}$ & $0$ & $0$ & $0$ & $0$\\
$X_{2}$ & $0$ & $0$ & $-X_{3}$ & $X_{4}$\\
$X_{3}$ & $0$ & $X_{3}$ & $0$ & $2X_{2}-X_{1}$\\
$X_{4}$ & $0$ & $-X_{4}$ & $-2X_{2}+X_{1}$ & $0$
\end{tabular}
\label{tac1}
\end{table}
\begin{itemize}
\item For a linear source, $F_{C}\left( \Psi\right) =F_{1}\Psi,~$the
differential equation admits two plus infinity symmetries; these
are~$X_{2}~,~X_{4}~$and$~X_{\infty}.$

\item Moreover, for the power-law source, $F_{D}\left( \Psi\right)
=F_{1}\Psi^{\frac{1}{N}+1},~N\neq0,\frac{1}{2},-1,~$the Pulsar equation near to
the surface admits two Lie point symmetries; these are
\begin{equation}
X_{2}~,~X_{\left( N\right) }=r\partial_{r}-N\Psi\partial_{\Psi}
\end{equation}
with Lie Bracket $\left[ X_{2},X_{\left( N\right) }\right] =X_{2}$.
\item In the special case of the latter with $N=\frac{1}{2}$, i.e.
$F_{E}\left( \Psi\right) =F_{1}\Psi^{3},~$the Pulsar equation (\ref{pl.04})
is invariant under a three-dimensional Lie algebra spanned by the vector
fields $X_{2},~X_{\left( 1/2\right) },~X_{3},~$with Lie Brackets as
presented in Table \ref{tac2}.
\end{itemize}
\begin{table}[tbp] \centering
\caption{Lie Brackets of the admitted Lie symmetries for the pulsar equation
with the cubic-law source}
\begin{tabular}
[c]{c|ccc}
$\left[ \cdot,\cdot\right] $ & $X_{2}$ & $X_{\left( 1/2\right) }$ &
$X_{3}$\\\hline
$X_{2}$ & $0$ & $-X_{(1/2)}$ & $X_{3}$\\
$X_{\left( 1/2\right) }$ & $X_{(1/2)}$ & $0$ & $2X_{2}$\\
$X_{3}$ & $-X_{3}$ & $-2X_{2}$ & $0$
\end{tabular}
\label{tac2}
\end{table}
\begin{itemize}
\item Finally, for the exponential source, $F_{F}\left( \Psi\right)
=F_{1}e^{-\frac{1}{C}\Psi}$,~$C\neq0,$ the Pulsar equation admits two Lie
point~symmetries,
\[
X_{2}~,~\bar{X}_{\left( C\right) }=r\partial_{r}+C\partial_{\Psi}\text{.
\]
with Lie Bracket $\left[ X_{2},\bar{X}_{\left( C\right) }\right] =X_{2}$.
We mention that the exponential-like source was introduced in \cite{ppl1} as a jet model.
\end{itemize}
We continue with the application of the Lie symmetry vectors to determine
analytical solutions of the Pulsar equation (\ref{pl.04}). The solutions that
we determine are valid as first approximations of the general solution
near to the surface of the star. In particular, near to the surface of the
star, $y\simeq0$, the differential equation can be seen as a singularly
perturbed equation, and the theory of singular perturbation for differential
equations \cite{sper1,sper2} can be applied in order to justify the
approximation of the analytical solution. The solutions near to the point
$y\simeq0$ are called inner solutions \cite{sper1}.
\section{Similarity solutions}
\label{sec3a}
As we discussed in the previous Section, for every Lie symmetry we can define
a surface where the solution is independent of one of the variables, that is,
define similarity variables.
For an arbitrary source, $F\left( \Psi\right) $, from the vector field
$\partial_{z}$ the invariant solution is the one where $\Psi\left(
y,z\right) =\Psi\left( y\right) $, and the resulting equation
is the ordinary differential equation
\begin{equation}
2y\Psi_{,yy}+2\Psi_{,y}-F\left( \Psi\right) =0. \label{sc.01}
\end{equation}
That is not a solution of special interest. Hence we proceed with our analysis by
using the remaining symmetry vectors.
\subsection{Invariant solutions for constant source}
The case of a constant source also covers the source-free problem, when
$F_{0}=0$. Indeed, in equation (\ref{eq.02a}) for $F\left( \Psi\right)
=F_{0}$ we can replace $\Psi\rightarrow\Psi+\frac{F_{0}}{2}y$; then the
source-free case follows. From Table \ref{tac1} it follows that there are
four possible reductions which we can perform. They are: (a) reduction with the
symmetry vector $X_{1}+\mu X_{4}$;~(b) reduction with $X_{2}+\mu X_{4}$;~(c)
reduction with $X_{3}+\mu X_{4}$; and (d) reduction with $X_{3}+\mu X_{2}$.
For each of these reductions the reduced equation is a linear second-order
differential equation which can be integrated easily.
\subsubsection{Reduction with $X_{1}+\mu X_{4}$}
The first possible reduction of the source-free Pulsar equation (\ref{pl.04})
provides the solution
\begin{equation}
\Psi_{1}\left( r,\theta\right) =r^{\mu}\Sigma\left( \theta\right) ,
\label{sc.02}
\end{equation}
where $\Sigma\left( \theta\right) $ satisfies the ordinary differential
equation
\begin{equation}
\Sigma_{,\theta\theta}+\cot\theta~\Sigma_{,\theta}+\mu\left( \mu+1\right)
\Sigma=0, \label{sc.03}
\end{equation}
the closed-form solution of which is given in terms of the Legendre functions as
\begin{equation}
\Sigma\left( \theta\right) =\sigma_{1}P_{\mu}\left( \cos\theta\right)
+\sigma_{2}Q_{\mu}\left( \cos\theta\right), \label{sc.04}
\end{equation}
in which $P_{\mu}$, $Q_{\mu}$
denote the Legendre functions of the first and second kind, respectively.
For special values of the parameter $\mu$ the solution (\ref{sc.04}) can be
simplified as follows
\begin{equation}
\Sigma\left( \theta\right) =\sigma_{1}+\sigma_{2}\ln\left( \frac
{1-\cos\theta}{\sin\theta}\right) ,~\mu=0, \label{sc.05}
\end{equation}
\begin{equation}
\Sigma\left( \theta\right) =\sigma_{1}\cos\theta+\sigma_{2}\left(
1+\frac{\cos\theta}{2}\ln\left( \frac{1-\cos\theta}{1+\cos\theta}\right)
\right) ,~\mu=1. \label{soll1}
\end{equation}
It is important to mention that in general the parameter $\mu$ can be any complex
number; when it is imaginary, solution (\ref{sc.02}) becomes periodic in
$\ln r$, as follows: $\Psi\left( r,\theta\right) =\exp\left( i\left\vert
\mu\right\vert \ln r\right) \Sigma\left( \theta\right) $.
Solution (\ref{sc.02}) is well-known in the literature and was derived by
Michel in \cite{pulsar2}. In particular for $\mu=\frac{1}{2}$ solution
(\ref{sc.02}) provides a magnetic field which diverges as the inverse square
root of $r$ such that the total energy of the magnetic field remains finite
at the surface of the star, i.e. when $y=0$. That is a physical condition
which imposes a boundary condition and restricts the free parameters of the solution.
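Such closed forms are easy to verify symbolically. A minimal check (our own, using sympy) that the $\mu=1$ expression in (\ref{soll1}) satisfies (\ref{sc.03}) is:
\begin{verbatim}
import sympy as sp

theta, s1, s2 = sp.symbols('theta sigma1 sigma2')
mu = 1
Sigma = s1*sp.cos(theta) + s2*(1 + sp.cos(theta)/2
        * sp.log((1 - sp.cos(theta))/(1 + sp.cos(theta))))
residual = (sp.diff(Sigma, theta, 2) + sp.cot(theta)*sp.diff(Sigma, theta)
            + mu*(mu + 1)*Sigma)
print(sp.simplify(residual))   # expected output: 0
\end{verbatim}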
The analytical solutions which are presented in the following Sections are
new in the literature but, as we shall see, they do not explicitly provide any
law of the form $\Psi\simeq r^{\frac{1}{2}}$.
\subsubsection{Reduction with $X_{2}+\mu X_{4}$}
Consider now reduction with the Lie symmetry vector, $X_{2}+\mu X_{4}$. The
invariant solution is calculated in Cartesian coordinates to be
\begin{equation}
\Psi_{2}\left( y,z\right) =\Sigma\left( y\right) e^{\mu z}, \label{soll2}
\end{equation}
where the function $\Sigma\left( y\right) $ is
\begin{equation}
\Sigma\left( y\right) =\sigma_{1}J_{0}\left( \mu y\right) +\sigma_{2}
Y_{0}\left( \mu y\right) , \label{soll3}
\end{equation}
in which $J_{m}\left( y\right) ,~Y_{m}\left( y\right) $ denote the Bessel
functions of the first and second kind, respectively.
\subsubsection{Reduction with $X_{3}+\mu X_{4}$}
Reduction with the Lie symmetry vector, $X_{3}+\mu X_{4}$, provides
the invariant solution
\begin{equation}
\Psi_{3}\left( r,\theta\right) =\frac{1}{r}e^{\mu r\cos\theta}\Sigma\left(
\frac{\sin\theta}{r}\right) , \label{soll4}
\end{equation}
where again the function $\Sigma\left( \frac{\sin\theta}{r}\right) $ is
expressed in terms of the Bessel functions $J_{m}~$and$~Y_{m}$ as
\begin{equation}
\Sigma\left( \frac{\sin\theta}{r}\right) =\sigma_{1}J_{0}\left( \mu
\frac{\sin\theta}{r}\right) +\sigma_{2}Y_{0}\left( \mu\frac{\sin\theta}
{r}\right) .
\end{equation}
\subsubsection{Reduction with $X_{3}+\mu X_{2}$}
The last possible reduction that we can perform in the source-free scenario is
with the use of the Lie symmetry vector, $X_{3}+\mu X_{2}.$ The invariant
solution is calculated to be
\begin{equation}
\Psi_{4}\left( r,\theta\right) =\frac{\Sigma\left( \sigma\right) }
{\sqrt{\mu r^{2}-1}}, \label{soll5}
\end{equation}
where the new independent variable $\sigma=\sigma\left( r,\theta\right) $ is
defined as $\sigma=\frac{r\sin\theta}{\mu r^{2}-1}$. The function
$\Sigma\left( \sigma\right) $ satisfies the second-order
differential equation
\begin{equation}
2\sigma\left( 4\mu\sigma^{2}+1\right) \Sigma_{,\sigma\sigma}+2\left(
1+12\mu\sigma^{2}\right) \Sigma_{,\sigma}+6\mu\sigma\Sigma=0,
\end{equation}
the solution of which is expressed in terms of the Legendre functions $P\left(
\sigma\right) ,~Q\left( \sigma\right) $, that is,
\begin{equation}
\Sigma\left( \sigma\right) =\sigma_{1}P_{-\frac{1}{4}}\left( 8\mu\sigma
^{2}+1\right) +\sigma_{2}Q_{-\frac{3}{4}}\left( 8\mu\sigma^{2}+1\right) .
\end{equation}
The source-free equation, (\ref{pl.04}), is linear, a property that follows also
from the existence of the symmetry vectors $X_{4}$ and $X_{\infty}$. Hence
the general solution can be written as a sum of the specific invariant
solutions $\Psi_{1},~\Psi_{2},~\Psi_{3}$ and $\Psi_{4}$ calculated above, over
all the possible values of the free parameters $\mu$ for each solution.
However, the general solution is restricted only when initial/boundary
conditions are applied in the problem.
In the following lines, the reduction process is applied for the remainder of the
cases provided by the Lie symmetry classification.
\subsection{Invariant solutions for linear source}
For the linear source, $F_{C}\left( \Psi\right) =F_{1}\Psi$, it is possible
to perform only one reduction, with the symmetry vector $X_{2}+\mu X_{4}$.
The invariant solution is calculated in Cartesian coordinates to be
\begin{equation}
\Psi\left( y,z\right) =e^{\mu z}\left( \Psi_{1}M_{a,0}\left( 2i\mu
y\right) +\Psi_{2}W_{a,0}\left( 2i\mu y\right) \right), \label{soll6}
\end{equation}
where $a=\frac{iF_{1}}{4\mu}$ and $M_{a,b}$, $W_{a,b}$ denote the Whittaker functions.
\subsection{Invariant solutions for power-law source}
For the power-law source, $F_{D}\left( \Psi\right) =F_{1}\Psi^{\frac{1}
{N}+1}$, we perform reduction by using the Lie symmetry vector $X_{\left(
N\right) }$. The reduced equation is calculated to be
\begin{equation}
2\sin\theta~\Sigma_{,\theta\theta}+2\cos\theta~\Sigma_{,\theta}+\left(
2N\left( N-1\right) \sin\theta-F_{1}\Sigma^{\frac{1}{N}}\right) \Sigma=0,
\label{soll9a}
\end{equation}
while the solution of the Pulsar equation, (\ref{pl.04}), is expressed as
\begin{equation}
\Psi\left( r,\theta\right) =r^{-N}\Sigma\left( \theta\right) .
\label{soll7}
\end{equation}
The reduced equation, (\ref{soll9a}), has been derived before in \cite{pl3,pl4};
indeed, the power-law source can describe the magnetic field of the
Pulsar beyond the surface boundary. More specifically, in \cite{pl4} it was
assumed that, when the source-free axisymmetric pulsar magnetosphere closes,
there exists a boundary condition in order for the solution of the power-law
source to continue to describe the magnetic field. With that assumption it
was found that the value of $N$ is approximately $N\simeq-2.4$, such that
$\Psi\simeq r^{2.4}$~\cite{pl4}.
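The reduced equation (\ref{soll9a}) is also easy to integrate numerically. A minimal sketch follows (our own; the value $N=2$, which is also used in the consistency test below, the source strength $F_{1}$ and the initial data, imposed away from the singular point $\theta=0$, are illustrative choices):
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

N, F1 = 2, 1.0      # illustrative choices

def rhs(theta, y):
    # First-order form of eq. (soll9a): y = (Sigma, dSigma/dtheta).
    # Assumes Sigma > 0 on the integration range.
    S, dS = y
    d2S = (F1 * S**(1/N) * S - 2*N*(N - 1)*np.sin(theta)*S
           - 2*np.cos(theta)*dS) / (2*np.sin(theta))
    return [dS, d2S]

theta0 = 0.1        # start away from the singular point theta = 0
sol = solve_ivp(rhs, (theta0, np.pi/2), [1.0, 0.0], rtol=1e-8)
print(sol.y[0, -1])  # Sigma at the equator; Psi = r**(-N) * Sigma(theta)
\end{verbatim}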
It is interesting to comment here that solutions (\ref{sc.02}) and
(\ref{soll7}) were derived before without any knowledge of the symmetries of the
differential equation (\ref{pl.04}). Moreover, those specific invariant
solutions satisfy the boundary conditions imposed by the physics of the
neutron star.
On the other hand, in the coordinates $\left\{ y,z\right\} $, the reduced
solution can be written equivalently as $\Psi\left( y,z\right)
=y^{-N}\Lambda\left( \sigma\right) $, where $\theta=\arcsin\sigma$, and
$\Lambda\left( \sigma\right) $ now satisfies the equation
\begin{equation}
2\sigma\left( 1-\sigma^{2}\right) \Lambda_{,\sigma\sigma}+2\left(
1+2\sigma^{2}\right) \Lambda_{,\sigma}+\left( 2\sigma N\left( N-1\right)
-F_{1}\Lambda^{\frac{1}{N}}\right) \Lambda=0. \label{eq00}
\end{equation}
This nonlinear equation does not admit any Lie symmetry, and for this reason we
apply singularity analysis to study its integrability and to write the
analytical solution.
Equation (\ref{eq00}) is a nonautonomous equation. With the new
change of variables, $\sigma=Y\left( s\right) $, $\Lambda\left(
\sigma\right) =Y_{,s}\left( s\right) ,~$we increase the order of the
differential equation, but the new equation is autonomous. We apply the
steps of the ARS algorithm.
We determine the leading-order behaviour to be $Y_{A}\left( s\right)
=Y_{0}s^{\frac{1}{1+N}}$, for $N\neq-1,$ where $Y_{0}$ is an arbitrary
constant. Hence one expects one of the resonances to be zero.

As far as the resonances are concerned, they are calculated to be
\begin{equation}
q_{1}=-1~,~q_{2}=0~,~q_{3}=\frac{2N-1}{1+N},
\end{equation}
which means that the differential equation passes the singularity test and the
analytical solution is expressed by a Right Painlev\'{e} series for
$N\in\left( -\infty,-1\right) \cup\left( \frac{1}{2},+\infty\right) $, with a
step which depends on the value of $N$.
In order to perform the consistency test, we select $N=2$, which means that the
third resonance is $q_{3}=1$. Hence the step of the Laurent expansion is
$\frac{1}{3},$ and the~Painlev\'{e} series which describes the solution is
\begin{equation}
Y\left( s\right) =Y_{0}s^{\frac{1}{3}}+Y_{1}s^{\frac{2}{3}}+Y_{2}
s+Y_{3}s^{\frac{4}{3}}+
{\displaystyle\sum\limits_{I=4}^{\infty}}
Y_{I}s^{\frac{1+I}{3}}.
\end{equation}
The three integration constants are: the position of the singularity and the
coefficients $Y_{0}$ and $Y_{2}$. The rest of the coefficients, $Y_{J}$, are
functions of $Y_{0}$,~$Y_{2}$. Hence, equation (\ref{soll9a}) is integrable
through the singularity analysis.
However, there also exists a second leading-order behaviour,
$Y_{B}\left( s\right) =Y_{0}s^{\frac{1}{2-N}},~$with arbitrary $Y_{0}$, valid
for all values of $N$ such that $N\neq2$. The resonances are calculated
to be
\begin{equation}
\bar{q}_{1}=-1~,~\bar{q}_{2}=0~,~\bar{q}_{3}=\frac{2N-1}{N-2},
\end{equation}
from which we infer that equation (\ref{soll9a}) is integrable.
\subsection{Invariant solutions for the cubic source}
When the power-law source has a cubic law, that is, $F_{E}\left( \Psi\right)
=F_{1}\Psi^{3}$, then from the symmetry classification we saw that the Pulsar
equation admits an extra Lie symmetry vector field. The reduction with the
vector field $X_{3}$ provides the invariant solution
\begin{equation}
\Psi\left( r,\theta\right) =\frac{1}{r}\Sigma\left( \frac{\sin\theta}
{r}\right), \label{soll8}
\end{equation}
where the function $\Sigma$ satisfies the nonlinear differential equation
\begin{equation}
2\xi\Sigma_{,\xi\xi}+2\Sigma_{,\xi}-F_{1}\Sigma^{3}=0~,~\xi=\frac{\sin\theta
}{r}. \label{th.00}
\end{equation}
Equation (\ref{th.00}) admits the vector field $X_{\left( 1/2\right) }$ as Lie
(point) symmetry. The application of $X_{\left( 1/2\right) }$ give
\begin{equation}
w\left( \alpha\right) =\xi^{\frac{3}{2}}\Sigma_{,\xi}~,~\alpha=\xi^{\frac
{1}{2}}\Sigma,
\end{equation}
where now $w\left( \alpha\right) $ satisfies the first-order differential
equation
\begin{equation}
\left( 2w+\alpha\right) w_{,\alpha}-w-F_{1}\alpha^{3}=0,
\end{equation}
which is an Abel's equation of the second kind.
The solution of this Abel equation cannot be written in
closed form. However, differential equation (\ref{th.00}) can be solved with
the singularity analysis and the generic solution is given in algebraic form.
Hence we apply the ARS algorithm, by first making the equation an autonomous
third-order equation with the transformation $\xi=Y\left( s\right) $ and
$\Sigma\left( \xi\right) =Y_{,s}\left( s\right) .$ Equation (\ref{th.00})
is written as
\begin{equation}
YY_{,s}Y_{,sss}-2Y\left( Y_{,ss}\right) ^{2}+\left( Y_{,s}\right)
^{2}Y_{,ss}-F_{1}\left( Y_{,s}\right) ^{6}=0.
\end{equation}
For this equation we perform the new change of coordinates~$Y\left(
s\right) =\frac{1}{Z\left( s\right) },$ where we find that the
leading-order behaviours are $Z\left( s\right) =Z_{0}s^{p}$, with $p_{1}=-1$
and $p_{2}=-2.$ In both cases, $Z_{0}$ is an arbitrary constant.
For $p_{1}$ the ARS algorithm provides the resonances
\begin{equation}
q_{1}=-1~,~q_{2}=0~,~q_{3}=\frac{3}{2},
\end{equation}
which means that the general solution is given by a Right Painlev\'{e}
expansion with step $\frac{1}{2}$, that is
\begin{equation}
Z\left( s\right) =Z_{0}s^{-1}+Z_{1}s^{-\frac{1}{2}}+Z_{2}+Z_{3}s^{\frac{1}{2}}+
{\displaystyle\sum\limits_{I=4}^{\infty}}
Z_{I}s^{-1+\frac{I}{2}} \label{th.01}
\end{equation}
with free parameters $Z_{0}$~and $Z_{3}$. Note that the third constant of integration
denotes the position of the movable singularity. Finally the
consistency test provides that $Z_{1}=0$, $Z_{2}=-\frac{F_{1}}{2Z_{0}^{2}}$,
and $Z_{4}=\frac{5\left( F_{1}\right) ^{5}}{4Z_{0}^{5}}$, etc.
As for the second leading-order behaviour, $p_{2}$, we find that the
resonances are
\begin{equation}
\bar{q}_{1}=-1~,~\bar{q}_{2}=-2~,~\bar{q}_{3}=-4
\end{equation}
and, while one expects one of the resonances to be zero because $Z_{0}$ is
arbitrary, that is not the case. Hence the ARS algorithm for the leading-order
term $p_{2}$ fails and solution (\ref{th.01}) is the only solution which can be
constructed by the ARS algorithm.
In Fig. \ref{figg01} the density plot of the invariant solution (\ref{soll8})
is given in the space of the variables $\left\{ y,z\right\} $.
\begin{figure}[ptb]
\includegraphics[height=7cm]{plotfea1.eps}\centering\caption{Density plot of
$\Psi\left( y,z\right) $ for the analytical solution (\ref{soll8}). The
figure is for initial conditions where $\Sigma\left( 0\right) =0$ and
$\Sigma_{,\xi}\left( 0\right) >0$.}
\label{figg01}
\end{figure}
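As an illustration of how such an invariant solution can be visualized, the following minimal Python sketch (not the code used to produce Fig. \ref{figg01}) integrates equation (\ref{th.00}) numerically and maps $\Sigma\left( \xi\right) $ back to $\Psi$ through the invariant solution (\ref{soll8}). The value $F_{1}=1$, the starting point just off the singular point $\xi=0$, and the coordinate convention $y=r\sin\theta$, $z=r\cos\theta$ are assumptions made only for this sketch.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

F1 = 1.0  # assumed source amplitude

def rhs(xi, state):
    # Rewrite 2*xi*S'' + 2*S' - F1*S^3 = 0 as a first-order system.
    sigma, dsigma = state
    return [dsigma, (F1 * sigma**3 - 2.0 * dsigma) / (2.0 * xi)]

# Start slightly off the singular point xi = 0 with Sigma = 0 and a
# positive slope (cf. the caption of the figure above).
xi0, xi_max = 1.0e-3, 2.0
sol = solve_ivp(rhs, (xi0, xi_max), [0.0, 1.0],
                dense_output=True, rtol=1e-8, atol=1e-10)

# Map Sigma(xi) onto the (y, z) plane, assuming y = r*sin(theta) and
# z = r*cos(theta), so that xi = sin(theta)/r = y/(y^2 + z^2).
y = np.linspace(0.05, 3.0, 300)
z = np.linspace(-3.0, 3.0, 300)
Y, Z = np.meshgrid(y, z)
R2 = Y**2 + Z**2
XI = np.clip(Y / R2, xi0, xi_max)
PSI = sol.sol(XI.ravel())[0].reshape(XI.shape) / np.sqrt(R2)
# PSI can now be rendered as a density plot, e.g. with imshow.
\end{verbatim}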
For completeness we mention that reduction with the symmetry vector
$X_{\left( 1/2\right) }$ provides the same solution as that of the
power-law source $F_{D}\left( \Psi\right) $ for $N=\frac{1}{2}$.
\subsection{Invariant solutions for the exponential source}
Finally, for the exponential source, $F_{F}\left( \Psi\right) =F_{1}e^{-\frac{1}{C}\Psi}$, we apply the invariants of the Lie symmetry vector
field $\bar{X}_{\left( C\right) }$, which provide us with the invariant
solution
\begin{equation}
\Psi\left( r,\theta\right) =\ln r^{-C}+\Sigma\left( \theta\right),
\label{soll9}
\end{equation}
where $\Sigma\left( \theta\right) $ satisfies the nonlinear
second-order differential equation
\begin{equation}
2\sin\theta~\Sigma_{,\theta\theta}+2\cos\theta~\Sigma_{,\theta}-F_{1}e^{-\frac{1}{C}\Sigma}-2C\sin\theta=0.
\end{equation}
As before, we prefer to work with the coordinates $\left\{ y,z\right\} $ and
write the invariant solution as
\begin{equation}
\Psi\left( y,z\right) =\ln y^{C}+\Lambda\left( \sigma\right) ~,~\text{with}~\theta=\arcsin\sigma,
\end{equation}
in which function $\Lambda\left( \sigma\right) $ satisfies the differential
equation
\begin{equation}
2\sigma\left( 1-\sigma^{2}\right) \Lambda_{,\sigma\sigma}+2\left(
1-2\sigma^{2}\right) \Lambda_{,\sigma}-2C\sigma-F_{1}e^{-\frac{1}{C}\Lambda
}=0. \label{soll10}
\end{equation}
Equation (\ref{soll10}) has no symmetries and in order to prove the integrability
we apply the ARS algorithm. Indeed, under the change of variables
$\sigma=Y\left( s\right) $, $\Lambda\left( \sigma\right) =C\ln
Y_{,s}\left( s\right) ,$ the leading-order behaviour is calculated to
be~$Y\left( s\right) =Y_{0}s^{\frac{1}{1+C}}$, with resonances
\begin{equation}
\bar{q}_{1}=-1,~\bar{q}_{2}=0~,~\bar{q}_{3}=-\frac{1}{1+C}.
\end{equation}
Finally we apply the consistency test of the ARS algorithm for various values
of the parameter $C,$ and we infer that for $C\neq-1$ the differential
equation (\ref{soll10}) passes the singularity test and its solution can be
expressed in terms of a Laurent expansion.
In Fig. \ref{figg02} the density plot of the invariant solution (\ref{soll9})
is given in the space of the variables $\left\{ y,z\right\} $.
\begin{figure}[ptb]
\includegraphics[height=7cm]{ffa1.eps}\centering\includegraphics[height=7cm]{ffa2.eps}\centering\caption{Density
plot of $\Psi\left( y,z\right) $ for the invariant solution (\ref{soll9}).
The plots are for initial conditions $\Sigma\left( 0\right) =0$ and
$\Sigma_{,\theta}\left( 0\right) >0$ (left fig.) and $\Sigma_{,\theta
}\left( 0\right) <0$ (right fig.).}
\label{figg02}
\end{figure}
\section{Conclusions}
\label{sec4a}
In this work we applied two powerful mathematical methods in order to
determine analytical solutions for the Pulsar equation near to the surface of
the neutron star. More specifically we applied the Lie symmetry analysis to
classify the form of the source for the magnetic field in the Pulsar equation
such that the resulting equation admits Lie (point) symmetries, that is, is
invariant under the action of one-parameter point transformations. From the
classification process, we found that the (inner) Pulsar equation can be
invariant under the action of six different Lie algebras.
For each of the vector fields that followed from the classification scheme, we used
the (zeroth-order) Lie invariants to reduce the number of independent
variables of the differential equation and write it as an ordinary
differential equation. That equation could be solved in all cases
with the use of symmetries or with the application of the ARS algorithm. In
particular, the ARS algorithm was applied to prove the integrability of some of
the reduced equations and to write the analytical solution in the form of a Laurent expansion.
The solutions that we derived are asymptotic solutions of the Pulsar equation
(\ref{eq.01}) near the surface of the neutron star. Only two of the
solutions were previously derived in the literature, and these Lie invariant solutions
provide a finite magnetic field on the surface of the neutron star. The new
asymptotic solutions can be used as toy models for testing the viability of numerical
approximations for the elliptic equation (\ref{eq.01}).
In a forthcoming work we wish to study the boundary conditions which should be
satisfied in order that the new Lie invariant solutions be solutions of the
complete problem. Finally, the physical implications of those solutions are a
subject for future study.
\begin{acknowledgments}
AP thanks the University of Athens for the hospitality provided while part of this work
was performed.
\end{acknowledgments}
\section{Introduction}
Kinematics observations of star-forming molecular clouds reveal the turbulent motions of the clouds' gas and provide an understanding of how gas is funneled onto sites of star formation \citep[e.g.,][]{Pineda_2010, Kirk_2013, Friesen_2013}. Such kinematics measurements are obtained by modeling the emission lines from molecular transitions in the gas. Typically, emission lines without self-absorption (i.e., optically thin lines) are modeled as Gaussian distributions with a single centroid velocity and velocity dispersion. A major limitation of this ``single-Gaussian'' line fitting approach is its inability to account for spectra that display multiple velocity components along the line of sight. For instance, if two slabs of emitting gas with slightly offset centroid velocities lie along our line of sight to a particular cloud, a broadened second peak or ``shoulder'' is produced in the observed spectrum. Figure \ref{cartoon} shows a schematic of this situation. Traditional single-Gaussian line fitting pipelines, which assume the observed emission contains a single velocity component, would fit this broadened spectrum with a line width that is much larger than those of the individual line components that produced the observed spectrum. In addition, the centroid measurement would be skewed to a value in-between those of the individual line components. These inaccuracies have significant impacts on many analyses of star-forming regions. For example, virial stability analyses \citep[e.g.,][]{Kauffmann_2013, Pattle_2015, Pattle_2017, Seo_2015, Kirk_2017} and velocity gradient calculations \citep[e.g.,][]{Schneider_2010, Henshaw_2013, Kirk_2013, Peretto_2014} are highly dependent on velocity dispersion and centroid, respectively.
\begin{figure}[h]
\epsscale{0.67}
\plotone{ml_comp_cartoon.png}
\caption{Schematic diagram of a molecular cloud observation that would result in a spectrum with two velocity components. The observer views two cores along the line of sight (dashed line) at slightly offset centroid velocities (v$_1$ and v$_2$). The combination of the two Gaussian emission line profiles for each core (orange and blue spectra) results in a broadened observed spectrum (black spectrum) with a ``shoulder'' at one side.}
\label{cartoon}
\end{figure}
High spatial and spectral resolutions lower the chance of observing multiple velocity component spectra, since there is less ``smearing'' together of slabs of gas that are close to one another in the spatial and spectral dimensions. When observing clouds at larger distances, however, the coarser physical resolution raises the chance of observing multiple velocity component spectra. Thus, modern spectroscopic surveys of molecular clouds at large distances must incorporate a line-fitting strategy that considers multiple velocity components along the line-of-sight to obtain the most accurate kinematics measurements from their data.
Although several multi-component line fitting methods have been developed for molecular emission line observations, they either require user input and direction during the line fitting procedure \citep[e.g., SCOUSE: Semi-automated multi-COmponent Universal Spectral-line fitting Engine,][]{Henshaw_2016}, or they require several iterations of fitting both single and multiple-component models to test which model produces the ``best'' fit \citep{Riener_2019, Clarke_2018, Sokolov_2017, Lindner_2015, Chen_prep}. These semi-automated and brute-force methods are plagued by several issues: 1) They are highly dependent on the model's initial parameter guesses and degrees of freedom used for their $\chi^2$-minimization fits. For example, for $\chi^2$-minimization to converge onto the optimal solution, it must be fed initial conditions for the model parameters (e.g., velocity dispersion and centroid) that are near the ``true'' values of the emission. This requirement often leads to large amounts of pre-processing the data to obtain estimates for the centroid and dispersion of each velocity component that can be used as initial guesses for the line fitting procedure. Alternatively, one can blindly repeat the line fitting procedure using a large grid of initial parameter guesses to search for the optimal fit. 2) Due to this pre-processing or grid search requirement, traditional methods tend to be computationally expensive, often requiring hours to fit typical spectral cubes. 3) Furthermore, they tend to neglect spectra in neighboring pixels that could confirm the presence or lack of multiple velocity components.
This paper provides a solution for efficiently identifying multiple velocity component spectra using artificial neural networks (ANNs). ANNs are a type of machine learning model that attempts to map input features to output classes or values using hierarchical feature representations that are learned during the training process. These hierarchical features are learned by stacked layers of artificial neurons that use a weighted function to map the inputs they receive into outputs that are fed into subsequent layers. The complexity of features learned by each layer increases with depth into the network. For example, a neural network trained for facial recognition might first detect facial edges and contours, which can then be used to detect facial features such as noses, ears, and eyes, until the final layer is able to build facial templates that can be used to predict which face is being viewed in a given image.
In terms of astronomy, ANNs are becoming increasingly prevalent due to the advantages they can provide by learning non-linear patterns that traditional methods struggle to reproduce and making quick predictions once trained. For example, ANNs have been successfully applied to a variety of problems across many different fields of research, such as: detecting planets in the Kepler archive that were missed by the standard Kepler identification pipeline \citep{Shallue_2018}, discriminating galaxies with an active galactic nucleus from star-forming galaxies in Sloan Digital Sky Survey (SDSS) observations \citep{Teimoorinia_2018}, deriving stellar temperature, metallicity, and gravity from SDSS APOGEE stellar spectra \citep{Fabbro_2018}, detecting 72 previously missed fast radio burst (FRB) pulses from the first-discovered repeating FRB \citep{Zhang_2018}, and identifying wind-driven shells in magneto-hydrodynamic molecular cloud simulations \citep{Van_2019}. ANNs have also been used for multiple-component emission line identification of optical spectra. For instance, \cite{Hampton_2017} trained an ANN to classify optical spectra of galaxies using parameters output by a traditional Gaussian line-fitting approach called LZIFU \citep{Ho_2016}.
Many of the ANNs used for astronomy applications rely on a particular type of ANN called a convolutional neural network (CNN or convnet), which preserves the spatial structure of its input features using convolutional kernels that are learned during training. The convolution involves taking the dot product between the network's input (which can be an image, spectrum, light curve, etc.) and a sliding kernel that is moved across the input in predefined steps. The output is a convolved feature map that is used in subsequent layers of the network to make a prediction on the input's class (in the case of classification). The convolved feature map not only preserves the spatial structure in the input image, but also reduces the number of input features into the next layer of the network, which leads to faster training times.
This paper will utilize the advantages of training a 1D CNN to classify input spectra as either single or multiple velocity components and predict the kinematics of each velocity component. The method requires no initial parameter guesses, incorporates spectra from nearby pixels to make predictions, and analyzes entire spectral cubes in seconds. Such improvements are welcome with the advent of large multi-receiver arrays where thousands of spectra can be collected in a reasonable time. Named Convnet Line-fitting of Velocities in Emission-line Regions (CLOVER), the method is also publicly available as a Python package called \texttt{astroclover}\footnote{\url{https://github.com/jakeown/astroclover/}}.
The paper is organized as follows: Section 2 describes the data used for training CLOVER and testing its performance; Section 3 outlines the CNN architecture of CLOVER; Section 4 compares CLOVER's classification performance to that of a traditional single-Gaussian line fitting method on both synthetic and real data; Section 5 discusses predicting kinematics from two-component spectra with CLOVER; Section 6 describes further applications of CLOVER classifications to emission lines with hyperfine splitting; Section 7 presents CLOVER kinematics predictions for NH$_3$ (1,1) synthetic data; Section 8 shows how CLOVER can be used to improve the accuracy of virial stability analyses of structures with multiple velocity components; and Section 9 summarizes the paper. In addition, Appendix A provides an overview of the installation and usage instructions for the \texttt{astroclover} Python package.
\section{Data}
\subsection{Training Set: Generating Synthetic Spectra}
All machine learning classification projects require a training set composed of input feature vectors (a.k.a., ``samples'' or ``examples'') that belong to one of the possible output classes the model will be trained to predict. In this paper, we train a network that has three distinct output classes: ``one-component'' spectra with only one velocity component along the line of sight, ``two-component'' spectra with two velocity components along the line of sight, and ``noise-only" spectra with negligible emission.
To generate the training set, synthetic spectral cubes on a 3$\times$3 pixel grid with 500 spectral channels were created. For the ``one-component'' class, a single Gaussian spectrum was injected into the grid's central pixel with peak intensity ($T_{peak}$) set to 1 K and values of velocity dispersion ($\sigma$) and centroid velocity ($V_{LSR}$) chosen at random from a uniform distribution with the following limits:
\begin{itemize}
\item $\sigma$: $2-11$ channels, which produces both narrow and broad Gaussians similar to real emission lines.
\item $V_{LSR}$: channel 112 to channel 388 of the 500 channel spectrum. This range is equivalent to $-0.55$ km s$^{-1}$ to 0.55 km s$^{-1}$ when the spectral axis has been normalized to $-1.0$ km s$^{-1}$ for the lowest velocity channel and $1.0$ km s$^{-1}$ for the highest velocity channel. This range provides a variety of centroid velocities while ensuring the emission line edges do not spill off the edges of the spectrum.
\end{itemize}
The Gaussians for the surrounding pixels in the 3$\times$3 grid are determined by applying a perturbation to the central pixel's Gaussian parameters. This step was done by drawing values from three normal distributions (one for each parameter) with a mean of zero and a variance of 0.05. The randomly drawn values were then added to the central pixel's parameter values to generate new Gaussians with slight offsets in $\sigma$, $V_{LSR}$, and $T_{peak}$. Finally, noise with an RMS drawn from a uniform distribution between 0.05 K and 0.25 K was injected into each spectral cube, creating low-, mid-, and high-signal-to-noise ratio (SNR) training examples.
For the ``two-component'' class, two Gaussians were injected into the central pixel of the 3$\times$3 grid. The values of $\sigma$ for the two Gaussians were both drawn at random as described above for the one-component class. The value of $T_{peak}$ for the second Gaussian was drawn randomly from a uniform distribution between $2\times$RMS and 1 K, where RMS is the noise level selected for the cube. Similarly, the $V_{LSR}$ for the second Gaussian was randomly drawn from a uniform distribution between $V_{LSR,1}\pm1.5 \times \sigma_{max}$ and $V_{LSR,1}\pm5 \times \sigma_{max}$, where $\sigma_{max}$ is the value of $\sigma$ for the wider of the two Gaussians, $V_{LSR,1}$ is the centroid of the first component, and the sign of the offset ($\pm$) is chosen at random. Thus, the second component can be on either the left or right of the first component along the spectral axis.
This two-component sample generation approach created variations in the relative heights, velocity dispersions, and centroids of each velocity component. Moreover, the velocity centroid separation threshold for each velocity component minimized the number of two-component samples that are indistinguishable from one-component samples. This characteristic of the training set was necessary to prevent the CNN from overfitting (a tendency to predict the two-component class when it was clear the one-component class was more appropriate). Such separation thresholds are also often implemented in traditional line-fitting methods \citep[see, e.g.,][]{Lindner_2015, Henshaw_2016, Riener_2019} when deciding whether or not a multiple-component fit is appropriate. The outer pixels in the 3$\times$3 grid were filled by adding perturbations to both of the Gaussian components, as described above for the one-component class.
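As a concrete illustration of the procedure above, the following minimal Python sketch generates a single two-component training sample. It is not the generator shipped with \texttt{astroclover}; in particular, applying the pixel-to-pixel perturbations directly in channel and intensity units with standard deviation $\sqrt{0.05}$ is an assumption where the text above leaves the units ambiguous.
\begin{verbatim}
import numpy as np

N_CHAN = 500
rng = np.random.default_rng()

def gaussian(x, amp, cen, sig):
    return amp * np.exp(-0.5 * ((x - cen) / sig)**2)

def two_component_sample():
    x = np.arange(N_CHAN, dtype=float)
    rms = rng.uniform(0.05, 0.25)                # noise level [K]
    sig1, sig2 = rng.uniform(2.0, 11.0, size=2)  # dispersions [channels]
    cen1 = rng.uniform(112, 388)                 # first centroid [channels]
    amp1 = 1.0
    amp2 = rng.uniform(2.0 * rms, 1.0)
    # Second centroid: offset by 1.5-5 sigma_max with a random sign.
    offset = rng.uniform(1.5, 5.0) * max(sig1, sig2)
    cen2 = cen1 + rng.choice([-1.0, 1.0]) * offset
    params = np.array([amp1, cen1, sig1, amp2, cen2, sig2])

    cube = np.empty((3, 3, N_CHAN))
    for i in range(3):
        for j in range(3):
            p = params.copy()
            if (i, j) != (1, 1):
                # Perturb both components for the outer pixels.
                p += rng.normal(0.0, np.sqrt(0.05), size=6)
            cube[i, j] = (gaussian(x, *p[:3]) + gaussian(x, *p[3:])
                          + rng.normal(0.0, rms, N_CHAN))
    return cube
\end{verbatim}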
For each training example cube, only two spectra are used as input into the CNN: 1) the spectrum of the central pixel in the 3$\times$3 grid and 2) the averaged spectrum over all nine spectra in the 3$\times$3 grid. The first spectrum provides a ``local'' view of the pixel for which the class prediction is being made, while the second spectrum provides a ``global'' view of neighboring pixels that can provide insight into whether the central pixel is a one-component, two-component, or noise-only spectrum. Both spectra are normalized by dividing by the value of the brightest channel. This ``local+global'' setup also provides for a simple way to make predictions on real observations. In that case, a sliding window of size 3$\times$3 pixels is moved across the position-position plane of a spectral cube and a class prediction is made on the central pixel after feeding its ``local'' and ``global'' spectra into the trained network.
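A minimal sketch of this extraction step, assuming the cube is stored as a NumPy array with axes $(y,x,\mathrm{channel})$ and that the $3\times3$ window lies fully inside the map:
\begin{verbatim}
import numpy as np

def local_global(cube, i, j):
    window = cube[i - 1:i + 2, j - 1:j + 2, :]  # 3x3 spatial window
    local = cube[i, j, :]
    global_ = window.mean(axis=(0, 1))          # average of 9 spectra
    # Normalize each spectrum by its brightest channel.
    return local / local.max(), global_ / global_.max()
\end{verbatim}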
Following the aforementioned method, 300,000 synthetic samples (100,000 for each training set class, i.e., a ``balanced'' training set) were generated. Figure \ref{training_set} shows example local and global spectra for training set samples in the one-component, two-component, and noise-only classes. A validation set of 90,000 additional synthetic spectra (30,000 in each class) was also generated for monitoring performance during training (see Section 3). After training, the network's performance is tested on ten additional collections of 30,000 synthetic spectra (10,000 in each class).
\begin{figure}[htb]
\epsscale{0.84}
\plottwo{one_comp_train1.pdf}{one_comp_train2.pdf}
\plottwo{two_comp_train1.pdf}{two_comp_train2.pdf}
\centering
\plottwo{noise_train1.pdf}{noise_train2.pdf}
\caption{Example synthetic spectra included in the one-component (top row), two-component (middle row), and noise-only (bottom row) training set classes. Blue spectra show the central pixel in the spectral cube window (i.e., the ``local'' spectrum) and the orange spectra represent the $3\times3$ pixel average around the central pixel (i.e., the ``global'' spectrum).}
\label{training_set}
\end{figure}
\subsection{Test Set: Real $^{13}$CO, C$^{18}$O, \& HC$_5$N Spectral Cubes}
The test sets used to gauge the trained CNN's performance included three real spectral cubes observed from three different surveys of three distinct star-forming regions. The first cube was a $^{13}$CO ($1-0$) observation of L1689 in the Ophiuchus molecular cloud from the COMPLETE survey\footnote{available at \url{https://www.cfa.harvard.edu/COMPLETE/}} \citep{Ridge_2006} on the Five College Radio Astronomy Observatory (FCRAO). This cube has a spectral resolution of $\sim0.07$ km s$^{-1}$, the pixel size is $23\arcsec$, and the FCRAO has an angular resolution of $\sim46\arcsec$ at the rest frequency of $^{13}$CO ($1-0$).
The second cube was a C$^{18}$O ($3-2$) observation of DR21\footnote{available at \url{http://www.cadc-ccda.hia-iha.nrc-cnrc.gc.ca/en/} with proposal ID: M10BD01} in the Cygnus X giant molecular cloud complex observed by the James Clerk Maxwell Telescope (JCMT). The observations were accessed from the JCMT archive and have not been previously published, but were originally observed as part of a $^{12}$CO ($3-2$) survey by \cite{Gottschalk_2012}. The native spectral resolution of the cube was $\sim$ 0.056 km s$^{-1}$, but to improve the spectral SNR, we smooth spectrally with a Gaussian kernel, degrading the resolution by a factor of two to 0.11 km s$^{-1}$. The native angular resolution of the JCMT at the rest frequency of C$^{18}$O $(3-2)$ is $\sim15\arcsec$, which we convolve to 32$\arcsec$ to improve SNRs further. The pixel scale of the cube is 7.2$\arcsec$.
The third cube was a HC$_5$N ($9-8$) observation of B18\footnote{available at \url{https://dataverse.harvard.edu/dataverse.xhtml?alias=GAS_Project}} in the Taurus molecular cloud observed by the Green Bank Ammonia Survey \citep{Friesen_2017} on the 100m Green Bank Telescope. This cube has a spectral resolution of $\sim0.07$ km s$^{-1}$, a pixel scale of $\sim11\arcsec$, and an angular resolution of $\sim31\arcsec$.
\begin{figure}[htb]
\epsscale{0.7}
\plotone{train_set_SNR4.pdf}
\caption{Histograms of signal-to-noise ratio for the ``local'' spectra in the training and test sets. Blue, orange, and green represent the one-component, two-component, and noise-only classes of the synthetic training set, respectively. Red, purple, and brown show the distributions for the real observations of L1689, DR21, and B18, respectively. The black dashed line shows SNR=4.}
\label{SNR}
\end{figure}
Table 1 outlines the characteristics of each spectral cube and star-forming region in the test set. These three regions were chosen due to their differing levels of star formation activity. B18 is a fairly quiescent, nearby ($d\sim135$ pc), star-forming region, while L1689 is a more active nearby cloud ($d\sim119$ pc) and DR21 is a distant ($d\sim1700$ pc) high-mass star-forming region producing O- and B-type stars. Thus, these spectral cubes provide a thorough test of the CNN performance across a variety of star-forming environments, instruments, and emission line transitions.
\begin{deluxetable}{ccccccccc}
\rotate
\tablewidth{0pt}
\tablecolumns{4}
\tablecaption{Test Set Spectral Cubes}
\tablehead{\colhead{Region} & \colhead{Cloud} & \colhead{Distance} & \colhead{Transition} & \colhead{Telescope} & \colhead{Rest Freq.\tablenotemark{e}} & \colhead{Spectral Res.} & \colhead{Spatial Res.} & \colhead{Pixel Scale}\\
& & (pc) & & & (MHz) & (km s$^{-1}$) & ($\arcsec$) & ($\arcsec$)}
\startdata
L1689 & Ophiuchus & 119$\pm$6\tablenotemark{a} & $^{13}$CO ($1-0$) & FCRAO & 110201.354 & 0.07 & 46 & 23\\
DR21 & Cygnus X & 1700\tablenotemark{b} & C$^{18}$O ($3-2$) & JCMT & 329330.552 & 0.11 & 32 & 7.2\\
B18 & Taurus & 135$\pm$20\tablenotemark{c} & HC$_5$N ($9-8$) & GBT & 23963.9010 & 0.07 & 31& 11\\
M17SW & M17 & 1700\tablenotemark{d} & NH$_3$ (1,1) & GBT & 23694.4955 & 0.07 & 32& 8.8\\
MonR2 & Orion-Monoceros & 900\tablenotemark{c} & NH$_3$ (1,1) & GBT & 23694.4955 & 0.07 & 32 & 8.8\\
\enddata
\tablenotetext{a}{\cite{Lombardi_2008}}
\tablenotetext{b}{\cite{Schneider_2006}}
\tablenotetext{c}{\cite{Schlafly_2014}}
\tablenotetext{d}{\cite{Xu_2011}}
\tablenotetext{e}{Accessed from \cite{Lovas_2004}.}
\label{Table_regions}
\end{deluxetable}
To be consistent with the synthetic spectra used to train the CNN, the spectral axis on all cubes was clipped to 500 channels centered on the line-of-sight velocity to each cloud. The ``local'' and ``global'' view spectra were extracted from each cube by sliding a $3\times3$ pixel window across the position-position plane of the cube. The CNN then makes a prediction on the class of the central pixel in the window using the ``local'' and ``global'' view spectra as input.
Figure \ref{SNR} shows the distributions of SNR, defined as the ratio of the peak emission line channel to the standard deviation of the off-line channels, for the ``local'' spectra in the test set cubes. Figure \ref{SNR} also displays the SNR distributions for the one-component, two-component, and noise-only training set classes. The one- and two-component training set classes have similar SNR distributions, which ensures that the network must use morphological differences rather than SNR differences to distinguish those classes. In addition, the one- and two-component training set distributions have a similar range in SNR as the majority of the real data. This similarity suggests the training set is representative of real data, which is necessary for the trained network to generalize its predictions for handling real observations with a wide range of SNR. Although the high SNR end of the training set is less populated than the real data, this difference is acceptable since the high SNR examples are typically much easier to classify than low SNR examples. We also note that the one- and two-component classes in the training set have a significant drop-off below SNR=4, which is effectively the SNR threshold where the ``noise'' class begins. An SNR of 4 is similar to typical minimum thresholds set for traditional line-fitting pipelines, however, making it a reasonable signal versus noise threshold for the network's predictions.
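For reference, the SNR used throughout can be computed as in the short sketch below; which channels are treated as ``off-line'' (here, the outer quarters of the spectrum) is an assumption of this sketch.
\begin{verbatim}
import numpy as np

def snr(spectrum):
    # Peak channel over the rms of (assumed) line-free outer channels.
    n = len(spectrum)
    off = np.concatenate([spectrum[:n // 4], spectrum[-(n // 4):]])
    return spectrum.max() / off.std()
\end{verbatim}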
In Section 5.2, we also use NH$_3$ (1,1) cubes observed by the KFPA Examinations of Young STellar Object Natal Environments (KEYSTONE) survey (PI: James Di Francesco; Keown et al. 2019, submitted) using the 100m Green Bank Telescope. Our analysis is focused on the KEYSTONE observations of M17 and MonR2 at distances of 1700 pc and 900 pc, respectively (see Table \ref{Table_regions}). These cubes have a spectral resolution of 0.07 km s$^{-1}$, beam size of 32$\arcsec$, and pixel width of 8.8$\arcsec$.
\section{Methods: CNN Architecture}
The architecture of the 1D CNN adopted in this paper is shown in Figure \ref{architecture}. The network's hyper-parameters were set based on the success of previously published 1D CNNs featured in \cite{Fabbro_2018} and \cite{Shallue_2018}, which used spectra and light curves, respectively, as input. Following those papers, we also use a 1D CNN because each sample in our training set consists of two 1D spectra with 500 channels. The ``local'' and ``global'' view spectra are fed into individual convolutional columns before being reconnected into a joint fully-connected layer. The convolutional columns consist of two convolutional layers, each with 16 kernels with a width of 3 spectral channels. The ``convolution'' in the convolutional layers involves taking the dot product between the input spectra and the kernels, which are moved across each channel of the input spectra. The outputs are convolved feature maps (one for each kernel) that are used as input into the next layer of the network.
The weights on the convolutional kernels are learned during training and attempt to create convolved feature maps that highlight spectral features that can be used to make a decision about the class of the sample. The resulting convolved features from the two convolutional columns are then combined as inputs into a joint column of two fully-connected layers with 3000 artificial neurons each. All of the neurons in these layers have a rectified-linear (`relu') activation function, which transforms the inputs it receives into an output that is sent to the next layer in the network. The rectified-linear activation function is commonly used in deep neural networks because it solves the ``vanishing gradients problem,'' wherein large networks fail to train properly because the error of neurons deep in the network go to zero and can't be properly updated by gradient descent methods \citep{Hochreiter_2001}.
The final output layer has three artificial neurons with a `softmax' activation function. Each of these neurons has its own weight vector ($w$) that is the length of the output vector ($x$) from the previous network layer. The neurons first apply a weighted sum of $x$ by performing the dot product of $x$ and $w$. The outputs of the three dot products (one per neuron) form a new vector ($y$) of length three that is then passed to the softmax function, which is given by $e^{y_i}/\Sigma_je^{y_j}$, where $y_i$ is an element of $y$ and $\Sigma_je^{y_j}$ is the sum of the exponentials of the elements of $y$. Thus, the softmax activation function output is a length-three vector that always sums to one and is interpreted as the probability of the input sample being in each of the three output classes.
The weights on the artificial neurons and convolutional kernels are optimized by minimizing the categorical cross-entropy loss function, using the `Adam' gradient descent optimization method \citep{Kingma_2014}. Since the categorical cross-entropy loss function increases as the predicted probabilities of the training set samples diverge from their ground-truth values, the model prediction accuracy is maximized if the cross-entropy function is minimized. For instance, an input training set sample that is a one-component class member has a label of [1, 0, 0]. If the softmax output of the network is [0.1, 0.5, 0.4] for that sample, the loss function output is high and thus the weights of the network need to be adjusted to minimize the loss. The gradient of the loss function is then calculated to determine in which directions the model weights should be updated to get closer to the minimum loss.
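A minimal Keras sketch of this architecture is given below. The layer sizes follow the text, while the use of a \texttt{Flatten} operation between the convolutional columns and the joint fully-connected layers is an assumed detail that is not spelled out above.
\begin{verbatim}
from tensorflow.keras import layers, models

def build_clover_cnn(n_chan=500):
    inputs, columns = [], []
    for name in ("local", "global"):
        inp = layers.Input(shape=(n_chan, 1), name=name)
        x = layers.Conv1D(16, 3, activation="relu")(inp)  # 16 kernels, width 3
        x = layers.Conv1D(16, 3, activation="relu")(x)
        inputs.append(inp)
        columns.append(layers.Flatten()(x))
    x = layers.concatenate(columns)                 # join the two columns
    x = layers.Dense(3000, activation="relu")(x)
    x = layers.Dense(3000, activation="relu")(x)
    out = layers.Dense(3, activation="softmax")(x)  # class probabilities
    return models.Model(inputs, out)
\end{verbatim}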
\begin{figure}[htb]
\epsscale{0.85}
\plotone{CNN_architecture5.pdf}
\caption{Architecture of the CNN chosen for this paper. The ``local'' and ``global'' view spectra for each sample are fed into individual columns of convolutional layers with 16, three-channel width kernels. The convolved features maps from each column are then joined into a single column of fully-connected layers with 3000 artificial neurons. The final layer predicts the class of the object using the softmax activation function. All other layers use a rectified-linear (relu) activation function.}
\label{architecture}
\end{figure}
The weights of the CNN are updated by iteratively moving through the training set in batches of 100 samples (a.k.a. `mini-batch gradient descent'). To prevent over-fitting, `early-stopping' is implemented by monitoring the model's performance on a validation set of 90,000 additional synthetic spectra (30,000 in each class) during training. After each epoch in the training process, where an epoch represents using all samples in the training set to update the weights of the CNN, the validation set loss is measured. If the validation set loss does not improve for five epochs in a row, training is stopped and the model from the epoch with the best validation set loss is saved.
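Continuing the sketch above, the training setup described in this section (Adam, categorical cross-entropy, mini-batches of 100 samples, and early stopping with a patience of five epochs on the validation loss) could be written as follows, where \texttt{X\_local}, \texttt{X\_global}, and the one-hot label arrays are placeholders for the synthetic training and validation sets:
\begin{verbatim}
from tensorflow.keras.callbacks import EarlyStopping

model = build_clover_cnn()
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
stop = EarlyStopping(monitor="val_loss", patience=5,
                     restore_best_weights=True)
model.fit([X_local, X_global], y_train,  # placeholder arrays
          validation_data=([Xv_local, Xv_global], y_val),
          batch_size=100, epochs=1000, callbacks=[stop])
\end{verbatim}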
As an additional comparison to the architecture using the local and global spectra as inputs, we also train two additional networks using only the local and only the global spectra as input. The results for these architectures are presented in Section 4. Tests of more complex architectures with additional layers, neurons, and larger kernel sizes produced models that overfit the training data. As such, the simpler architecture presented in Figure \ref{architecture} was chosen for CLOVER.
\section{Results}
\subsection{Testing on Synthetic Data}
After training the three CNN architectures (local-only, global-only, and local+global), we test their performance using ten independent test sets of 30,000 synthetic spectra (10,000 in each class) described in Section 2.1. Predictions are made on each of the ten test sets by the trained CNNs. The mean and standard deviation of the ten confusion matrices for each architecture are shown in Figure \ref{cm}. Each square in the confusion matrix shows the number of correct or incorrect classifications the CNN has made for each class. For a perfect classification of all samples in a test set, the upper left, central, and lower right squares in the matrices would each be 10,000. The off-diagonal squares represent the number of misclassifications in each class.
The mean classification accuracies and standard deviations for the CNN using only the local spectrum as input across the ten independent test sets are $96.39 \pm 0.19 \%$, $99.95 \pm 0.02\%$, and $90.56 \pm 0.36\%$ for the one-component, noise, and two-component classes, respectively. For the CNN using only the global spectrum as input, the classification accuracies improve to $99.31 \pm 0.03 \%$, $100\%$, and $96.82 \pm 0.13\%$ for the three classes. This improvement can be attributed to the higher SNR of the global spectra, which makes it easier to classify the samples. When using both the local and global spectra as input to the CNN, the classification accuracies become $99.35 \pm 0.07 \%$, $100\%$, and $96.08 \pm 0.24\%$, which are similar to those of the global-only CNN.
Although the global-only and local+global CNNs show similar performance, we opt to use the local+global CNN for the remainder of our analysis since the local spectra can be useful for preventing overfitting on real data. For instance, there are scenarios in real observations where the global spectrum may appear to have two velocity components, but the local spectrum shows only a single component. For those cases, using the local spectrum as input prevents misclassification.
The accuracy of the local+global CNN is improved further by averaging the outputs of six independently trained CNNs. Since each CNN is trained with different random initializations for their parameter weights, there is a variance in their output predictions on a given test set. Averaging their predictions, however, reduces this variance and often leads to improved overall performance since each CNN may perform better or worse on particular samples. Known as `ensembling' or model `averaging,' this technique involves summing the three output class probabilities predicted by each CNN before selecting the class with the highest probability as the predicted class for a given sample. The confusion matrix for this ensemble CNN is shown in the middle right panel of Figure \ref{cm}. The accuracies for the one-component, noise, and two-component classes improves to $99.92 \pm 0.02 \%$, $100\%$, and $96.72 \pm 0.18\%$. We refer to this ensemble of CNNs as the ``ensemble CNN'' for the remainder of the paper.
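A sketch of this averaging step, assuming \texttt{cnns} is a list of the six trained Keras models and \texttt{X} holds the local and global input arrays (the class-index ordering is illustrative):
\begin{verbatim}
import numpy as np

def ensemble_predict(cnns, X):
    # Average the softmax outputs of the individual CNNs, then take
    # the most probable class for each input sample.
    probs = np.mean([m.predict(X) for m in cnns], axis=0)
    return probs.argmax(axis=1)
\end{verbatim}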
\begin{figure}[htb]
\epsscale{1.0}
\plottwo{cnn_local_cm_3class_test10.pdf}{cnn_global_cm_3class_test10.pdf}
\plottwo{cm_3class_test10.pdf}{cm_3class_test10_ensemble.pdf}
\centering
\plottwo{chi_local_cm_3class_test10.pdf}{chi_global_cm_3class_test10.pdf}
\caption{Confusion matrices for ten test sets of 30,000 synthetic spectra (10,000 in each class) classified by the CNN using only the ``local'' spectrum (top left), CNN using only the ``global'' spectrum (top right), CNN using both the local and global spectra (middle left), averaged ensemble of six CNNs (middle right), traditional $\chi^2$-minimization on the local spectrum (bottom left), and the traditional $\chi^2$-minimization on the global view spectrum (bottom right). The ``noise'' class for the $\chi^2$-minimization panels was selected based on a SNR threshold of 4. Each panel in the confusion matrices shows the mean and standard deviation for the ten test sets.}
\label{cm}
\end{figure}
\clearpage
\begin{figure}[htb]
\epsscale{1.1}
\plottwo{misclass_3.pdf}{misclass_8.pdf}
\plottwo{misclass_2.pdf}{misclass_5.pdf}
\centering
\plottwo{misclass_1.pdf}{misclass_7.pdf}
\caption{Samples in the synthetic test set misclassified by the ensemble CNN. The left column shows true two-component samples (True Class: 2) classified as one-component (Pred Comp: 1) by the ensemble CNN. The right column shows true one-component samples (True Class: 1) classified as two-component (Pred Comp: 2) by the ensemble CNN.}
\label{misclass}
\end{figure}
\clearpage
Visual inspection of the ensemble CNN misclassifications in the test set also reveals that many of those samples indeed exhibit characteristics of the class they were assigned. Several examples of these misclassified samples are shown in Figure \ref{misclass}. For instance, the true one-component samples that the ensemble CNN incorrectly identified as two-component often have a visible, but subtle, second peak due to the randomness of the noise injection. Similarly, the true two-component samples identified as one-component by the ensemble CNN are often indistinguishable from a true one-component sample. As such, these misclassifications are actually a positive sign that the ensemble CNN has ``learned'' the subtle differences between the one- and two-component classes rather than simply memorizing the samples in the training set.
\subsection{Performance Versus Two-Component Gaussian Line Fitting}
To gauge the ensemble CNN's performance against traditional line fitting methods, we also use a $\chi^2$-minimization model selection technique to make class predictions on the test set. Both single- and two-component Gaussian models are fit to the ``local'' spectrum for each test set sample using the Levenberg-Marquardt $\chi^2$-minimization method in the \texttt{scipy.optimize.curve$\_$fit} Python package. For the one-component fit, we use the peak channel in the spectrum as the initial guess for centroid velocity, a set value of 1.0 for the peak intensity guess (spectra are scaled to a max value of 1.0), and a set value of 10 channels for the velocity dispersion guess. To find the optimal solution for the two-component fit, which is more susceptible to falling into local minima rather than the global minimum, we perform the line-fitting using a grid of initial parameter guesses. The model with the lowest $\chi^2$ value was selected as the best-fitting two-component model. The initial guess for the first velocity component in the two-component model was set in the same way as the one-component model, while the second velocity component guesses were set as follows:
\begin{itemize}
\item $T_{peak}$: 0.1 less than the solution found for the one-component fit.
\item $V_{LSR}$: [$\pm$ 10, $\pm$ 30, $\pm$ 50, $\pm$ 70, $\pm$ 90, $\pm$ 110, $\pm$ 130, $\pm$ 150, $\pm$ 170, $\pm$ 190] channels from the solution found for the one-component fit. Thus, we search for centroids to the left and right of the one-component fit.
\item $\sigma$: 0.1 channel larger than the solution found for the one-component fit.
\end{itemize}
The $\chi^2$ values for the best-fitting single- and two-component models are then compared to select the ``better'' model for the spectrum. To penalize the larger number of model parameters in the two-component model and consider the number of data points being fit, we apply the Bayesian Information Criterion \citep[BIC; ][]{Schwarz_1978} to each model's $\chi^2$ value. This approach is similar to the Akaike Information Criterion used in other traditional two-component line fitting methods \citep[e.g.,][]{Henshaw_2016}, but has a built-in penalty for the number of data points in the models being compared. Namely, the model with the lowest BIC value is selected as the preferred model, where the BIC is given by the following expression:
\begin{equation}
BIC = N\ln(\chi^2) + p\ln(N) ~~~,
\end{equation} where $p$ is the number of model parameters and $N$ is the number of fitted data points. The BIC attempts to balance goodness-of-fit (e.g, $\chi^2$ value) against model complexity (i.e., the number of model parameters in relation to data set size) when comparing models. This approach tries to avoid overfitting (selecting a model that is too complex simply because it fits the data better), but at the same time limit underfitting (selecting a simpler model when a more complex model is more appropriate for the data).
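A condensed Python sketch of this model comparison is shown below, assuming uniform noise so that the unweighted sum of squared residuals plays the role of $\chi^2$; the helper names are illustrative and only the centroid-offset grid from the list above is reproduced.
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def g1(x, a, c, s):
    return a * np.exp(-0.5 * ((x - c) / s)**2)

def g2(x, a1, c1, s1, a2, c2, s2):
    return g1(x, a1, c1, s1) + g1(x, a2, c2, s2)

def bic(y, yfit, n_par):
    chi2 = np.sum((y - yfit)**2)   # unweighted chi^2 (assumed)
    return len(y) * np.log(chi2) + n_par * np.log(len(y))

def select_model(x, y):
    p1, _ = curve_fit(g1, x, y, p0=[1.0, x[np.argmax(y)], 10.0])
    best2, best_bic2 = None, np.inf
    for dv in range(10, 200, 20):  # centroid-offset grid [channels]
        for sign in (-1.0, 1.0):
            p0 = list(p1) + [p1[0] - 0.1, p1[1] + sign * dv, p1[2] + 0.1]
            try:
                p2, _ = curve_fit(g2, x, y, p0=p0, maxfev=5000)
            except RuntimeError:
                continue
            b = bic(y, g2(x, *p2), 6)
            if b < best_bic2:
                best2, best_bic2 = p2, b
    if best_bic2 < bic(y, g1(x, *p1), 3):
        return "two-component", best2
    return "one-component", p1
\end{verbatim}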
The results of the BIC comparisons for the ``local'' spectra in the test set are also shown in the bottom left panel of Figure \ref{cm}. The ``noise'' class in this case is defined by any spectrum below SNR=4.0. For this traditional model selection approach, the classification accuracies for the one-component, noise-only, and two-component classes are $98.86 \pm 0.11\%$, $97.31 \pm 0.15\%$, and $87.18 \pm 0.31\%$, respectively. These accuracies are similar to those from the CNN local-only classifications, with slightly lower accuracies for the noise-only and two-component classes, but a slight increase in accuracy for the one-component class.
We also repeat the traditional line fitting and model selection method using the ``global'' spectrum for each test sample. The lower right panel in Figure \ref{cm} shows the accuracy for those classifications. The classification accuracies in this case are $99.63 \pm 0.07\%$, $97.28 \pm 0.15\%$, and $98.65 \pm 0.11\%$ for the one-component, noise-only, and two-component classes, respectively. The higher SNR of the ``global'' spectra is likely contributing to this accuracy improvement.
\begin{figure}[htb]
\epsscale{1.0}
\plottwo{CNN_acc_snr.pdf}{CNN_acc_vlsr.pdf}
\caption{Model classification accuracy versus SNR (left) and centroid velocity separation (right) for two-component samples in the synthetic test set. Each data point represents the classification accuracy for samples within a bin centered on the data point's x-axis position. The classifications for the traditional $\chi^2$-minimization methods on the ``local'' and ``global'' spectra are shown in blue and orange, respectively. The color of the CNN Local+Global Ensemble data points (outlined in black) show the amount of test set samples within each bin. The centroid velocity separation calculation assumes each spectral channel is separated by $\sim0.07$ km s$^{-1}$.}
\label{acc}
\end{figure}
As an additional comparison between the ensemble CNN and $\chi^2$-minimization predictions, we also show in Figure \ref{acc} each method's classification accuracy for the two-component samples in the synthetic test set versus SNR and centroid velocity separation ($\Delta V_{LSR}$). Figure \ref{acc} shows that the classification accuracy for the $\chi^2$-global and ensemble CNN methods is stable between an SNR range of 4-20, with variations less than a few percent. In contrast, the $\chi^2$-local method has a severe drop-off in accuracy for the lowest SNR bin. Since the $\chi^2$-global and ensemble CNN methods incorporate the higher SNR global spectra into their classifications, they are less affected by the lower SNRs of the local spectra.
In terms of centroid velocity separation, all three methods show a significant drop-off in classification accuracy below $\Delta V_{LSR} \sim 1$ km s$^{-1}$. This effect is due to many of the low velocity separation two-component samples being indistinguishable in appearance from one-component class members. At $\Delta V_{LSR} > 1$ km s$^{-1}$, the components are distinct and easy to identify, causing accuracies to be near $100\%$ for all three methods. We also see that the $\chi^2$-local method's accuracy begins to decrease at a higher velocity separation ($\Delta V_{LSR} \sim 2$ km s$^{-1}$) than the other two methods. This behavior is once again likely related to the lower SNR of the local spectrum, which makes classifying close velocity components more difficult.
We also note that using solely an averaged spectrum (e.g., the global spectrum) to make classifications is not common for other traditional line-fitting methods \citep{Henshaw_2016, Sokolov_2017, Clarke_2018}. Typically, a fit to an averaged spectrum is used to set the initial parameter guesses for a second fit to an individual spectrum. For this reason, all further comparisons will be between the $\chi^2$-local and the ensemble CNN methods.
\subsection{Testing On Real Observations}
\subsubsection{L1689 - $\mathit{^{13}}$CO ($\mathit{1-0}$)}
Although the ensemble CNN has demonstrated high classification accuracy on synthetic data, it is only useful if it can accurately classify real emission-line spectra. To test the model's performance on real observations, we have collected a $^{13}$CO ($1-0$) spectral cube from L1689 in the Ophiuchus molecular cloud observed by the COMPLETE survey \citep{Ridge_2006}. This cube provides an excellent test for the ensemble CNN since it displays spectra that belong in all three of the classes in our training set. For this test, we implicitly assume that the line emission observed is optically thin everywhere. The use of CLOVER or traditional two-component line-fitting techniques on data with self-absorbed single-component lines will likely result in erroneous conclusions about the nature of the emission. Nevertheless, even if the observed emission is optically thick and self-absorbed, it still provides an adequate test set since self-absorption features mimic the appearance of optically thin emission with two velocity components along the line of sight.
Figure \ref{multi_comps_segmentation} shows the output predictions after a sliding window of size 3$\times$3 pixels has been moved across the position-position plane of the cube, the ``local'' and ``global'' spectra are extracted, and fed into the ensemble CNN. Gray pixels in Figure \ref{multi_comps_segmentation} denote those that were predicted to be in the noise class, while black represents pixels predicted to be in the one-component class and white shows those predicted to be two-component class members. The red-lettered panels in Figure \ref{multi_comps_segmentation} show the ``global'' spectra at different locations on the data cube. Since we have no a priori knowledge of the physical processes that created any apparent two-component features in these real spectra, we must rely only on the appearance of the spectra when determining the success of the CNN's predictions. Nevertheless, comparing the spectra highlighted in Figure \ref{multi_comps_segmentation} to the ensemble CNN prediction map reveals that the CNN can distinguish the spectral differences between each class. Even two-component spectra with closely separated velocity components are correctly identified by the model (see, e.g., spectrum C in Figure \ref{multi_comps_segmentation}).
\begin{figure}[htb]
\epsscale{0.75}
\plotone{Oph_predictions_snr4_2.pdf}
\caption{Left panels: example segmentations of a $^{13}$CO ($1-0$) spectral cube observation of L1689 into three classes: single velocity component spectrum (black), multiple velocity component spectrum (white), and noise (grey) using CLOVER's ensemble CNN (top) and traditional $\chi^{2}$-minimization model fitting (bottom). Right panels: The ``global'' view spectra extracted from the observed spectral cube at the positions of the 3$\times$3 pixel windows overlaid onto the left panels. Red letters denote positions where CLOVER and the $\chi^{2}$ technique agree in their class predictions, while the green numbers show positions where they disagree. The text in the upper right corner of each panel shows the class predicted by CLOVER and the $\chi^{2}$ technique for that spectrum, where 2=two-component, 1=one-component, and 0=noise.}
\label{multi_comps_segmentation}
\end{figure}
Figure \ref{multi_comps_segmentation} also displays the class predictions of each pixel obtained from the traditional BIC model selection method using the ``local'' spectrum. For this method, any pixel with SNR $< 4$ is deemed noise. The red-lettered panels in Figure \ref{multi_comps_segmentation} show locations where the traditional model selection method agrees with the ensemble CNN predictions. As can be seen, these tend to be high SNR spectra where the class of the object is obvious. The green-numbered panels in Figure \ref{multi_comps_segmentation} show spectra from locations where the two methods disagree in their class predictions. These disagreeing cases reveal clear examples of the $\chi^{2}$-minimization method underfitting (labeled panels 1 and 2 in Figure \ref{multi_comps_segmentation}) and overfitting (labeled panels 3 and 4 in Figure \ref{multi_comps_segmentation}) the spectra. Visual inspection of the individual best-fit models for the $\chi^{2}$-minimization approach at those locations reveals that they are not cases in which the method fails to provide good fits to each spectrum, but rather they are failures of the BIC model selection technique. Conversely, the ensemble CNN is able to identify weak two-component features that are deemed to be one-component by the $\chi^{2}$-minimization approach (labeled panels 1 and 2 in Figure \ref{multi_comps_segmentation}), but is also resilient against predicting the two-component class when it is not warranted (labeled panels 3 and 4 in Figure \ref{multi_comps_segmentation}). These examples serve as evidence for the advantage that the ensemble CNN can provide for identifying multiple velocity component spectra.
Moreover, the ensemble CNN predictions on the entire spectral cube, which has dimensions of 118$\times$106 pixels (i.e., $\sim$ 12,500 individual predictions), take only 137 seconds ($\sim 23$ seconds for each of the six CNN predictions in the ensemble) on a single core of a 2.8 GHz Intel Core i7 CPU. Although CLOVER's prediction speed could improve by utilizing multiple cores on a single CPU or GPU, the low computing power required for CLOVER to obtain fast performance is a marked advantage over traditional methods (see Section 5.1 for a comparison involving both classification and parameter predictions). The number of pixels in typical spectral cubes is also growing with the advent of focal plane arrays that quickly map large areas of the sky \citep[e.g.,][]{Morgan_2008, Sieth_2014} and interferometers (e.g., ALMA, ngVLA, etc.) that map at high spatial resolutions. As such, the quick prediction speeds provided by CLOVER make it an attractive tool for the next generation of large-scale spectroscopic surveys of star-forming regions.
\subsubsection{DR21 - C$\mathit{^{18}}$O ($\mathit{3-2}$)}
Although $^{13}$CO ($1-0$) emission is a common tracer of molecular gas in star-forming regions, it can become optically thick in some environments. The high opacity emission sometimes leads to self-absorption dips, which can mimic the double-peaked structure of optically-thin two-component spectra \citep[e.g.,][]{Lee_1999, Sohn_2007, Schnee_2013, Keown_2016}. To ensure that self-absorption is not affecting the CNN predictions, we also test the CNN on a C$^{18}$O ($3-2$) spectral cube of DR21 in the Cygnus X star-forming region. Since C$^{18}$O is a much rarer isotopologue than $^{13}$CO, its emission is almost always optically thin and rarely suffers from self-absorption dips.
We advise users of CLOVER to determine whether or not the emission they are inputting into the algorithm is optically thin or thick. Since CLOVER makes its predictions under the assumption that the emission is optically thin, any significantly self-absorbed optically thick spectrum it receives as input will most likely be classified in the two-component class.
\begin{figure}[htb]
\epsscale{0.8}
\plotone{CygX_predictions_snr4_2.pdf}
\caption{Same as Figure \ref{multi_comps_segmentation}, but for the C$^{18}$O ($3-2$) observations of DR21.}
\label{multi_comps_segmentation_DR21}
\end{figure}
\begin{figure}[htb]
\epsscale{0.8}
\plotone{B18_predictions_snr4_clip2.pdf}
\caption{Same as Figure \ref{multi_comps_segmentation}, but for the HC$_5$N ($9-8$) observations of B18.}
\label{multi_comps_segmentation_B18}
\end{figure}
Figure \ref{multi_comps_segmentation_DR21} shows the results of both the ensemble CNN and $\chi^{2}$-minimization model selection technique on the C$^{18}$O ($3-2$) spectral cube. Once again, we see that the ensemble CNN and $\chi^{2}$-minimization methods show overall agreement between their predictions. As seen in the green numbered panels of Figure \ref{multi_comps_segmentation_DR21}, the ensemble CNN method is less susceptible to overfitting than the $\chi^{2}$-minimization method. For instance, the $\chi^{2}$-minimization method frequently classifies spectra that appear to have a single velocity component (or very subtle wings) as two-component.
In addition to showing the differences between one- and two-component class predictions for the ensemble CNN and $\chi^{2}$-minimization approach, Figure \ref{multi_comps_segmentation_DR21} also highlights the advantages of the CNN's noise class predictions over simple SNR thresholds. For example, the $\chi^{2}$-minimization approach's SNR cutoff leads to many islands of one-component members that should instead be classified as noise (see, e.g., green spectrum 4 in Figure \ref{multi_comps_segmentation_DR21}). Conversely, the ensemble CNN segmentation is much smoother, showing a clear distinction between the core of signal at the center of the cube and noise at the edges. A similar distinction between the noise and signal can be seen in the L1689 CNN segmentation shown in Figure \ref{multi_comps_segmentation}. This behavior provides further evidence of the advantages gained by incorporating CNNs into the line-fitting procedure.
\subsubsection{B18 - HC$\mathit{_5}$N ($\mathit{9-8}$)}
As an additional comparison between the ensemble CNN and $\chi^{2}$-minimization approaches, we also test their performance on a HC$_5$N ($9-8$) spectral cube from B18 in the Taurus star-forming region observed by the Green Bank Ammonia Survey \citep{Friesen_2017}. B18 is a much more quiescent region than L1689 and DR21, which means its emission tends to have only a single velocity component. HC$_5$N ($9-8$) is also an optically thin transition, ensuring that self-absorption is not affecting the spectra. Thus, this cube provides a test to see how robust the ensemble CNN is for cubes that lack two-component class members.
Figure \ref{multi_comps_segmentation_B18} displays the segmentation results for the ensemble CNN and $\chi^{2}$-minimization approach applied to the B18 cube. Overall, the two methods are in good agreement. Both correctly identify that only one-component and noise-only spectra are within the cube. As in the other test regions, we once again see that the ensemble CNN noise segmentation for B18 is superior to the SNR threshold of the $\chi^{2}$-minimization approach, since there is a clear distinction between the noise and signal in the former.
\subsection{Testing on Three-Component Spectra}
One important assumption of CLOVER's ensemble CNN classifications is that the input spectra belong to one of the three classes the networks were trained to predict (one-component, two-component, or noise-only). In real observations, however, three or more velocity components may be present in a single spectrum \citep[e.g.,][]{Sokolov_2017, Clarke_2018, Chen_2019}. As a simple test to see how CLOVER would classify spectra with three velocity components, we generate an additional synthetic test set of 30,000 three-component spectra. The first and second velocity components for each sample in the test set were generated using the same steps described in Section 2.1 for generating the two-component samples. A third velocity component was introduced by injecting a third Gaussian spectrum into each sample. The velocity dispersion and centroid for this third Gaussian were randomly drawn from uniform distributions with the following limits: $2$ channels $\leq \sigma \leq 11$ channels, and $-0.55$ km s$^{-1} \leq$ $V_{LSR} \leq 0.55$ km s$^{-1}$ (where the spectral axis has been normalized between $-1$ km s$^{-1}$ and 1 km s$^{-1}$). $T_{peak}$ for the third Gaussian was drawn randomly from a uniform distribution between $2\times$RMS and 1 K, where RMS is the noise level selected for the cube. The RMS level for each sample was set as described in Section 2.1.
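For concreteness, the injection step can be sketched in a few lines of Python. Here, \texttt{spec} and \texttt{rms} stand in for a two-component sample and its noise level from the Section 2.1 machinery, which is not reproduced; the function name is ours.
\begin{verbatim}
import numpy as np

def add_third_component(spec, rms, n_chan=500,
                        rng=np.random):
    # Normalized velocity axis from -1 to 1 km/s.
    v = np.linspace(-1.0, 1.0, n_chan)
    dv = v[1] - v[0]  # km/s per channel
    # Dispersion drawn in channels, converted to km/s.
    sigma = rng.uniform(2.0, 11.0) * dv
    # Centroid and peak intensity of the third Gaussian.
    v0 = rng.uniform(-0.55, 0.55)
    t_peak = rng.uniform(2.0 * rms, 1.0)
    return spec + t_peak * np.exp(
        -0.5 * ((v - v0) / sigma) ** 2)
\end{verbatim}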
When predicting the class of the 30,000 three-component spectra, CLOVER assigns the two-component class to 29,827 ($\sim 99\%$) and the one-component class to only 173. Thus, CLOVER's two-component classifications can be thought of as a ``multi-component'' class. If presented with a sample containing more than two velocity components, the current implementation of CLOVER will likely place that sample in its two-component class.
\section{Deriving Kinematics From Two-Component Spectra}
The quick and accurate classifications provided by CLOVER can be used to improve kinematics measurements in one of two ways: 1) as a preprocessing step that predicts the class of each pixel, after which a traditional line fitting method finds the best-fitting parameters for that model; or 2) as a preprocessing step feeding a second neural network that predicts the centroid velocity and line width directly from the spectra of the pixels identified as two-component class members. In this section, we demonstrate the latter case: deriving kinematics directly from spectra.
\cite{Fabbro_2018} showed that CNNs perform similarly to traditional least-squares template fitting for deriving stellar parameters from APOGEE spectra. More importantly, the \cite{Fabbro_2018} CNN made stellar parameter predictions significantly faster than least-squares template fitting, highlighting the advantages gained by utilizing neural network architectures. With those results in mind, it is likely that neural networks can perform similarly to the traditional least-squares model fitting commonly used to derive kinematics from emission-line spectral data of star-forming regions.
Using a neural network architecture similar to that described in Section 3 for CLOVER's spectral classification, we trained an additional network to use the local and global spectra for a two-component class member (i.e., a pixel predicted to be two-component by the ensemble CNN) to predict the velocity centroid, dispersion, and peak intensity of each component. There are two main changes to the architecture of this network from the spectral classification network: 1) Instead of classes, the training set labels are now the velocity centroid, dispersion, and peak intensity of each component (in the most general case, both the inputs and outputs of a machine learning problem can be multidimensional; here, we have a multidimensional-output regression). The training set labels are a six-number array: the centroid of the lower-velocity component, the centroid of the higher-velocity component, the dispersion of the lower-velocity component, the dispersion of the higher-velocity component, the peak intensity of the lower-velocity component, and the peak intensity of the higher-velocity component. This setup ensures that the network always predicts the labels in the same order so that no label switching occurs. 2) The output layer consists of six output neurons (one for each label) with linear activation functions that predict continuous values rather than the probability of each class. The centroid velocity labels are normalized between $-1$ and 1, with $-1$ being the left (lowest-velocity) edge of the spectrum and 1 being the right (highest-velocity) edge of the spectrum. The velocity dispersion labels are represented in units of spectral channels.
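For readers unfamiliar with multidimensional-output regression, a minimal Keras sketch of such a network is given below. The convolutional and dense layer sizes shown are illustrative placeholders, not the actual architecture of Section 3.
\begin{verbatim}
from keras.layers import (Input, Conv1D, MaxPooling1D,
                          Flatten, Dense, concatenate)
from keras.models import Model

def build_regression_cnn(n_chan=500):
    inputs, branches = [], []
    for _ in range(2):  # "local" and "global" branches
        x_in = Input(shape=(n_chan, 1))
        x = Conv1D(16, 3, activation='relu')(x_in)
        x = MaxPooling1D(2)(x)
        x = Flatten()(x)
        inputs.append(x_in)
        branches.append(x)
    merged = concatenate(branches)
    hidden = Dense(128, activation='relu')(merged)
    # Six linear outputs: V1, V2, W1, W2, T1, T2,
    # with component 1 always the lower-velocity one.
    out = Dense(6, activation='linear')(hidden)
    model = Model(inputs=inputs, outputs=out)
    model.compile(optimizer='adam',
                  loss='mean_absolute_error')
    return model
\end{verbatim}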
The training set for this regression network included 300,000 two-component spectra generated using the same method discussed in Section 2.1. A validation set of an additional 90,000 spectra was also used to monitor the network's performance during training in order to apply early-stopping. After training, a test set of 30,000 additional two-component samples was generated and used to gauge the network's performance. The top row of Figure \ref{multi_comps_regression} displays the regression accuracy of the CNN predictions for the test set. The model's predictions are accurate to a mean absolute error (MAE = $\frac{1}{n}\sum_{t=1}^{n}|e_t|$, where $e_t$ is the error in the prediction of sample $t$) of $\sim 0.01$ for centroid velocity, $\sim 0.35$ for velocity dispersion, and $\sim 0.06$ for peak intensity.
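In code, this metric is simply (with \texttt{pred} and \texttt{truth} being $N \times 6$ arrays of predicted and ground-truth labels):
\begin{verbatim}
import numpy as np

def mae(pred, truth):
    # Mean absolute error per parameter,
    # averaged over all test samples.
    return np.mean(np.abs(pred - truth), axis=0)
\end{verbatim}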
\begin{figure}[htb]
\epsscale{0.95}
\plotone{CNN_param_predictions_mae.pdf}
\plotone{chi_param_predictions_mae.pdf}
\plotone{chi_CNN_param_predictions_mae.pdf}
\caption{Velocity centroid (two left columns), dispersion (middle two columns), and peak intensity (two right columns) predictions by CLOVER's trained regression CNN (top row), $\chi^2$-minimization grid search method (middle row), and $\chi^2$-minimization method with CNN initial guesses (bottom row) versus the ``ground-truth'' for the low-velocity component (V1, W1, T1) and high-velocity component (V2, W2, T2) for the 30,000 two-component spectra in the synthetic test set. The dashed lines show a one-to-one correspondence. In all panels, the centroid velocities are normalized between $-1$ and 1. The velocity dispersion units are the number of channels in the spectrum. The subtitle above each panel shows the mean absolute error for that parameter.}
\label{multi_comps_regression}
\end{figure}
\clearpage
\begin{figure}[htb]
\epsscale{1.0}
\plottwo{reg_tpeak_gauss2.pdf}{reg_tpeak_gauss6.pdf}
\caption{Example predictions by CLOVER on previously unseen ``local-view'' spectra from the synthetic test set. The black dots/bars show the positions of the ``ground-truth'' velocity centroids, peak intensity, and velocity dispersions used to generate the synthetic sample. For comparison, the orange dots/bars show CLOVER's parameter predictions. The dashed black line shows the ground-truth model used to generate the synthetic sample, while the orange solid line shows the corresponding two-component model generated using CLOVER's parameter predictions.}
\label{test_reg}
\end{figure}
Figure \ref{test_reg} shows the trained network's predictions for two samples in the test set. The model can accurately predict the kinematics not only of components that have large velocity separations (e.g., left panel of Figure \ref{test_reg}), but also of those that are blended together (e.g., right panel of Figure \ref{test_reg}).
The middle row in Figure \ref{multi_comps_regression} shows the performance of the $\chi^2$-minimization method's best-fit two-component model for every sample's local spectrum in the test set. Using this method, the mean absolute errors increase to $\sim 0.015$ for centroid velocity, $\sim 0.9$ for velocity dispersion, and $\sim 0.07$ for peak intensity, with a significant number of poor fits, as shown by the abundance of outliers in each panel. Disregarding the outliers, the spread of the $\chi^2$-minimization predictions about the one-to-one line is similar to that of the CNN predictions. The lack of outliers in the CNN predictions, however, suggests that the CNN is more resilient against fitting noise and/or falling into local minima than the $\chi^2$-minimization method.
Figure \ref{mae} shows the mean absolute error for each predicted parameter in bins of SNR and centroid velocity offset for the $\chi^2$-minimization method and CNN predictions. It is clear from Figure \ref{mae} that the $\chi^2$-minimization method's outliers are caused by samples with low SNR and low centroid velocity offsets, which show higher values of MAE compared to samples with higher SNR and larger centroid velocity offsets. Although the MAE values for the CNN predictions are more stable than those of the $\chi^2$-minimization method, the CNN still suffers from a moderate increase in MAE for samples with low SNR and low centroid velocity offsets.
\begin{figure}[htb]
\epsscale{0.9}
\plottwo{CNN_reg_mae_snr_V.pdf}{CNN_reg_mae_vlsr_V.pdf}
\plottwo{CNN_reg_mae_snr_W.pdf}{CNN_reg_mae_vlsr_W.pdf}
\centering
\plottwo{CNN_reg_mae_snr_T.pdf}{CNN_reg_mae_vlsr_T.pdf}
\caption{Mean absolute error (MAE) versus SNR (left column) and centroid velocity separation (right column) for centroid velocity (top row), velocity dispersion (middle row), and peak intensity (bottom row) predictions. Each data point represents the MAE for test set samples within a bin centered on the data point's x-axis position, averaged over both velocity components for each parameter (e.g., the average MAE for both V1 and V2 from Figure \ref{multi_comps_regression}). The results for the traditional $\chi^2$-minimization method are shown both for the grid-search initial guess technique (blue, see Section 4.2) and for fits using CLOVER's regression CNN predictions as initial guesses (orange, see Section 5). The color of the regression CNN Local+Global data points (outlined in black) shows the number of test set samples within each bin. The centroid velocity separation calculation uses the spectral axis after normalization between $-1$ km s$^{-1}$ and 1 km s$^{-1}$.}
\label{mae}
\end{figure}
\clearpage
Since the $\chi^2$-minimization method is susceptible to falling into local minima for samples with low SNR and low centroid velocity offsets, its performance can be improved by making better initial guesses and providing constraints on the parameter space explored. One way to set these initial guesses is by using the CNN predictions, which are more resilient to low SNR and low centroid velocity offsets. To demonstrate this use-case, we perform a second round of fitting on the test set using the $\chi^2$-minimization method. Instead of using the initial guess grid-search method described in Section 4.2, the CNN predictions are used as the initial parameter guesses. We also constrain the parameter space explored by the $\chi^2$-minimization method to be within the scatter of the CNN predictions, which also helps prevent fitting noise or falling into local minima. In the bottom row of Figure \ref{multi_comps_regression}, we show the results using this combined CNN and $\chi^2$-minimization method. Using the CNN parameter constraints, the $\chi^2$-minimization method no longer falls into local minima. The mean absolute errors improve to $\sim 0.003$ for centroid velocity, $\sim 0.5$ for velocity dispersion, and $\sim 0.04$ for peak intensity. Moreover, Figure \ref{mae} also shows that using the CNN, rather than the grid-search technique, to set initial guesses for the $\chi^2$-minimization method reduces the MAE for samples with low SNR and low centroid velocity offsets. As such, CLOVER provides a convenient way to improve existing line fitting pipelines that require initial guesses for centroid velocity, dispersion, and peak intensity.
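A sketch of this CNN-seeded fit is shown below. The per-parameter bound half-widths (\texttt{delta}) stand in for the CNN scatter measured on the synthetic test set; the exact values we adopt are not reproduced here, and the function names are ours.
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def two_gauss(v, v1, v2, w1, w2, t1, t2):
    return (t1 * np.exp(-0.5 * ((v - v1) / w1) ** 2) +
            t2 * np.exp(-0.5 * ((v - v2) / w2) ** 2))

def fit_with_cnn_seed(v, spec, cnn_params, delta):
    # cnn_params = [V1, V2, W1, W2, T1, T2] from the
    # regression CNN; delta holds per-parameter bound
    # half-widths (placeholder for the CNN scatter).
    p0 = np.asarray(cnn_params, dtype=float)
    d = np.asarray(delta, dtype=float)
    popt, pcov = curve_fit(two_gauss, v, spec, p0=p0,
                           bounds=(p0 - d, p0 + d))
    return popt, pcov
\end{verbatim}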
In addition, Figures \ref{multi_comps_regression} and \ref{mae} show that the CNN+$\chi^2$-minimization approach has better performance than the CNN alone for both centroid velocity and peak intensity predictions. Conversely, the accuracies of the velocity dispersion predictions for both $\chi^2$ methods are lower than those of the CNN. The middle left panel of Figure \ref{mae}, however, shows that the velocity dispersion prediction accuracy of all three methods converges at high SNR. The poorer performance of the $\chi^2$ methods is limited to the low SNR spectra, which make up the majority of the test set samples. The better performance of the CNN is likely a result of its use of the higher SNR ``global'' spectrum, which can provide better constraints on a given sample's velocity dispersions than the ``local'' spectrum alone.
\subsection{Testing CLOVER's regression CNN on real data}
To test the performance of CLOVER's regression CNN on real observations, we use the L1689 $^{13}$CO ($3-2$) spectra that were predicted to belong to the two-component class by CLOVER's classification CNN presented in Section 4. Figure \ref{multi_comps_regression2} shows example predictions by the regression CNN on six of the L1689 spectra. CLOVER is able to predict accurately the kinematics of blended two-component spectra with both broad and narrow components. Figure \ref{multi_comps_regression2} also shows the kinematics and best-fit model determined by fitting a two-component Gaussian model to the data using the traditional $\chi^{2}$-minimization approach described in Section 4.2. Overall, the predictions of the two methods are in good agreement across all of these test examples. In most cases, however, the $\chi^{2}$-minimization approach provides a visually better fit to the data. These better fits are likely related to the fact that the $\chi^{2}$-minimization approach uses only the local spectrum for fitting while CLOVER uses both the local and global spectra. Since CLOVER also uses the global spectrum for its predictions, there is some bias in its predictions when viewed on the local spectrum. Nevertheless, using the global spectrum allows CLOVER's predictions to have lower variance and suffer less from falling into the local minima that plague the $\chi^{2}$-minimization approach (see, e.g., the discussion earlier in this section).
Moreover, the design of the $\chi^{2}$-minimization approach provides an advantage over the CNN for producing visually optimal fits. The $\chi^{2}$-minimization approach is based on minimizing the residual between the spectrum and model. When provided adequate initial model parameter guesses, the $\chi^{2}$-minimization approach will oftentimes produce a visually optimal fit. Conversely, the CNN is attempting to ``guess'' the Gaussian model parameters based on similar examples it has seen during training. The CNN knows nothing about the residual of the model it generates for the six parameters it predicts. Rather, the CNN knows only that the new spectrum it receives has activated similar artificial neurons as examples it saw during training. This approach is very different from residual minimization and does not always lead to the best possible visual fit. The CNN approach can, however, get very close to the best possible visual fit (as shown in Figure \ref{multi_comps_regression2}). To obtain the most robust parameter predictions possible, we therefore recommend users use CLOVER's regression predictions as initial guesses for a $\chi^{2}$-minimization approach.
Figure \ref{multi_comps_sig} shows the result of running both CLOVER's classification and regression CNNs (i.e., the complete CLOVER method) on the full L1689 data cube. In each panel, colored pixels represent those that were designated ``two-component'' class members by CLOVER's classification CNN. The top row displays the predicted centroids for each pixel, the middle row shows the predicted dispersion, and the bottom row shows the predicted peak intensity. The maps suggest a stronger gradient in centroid velocity for the lower-velocity component than for the higher-velocity component, which is typically at $V_{LSR} > 4.0$ km s$^{-1}$. The lower-velocity component also tends to have smaller velocity dispersion, especially on the eastern side of the cloud.
In addition, the full CLOVER classifications and parameter predictions take only $\sim154$ seconds on a single core of a 2.8 GHz Intel Core i7 CPU. This is over an order of magnitude faster than the 3,209 seconds required on the same CPU by the $\chi^{2}$-minimization approach, which provides classifications and parameter predictions simultaneously.
\begin{figure}[htb]
\epsscale{1.1}
\plottwo{Oph_reg_pred11.pdf}{Oph_reg_pred67.pdf}
\plottwo{Oph_reg_pred14.pdf}{Oph_reg_pred43_yes.pdf}
\centering
\plottwo{Oph_reg_pred48_yes.pdf}{Oph_reg_pred45.pdf}
\caption{Velocity centroid (orange dots) and dispersion (orange bars) predictions by CLOVER versus the centroids and dispersions obtained from a two-component Gaussian fit using $\chi^{2}$-minimization (black dots and bars) for six spectra observed in L1689. In all panels, the ``local'' view spectrum is shown in blue, while the best-fit two-component model from the $\chi^{2}$-minimization is displayed as a dotted black line. The orange solid line shows the corresponding two-component model generated using CLOVER's parameter predictions.}
\label{multi_comps_regression2}
\end{figure}
\clearpage
\begin{figure}[htb]
\plottwo{Oph_CNN_comp1.pdf}{Oph_CNN_comp2.pdf}
\plottwo{Oph_CNN_sig1.pdf}{Oph_CNN_sig2.pdf}
\centering
\plottwo{Oph_CNN_tpeak1.pdf}{Oph_CNN_tpeak2.pdf}
\caption{Velocity centroid (top row), velocity dispersion (middle row), and peak brightness temperature (bottom row) predicted by CLOVER's regression CNN for all pixels predicted to be ``two-component'' class members by CLOVER's classification CNN for L1689.}
\label{multi_comps_sig}
\end{figure}
\clearpage
\section{Classifying Spectra with Hyperfine Structure}
Hyperfine splitting is an additional mechanism that can cause an emission line to appear non-Gaussian. The emission from the NH$_3$ (1,1) transition, for instance, is split across 18 different velocity components \citep[see, e.g.,][]{Ho_1983}. For NH$_3$ (1,1), this splitting results in a central group of blended Gaussians with four satellite groups of blended Gaussians (two on each side). Such spectra would be problematic when making classifications with CLOVER as described in Section 4, since the network would undoubtedly select the two-component class for every spectrum due to the multiple hyperfine groups.
With ammonia being a popular tracer of modern large-scale molecular cloud mapping surveys \citep[e.g.,][]{Friesen_2017, Hogge_2018}, CLOVER would be much more useful to the star formation community if it were adaptable to transitions with hyperfine splitting. Since the relative frequency separations of the NH$_3$ (1,1) hyperfine lines are well-known, however, we can train a new CNN to distinguish between the transition's intrinsic frequency separations and an actual second velocity-component source along the line of sight. Here, we train such a CNN using synthetic NH$_3$ (1,1) spectra.
\subsection{Generating Synthetic NH$_3$ (1,1) Spectra}
Cubes of 3$\times$3 pixels for 300,000 training samples (100,000 in each of the three training classes) were generated using the \texttt{cold$\_$ammonia} model generator within the \texttt{pyspeckit} Python package \citep{Ginsburg_2011}. The generator creates NH$_3$ (1,1) emission models using the following input parameters, which were randomly selected from the listed distributions:
\begin{itemize}
\item $T_K$ (kinetic gas temperature): uniformly distributed from $8-25$ K
\item $V_{off}$ (centroid velocity offset from spectrum center): $-2.5$ to 2.5 km s$^{-1}$, which is equivalent to channels $465 - 534$ of the 1000 channel spectrum. Here, the spectral axis has been normalized so that each channel is separated by $\sim 0.07$ km s$^{-1}$.
\item log$N$ (logarithm of the NH$_3$ column density): uniformly distributed in log$_{10}$ space from $13-14.5$ log(cm$^{-2}$)
\item $\sigma_{nt}$ (non-thermal velocity dispersion): log-normally distributed in natural logarithm space with a 1-sigma range of $0.02-0.45$ km s$^{-1}$
\item $\sigma_{tot}$ (total velocity dispersion) = $\sqrt{\sigma_{nt}^2+0.08^2}$ km s$^{-1}$
\end{itemize}
These distributions aim to mimic those seen in real NH$_3$ (1,1) observations by the Green Bank Ammonia Survey \citep{Friesen_2017} and KEYSTONE (Keown et al. 2019, submitted). For the two-component class, two randomly chosen models were added together. The velocity centroid of the second velocity component was drawn with respect to the first component (i.e., the range of possible centroids for individual components in the two-component samples is $-5$ km s$^{-1}$ to $5$ km s$^{-1}$). We also ensure that the centroids of the two components are separated by at least $1.0 \times \sigma_{max}$.
As described in Section 2.1, each model generated represented the central pixel in the 3$\times$3 pixel grid. The outer pixels were filled by adding a perturbation to the central pixel model, drawing values from four normal distributions (one for each parameter) with mean of zero and standard deviations of 0.2 K, 0.1 km s$^{-1}$, 0.1 km s$^{-1}$, and 0.01 dex for $T_K$, $V_{off}$, $\sigma_{nt}$, and log$N$, respectively. Random noise with an RMS of 0.1 K was also added to the cubes, a noise level that is typical for recent ammonia mapping surveys of nearby star-forming regions \citep{Friesen_2017, Keown_2017}. For the noise class, only noise was added to an otherwise emission-free cube.
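The parameter draws can be sketched as follows. The translation of the log-normal 1$\sigma$ range into ln-space moments is one assumed realization of the stated distribution, and the call to \texttt{pyspeckit}'s \texttt{cold$\_$ammonia} generator that turns each parameter set into a model spectrum is omitted (see the \texttt{pyspeckit} documentation for its signature).
\begin{verbatim}
import numpy as np

def draw_nh3_params(rng=np.random):
    t_k = rng.uniform(8.0, 25.0)      # K
    v_off = rng.uniform(-2.5, 2.5)    # km/s
    log_n = rng.uniform(13.0, 14.5)   # log10(cm^-2)
    # Log-normal sigma_nt: moments chosen so that the
    # 1-sigma range spans 0.02-0.45 km/s (an assumed
    # realization of the stated distribution).
    mu = 0.5 * (np.log(0.02) + np.log(0.45))
    sd = 0.5 * (np.log(0.45) - np.log(0.02))
    sigma_nt = np.exp(rng.normal(mu, sd))
    # Add the thermal term in quadrature.
    sigma_tot = np.sqrt(sigma_nt**2 + 0.08**2)
    return t_k, v_off, log_n, sigma_tot
\end{verbatim}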
The final features for each sample are again the local (central pixel's spectrum) and global (averaged spectrum of all nine pixels) spectra, but in this case these have 1000 channels each to account for the hyperfine structure of the NH$_3$ (1,1) line that typically spreads over 500 channels. We note also that the narrow range of possible centroid velocities for the NH$_3$ (1,1) synthetic spectra creates a challenging training set, since the majority of the samples have blended velocity components. This choice was observationally motivated, however, since typical ammonia observations show few spectra with large centroid separations between each velocity component along the line of sight.
\subsection{Testing on Synthetic NH$_3$ (1,1) Spectra}
We adopt the same neural network architecture described in Section 3 to train the network. An additional synthetic validation set of 90,000 samples (30,000 in each class) was also used to monitor model performance during training to implement early-stopping. The trained CNN's performance on a separate synthetic test set of 30,000 additional samples (10,000 in each class) is shown in the top left panel of Figure \ref{NH3_cm}. The CNN prediction accuracy is $\sim 98\%$, $100\%$, and $\sim 92\%$, for the one-component, noise-only, and two-component classes, respectively. As shown in the right panel of Figure \ref{NH3_cm}, the prediction accuracy of ensemble averaging six independently trained CNNs improves to $\sim 99\%$, $100\%$, and $\sim 93\%$, for the one-component, noise-only, and two-component classes, respectively. For this reason, all further comparisons and analysis of the NH$_3$ data will use the ensemble CNN.
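Here, ``ensemble averaging'' means combining the per-class outputs of the independently trained networks before selecting the winning class. One simple realization (assuming Keras models that output per-class probabilities and take the local and global spectra as inputs) is:
\begin{verbatim}
import numpy as np

def ensemble_predict(models, x_local, x_global):
    # Average the per-class probabilities of the six
    # independently trained CNNs, then take the argmax.
    probs = np.mean([m.predict([x_local, x_global])
                     for m in models], axis=0)
    return np.argmax(probs, axis=1)
\end{verbatim}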
To compare the ensemble CNN performance to traditional line fitting methods, we use the $\chi^2$-minimization model selection approach to classify each spectrum in the test set using the same technique described in Section 4.2. The \texttt{cold$\_$ammonia} model generator within \texttt{pyspeckit} was used to generate one- and two-component models that were fit to the data using the $\chi^2$-minimization method. The initial guesses for $T_K$, $\sigma_{nt}$, and log$N$ were set at 14 K, 0.3 km s$^{-1}$, and 13.5 log(cm$^{-2}$). A grid of $V_{LSR}$ initial guesses was used, which included one guess centered on the peak intensity channel and increments of $\pm 0.4$, $\pm 1.3$, $\pm 2.2$, $\pm 3.1$, $\pm 4.0$, and $\pm 4.9$ km s$^{-1}$ offset from the peak intensity channel.
\begin{figure}[htb]
\plottwo{nh3_cm_3class.pdf}{nh3_ensemble_cm_3class.pdf}
\plottwo{chi_nh3_local_cm_3class.pdf}{chi_nh3_global_cm_3class.pdf}
\caption{Confusion matrices for a test set of 30,000 synthetic NH$_3$ (1,1) spectra (10,000 in each class) classified by a single CNN (top left), an averaged ensemble of six CNNs (top right), traditional $\chi^2$-minimization on the ``local'' view spectrum (bottom left), and traditional $\chi^2$-minimization on the ``global'' view spectrum (bottom right). The ``noise'' class for the $\chi^2$-minimization panels was selected based on a SNR threshold of 4.}
\label{NH3_cm}
\end{figure}
\clearpage
The bottom panels in Figure \ref{NH3_cm} show that the CNN performance is better than that of the $\chi^2$-minimization model selection approach on both the local (95$\%$, 94$\%$, and 85$\%$ for the one-component, noise-only, and two-component classes, respectively) and global (90$\%$, 94$\%$, and 92$\%$ for the one-component, noise-only, and two-component classes, respectively) spectra. These results indicate that the $\chi^2$-global approach is actually susceptible to overfitting the hyperfine spectra, tending to classify incorrectly the one-component samples as two-component at a slightly higher rate than the $\chi^2$-local method. In contrast, the ensemble CNN is more resilient to this overfitting while still incorporating the global spectrum as input.
\subsection{Testing on Real NH$_3$ (1,1) Observations}
To demonstrate that CLOVER can accurately predict the class of real ammonia spectra, we utilize two NH$_3$ (1,1) cubes observed by the KFPA Examinations of Young STellar Object Natal Environments (KEYSTONE) survey (PI: James Di Francesco; Keown et al. 2019, submitted). KEYSTONE used the 100m Green Bank Telescope to map ammonia emission across eleven of the nearest giant molecular clouds (0.9 kpc $< d <$ 3 kpc). Here, we use the KEYSTONE observations of two clouds: 1) M17, which has a core of emission (M17SW) with obvious multiple velocity components, and 2) MonR2, which is composed mainly of single velocity component spectra. To match the size of the spectra used to train CLOVER's CNNs, the ammonia cubes are clipped to 1000 channels along the spectral axis.
Following the method described in Section 4.3, predictions are made for each pixel using both CLOVER's ensemble CNN and the $\chi^2$-minimization technique. Figures \ref{M17_NH3} and \ref{MonR2_NH3} show the resulting segmentation maps for M17 and MonR2, respectively. Similar to the results of CLOVER's non-hyperfine classifications, we see clear cases where CLOVER's hyperfine classifications are more robust than the $\chi^2$-minimization technique across all three classes. In particular, CLOVER appears to provide better noise classifications and be more resilient to overfitting the spectra than the $\chi^2$-minimization technique.
There is also evidence that CLOVER is able to identify spectra with more than two components (e.g., three or more velocity components). For instance, labeled spectrum C in Figure \ref{M17_NH3} shows a location in M17 that clearly has three velocity components. Even without including three-component spectra in the training set, CLOVER is able to identify correctly that the spectrum has more than one velocity component.
To test robustly how CLOVER will classify three-component spectra that it receives as input, we perform a three-component classification test similar to the test described in Section 4.4. An additional test set of 3,000 synthetic three-component NH$_3$ (1,1) samples was created by injecting three synthetic components into each test cube, with models drawn at random from the distributions listed in Section 6.1. For these 3,000 synthetic three-component samples, CLOVER classifies 2,945 ($\sim 98\%$) as ``two-component'' and 55 ($\sim 2\%$) as ``one-component.'' This result suggests that CLOVER's two-component class can be thought of as ``multi-component'' (i.e., emission with more than one velocity component), which is similar to the result found for CLOVER's non-hyperfine classification discussed in Section 4.4.
\begin{figure}[htb]
\epsscale{0.85}
\plotone{M17_predictions5.pdf}
\caption{Left panels: segmentation of a NH$_3$ (1,1) spectral cube observation of M17 into three classes: single velocity component spectrum (black), multiple velocity component spectrum (white), and noise (grey) using CLOVER's CNN ensemble (top) and traditional $\chi^{2}$-minimization model fitting (bottom). Right panels: The ``global'' view spectra extracted from the observed spectral cube at the positions of the 3$\times$3 pixel windows overlaid onto the left panels. Red letters denote positions where CLOVER and the $\chi^{2}$ technique agree in their class predictions, while the green numbers show positions where they disagree. The spectra in all panels have been clipped around the central three hyperfine groups. The text in the upper right corner of each panel shows the class predicted by CLOVER and the $\chi^{2}$ technique for that spectrum, where 2=two-component, 1=one-component, and 0=noise.}
\label{M17_NH3}
\end{figure}
\begin{figure}[htb]
\epsscale{0.85}
\plotone{MonR2_predictions5.pdf}
\caption{Same as Figure \ref{M17_NH3}, but for MonR2.}
\label{MonR2_NH3}
\end{figure}
Furthermore, CLOVER is again remarkably faster at making classifications than the $\chi^2$-minimization technique. CLOVER's predictions for M17 and MonR2 take 82 seconds and 170 seconds, respectively, on a single CPU core. In comparison, the full $\chi^2$-minimization technique requires 3,918 seconds for M17 and 8,435 seconds for MonR2 with the computations run in parallel on eight CPU cores. Accounting for the eight-fold parallelization, this implies CLOVER's classifications are more than two orders of magnitude faster in total CPU time than the traditional method.
\section{Predicting NH$_3$ (1,1) Kinematics}
Hyperfine splitting also poses a challenge for predicting the kinematics of spectra when using CLOVER. The predictions from CLOVER's regression CNN discussed in Section 5, for example, become unreliable for transitions with hyperfine splitting since the emission is intrinsically split across multiple lines with distinct centroid velocities. To overcome this issue, we train an additional regression CNN to predict the velocity centroids, velocity dispersions, and peak intensities for NH$_3$ (1,1) spectra with two velocity components. A training set of 300,000 synthetic two-component NH$_3$ (1,1) spectra was generated as described in Section 6.1. The labels for the training set were a six-number array including the values of $V_{off}$, $\sigma_{tot}$, and the peak intensity ($T_{peak}$) for both of the velocity components.
The performance of the trained network on a test set of 30,000 additional synthetic spectra is shown in Figure \ref{NH3_pred_params}. The mean absolute errors for the test set are $\sim0.002$ for centroid velocity, $\sim0.6$ for velocity dispersion, and $\sim0.06$ for peak intensity. Since these MAEs have been calculated after normalizing the velocity centroids, dispersions, and peak intensities in the same way as those for the non-hyperfine regression CNN, the MAEs of the two models can be directly compared. Although the MAEs for velocity dispersion are smaller for the non-hyperfine regression CNN ($\sim0.4$), its centroid velocity and peak intensity MAEs are larger ($\sim0.01$ and $\sim0.064$). These differences are to be expected since the hyperfine training set was generated using slightly different parameter distributions than the non-hyperfine training set.
The horizontal flaring at large and small $T_{peak}$ values seen in Figure \ref{NH3_pred_params} also indicates that the hyperfine peak intensity predictions have a slight degeneracy at large and small values. This effect is also likely related to the way in which the hyperfine training set was generated. For example, the non-hyperfine training set generator ensured that the velocity components for two-component samples were separated by at least $1.5 \times \sigma_{max}$ (see Section 2.1). For the hyperfine training set, we instead chose the minimum centroid separation to be $1.0 \times \sigma_{max}$ to probe closer velocity component separations. This alteration leads to a slightly higher fraction of the hyperfine samples being indistinguishable from single velocity component spectra. A degeneracy in the peak intensity predictions for those samples is created because it becomes unclear which of the blended components is brighter.
Figure \ref{NH3_preds} displays CLOVER's centroid, dispersion, and peak intensity predictions overlaid onto the local spectra for six unseen samples included in the synthetic test set. In most cases, CLOVER's predictions are well-matched to the ground-truth values used to create the samples. Even for blended components (middle panels in Figure \ref{NH3_preds}) and those with shallow wings (top left panel in Figure \ref{NH3_preds}), CLOVER can accurately recover the underlying kinematics.
These tests prove that CNNs can be trained to not only classify spectra with hyperfine structure and multiple velocity components, but also predict with high accuracy the kinematics of the emitting gas. Moreover, this method can easily be adjusted to incorporate other molecular tracers of interest that exhibit hyperfine splitting (e.g., HCN, N$_2$H$^+$, etc.). Although the current implementation of CLOVER considers only the one- versus two-component classes of emission, the method could also be generalized to emission with three- or more velocity components.
\begin{figure}[htb]
\epsscale{1.0}
\plotone{NH3_params_preds2.pdf}
\caption{Velocity centroid (two left panels), dispersion (two middle panels), and peak intensity (two right panels) predictions by CLOVER's trained NH$_3$ (1,1) regression CNN versus the ``ground-truth'' for the low-velocity component (V1, W1, T1) and high-velocity component (V2, W2, T2) for the 30,000 two-component spectra in the synthetic test set. The dashed lines show a one-to-one correspondence. In all panels, the centroid velocities are normalized between $-1$ km s$^{-1}$ and 1 km s$^{-1}$. The velocity dispersion units are the number of channels in the spectrum. The subtitle above each panel shows the mean absolute error for that parameter.}
\label{NH3_pred_params}
\end{figure}
\begin{figure}[htb]
\epsscale{1.1}
\plottwo{NH3_preds2.pdf}{NH3_preds3.pdf}
\plottwo{NH3_preds4.pdf}{NH3_preds5.pdf}
\centering
\plottwo{NH3_preds6.pdf}{NH3_preds8.pdf}
\caption{Example predictions by CLOVER on previously unseen spectra from the hyperfine synthetic test set. The horizontal bars show the positions of each velocity component's centroid and dispersion for the ``ground-truth'' (black) and CLOVER predictions (orange) overlaid onto the ``local'' spectrum. The black and orange dots show the peak intensity for each velocity component for the ground-truth and CLOVER predictions, respectively. In all panels, the central three hyperfine groups are shown.}
\label{NH3_preds}
\end{figure}
\clearpage
\section{Improving Virial Analyses with CLOVER}
CLOVER's two-component spectral classifications and kinematics predictions can be used to improve existing analyses that neglect the presence of multiple velocity components. For instance, many virial stability analyses of star-forming structures rely on velocity dispersions measured from models that assume a single velocity component along the line of sight \citep[e.g.,][]{Keown_2017, Kirk_2017, Chen_2018, Kerr_2019}. When multiple velocity components are present along the line of sight, however, models assuming a single velocity component typically fit the observed spectrum with a much wider velocity dispersion than would be obtained by using a model with multiple velocity components. Since the virial parameter is proportional to $\sigma^2$, the wider velocity dispersions measured from one-component fits have a significant impact on the stability measurement of a given structure.
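For reference, the virial parameter of a structure with mass $M$, radius $R$, and total velocity dispersion $\sigma_{v}$ is conventionally defined as
\begin{equation}
\alpha_{vir} = \frac{5 \sigma_{v}^{2} R}{G M},
\end{equation}
so halving the measured velocity dispersion lowers $\alpha_{vir}$ by a factor of four.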
To demonstrate CLOVER's ability to improve virial analyses, we use two-component velocity dispersions measured by CLOVER to update the virial analysis of M17SW by Keown et al. (2019, submitted). The Keown et al. analysis used the KEYSTONE NH$_3$ (1,1) integrated intensity maps of M17 to identify dense gas clumps, which are shown as black contours in Figure \ref{M17_leaves}. Virial parameters were calculated by Keown et al. for each structure using a velocity dispersion map (top right panel of Figure \ref{M17_leaves}) measured from an ammonia model assuming a single velocity component along the line of sight. The velocity dispersion maps predicted by CLOVER for pixels identified as two-component in M17SW are shown in the bottom row of Figure \ref{M17_leaves}. Figure \ref{M17_leaves} clearly shows that the one-component fit produces larger velocity dispersions than the two velocity components identified by CLOVER.
Three of the clumps identified by Keown et al. fall on pixels identified as two-component by CLOVER. Here, we re-calculate the virial parameters for these three structures using the same mass, average temperature, and radius measured by Keown et al., but replace the average velocity dispersion with values measured from the CLOVER velocity dispersion maps. Although this approach neglects mass and/or size differences in the multiple structures along the line of sight, it serves as a test to see how much the two-component kinematics might affect their calculated virial parameters.
Following the method described in Keown et al., each structure's average velocity dispersion is calculated as the average of all pixels falling within its 2D mask shown in Figure \ref{M17_leaves}. The average is weighted by the integrated intensity map such that $\sigma_{v, avg} = w_1\sigma_1 + w_2\sigma_2 + \cdots + w_n\sigma_n$, where $w_n$ and $\sigma_n$ are the fraction of the source's integrated intensity and the value of the velocity dispersion, respectively, for pixel $n$. Since CLOVER predicts two velocity dispersions for every pixel (one for each velocity component), we calculate two virial parameters for each structure based on the weighted average velocity dispersion measured in each map. Figure \ref{alpha} compares these new virial parameters with the original values presented in Keown et al. (2019, submitted).
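As an illustration, this weighted average can be computed from the velocity dispersion map, the integrated intensity map, and a boolean source mask as follows (array names are placeholders):
\begin{verbatim}
import numpy as np

def weighted_avg_dispersion(sigma_map, mom0_map, mask):
    # Integrated-intensity-weighted mean dispersion
    # inside a 2D source mask: sum_n w_n * sigma_n with
    # w_n = I_n / sum(I) over the masked pixels.
    good = (mask & np.isfinite(sigma_map)
                 & np.isfinite(mom0_map))
    w = mom0_map[good] / mom0_map[good].sum()
    return np.sum(w * sigma_map[good])
\end{verbatim}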
As expected, the virial parameters using the CLOVER velocity dispersions are lower than the Keown et al. measurements. Specifically, the CLOVER-measured virial parameters are a factor of $1.5-8$ lower, depending on the structure and which velocity component map is used. The lowest mass structure also moves from the upper ``gravitationally unbound'' half of the plot to the lower ``gravitationally bound'' half when using the CLOVER measurements. Although only three structures are analyzed here, this example shows the usefulness of CLOVER for virial analyses that include structures with multiple velocity components along the line of sight.
\begin{figure}[htb]
\epsscale{1.2}
\plottwo{M17_mom0_reproj.pdf}{M17_sig_old.pdf}
\plottwo{M17_sig1_reproj.pdf}{M17_sig2_reproj.pdf}
\caption{Top left: NH$_3$ (1,1) integrated intensity map of the M17SW region observed by KEYSTONE with dendrogram-identified clumps overlaid as black contours (Keown et al. 2019, submitted). Top right: KEYSTONE velocity dispersion measurements from modeling the NH$_3$ (1,1) emission with one velocity component. Bottom row: Velocity dispersion measured by CLOVER for pixels classified as ``two-component.''}
\label{M17_leaves}
\end{figure}
\begin{figure}[htb]
\epsscale{0.7}
\plotone{CLOVER_alpha.pdf}
\caption{Virial parameter versus mass for the three dendrogram-identified clumps in Figure \ref{M17_leaves} falling on pixels classified as ``two-component'' by CLOVER. Blue shows the virial parameters derived by Keown et al. (2019, submitted) using the KEYSTONE velocity dispersions from a one-component fit to the NH$_3$ (1,1). Orange and green show the virial parameters derived when using the CLOVER velocity dispersion predictions for each identified velocity component (W1 and W2). Sources falling below the grey dashed line are gravitationally bound when neglecting magnetic fields and external pressure.}
\label{alpha}
\end{figure}
\section{Summary}
We present a new method for identifying emission line spectra with two velocity components by training an ensemble of convolutional neural networks (CNNs) using synthetic spectral cubes. The networks predict the class of $3\times3$ pixel windows, utilizing the spatial information of pixels adjacent to the central pixel to make a prediction. The trained network ensemble has classification accuracies of $99.92 \pm 0.02 \%$, $100\%$, and $96.72 \pm 0.18\%$ for one-component, noise-only, and two-component synthetic spectral windows. This performance is a significant improvement over traditional line fitting approaches that do not consider the spatial information in adjacent pixels. The ensemble CNN's high classification performance was also demonstrated on real spectral cubes, which revealed that the ensemble CNN is able to segment accurately real observations into each of the three training set classes. Moreover, the speed with which the ensemble CNN makes its classifications was measured to be over an order of magnitude faster than a traditional line fitting approach.
A regression CNN is also trained to extract kinematics directly from the spectra identified as two-component class members by the ensemble CNN classifications. We show that the regression CNN has high prediction accuracy for two-component spectra that exhibit large centroid velocity separations and those that are blended. The combination of the ensemble and regression CNNs provides a quick way to measure accurately kinematics from two-component spectra. Named Convnet Line-fitting Of Velocities in Emission-line Regions (CLOVER), this combination unlocks a new method to analyze large spectral cubes of emission lines from star-forming molecular clouds.
After testing CLOVER on observations of four different molecular emission lines from five distinct star-forming regions observed by three separate observatories, it is clear that its predictions can be generalized to many data sets. In particular, we show that the method can be applied to transitions with hyperfine splitting. The versatility and speed of CLOVER's predictions make it an attractive option for signal versus noise segmentation and line fitting for large-scale spectral mapping surveys. The higher accuracy kinematics measurements provided by CLOVER also make it a useful tool for improving virial stability analyses of star-forming structures. CLOVER is publicly available as a Python package called \texttt{astroclover} at \url{https://github.com/jakeown/astroclover/}.
\section*{Acknowledgments}
We would like to thank Stella Offner for helpful recommendations regarding CLOVER's network architecture and training set. We also thank the anonymous referee for their thoughtful comments that have undoubtedly improved our manuscript. JK, JDF, ER, and MCC acknowledge the financial support of a Discovery Grant from NSERC of Canada. The Green Bank Observatory is a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc. The James Clerk Maxwell Telescope is operated by the East Asian Observatory on behalf of The National Astronomical Observatory of Japan; Academia Sinica Institute of Astronomy and Astrophysics; the Korea Astronomy and Space Science Institute; Center for Astronomical Mega-Science (as well as the National Key R$\&$D Program of China with No. 2017YFA0402700). Additional funding support is provided by the Science and Technology Facilities Council of the United Kingdom and participating universities in the United Kingdom and Canada. This research made use of Astropy (\url{http://www.astropy.org}), a community-developed core Python package for Astronomy \citep{Astropy_2013}, pyspeckit (\url{https://pyspeckit.readthedocs.io/en/latest/}), a Python spectroscopic analysis and reduction toolkit \citep{Ginsburg_2011}, and Keras, a Python API for neural networks \citep{chollet2015keras}.
{\it Facility:} \facility{GBT}, \facility{JCMT}, \facility{FCRAO}
\section*{Appendix}
\begin{appendix}
\section{Installing and Using CLOVER}
CLOVER is publicly available for use as a Python package called \texttt{astroclover}. Here, we provide a brief description of the installation and usage instructions for the package.
Users must first ensure that they have Python 3 and all required packages installed. This can easily be done by installing the \texttt{Anaconda} Python package manager at \url{https://www.anaconda.com/distribution/}, which has many of \texttt{astroclover}'s package dependencies pre-installed. \texttt{Anaconda} version 4.6.11 or later is recommended for \texttt{astroclover}, but other \texttt{Anaconda} versions have not been tested.
Once \texttt{Anaconda} is installed, users can run the following commands in a Linux or Mac terminal to setup a new environment and install the remaining packages required for \texttt{astroclover}:
\begin{itemize}
\item conda create $-$n clover$\_$env python$=\!3.6$ anaconda
\item conda activate clover$\_$env
\item pip install tensorflow$==\!1.8.0$ keras$==\!2.2.0$ spectral$\_$cube
\end{itemize} The first two commands will set up an \texttt{Anaconda} virtual environment named \texttt{clover$\_$env}, which must be activated with the \texttt{conda activate clover$\_$env} command before running \texttt{astroclover}. The last command installs \texttt{tensorflow} version 1.8.0, \texttt{keras} version 2.2.0, and \texttt{spectral$\_$cube}, which are the CLOVER package dependencies not included by default in \texttt{Anaconda}. Although Python version 3.6 is recommended, \texttt{astroclover} has also been tested on Python 2.7 for users that wish to use Python version 2.
After successfully setting up the \texttt{Anaconda} environment, users can clone or download \texttt{astroclover} at \url{https://github.com/jakeown/astroclover}. This will create a new directory called \textit{astroclover} at the download location. Users must enter this directory and run the following command from their \texttt{Anaconda} environment:
\begin{itemize}
\item python download$\_$models.py
\end{itemize} which will download the trained convolutional neural networks that CLOVER uses from a remote directory into the user's local \textit{astroclover} directory. The 14 files are $\sim12$ GB in total.
Once the neural network files have been downloaded, users must ensure the spectral cube they input into CLOVER is formatted properly. CLOVER's predictions require FITS data cubes with position-position-spectral axes. A spectral axis of 500 channels is required for Gaussian emission lines and 1000 channels for NH$_3$ (1,1). If the cube a user inputs into CLOVER is smaller than those sizes, CLOVER will add random noise channels to each end of the spectral axis up to the required size. If the input cube's spectral axis is larger than the required input size, CLOVER will clip channels from each end of the spectral axis until the required size is obtained.
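A sketch of this padding/clipping behaviour is given below; the noise level used for the padded channels (\texttt{rms}) is left as an input here and stands in for whatever value is appropriate for the cube.
\begin{verbatim}
import numpy as np

def match_spectral_axis(cube, n_req, rms,
                        rng=np.random):
    # Pad a (v, y, x) cube with noise channels, or clip
    # channels from both ends, until the spectral axis
    # has n_req channels.
    while cube.shape[0] < n_req:
        pad_lo = rng.normal(0., rms,
                            (1,) + cube.shape[1:])
        pad_hi = rng.normal(0., rms,
                            (1,) + cube.shape[1:])
        cube = np.concatenate([pad_lo, cube, pad_hi],
                              axis=0)
    extra = cube.shape[0] - n_req
    lo, hi = extra // 2, extra - extra // 2
    return cube[lo:cube.shape[0] - hi]
\end{verbatim}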
It is also recommended that the centroids of the emission lines in an input cube be located within the central $\sim275$ channels for Gaussian emission lines and the central $\sim140$ channels for NH$_3$ (1,1). These bounds are set by the range of possible centroids used to train CLOVER. If a cube has large centroid velocity gradients that cause some of the emission lines to fall outside these bounds, it is recommended that users split their cube into sub-cubes so that all emission is within the aforementioned channel bounds.
To run CLOVER on a prepared data cube, simply use the \texttt{predict(f=your$\_$cube$\_$name.fits)} function in the \texttt{clover.py} script. If the cube is NH$_3$ (1,1), add \texttt{nh3=True} in the call to \texttt{predict()} (e.g., \texttt{predict(f$=$your$\_$nh3$\_$cube.fits, nh3$=$True)}). For example, if the user is predicting on a NH$_3$ (1,1) cube using an iPython session within the \textit{astroclover} directory, they would use the following commands:
\begin{itemize}
\item import clover
\item clover.predict(f$=$your$\_$nh3$\_$cube.fits, nh3$=$True)
\end{itemize}
The classification step uses an ensemble of six independently trained CNNs to make the final class prediction. These six predictions can be done in parallel by specifying the number of desired parallel processes. For example, to run all six predictions at once, use \texttt{predict(f$=$your$\_$nh3$\_$cube.fits, nproc=6)}.
CLOVER will output its classification map and parameter predictions as individual FITS files. In total, up to eight files are generated:
\begin{enumerate}
\item input$\_$name $+ `\_$clover.fits' - cube after the spectral axis has been corrected (not generated if input cube already has proper spectral length)
\item input$\_$name $+ `\_$class.fits' - predicted class of each pixel (2=two-component, 1=noise, 0=one-component)
\item input$\_$name $+ `\_$vlsr1.fits' - predicted centroid velocity of component with lowest centroid
\item input$\_$name $+ `\_$vlsr2.fits' - predicted centroid velocity of component with highest centroid
\item input$\_$name $+ `\_$sig1.fits' - predicted velocity dispersion of component with lowest centroid
\item input$\_$name $+ `\_$sig2.fits' - predicted velocity dispersion of component with highest centroid
\item input$\_$name $+ `\_$tpeak1.fits' - predicted peak intensity of component with lowest centroid
\item input$\_$name $+ `\_$tpeak2.fits' - predicted peak intensity of component with highest centroid
\end{enumerate} where input$\_$name is the name of the FITS file input into CLOVER.
Please refer to \url{https://github.com/jakeown/astroclover} for the most up-to-date install and usage instructions since new features may be developed in the future.
\end{appendix}
\bibliographystyle{apj}
Active Galactic Nuclei (AGNs) are tremendous energetic sources,
where vast amounts of energy are generated
by gravitational accretion around supermassive black hole.
The radiation at nearly all wavelengths enables us to detect AGNs in multiwavelength observations.
Hence, AGNs have been studied at various wavelengths.
Past studies show that their Spectral Energy Distributions (SEDs) are
roughly represented by a power-law (i.e., $f_\nu \propto \nu^{-\alpha}$),
whilst normal galaxies produce an SED that peaks at $\sim 1.6 \mu$m
as the composite blackbody spectra of the stellar population.
Because the colours of an object provide us with rough but essential information about its spectrum,
colours are important clues to identify AGNs from normal stars.
Colour selection is an efficient technique to distinguish AGNs from normal stars and
have played an important role to extract AGN candidates without spectral observation.
A classic method is known as the $UV$-excess \citep[$UVX$; ][]{Sandage1965-ApJ,Schmidt1983-ApJ,Boyle1990-MNRAS}.
The $UVX$ technique exploits the fact that quasars are relatively brighter than stars at shorter wavelength
and therefore occupy a bluer locus in a CCD with respect to stars.
In addition, many AGN candidates have been selected on the basis of colours in various wavelengths:
optical \citep{Richards2002-AJ},
optical and near-infrared \citep{Glikman2007-ApJ}, and
mid-infrared \citep{Lacy2004-ApJS,Stern2005-ApJ}.
These studies provide us with clues about the properties of AGNs.
Target selection of high redshift quasars has also been performed
using their colours, mainly in optical wavelengths
\citep[e.g., ][]{Fan2000-AJ,Fan2001-AJ,Fan2003-AJ}.
However, near-infrared properties are required
when we try to select targets such as higher redshift quasars,
since the shift of the Lyman break to longer wavelengths makes
observations difficult at optical wavelengths.
Therefore, near-infrared selection should be useful technique to extract high-redshift quasars.
In this paper, we present a study of the near-infrared colours of AGNs
and demonstrate, by both observed and simulated colours, that
the near-infrared colours can separate AGNs from normal stars.
Additionally, we predict near-infrared colour evolution based on a Monte-Carlo simulation.
In Sect. \ref{Data}, we introduce the catalogues of AGNs
that are used to investigate the observed colours.
We confirm the near-infrared properties of spectroscopically confirmed AGNs
on the basis of the near-infrared CCD and
redshift-colour relations in Sect. \ref{Properties}.
In Sect. \ref{Simulation}, we simulate the near-infrared colours
using the Hyperz code developed by \citet{Bolzonella2000-AA} and
demonstrate that AGNs reside in a distinct position in the near-infrared CCD.
In Sect. \ref{Discussion}, we consider the other probable objects
which are expected to have near-infrared colours similar to those of AGNs.
\begin{figure*}[tbp]
\begin{center}
\begin{tabular}{cc}
(a) & (b) \\
\resizebox{50mm}{!}{\includegraphics[]{figure/qso_agn12-ccd-0z1.eps}} &
\resizebox{50mm}{!}{\includegraphics[]{figure/sdss_qso-ccd-0z1.eps}} \\
\resizebox{50mm}{!}{\includegraphics[]{figure/qso_agn12-ccd-1z2.eps}} &
\resizebox{50mm}{!}{\includegraphics[]{figure/sdss_qso-ccd-1z2.eps}} \\
\resizebox{50mm}{!}{\includegraphics[]{figure/qso_agn12-ccd-2z3.eps}} &
\resizebox{50mm}{!}{\includegraphics[]{figure/sdss_qso-ccd-2z3.eps}} \\
\resizebox{50mm}{!}{\includegraphics[]{figure/qso_agn12-ccd-3z4.eps}} &
\resizebox{50mm}{!}{\includegraphics[]{figure/sdss_qso-ccd-3z4.eps}} \\
\resizebox{50mm}{!}{\includegraphics[]{figure/qso_agn12-ccd-4z5.eps}} &
\resizebox{50mm}{!}{\includegraphics[]{figure/sdss_qso-ccd-4z5.eps}} \\
\end{tabular}
\caption{(a) The distribution of AGNs in the QA catalogue.
(b) The distribution of quasars in the SQ catalogue.
The stellar locus \citep{Bessell1988-PASP}, the CTTS locus \citep{Meyer1997-AJ}, and
the reddening vector taken from \citet{Rieke1985-ApJ} are also shown.
\label{CCD1}}
\end{center}
\end{figure*}
\section{Data}\label{Data}
We examine the near-infrared properties of quasars/AGNs using 2MASS magnitudes.
The samples of quasars/AGNs are extracted from
the Sloan Digital Sky Survey Data Release 5 (SDSS-DR5) quasar catalog and
the catalogue of Quasars and Active Galactic Nuclei (12th Ed.);
these catalogues are briefly introduced below.
\subsection{2MASS}
The Two Micron All Sky Survey \citep[2MASS
\footnote{2MASS web site (http://www.ipac.caltech.edu/2mass/)}; ][]{Skrutskie2006-AJ}
is a project which observed 99.998\% of the whole sky
at the J (1.25 $\mu$m), H (1.65 $\mu$m), and K$_\textnormal{\tiny S}$ (2.16 $\mu$m) bands,
at Mt. Hopkins, AZ (the Northern Hemisphere) and at CTIO, Chile (the Southern Hemisphere)
between June 1997 and February 2001.
The instruments are both highly automated 1.3-m telescopes equipped with three-channel cameras,
each channel consisting of a 256 $\times$ 256 array of HgCdTe detectors.
The 2MASS obtained 4 121 439 FITS images (pixel size $\sim2''_{\cdot}0$) with 7.8 s of integration time.
The $10 \sigma$ point-source detection levels are better than 15.8, 15.1, and 14.3 mag
at J, H, and K$_\textnormal{\tiny S}$ bands.
The Point Source Catalogue (PSC) was produced using these images and catalogued 470 992 970 sources.
On the 2MASS web site, the images and the PSC are publicly and easily available.
\begin{table*}[tbp]
\begin{center}
\begin{tabular}{crrrrrr}
\hline \hline
redshift & \multicolumn{3}{c}{QA catalogue} & \multicolumn{3}{c}{SQ catalogue} \\
\cline{2-4} \cline{5-7}
range & Region I & Region II & total & Region I & Region II & total \\ \hline
$0 \leq z \leq 1$ & 1 671 (27) & 4 480 (73) & 6 151 & 222 (11) & 1 869 (89) & 2 091 \\
$1 < z \leq 2$ & 238 (47) & 265 (53) & 503 & 222 (47) & 249 (53) & 471 \\
$2 < z \leq 3$ & 67 (25) & 197 (75) & 264 & 38 (19) & 165 (81) & 203 \\
$3 < z \leq 4$ & 7 (16) & 36 (84) & 43 & 9 (18) & 41 (82) & 50 \\
$4 < z \leq 5$ & 5 (71) & 2 (29) & 7 & 1 (50) & 1 (50) & 2 \\ \hline
total & 1 998 (29) & 4 970 (71) & 6 968 & 500 (18) & 2 317 (82) & 2 817 \\ \hline
\end{tabular}
\caption{
The numbers of objects distributed in Regions I and II.
Of the 7 061 AGNs in the QA catalogue,
93 do not have a measured redshift. The values in parentheses are the percentages of quasars/AGNs residing in each region.
\label{Ratio}}
\end{center}
\end{table*}
\subsection{SDSS-DR5 quasar catalog}
The Sloan Digital Sky Survey (SDSS)
provides a photometrically and astrometrically
calibrated digital imaging survey of $\pi$ sr above Galactic latitude $30^\circ$
in five broad optical bands to a depth of $g' \sim 23$ mag \citep{York2000-AJ}.
Many astronomical catalogues have been produced by this survey.
The SDSS quasar catalog IV \citep[][hereafter SQ]{Schneider2007-AJ} is
the fourth edition of the SDSS quasar catalog I \citep{Schneider2002-AJ},
and is constructed from the SDSS Fifth Data Release \citep{Adelman2007-ApJS}.
The SQ catalogue consists of 77 429 quasars,
the vast majority of which were discovered by the SDSS.
The area covered by the catalogue is $\approx 5740$ deg$^2$.
The quasar redshifts range from 0.08 to 5.41, with a median value of 1.48.
The positional accuracy of each object is better than $0_\cdot''2$.
\subsection{Quasars and Active Galactic Nuclei (12th Ed.)}
The catalogue of Quasars and Active Galactic Nuclei (12th Ed.) \citep[][hereafter QA]{Veron2006-AA} is
the 12th edition of the catalogue of quasars first published in 1971 by De Veny et al.
The QA catalogue contains 85 221 quasars, 1 122 BL Lac objects and
21 737 active galaxies (including 9 628 Seyfert 1s).
This catalogue includes positions and redshifts as well as optical brightnesses (U, B, V) and 6 cm and 20 cm flux densities when available.
The positional accuracy is better than $1_\cdot''0$.
\section{Near-infrared properties of AGNs}\label{Properties}
\subsection{Extraction of Near-infrared counterpart}
The sources in the two above-mentioned AGN catalogues (SQ and QA)
were cross-identified with the 2MASS PSC,
and we extracted a near-infrared counterpart for each source.
As mentioned in the previous section,
the positional accuracies of both catalogues are better than $1''$.
Therefore, we identified a near-infrared counterpart
when a 2MASS source is located within $1''$ of an SQ/QA position.
As a result of this extraction,
we derived 9 658 (SQ catalogue) and 14 078 (QA catalogue)
near-infrared counterparts.
For investigating the near-infrared properties using 2MASS magnitudes,
we used only the 2 817 (SQ) and 7 061 (QA) objects whose 2MASS photometric quality flags are
better than B (signal-to-noise ratio S/N $>7$).
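In practice, this cross-identification can be reproduced with standard tools; the sketch below (our own illustration, not part of the original analysis; file names and column layouts are hypothetical placeholders) matches an AGN catalogue against the 2MASS PSC with a $1''$ tolerance:
\begin{verbatim}
# Sketch only: positional cross-match with a 1 arcsec tolerance.
# "agn.txt" and "tmass.txt" are hypothetical two-column (RA, Dec) files.
import numpy as np
import astropy.units as u
from astropy.coordinates import SkyCoord

agn_ra, agn_dec = np.loadtxt("agn.txt", unpack=True)  # degrees
tm_ra, tm_dec = np.loadtxt("tmass.txt", unpack=True)  # degrees
agn = SkyCoord(ra=agn_ra * u.deg, dec=agn_dec * u.deg)
tm = SkyCoord(ra=tm_ra * u.deg, dec=tm_dec * u.deg)

# nearest 2MASS neighbour of each AGN and its angular separation
idx, d2d, _ = agn.match_to_catalog_sky(tm)
matched = d2d < 1.0 * u.arcsec  # accepted near-infrared counterparts
print(matched.sum(), "counterparts within 1 arcsec")
\end{verbatim}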
\subsection{Colour-colour diagram}
The near-infrared ($H-K_\textnormal{\tiny S}$)-($J-H$) colour-colour diagram (CCD) is
a powerful tool for investigating the properties of celestial objects.
We investigated the near-infrared properties of quasars/AGNs using this CCD.
Figure \ref{CCD1} shows the distributions of quasars/AGNs in a ($H-K_\textnormal{\tiny S}$)-($J-H$) CCD.
In previous studies, the intrinsic loci of normal stars and of Classical T Tauri Stars (CTTS)
were well defined by \citet{Bessell1988-PASP} and \citet{Meyer1997-AJ}, respectively.
Their loci are also shown in the CCD.
The \citet{Bessell1988-PASP} and Caltech (CIT) photometric systems are transformed into the 2MASS photometric system
by the method introduced by \citet{Carpenter2001-AJ}.
The reddening vector, taken from \citet{Rieke1985-ApJ}, is also shown in the diagram.
Because the stellar and CTTS loci can only shift along the reddening vector,
these types of stars fundamentally should not be located in the region described by the following equations.
\begin{eqnarray}
(J-H) \leq 1.70(H-K_\textnormal{\tiny S})-0.118 \label{Star}\\
(J-H) \leq 0.61 (H-K_\textnormal{\tiny S})+0.50 \label{CTTS}
\end{eqnarray}
Equation (\ref{Star}) represents the lower limit line where normal stars can reside and
Equation (\ref{CTTS}) represents the lower limit line where CTTS can reside.
Both lines are also shown in Fig. \ref{CCD1}.
Below, we call the region enclosed by Equations (\ref{Star}) and (\ref{CTTS}) ``Region II''
and all other regions ``Region I''.
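As a concrete check (our own worked example, using colours typical of the quasars in Fig. \ref{CCD1}), a source with $(H-K_\textnormal{\tiny S},\ J-H)=(0.8,\ 0.7)$ satisfies both conditions, since
$$
0.7 \leq 1.70\times 0.8-0.118=1.242
\quad\textrm{and}\quad
0.7 \leq 0.61\times 0.8+0.50=0.988 \ ,
$$
and therefore lies in Region II, whereas a typical dwarf-star colour such as $(H-K_\textnormal{\tiny S},\ J-H)=(0.2,\ 0.6)$ violates Equation (\ref{Star}) because $0.6 > 1.70\times 0.2-0.118=0.222$, and hence lies in Region I.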
In Fig. \ref{CCD1}, we can see that most of the quasars/AGNs
are located in a region clearly different from the stellar locus.
The distributions of the quasars/AGNs lie to the right of the stellar locus in the CCD,
i.e., they have $(J-H)$ colours similar to those of normal stars
but $(H-K_\textnormal{\tiny S})$ colours redder than those of normal stars.
Table \ref{Ratio} lists the numbers of objects in each region.
It shows that 70\% of AGNs and 80\% of quasars are distributed in Region II.
Hence the near-infrared selection is more effective for quasars than for the other types of AGN.
In particular, $\sim 90\%$ of the low-redshift quasars with $0 \leq z \leq 1$ reside in Region II,
so these quasars are rarely missed.
However, objects with $1< z \leq 2$ or $4<z \leq 5$ tend to have a bluer colour in $(H-K_\textnormal{\tiny S})$
than objects with other redshift ranges, which is similar to the colour of normal stars.
Therefore, some of these quasars/AGNs might be missed.
The difference between the loci of quasars/AGNs and normal stars probably
reflects the different radiation mechanisms,
because the dominant radiation of quasars/AGNs is not blackbody radiation.
This colour property is considered to be caused by the K-band excess.
\citet{Warren2000-MNRAS} proposed the $KX$ method, which exploits the fact that quasars with
($V-J$) colours similar to those of stars are redder in ($J-K$) colour.
In other words, the $KX$ method can separate quasars from stars on the basis of their colours.
This technique has been used for selecting quasar candidates
\citep[e.g., ][]{Smail2008-MNRAS,Jurek2008-MNRAS,Nakos2009-AA}.
The present work is a variant of the original $KX$ technique,
using the $(J-H)$ versus $(H-K_\textnormal{\tiny S})$ diagram.
\subsection{Colours versus redshift}
\begin{figure}[tbp]
\begin{center}
\resizebox{90mm}{!}{\includegraphics[clip]{figure/sdss-z-color.eps}}
\end{center}
\caption{Colours versus redshift for SDSS quasars.
The redshifts are taken from the SQ catalogue.
The red solid lines show the average colour evolutions with respect to redshift.
\label{Z-Colours}}
\end{figure}
In Fig. \ref{Z-Colours}, we plot three colours versus redshift for the SDSS quasars,
together with the average colour evolutions with respect to redshift.
The redshifts are taken from the SQ catalogue.
Each colour changes only mildly with redshift, with some dispersion,
probably due to a variety of spectral shapes and/or a variety of extinctions.
In the near-infrared CCD, this small colour change produces only a small shift of the AGN locus.
These properties can be reproduced by the simulation described below.
\section{Simulating the near-infrared colours of quasars}\label{Simulation}
\begin{figure*}[tbp]
\begin{center}
\resizebox{90mm}{!}{\includegraphics[clip]{figure/z-color-simulation.eps}}
\resizebox{90mm}{!}{\includegraphics[clip]{figure/z-color-sdss-simulate2b.eps}}
\caption{Simulated colours versus redshift.
The curves represent the simulated colour evolutions
with $A_\textnormal{\tiny V}=0,1,2,3,4$ (from bottom to top), respectively.
The SDSS quasars (left panel) and the average colour evolution (right panel)
shown in Fig. \ref{Z-Colours} are also plotted in the diagram.
\label{Z-Simulated-Colours}}
\end{center}
\end{figure*}
\begin{figure}[tbp]
\begin{center}
\resizebox{90mm}{!}{\includegraphics[clip]{figure/ccd-simulation2-b.eps}}
\end{center}
\caption{Simulated colour evolution with respect to redshift in the ($H-K_\textnormal{\tiny S}$)-($J-H$) diagram.
The stellar locus and the reddening vector are also shown in the diagram,
which are same as in Fig. \ref{CCD1} .
\label{CCD-Simulation}}
\end{figure}
\begin{table}[tbp]
\begin{center}
\begin{tabular}{crrr}
\hline \hline
$A_\textnormal{\tiny V}$ & $J-K_\textnormal{\tiny S}$ & $J-H$ & $H-K_\textnormal{\tiny S}$ \\ \hline
0 & 0.78 (0) & 0.94 (0) & 0.86 (0) \\
1 & 0.94 (0) & 0.69 (0) & 0.60 (0) \\
2 & 0.26 (17) & 0.29 (9) & 0.14 (84) \\
3 & 0.79 (0) & 0.82 (0) & 0.59 (0) \\
4 & 1.0 (0) & 1.0 (0) & 0.88 (0) \\ \hline
\end{tabular}
\caption{
Results of the KS tests between the average colour evolution and the simulated colour evolutions.
The decimal values are the KS distances between the two curves, and
the values in parentheses are the significance levels (in percent) of each KS test.
\label{KS-test}}
\end{center}
\end{table}
In this section,
we demonstrate that the locus of quasars is well separated from that of normal stars
on the basis of a simulation using a realistic SED of quasars.
In order to simulate the near-infrared colours of quasars,
we performed a Monte-Carlo simulation with the Hyperz code \citep{Bolzonella2000-AA}.
The Hyperz code calculates photometric redshifts based on an input spectral template library;
it finds the best-fit SED by minimizing the $\chi^2$ derived from
the comparison between the observed SED and the expected SEDs.
Reddening effects are taken into account according to a selected reddening law.
Although this code is usually used for estimating photometric redshifts,
we use it to derive the near-infrared colours at various redshifts.
First, we made a magnitude list containing randomly generated J, H, K$_\textnormal{\tiny S}$ magnitudes
ranging from 8 to 16 mag (roughly the reliable range of 2MASS magnitudes),
producing 100 000 data sets.
These data sets were subjected to
SED fitting with the Hyperz code.
A realistic SED of quasars was taken from \citet{Polletta2007-ApJ}
(i.e., the QSO1 template of \citet{Polletta2007-ApJ} is used).
According to \citet{Polletta2007-ApJ},
the SED of QSO1 is derived by combining the SDSS quasar composite spectrum
and rest-frame IR data of a sample of 3 SDSS/SWIRE quasars \citep{Hatziminaoglou2005-AJ}.
We used the reddening law of \citet{Calzetti2000-ApJ}, which is provided by default in the Hyperz code.
Inputting the data sets into the Hyperz code,
we derived photometric redshifts together with the probabilities associated with the value of $\chi^2$.
We used only objects with fit probabilities $\geq 99\%$.
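For concreteness, the random input catalogue can be generated as in the following sketch (our own illustration; the output format actually required by the Hyperz code is not reproduced here, and the file name is a placeholder):
\begin{verbatim}
# Sketch only: 100 000 random (J, H, Ks) magnitude sets drawn
# uniformly from the reliable 2MASS range 8-16 mag.
import numpy as np

rng = np.random.default_rng(seed=1)
mags = rng.uniform(8.0, 16.0, size=(100_000, 3))  # columns: J, H, Ks
# Hypothetical plain-text table; Hyperz's real input format differs.
np.savetxt("random_jhk.cat", mags, fmt="%.3f")
\end{verbatim}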
Figure \ref{Z-Simulated-Colours} shows the simulated colour evolutions with respect to redshift.
The curves in each diagram represent the simulated colours
with $A_\textnormal{\tiny V} =0$, 1, 2, 3, 4 (from bottom to top), respectively.
To find the best fits to the average colour curves,
we performed Kolmogorov-Smirnov (KS) tests between the average colour curves and each simulated colour curve.
Table \ref{KS-test} shows the results of the KS tests.
In all three colours, the colour evolution with $A_\textnormal{\tiny V}=2$ is
the best fit among the five $A_\textnormal{\tiny V}$ values.
In addition, the redshift-colour relations of the SQ quasars can be roughly reproduced
by the simulated curves with $0\lesssim A_\textnormal{\tiny V} \lesssim 3$.
A variety of extinctions probably generates the dispersion of the colours.
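The KS distances of Table \ref{KS-test} can be computed along the following lines (a sketch of our own; the placeholder arrays stand for an average colour curve and a simulated colour curve sampled on a common redshift grid):
\begin{verbatim}
# Sketch only: two-sample KS comparison of two colour curves.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
avg_colour = 0.90 + 0.05 * rng.standard_normal(100)  # placeholder
sim_colour = 0.92 + 0.05 * rng.standard_normal(100)  # placeholder

d, p = stats.ks_2samp(avg_colour, sim_colour)
print(f"KS distance = {d:.2f}, significance level = {100 * p:.0f}%")
\end{verbatim}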
It should be noted that both the ($J-H$) and ($J-K_\textnormal{\tiny S}$) colours
increase steeply beyond $z\sim 9$.
This is due to the Lyman break shifting through the J band.
This property can be useful for extracting high-redshift quasars.
In Fig. \ref{CCD-Simulation}, the simulated colours with $A_\textnormal{\tiny V}=2$ are shown
in the ($H-K_\textnormal{\tiny S}$)-($J-H$) CCD, tracked by redshift evolution.
An important point is that the simulated locus is well separated from the stellar locus;
that is, it is consistent with the loci of quasars/AGNs shown in Fig. \ref{CCD1}.
A variety of extinctions causes a dispersion of the simulated positions, which
can probably reproduce the dispersion of the loci of quasars/AGNs in Fig. \ref{CCD1}.
It is also consistent with the fact that
the quasars with $0 \leq z \leq 1$ have relatively redder colours in ($H-K_\textnormal{\tiny S}$) than
the quasars with $1 \leq z \leq 2$.
Although it is difficult to distinguish high-redshift quasars at $z\lesssim 8$,
we can extract high-redshift quasar candidates with $z \gtrsim 8$
on the basis of the ($H-K_\textnormal{\tiny S}$)-($J-H$) diagram
because the ($J-H$) colour increases steeply beyond $z \sim 8$.
\section{Discussion}\label{Discussion}
\subsection{Other probable objects} \label{Probabilities}
\begin{figure*}[tbp]
\begin{tabular}{cc}
(a) & (b) \\
\begin{minipage}[htbp]{0.5\textwidth}
\begin{center}
\resizebox{80mm}{!}{\includegraphics[clip]{figure/mqso-hk-jh.eps}}
\end{center}
\end{minipage}
&
\begin{minipage}[htbp]{0.5\textwidth}
\begin{center}
\resizebox{80mm}{!}{\includegraphics[clip]{figure/cv2009-hk-jh.eps}}
\end{center}
\end{minipage} \\
(c) & (d) \\
\begin{minipage}[htbp]{0.5\textwidth}
\begin{center}
\resizebox{80mm}{!}{\includegraphics[clip]{figure/lmxb-hk-jh.eps}}
\end{center}
\end{minipage}
&
\begin{minipage}[htbp]{0.5\textwidth}
\begin{center}
\resizebox{80mm}{!}{\includegraphics[clip]{figure/cmyso-hk-jh.eps}}
\end{center}
\end{minipage}\end{tabular}
\caption{The distribution of four types of objects: Microquasars (upper left),
Cataclysmic variables (upper right), Low Mass X-ray Binaries (lower left),
and Massive Young Stellar Objects (lower right).
The stellar locus and the reddening vector are same as in Fig. \ref{CCD1}.
\label{CCD2}}
\end{figure*}
\begin{table}[tbp]
\begin{center}
\begin{tabular}{rrrrr}
\hline \hline
& Microquasars & CV & LMXB & MYSO \\ \hline
I & 16 (84) & 245 (75) & 11 (73) & 27 (93) \\
II & 3 (16) & 82 (25) & 4 (27) & 2 (7) \\
total & 19 & 327 & 15 & 29 \\ \hline
\end{tabular}
\caption{
The number and percentage of objects distributed in each region.
The values in parentheses represent percentage.
\label{ratio-four-objects}}
\end{center}
\end{table}
Although the locus of AGNs in the near-infrared CCD is different from that of normal stars,
other types of objects with properties similar to those of AGNs
might be distributed around the AGN locus.
If the position in the CCD depends on the radiation mechanism,
other objects with radiation mechanisms similar to those of AGNs are also expected to be located at the same position.
Below, we further examine the loci of four types of objects which have non-thermal radiation or
which are considered to be bright at both near-infrared and X-ray wavelengths:
Microquasars, Cataclysmic Variables (CVs), Low Mass X-ray Binaries (LMXBs), and
Massive Young Stellar Objects (MYSOs).
Sample objects are extracted from three catalogues, namely
Microquasar Candidates \citep[Microquasars; ][]{Combi2008-AA},
Cataclysmic Binaries, LMXBs, and related objects \citep[CVs and LMXBs; ][]{Ritter2003-AA},
and the Catalogue of massive young stellar objects \citep[MYSOs; ][]{Chan1996-AAS}.
First, we cross-identified each catalogue with the 2MASS PSC and
extracted the near-infrared counterparts.
\citet{Combi2008-AA} had already cross-identified their catalogue with the 2MASS catalogue
by adopting a cross-identification radius of $4''$.
The positional accuracies in the Ritter \& Kolb catalogue are $\sim 1''$ \citep{Ritter2003-catalogue}.
The objects in the MYSO catalogue were selected from the Infrared Astronomical Satellite (IRAS) PSC,
whose typical positional uncertainties are about $2''$ to $6''$ \citep{Beichman1988-IRAS,Helou1988-IRAS}.
Therefore, we set the positional criteria for the cross-identification
to $\leq 2''$ (CV and LMXB catalogues) and $\leq 4''$ (Microquasar and MYSO catalogues).
We used objects with a 2MASS photometric quality better than B (i.e., S/N $> 7$).
Using the 2MASS magnitudes, these objects were plotted in the ($H-K_\textnormal{\tiny S}$)-($J-H$) diagram.
Figure \ref{CCD2} shows the CCD for each type of object.
In every case, only a few objects are distributed around the locus of the AGNs,
while most of the objects are distributed around the stellar locus
or its reddened extension.
Table \ref{ratio-four-objects} lists the number and percentage of objects distributed in each region.
Although the fractions of CVs and LMXBs residing in Region II are relatively larger than those of the other two types of objects,
they are no more than $\sim 25\%$.
In addition, few of these objects have $(H-K_\textnormal{\tiny S}) \sim 0.8$ in Region II,
though most quasars/AGNs have this colour (see Fig. \ref{CCD1}).
Accordingly, the contamination by these four types of objects should be small.
This indicates that the dominant radiation of these four types of objects is thermal.
AGNs also emit thermal radiation,
but it is a very small fraction compared with the non-thermal component produced
by accretion onto supermassive black holes.
Therefore, AGNs should be well separated from these four types of objects using the near-infrared colours.
\subsection{Contamination by normal galaxies}
\begin{figure}[thbp]
\begin{center}
\resizebox{90mm}{!}{\includegraphics[clip]{figure/ccd-simulation-ngalaxy-0z3.eps}}
\end{center}
\caption{
Simulated colour evolutions for seven spiral galaxies.
Redshift ranges from 0.0 to 3.0 with $\Delta z=0.2$ interval points.
The boundary between Region I and II is also drawn in the diagram.
\label{Simulate-galaxy-colour}}
\end{figure}
Distant galaxies that appear as point-like sources might
contaminate the AGN locus in the near-infrared CCD.
We checked the locus of normal galaxies in the near-infrared CCD
by performing a Monte-Carlo simulation as in Sect. \ref{Simulation}.
The SED templates we used are those of the seven spiral galaxies in \citet{Polletta2007-ApJ},
ranging from early to late types (S0-Sd).
Figure \ref{Simulate-galaxy-colour} shows the simulated intrinsic colours
(i.e., $A_\textnormal{\tiny V}=0$) of the seven galaxies.
Galaxies with $0\leq z\lesssim 0.8$ have intrinsic colours similar to those of normal stars
(i.e., they are in Region I).
Galaxies with $1.4 \lesssim z \leq 3$ are distributed
around the reddened region of normal stars and/or CTTS.
Therefore, they should not be mistaken for AGN candidates.
On the other hand, the simulated colours at $z \sim 1$ are located in Region II,
so galaxies with $z\sim 1$ could possibly be mistaken for AGN candidates.
However, galaxies at $z\sim 1$ are not bright enough to be detected with mid-scale telescopes.
Even the brightest galaxies reach only $M \sim -23$ mag in the SDSS r-band \citep{Blanton2001-AJ,Baldry2004-ApJ}.
If such a galaxy were located at $z\sim 1$,
its apparent magnitude would be $m \gtrsim 20$ mag in the J band.
In practice the apparent magnitudes would be even fainter,
because most galaxies have $M>-23$ mag and the apparent brightness suffers extinction.
Accordingly, only large-scale telescopes can observe these galaxies.
Hence, few galaxies should contaminate the AGN locus in the near-infrared CCD
for data whose limiting magnitude is brighter than 20 mag.
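The magnitude estimate above can be made explicit (our own arithmetic, assuming a flat $\Lambda$CDM cosmology with $H_0=70$ km s$^{-1}$ Mpc$^{-1}$ and $\Omega_m=0.3$): at $z\sim 1$ the luminosity distance is $d_L \approx 6.6$ Gpc, so the distance modulus is
$$
\mu = 5\log_{10}\!\left(\frac{d_L}{10\ \textrm{pc}}\right)
\approx 5\log_{10}\!\left(6.6\times 10^{8}\right) \approx 44.1 \ ,
$$
and a galaxy with $M\sim -23$ mag would appear at $m \approx -23+44.1 \approx 21$ mag, i.e., fainter than a 20 mag limit even before K-correction and extinction are taken into account.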
\section{Summary and Conclusion}
We confirmed the loci of catalogued quasars/AGNs in the ($H-K_\textnormal{\tiny S}$)-($J-H$) diagram,
of which roughly $70$--$80\%$ are clearly separated from the stellar locus.
In addition,
we simulated the near-infrared colours of quasars by means of a Monte-Carlo simulation with the Hyperz code,
and demonstrated that the simulated colours can reproduce both the redshift-colour relations and
the locus of quasars in the near-infrared CCD.
We also predicted the colour evolution with respect to redshift (up to $z \sim 11$).
Finally, we discussed the possibility of contamination by other types of objects.
The locus of AGNs is also different from those of
the four other types of objects (namely, Microquasars, CVs, LMXBs, and MYSOs)
that might have been expected to occupy a similar locus.
We also demonstrated by a Monte-Carlo simulation
that normal galaxies are unlikely to contaminate the locus of AGNs in the near-infrared CCD.
\citet{Hewett2006-MNRAS} investigated the near-infrared colours of quasars using an artificial SED,
whereas we have proposed near-infrared colour selection criteria for extracting AGNs and
studied both the observed and the simulated colours quantitatively.
An important point is that our selection criteria require only near-infrared photometric data,
whereas some previous studies \citep[e.g., ][]{Glikman2007-ApJ,Glikman2008-AJ} used colour selections
based on combinations of near-infrared and optical colours.
In other words, our selection criteria make the extraction of candidates easier
because only near-infrared colours are needed.
This technique should also be useful in searches for high-redshift quasars,
since such quasars become very faint at optical wavelengths due to the shift of the Lyman break.
This paper demonstrates that near-infrared colours can be useful for selecting AGN candidates.
If an additional constraint is imposed, more reliable candidates can be extracted.
When the near-infrared colour selection with an additional constraint is applied
to near-infrared catalogues covering a large area
(e.g., 2MASS, DENIS, UKIDSS, and future surveys),
a large number of AGN candidates (possibly over $\sim$10 000) is expected to be obtained
over a region of $\sim$10 000 deg$^2$.
\citet{Kouzuma2009-prep} \citep[see also][]{Kouzuma2009-ASPC} cross-identified the 2MASS and ROSAT catalogues and
extracted AGN candidates over the entire sky using the near-infrared colour selection presented in this paper.
Such a large sample may provide clues about, for example, the evolution of AGNs and the X-ray background.
Additionally, in our simulation, quasars with $z \gtrsim 8$ can be extracted on the basis of near-infrared colours alone.
This property might be helpful in future searches for high-redshift quasars.
\begin{acknowledgements}
This publication makes use of data products from the Two Micron All Sky Survey,
which is a joint project of the University of Massachusetts and
the Infrared Processing and Analysis Center/California Institute of Technology,
funded by the National Aeronautics and Space Administration and the National Science Foundation.
Funding for the SDSS and SDSS-II has been provided by the Alfred P. Sloan Foundation,
the Participating Institutions, the National Science Foundation, the U.S. Department of Energy,
the National Aeronautics and Space Administration, the Japanese Monbukagakusho,
the Max Planck Society, and the Higher Education Funding Council for England.
The SDSS Web Site is http://www.sdss.org/.
We thank the anonymous referee for useful comments to improve this paper.
\end{acknowledgements}
\bibliographystyle{aa}
\section{Introduction}
Let $\mathfrak{a}_n$ denote the Lie algebra consisting of all the formal
vector fields on ${\mathbb R}^n$ and let $\mathfrak{ham}_{2n}\subset \mathfrak{a}_{2n}$
denote the subalgebra of {\it Hamiltonian} formal vector fields
on ${\mathbb R}^{2n}$ with respect to the standard symplectic form $\omega$.
Hereafter we denote by $H^{2n}_{\mathbb R}$ this standard symplectic vector space,
which is the fundamental representation of the symplectic group
$\mathrm{Sp}(2n,{\mathbb R})$.
The Lie algebra $\mathfrak{ham}_{2n}$ contains the Lie subalgebra $\mathfrak{sp}(2n,{\mathbb R})$
consisting of linear Hamiltonian vector fields.
In \cite{GKF} Gel'fand, Kalinin and Fuks studied the Gel'fand-Fuks
cohomology of $\mathfrak{ham}_{2n}$ and showed that
$$
H^*_{GF}(\mathfrak{ham}_{2n}, \mathfrak{sp}(2n,{\mathbb R}))_{\leq 0}
\cong {\mathbb R}[\omega, p_1,\cdots, p_{n}]/I \ .
$$
Throughout this paper, all the cohomology groups of Lie algebras
are with trivial coefficients in ${\mathbb R}$, and we omit the coefficients from the notation.
In the formula above $\omega\in H^2_{GF}(\mathfrak{ham}_{2n}, \mathfrak{sp}(2n,{\mathbb R}))_{-2}$
denotes the symplectic class of weight $-2$
(for the definition of {\it weights}, see the next section) and
$p_i\in H^{4i}_{GF}(\mathfrak{ham}_{2n}, \mathfrak{sp}(2n,{\mathbb R}))_{0}\ (i=1,\cdots,n)$
denote the Pontrjagin classes.
Further $I$ is the ideal generated by the classes
$$
\omega^k p_1^{k_1}\cdots p_n^{k_n} \quad (k+k_1+2k_2\cdots +n k_n > n)
$$
that vanish. (In the context of $\mathfrak{a}_n$ this corresponds to the Bott vanishing theorem.)
Thus, in the non-positive weight
part, the result is similar to the case of $\mathfrak{a}_n$. However, in the positive
weight part, Gel'fand, Kalinin and Fuks found an exotic class
$GKL \in H^7_{GF}(\mathfrak{ham}_{2}, \mathfrak{sp}(2,{\mathbb R}))_{8}$ of
degree $7$ and weight $8$,
which is now called the Gel'fand-Kalinin-Fuks class. They raised the problem
of determining whether their class is non-trivial as a characteristic class
of transversely symplectic foliations, or not. Recall that the Godbillon-Vey class,
which corresponds to
$h_1c_1^{n}\in H^{2n+1}_{GF}(\mathfrak{a}_n, \mathrm{O}(n))$, was shown to be
non-trivial almost immediately after its discovery. Namely Roussarie
first proved the non-triviality and Thurston \cite{T} proved
the remarkable result that this class can vary continuously.
In sharp contrast with this, non-triviality of the GKF-class has now been an open problem for nearly 40 years.
In the late 1990's, Kontsevich \cite{K} interpreted the Rozansky-Witten invariants
in terms of the Gel'fand-Fuks cohomology and characteristic classes for foliations.
As an application, he constructed certain characteristic classes for
transversely symplectic foliations. More precisely, he considered the two Lie subalgebras
$$
\mathfrak{ham}_{2n}^1\subset\mathfrak{ham}_{2n}^0\subset \mathfrak{ham}_{2n}
$$
of $\mathfrak{ham}_{2n}$,
where $\mathfrak{ham}_{2n}^\epsilon$ denotes the formal Hamiltonian
vector fields {\it without constant terms} and
{\it without constant or linear terms} for $\epsilon = 0, 1$
respectively. Kontsevich constructed a natural homomorphism
\begin{align*}
\land \omega^n: H^*_{GF}(\mathfrak{ham}_{2n}^0,\mathrm{Sp}(2n,{\mathbb R}))&
\cong H^*_{GF}(\mathfrak{ham}_{2n}^1;{\mathbb R})^{\mathrm{Sp}(2n,{\mathbb R})}\\
&{\longrightarrow} H^{*+2n}_{GF}(\mathfrak{ham}_{2n},\mathrm{Sp}(2n,{\mathbb R})).
\end{align*}
Since the abelianization of the Lie algebra $\mathfrak{ham}_{2n}^1$
can be written as
$$
\mathfrak{ham}_{2n}^1{\longrightarrow} S^3 H^{2n}_{\mathbb R} \ ,
$$
where $S^3 H^{2n}_{\mathbb R}$ denotes the third symmetric power of $H^{2n}_{\mathbb R}$,
he obtained a homomorphism
\begin{equation}
\Phi: H^*(S^3 H^{2n}_{\mathbb R})^{\mathrm{Sp}(2n,{\mathbb R})}{\longrightarrow}
H^{*+2n}_{GF}(\mathfrak{ham}_{2n},\mathrm{Sp}(2n,{\mathbb R})).
\label{eq:K}
\end{equation}
Roughly speaking, Kontsevich first considered the {\it leaf} or {\it foliated} cohomology classes
of transversely symplectic foliations, rather than the de Rham
cohomology, and then produced characteristic classes for such foliations (in de Rham
cohomology) by taking the wedge product
with the maximal power $\omega^n$ of the transverse symplectic form.
The purpose of this paper is twofold.
Firstly we interpret the Gel'fand-Kalinin-Fuks
class in this framework of Kontsevich.
This interpretation shows that the GKF
class can be decomposed as a product $\eta\land\omega$ of a
certain leaf cohomology class $\eta$ of degree $5$ and the transverse symplectic
class $\omega$.
This is similar to the case of the Godbillon-Vey class $h_1c_1^n$
for codimension $n$ foliations \cite{Ghys}, which can be expressed as
the product of a $1$-dimensional leaf cohomology class $h_1$,
the Reeb class, and the primary characteristic class $c_1^n$.
(Similar factorizations are known for some other characteristic classes of
foliations; see e.g. \cite{Kot}.)
Although the problem of
geometric non-triviality of the GKF class remains open,
we hope that our result will shed some light on
the geometric meaning of this class.
Secondly we determine Kontsevich's homomorphism $\Phi$
in \eqref{eq:K}
completely up to degree $2n$
and prove that some of these classes are
non-trivial as characteristic classes of
transversely symplectic foliations.
\section{Gel'fand-Fuks cohomology of formal Hamiltonian vector fields}
As is well-known, each element $X\in \mathfrak{ham}_{2n}$ corresponds
bijectively to a formal Hamiltonian function
$$
H\in {\mathbb R}[[x_1,\cdots,x_n,y_1,\cdots,y_n]]/{\mathbb R}
$$
which is defined up to constants, via the correspondence
$$
X\leftrightarrow \sum_{i=1}^n
\left\{\frac{\partial H}{\partial x_i}\frac{\partial}{\partial y_i}-
\frac{\partial H}{\partial y_i}\frac{\partial}{\partial x_i}\right\}.
$$
Thus, on the one hand, we have an isomorphism of topological Lie algebras
$$
\mathfrak{ham}_{2n}\cong {\mathbb R}[[x_1,\cdots,x_n,y_1,\cdots,y_n]]/{\mathbb R} \ ,
$$
where the Lie bracket on the right hand side is given by the Poisson bracket.
On the other hand, this topological Lie algebra is the completion
of that of polynomial Hamiltonian functions
$$
{\mathbb R}[x_1,\cdots,x_n,y_1,\cdots,y_n]/{\mathbb R}
=\bigoplus_{k=1}^\infty S^k H_{\mathbb R}^{2n}.
$$
Here $S^k H_{\mathbb R}^{2n}$ denotes the $k$-th symmetric power of
$H_{\mathbb R}^{2n}$, which is identified with the space of all the
homogeneous polynomials of degree $k$.
The Poisson bracket is given
by
$$
S^k H_{\mathbb R}^{2n}\otimes S^\ell H_{\mathbb R}^{2n}\ni f\otimes g
\mapsto \{f,g\}\in S^{k+\ell-2} H_{\mathbb R}^{2n}.
$$
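As a simple illustration (our own example, for $n=1$ and with the standard sign convention): the cubic Hamiltonian $H=x_1^2y_1\in S^3H^{2}_{\mathbb R}$ corresponds to the formal vector field
$$
X=\frac{\partial H}{\partial x_1}\frac{\partial}{\partial y_1}
-\frac{\partial H}{\partial y_1}\frac{\partial}{\partial x_1}
=2x_1y_1\frac{\partial}{\partial y_1}-x_1^2\frac{\partial}{\partial x_1} \ ,
$$
and the Poisson bracket indeed lowers the total degree by $2$; for instance
$$
\{x_1^3,\ x_1y_1^2\}
=\frac{\partial x_1^3}{\partial x_1}\,\frac{\partial (x_1y_1^2)}{\partial y_1}
=6x_1^3y_1\in S^{4}H^{2}_{\mathbb R} \ ,
$$
in accordance with $k+\ell-2=3+3-2=4$.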
Hence the cochain complex $C^*_{GF}(\mathfrak{ham}_{2n})$ of
$\mathfrak{ham}_{2n}$ splits as a direct sum of finite dimensional
subcomplexes
$$
C^*_{GF}(\mathfrak{ham}_{2n})
\cong
\bigoplus_{w=-2n}^\infty
C^*_{GF}(\mathfrak{ham}_{2n})_{w} \ ,
$$
so that we have
$$
H^*_{GF}(\mathfrak{ham}_{2n})
\cong
\bigoplus_{w=-2n}^\infty
H^*_{GF}(\mathfrak{ham}_{2n})_{w} \ .
$$
Here
$$
C^*_{GF}(\mathfrak{ham}_{2n})_{w}=
\sum_{-k_1+k_3+2 k_4+3k_5\cdots=w}
\Lambda^{k_1} (S^{1}H_{\mathbb R}^{2n})^*\otimes
\Lambda^{k_2} (S^{2}H_{\mathbb R}^{2n})^*\otimes\cdots
$$
denotes the set of cochains with weight $w$, where
we define the weight of each element in $ (S^{k}H_{\mathbb R}^{2n})^*$
to be $k-2$, so that the coboundary operator preserves the weights.
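For instance (our own example), a cochain in $\Lambda^{2}(S^{3}H_{\mathbb R}^{2n})^*$ has degree $2$ and weight $2\cdot(3-2)=2$, while the symplectic cocycle $\omega\in \Lambda^{2}(S^{1}H_{\mathbb R}^{2n})^*$ has weight $2\cdot(1-2)=-2$; this is why the wedge product with $\omega$ considered below shifts weights by $-2$.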
Similar decompositions hold in the case of the relative
cohomology $H^*_{GF}(\mathfrak{ham}_{2n},\mathfrak{sp}(2n,{\mathbb R}))$,
and in the cases of the Lie subalgebras
$\mathfrak{ham}^0_{2n}, \mathfrak{ham}^1_{2n}$.
Now, as was already mentioned in the introduction, Gel'fand,
Kalinin and Fuks proved in \cite{GKF} that the cohomology
$
H^*_{GF}(\mathfrak{ham}_{2n},\mathrm{Sp}(2n,{\mathbb R}))_{\leq 0}
$
in the {\it non-positive} weight part is described in terms
of the usual characteristic classes, namely the Pontrjagin classes
and the transverse symplectic class. However,
contrary to their initial working hypothesis, they found an
exotic class for the case $n=1$:
\begin{theorem}[{\bf Gel'fand-Kalinin-Fuks \cite{GKF}}]
The relative cohomology $H^*_{GF}(\mathfrak{ham}_2,\mathfrak{sp}(2,{\mathbb R}))_w$
for $w\leq 8$ is given by:
\begin{align*}
H^*_{GF}(\mathfrak{ham}_2,\mathfrak{sp}(2,{\mathbb R}))_{\leq 0}
&\cong {\mathbb R}[\omega, p_1]/(\omega^2, \omega p_1, p_1^2) \ ,\\
H^*_{GF}(\mathfrak{ham}_2,\mathfrak{sp}(2,{\mathbb R}))_{w}
&= 0\quad (w=1,\cdots,7) \ ,\\
H^*_{GF}(\mathfrak{ham}_2,\mathfrak{sp}(2,{\mathbb R}))_{8}
&=
\begin{cases}
{\mathbb R} \quad (*=7)\\
0\quad \text{(otherwise)} \ .
\end{cases}
\end{align*}
\end{theorem}
Perchik \cite{P} gave a formula for the generating function
$$
\sum_{w=0}^\infty \chi(H^*(\mathfrak{ham}_{2n},\mathrm{Sp}(2n,{\mathbb R}))_w)t^w
$$
of the Euler characteristic of the relative cohomology of
$\mathfrak{ham}_{2n}$.
By computing it for the case $n=1$, he showed the
{\it existence} of many more exotic classes. Later, Metoki \cite{Metoki}
found an explicit exotic class in
$H^9(\mathfrak{ham}_2,\mathfrak{sp}(2,{\mathbb R}))_{14}$
which we call the Metoki class.
Similar to the case of $H^*_{GF}(\mathfrak{a}_n,\mathrm{O}(n))$,
which provides characteristic classes for foliations of
codimension $n$ (see \cite{BR}\cite{BH}),
the relative cohomology
$H^*_{GF}(\mathfrak{ham}_{2n},\mathrm{Sp}(2n,{\mathbb R}))$
provides characteristic classes for
{\it transversely symplectic} foliations of
codimension $2n$. More precisely,
let $\mathrm{B\Gamma}_{2n}^\omega$ denote the Haefliger classifying
space for transversely symplectic foliations of codimension $2n$.
Then we have a homomorphism
\begin{equation}
H^*_{GF}(\mathfrak{ham}_{2n},\mathrm{Sp}(2n,{\mathbb R})){\longrightarrow}
H^*_{GF}(\mathfrak{ham}_{2n},\mathrm{U}(n))
{\longrightarrow} H^*(\mathrm{B\Gamma}_{2n}^\omega;{\mathbb R}) \ ,
\label{eq:sf}
\end{equation}
where $\mathrm{U}(n)\subset \mathrm{Sp}(2n,{\mathbb R})$ is a
maximal compact subgroup. In particular, we have the important
problem of determining whether the Gel'fand-Kalinin-Fuks class
in $H^7_{GF}(\mathfrak{ham}_2,\mathrm{Sp}(2,{\mathbb R}))$
defines a non-trivial characteristic class in
$H^7(\mathrm{B\Gamma}_2^\omega;{\mathbb R})$, or not. We also have this
problem for the Metoki class.
In \cite{K}, Kontsevich proposed a new approach to the theory of
characteristic classes for transversely symplectic foliations.
Here we briefly summarize his construction.
Kontsevich considered the two Lie subalgebras
$$
\mathfrak{ham}_{2n}^0\supset \mathfrak{ham}_{2n}^1
$$
of $\mathfrak{ham}_{2n}$
consisting of formal Hamiltonian vector fields {\it without constant terms} and
{\it without constant or linear terms}.
In terms of Hamiltonian functions, we can write
\begin{align*}
\mathfrak{ham}_{2n}^0 &=
\left(\bigoplus_{k=2}^\infty S^k H_{\mathbb R}^{2n}\right)^{\wedge}\\
\mathfrak{ham}_{2n}^1 &=
\left(\bigoplus_{k=3}^\infty S^k H_{\mathbb R}^{2n}\right)^{\wedge}
\end{align*}
and we have an isomorphism
$$
H^*_{GF}(\mathfrak{ham}_{2n}^0,\mathrm{Sp}(2n,{\mathbb R}))
\cong
H^*_{GF}(\mathfrak{ham}^1_{2n})^{\mathrm{Sp}(2n,{\mathbb R})}.
$$
Let $\mathcal{F}$ be a foliation on a smooth manifold $N$
and let $T\mathcal{F}\subset TN$ be the tangent bundle of $\mathcal{F}$.
The {\it leaf} cohomology or {\it foliated} cohomology of
$\mathcal{F}$, denoted by $H^*_{\mathcal{F}}(N;{\mathbb R})$,
is defined to be the cohomology of
$\Omega^*_{\mathcal{F}}(N)=\oplus_k \Gamma(\Lambda^k T^*\mathcal{F})$,
which is the quotient of the de Rham complex $\Omega^*N$ of $N$ by the
ideal $I^*(\mathcal{F})$ of $\mathcal{F}$. If $\mathcal{F}$ is a transversely symplectic
foliation of codimension $2n$, then there is a transverse symplectic form
$\omega$, and the homomorphism
$$
\wedge\omega^n\colon I^*(\mathcal{F})\longrightarrow \Omega^*N
$$
vanishes identically (locally $I^*(\mathcal{F})$ is generated by the $2n$ transverse
coordinate differentials, and the product of any of them with the top-degree
transverse form $\omega^n$ is zero), so that there is a well-defined homomorphism
$$
\land\omega^n: H^*_{\mathcal{F}}(N;{\mathbb R}) {\longrightarrow}
H^{*+2n}(N;{\mathbb R}) \ .
$$
Now Kontsevich pointed out that the relative cohomology
$$
H^*_{GF}(\mathfrak{ham}_{2n}^0,\mathrm{Sp}(2n,{\mathbb R}))
\cong
H^*_{GF}(\mathfrak{ham}^1_{2n};{\mathbb R})^{\mathrm{Sp}(2n,{\mathbb R})}
$$
serves as the universal model for $H^*_{\mathcal{F}}(N;{\mathbb R})$,
so that one has the following commutative diagram:
\begin{equation*}
\begin{CD}
H^*_{GF}(\mathfrak{ham}^1_{2n})^{\mathrm{Sp}(2n,{\mathbb R})}
@>>> H^*_{\mathcal{F}}(N;{\mathbb R}) \\
@V{\land\omega^n}VV @VV{\land\omega^n}V \\
H^{*+2n}_{GF}(\mathfrak{ham}_{2n},\mathrm{Sp}(2n,{\mathbb R}))
@>>> H^{*+2n}(N;{\mathbb R}) \ .
\end{CD}
\end{equation*}
It is easy to show that the natural projection
$$
\mathfrak{ham}_{2n}^1{\longrightarrow} S^3 H^{2n}_{\mathbb R}
$$
onto the lowest weight part gives the abelianization of the
Lie algebra $\mathfrak{ham}_{2n}^1$ because the
Poisson bracket
$$
S^3 H^{2n}_{\mathbb R}\otimes S^k H^{2n}_{\mathbb R}{\longrightarrow}
S^{k+1}H^{2n}_{\mathbb R}\
$$
is easily seen to be surjective for any $k\geq 3$.
It follows that, as mentioned in \eqref{eq:K}, there is a homomorphism $\Phi$
defined by the following composition:
$$
H^*(S^3 H^{2n}_{\mathbb R})^{\mathrm{Sp}(2n,{\mathbb R})}{\longrightarrow}
H^*_{GF}(\mathfrak{ham}^1_{2n})^{\mathrm{Sp}(2n,{\mathbb R})}
\overset{\wedge \omega^n}{{\longrightarrow}}
H^{*+2n}_{GF}(\mathfrak{ham}_{2n},\mathrm{Sp}(2n,{\mathbb R})) \ .
$$
Further composing $\Phi$ with the homomorphism in \eqref{eq:sf}, we obtain
\begin{equation}
\widetilde{\Phi}:
H^*(S^3 H^{2n}_{\mathbb R})^{\mathrm{Sp}(2n,{\mathbb R})}
{\longrightarrow} H^{*+2n}(\mathrm{B\Gamma}^{\omega}_{2n};{\mathbb R}) \ .
\label{eq:K2}
\end{equation}
For any symplectic manifold $(M,\omega)$
of dimension $2n$, let $\mathrm{Symp}^\delta(M)$ denote
the symplectomorphism group of $M$ equipped with the
{\it discrete} topology. Then the above construction
gives rise to a homomorphism
$$
H^*(S^3 H^{2n}_{\mathbb R})^{\mathrm{Sp}(2n,{\mathbb R})}{\longrightarrow}
H^{*+2n}(\mathrm{ESymp}^\delta(M))
\stackrel{\int_M}{{\longrightarrow}}
H^{*}(\mathrm{BSymp}^\delta(M);{\mathbb R}) \ ,
$$
where $\mathrm{ESymp}^\delta(M)$ denotes the total space
of the universal foliated $M$-bundle over the classifying
space $\mathrm{BSymp}^\delta(M)$ of $\mathrm{Symp}^\delta(M)$,
and $\int_M$ is the integration over the fiber in this universal bundle.
One of the merits of the above construction of Kontsevich
is that the {\it stable} cohomology of
$\mathfrak{ham}_{2n}$ is not interesting because by \cite{GS}
it is just the polynomial algebra on the symplectic class
while that of $\mathfrak{ham}^0_{2n}$ is one of the three versions
of Kontsevich's graph cohomology
(see \cite{Kontsevich93}, \cite{Kontsevich94}),
more precisely the {\it commutative version},
which is very rich and still mysterious.
\section{Statements of the main results}
In this section, we state the main results of this paper.
\begin{theorem}
In the range of weights $w\leq 10$ the relative cohomology groups
$H^*_{GF}(\mathfrak{ham}^0_2,\mathfrak{sp}(2,{\mathbb R}))_w$
are non-trivial only for the following three combinations of degree and weight:
\begin{align*}
H^0_{GF}(\mathfrak{ham}^0_2,\mathfrak{sp}(2,{\mathbb R}))_{0}&\cong{\mathbb R}\\
H^2_{GF}(\mathfrak{ham}^0_2,\mathfrak{sp}(2,{\mathbb R}))_{2}&\cong{\mathbb R}\\
H^5_{GF}(\mathfrak{ham}^0_2,\mathfrak{sp}(2,{\mathbb R}))_{10}&\cong{\mathbb R} \ .
\end{align*}
Furthermore, the following homomorphisms are all isomorphisms:
\begin{align*}
\wedge\omega: H^0_{GF}(\mathfrak{ham}^0_2,\mathfrak{sp}(2,{\mathbb R}))_{0}
&\ {\longrightarrow} \ H^2_{GF}(\mathfrak{ham}_2,\mathfrak{sp}(2,{\mathbb R}))_{-2}\cong {\mathbb R}\langle\omega\rangle \ ,\\
\wedge\omega: H^2_{GF}(\mathfrak{ham}^0_2,\mathfrak{sp}(2,{\mathbb R}))_{2}
&\ {\longrightarrow} \ H^4_{GF}(\mathfrak{ham}_2,\mathfrak{sp}(2,{\mathbb R}))_{0}\cong {\mathbb R}\langle p_1\rangle \ ,\\
\wedge\omega: H^5_{GF}(\mathfrak{ham}^0_2,\mathfrak{sp}(2,{\mathbb R}))_{10}
&\ {\longrightarrow} \ H^7_{GF}(\mathfrak{ham}_2,\mathfrak{sp}(2,{\mathbb R}))_{8}\cong{\mathbb R}\langle\text{GKF}\rangle \ .
\end{align*}
It follows that both the first Pontrjagin class $p_1$ and
the Gel'fand-Kalinin-Fuks class GKF can be decomposed as
wedge products of certain leaf cohomology classes and the
transverse symplectic class $\omega$.
\label{th:main}
\end{theorem}
Combining Theorem \ref{th:main} with our earlier result in
\cite{KM03} (see also \cite{KM07}) we obtain
the following non-triviality result for the
characteristic classes defined by Kontsevich \cite{K}.
\begin{corollary}
Under the homomorphisms
\begin{align*}
H^2(S^3 H_{\mathbb R}^{2};{\mathbb R})^{\mathrm{Sp}(2,{\mathbb R})}&\ {\longrightarrow} \
H^2(\mathrm{BSymp}^\delta(\Sigma_g);{\mathbb R})\\
H^2(S^3 H_{\mathbb R}^{2};{\mathbb R})^{\mathrm{Sp}(2,{\mathbb R})}&\ \overset{\widetilde{\Phi}}{{\longrightarrow}} \
H^4(\mathrm{B\Gamma}_2^\omega;{\mathbb R}),
\end{align*}
the generator of $H^2(S^3 H_{\mathbb R}^{2};{\mathbb R})^{\mathrm{Sp}(2,{\mathbb R})}\cong{\mathbb R}$
is mapped to
$$
e_1\in H^2(\mathrm{BSymp}^\delta(\Sigma_g);{\mathbb R}),
\quad p_1\in H^4(\mathrm{B\Gamma}_2^\omega;{\mathbb R})
$$
respectively (up to non-zero constants),
where $\mathrm{Symp}({\Sigma_g})$ denotes the symplectomorphism group of
${\Sigma_g}$ with respect to a fixed symplectic form.
It follows that both homomorphisms are non-trivial.
\label{cor:nt}
\end{corollary}
We can further generalize the above result to higher
dimensions as follows.
\begin{theorem}
In the range $*\leq 2n$, the image of the homomorphism
$$
\Phi\colon H^{*}(S^3 H_{\mathbb R}^{2n};{\mathbb R})^{\mathrm{Sp}(2n,{\mathbb R})}\ {\longrightarrow} \
H^{*+2n}_{GF}(\mathfrak{ham}_{2n},\mathrm{Sp}(2n,{\mathbb R}))
$$
introduced by Kontsevich is precisely the subspace spanned by
the classes
$$
\omega^k p_1^{k_1}\cdots p_n^{k_n}
\quad (k+k_1+2k_2\cdots +n k_n = n)
$$
that are borderline with respect to Bott vanishing in this context.
Furthermore, the elements
$$
{\omega}^{n}, {\omega}^{n-1}p_1,\cdots, p_1^{n}
$$
are mapped non-trivially under the homomorphism
$$
\widetilde{\Phi}\colon H^{*}(S^3 H_{\mathbb R}^{2n};{\mathbb R})^{\mathrm{Sp}(2n,{\mathbb R})}\ {\longrightarrow} \
H^{*+2n}(\mathrm{B\Gamma}_{2n}^\omega;{\mathbb R})
$$
so that
$$
\dim \mathrm{Im} \widetilde{\Phi}\geq n+1 \ .
$$
\label{th:nt2}
\end{theorem}
\begin{remark}
It seems reasonable to conjecture that the above homomorphism
$\Phi$ is trivial in the range $*>2n$. This is true for the case
$n=1$ by Theorem \ref{th:main}.
It is an important
problem to determine whether the classes involving the
higher Pontrjagin classes $p_i \ (i\geq 2)$ are non-trivial, or not.
\end{remark}
\section{Proofs of the main results}
In this section we write $H$ as a shorthand for $H^{2n}_{\mathbb R}$.
We begin with the proof of Theorem \ref{th:main}.
For this, notice that
\begin{align*}
C^*_{GF}(\mathfrak{ham}_2^0,\mathfrak{sp}(2,{\mathbb R}))_{w}&=\\
\sum_{k_3+2 k_4+3k_5\cdots=w}&
(\Lambda^{k_3} S^{3}H^*\otimes
\Lambda^{k_4} S^{4}H^*\otimes
\Lambda^{k_5} S^{5}H^*\otimes
\cdots)^{\mathrm{Sp}(2,{\mathbb R})} \ ,\\
C^*_{GF}(\mathfrak{ham}_2,\mathfrak{sp}(2,{\mathbb R}))_{w}&=\\
\sum_{-k_1+k_3+2 k_4\cdots=w}&
(\Lambda^{k_1} H^*\otimes
\Lambda^{k_3} S^{3}H^*\otimes
\Lambda^{k_4} S^{4}H^*\otimes
\cdots)^{\mathrm{Sp}(2,{\mathbb R})} \ .
\end{align*}
It is easy to see that both
$C^*_{GF}(\mathfrak{ham}_2^0,\mathfrak{sp}(2,{\mathbb R}))_{w}$ and
$C^*_{GF}(\mathfrak{ham}_2,\mathfrak{sp}(2,{\mathbb R}))_{w}$
vanish if $w$ is odd. Moreover,
for $C^*_{GF}(\mathfrak{ham}_2^0,\mathfrak{sp}(2,{\mathbb R}))_{w}$ with $w=2,4,6,8$
we have the following Table \ref{tab:1}, where $\chi$ denotes the Euler characteristic.
\begin{table}[h]
\caption{Dimensions of the cochain complexes $C^{k}_{GF}(\mathfrak{ham}_2^0,\mathfrak{sp}(2,{\mathbb R}))_{w}$ for $w=2,4,6,8$.}
\begin{center}
\begin{tabular}{|l|c|c|c|c|c|c|}
\noalign{\hrule height0.8pt}
\hfil $k$ & $1$ & $2$ & $3$ & $4$ & $5$ & $\chi$ \\
\hline
$\dim C_{GF}^{k}(\mathfrak{ham}_2^0,\mathfrak{sp}(2,{\mathbb R}))_{2}$
& $0$ & $1$ & $0$ & $0$ & $0$ & $1$ \\
\hline
$\dim C_{GF}^{k}(\mathfrak{ham}_2^0,\mathfrak{sp}(2,{\mathbb R}))_{4}$
& $0$ & $0$ & $1$ & $1$ & $0$ & $0$\\
\hline
$\dim C_{GF}^{k}(\mathfrak{ham}_2^0,\mathfrak{sp}(2,{\mathbb R}))_{6}$
& $0$ & $1$ & $1$ & $0$ & $0$ & $0$ \\
\hline
$\dim C_{GF}^{k}(\mathfrak{ham}_2^0,\mathfrak{sp}(2,{\mathbb R}))_{8}$
& $0$ & $0$ & $4$ & $5$ & $1$ & $0$ \\
\noalign{\hrule height0.8pt}
\end{tabular}
\end{center}
\label{tab:1}
\end{table}
Here we have used well-known facts about the representations of
$\mathrm{Sp}(2,{\mathbb R})$ such as $S^kH^*\cong S^kH$ and
$$
S^kH\otimes S^{\ell}H
\cong
S^{k+\ell}H\oplus S^{k+\ell-2}H\oplus\cdots
\oplus S^{k-\ell}H\quad (k\geq \ell) \ ,
$$
as well as various formulae for the
irreducible decomposition of $\Lambda^m S^kH$; see e.g. \cite{FH}.
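For example (our own consistency check), $\dim S^kH=k+1$ for the $2$-dimensional representation $H$ of $\mathrm{Sp}(2,{\mathbb R})$, so for $k=\ell=3$ the formula above reads
$$
S^3H\otimes S^3H\cong S^6H\oplus S^4H\oplus S^2H\oplus S^0H \ ,
\qquad 4\cdot 4=7+5+3+1 \ .
$$
The single trivial summand $S^0H$ lies in $\Lambda^2 S^3H$, because the invariant pairing on the $4$-dimensional irreducible representation $S^3H$ is antisymmetric; this accounts for the entry $\dim C^{2}_{GF}(\mathfrak{ham}_2^0,\mathfrak{sp}(2,{\mathbb R}))_{2}=1$ in the first line of Table \ref{tab:1}.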
In the weight $2$ part, we find that the homomorphism
\begin{align*}
H_{GF}^{2}(\mathfrak{ham}_2^0,\mathfrak{sp}(2,{\mathbb R}))_{2}
= (\Lambda^2 &S^3 H^*)^{\mathrm{Sp}(2,{\mathbb R})}\cong{\mathbb R}
\ \overset{\land\omega}{{\longrightarrow}}\\
H_{GF}^{4}(\mathfrak{ham}_2,\mathfrak{sp}(2,{\mathbb R}))_{0}
&=(\Lambda^2 H^*\otimes
\Lambda^2 S^3 H^*)^{\mathrm{Sp}(2,{\mathbb R})}
\cong{\mathbb R}<p_1>
\end{align*}
is an isomorphism because
$\omega$ is a generator
of $(\Lambda^2 H^*)^{\mathrm{Sp}(2,{\mathbb R})}\cong{\mathbb R}$.
It follows that the first Pontrjagin class
$p_1$ can be decomposed as a wedge product
$$
p_1=\gamma_1\wedge \omega
$$
of a class
$\gamma_1\in H_{GF}^{2}(\mathfrak{ham}_2^0,\mathfrak{sp}(2,{\mathbb R}))_{2}\cong {\mathbb R}$
in the leaf cohomology
with the transverse symplectic class $\omega$.
In the weight $4$ part, we find that the coboundary operator
\begin{align*}
C^3_{GF}(\mathfrak{ham}_2^0,\mathfrak{sp}(2,{\mathbb R}))_{4}
&=(\Lambda^2 S^3 H^*\otimes S^4 H^*)^{\mathrm{Sp}(2,{\mathbb R})}
\cong{\mathbb R}\\
&\overset{\delta}{{\longrightarrow}}
C^4_{GF}(\mathfrak{ham}_2^0,\mathfrak{sp}(2,{\mathbb R}))_{4}
=(\Lambda^4 S^3 H^*)^{\mathrm{Sp}(2,{\mathbb R})}\cong{\mathbb R}
\end{align*}
is an isomorphism. This, together with the computation shown
in Table~\ref{tab:1}, shows that
$H^*_{GF}(\mathfrak{ham}_2^0,\mathfrak{sp}(2,{\mathbb R}))_{4}$ is trivial.
Similarly the weight $6$ part
$H^*_{GF}(\mathfrak{ham}_2^0,\mathfrak{sp}(2,{\mathbb R}))_{6}$
is trivial because the coboundary operator
\begin{align*}
C^2_{GF}&(\mathfrak{ham}_2^0,\mathfrak{sp}(2,{\mathbb R}))_{6}
=(\Lambda^2 S^5 H^*)^{\mathrm{Sp}(2,{\mathbb R})}
\cong{\mathbb R}\\
&\overset{\delta}{{\longrightarrow}}
C^3_{GF}(\mathfrak{ham}_2^0,\mathfrak{sp}(2,{\mathbb R}))_{6}
=(S^3 H^*\otimes S^4 H^*\otimes S^5 H^*
)^{\mathrm{Sp}(2,{\mathbb R})}\cong{\mathbb R}
\end{align*}
can be seen to be an isomorphism.
The cochain complex
$C^{*}_{GF}(\mathfrak{ham}_2^0,\mathfrak{sp}(2,{\mathbb R}))_{8}$
for the weight $8$ part is given in Table \ref{tab:2},
where the symbols $(347), (4^26)$, for example, stand for
\begin{align*}
(S^3 H^*\otimes S^4 H^*\otimes S^7 H^*
)^{\mathrm{Sp}(2,{\mathbb R})}&\cong{\mathbb R}\\
(\Lambda^2 S^4 H^*\otimes S^6 H^*
)^{\mathrm{Sp}(2,{\mathbb R})}&\cong{\mathbb R}
\end{align*}
respectively, and similarly for the other ones.
\begin{table}[h]
\caption{generators for $C_{GF}^{*}(\mathfrak{ham}_2^0,\mathfrak{sp}(2,{\mathbb R}))_{8}$}
\begin{center}
\begin{tabular}{|l|c|c|}
\noalign{\hrule height0.8pt}
\hfil ${}$ & $\text{dim}$ & $\text{generators}$ \\
\hline
$C_{GF}^{3}(\mathfrak{ham}_2^0,\mathfrak{sp}(2,{\mathbb R}))_{8}$ & $4$
& $(347) (356) (4^26)(45^2)$\\
\hline
$C_{GF}^{4}(\mathfrak{ham}_2^0,\mathfrak{sp}(2,{\mathbb R}))_{8}$ & $5$
& $(3^246) (3^25^2)_2 (34^25)_2$\\
\hline
$C_{GF}^{5}(\mathfrak{ham}_2^0,\mathfrak{sp}(2,{\mathbb R}))_{8}$ & $1$
& $(3^345)$\\
\noalign{\hrule height0.8pt}
\end{tabular}
\end{center}
\label{tab:2}
\end{table}
The subscript $2$ in the symbol $(3^25^2)_2$
means that its dimension is $2$, namely we have
$$
(\Lambda^2 S^3 H^*\otimes \Lambda^2 S^5 H^*
)^{\mathrm{Sp}(2,{\mathbb R})}\cong{\mathbb R}^2 \ .
$$
A direct computation of the coboundary operators shows that
this cochain complex is acyclic.
The dimensions of the cochain complex for the weight $10$ part of
$\mathfrak{ham}_2^0$ are given in the first line of Table \ref{tab:3}.
In the second line, the dimensions of
the cochain complex computing
$H^*_{GF}(\mathfrak{ham}_2,\mathfrak{sp}(2,{\mathbb R}))_{8}$
are given. These were first computed by Gel'fand-Kalinin-Fuks \cite{GKF}
and were later re-computed by Metoki \cite{Metoki}.
\begin{table}[h]
\caption{Dimensions of the cochain complexes computing $H^{*}_{GF}(\mathfrak{ham}_2^0,\mathfrak{sp}(2,{\mathbb R}))_{10}$ and $H^{*}_{GF}(\mathfrak{ham}_2,\mathfrak{sp}(2,{\mathbb R}))_{8}$.}
\begin{center}
\begin{tabular}{|l|c|c|c|c|c|c|c|c|c|}
\noalign{\hrule height0.8pt}
\hfil $k$ & $1$ & $2$ & $3$ & $4$ & $5$ & $6$ & $7$ & $8$ & $\chi$ \\
\hline
$\dim C_{GF}^{k-2}(\mathfrak{ham}_2^0,\mathfrak{sp}(2,{\mathbb R}))_{10}$
& $0$ & $0$ & $0$ & $1$ & $3$ & $9$ & $12$ & $4$ & $-1$ \\
\hline
$\dim C_{GF}^{k}(\mathfrak{ham}_2,\mathfrak{sp}(2,{\mathbb R}))_{8}$
& $0$ & $0$
& $5$ & $13$ & $17$ & $18$ & $14$ & $4$ & $-1$\\
\noalign{\hrule height0.8pt}
\end{tabular}
\end{center}
\label{tab:3}
\end{table}
As was already mentioned, Gel'fand, Kalinin and Fuks determined
$H^*_{GF}(\mathfrak{ham}_2,\mathfrak{sp}(2,{\mathbb R}))_{8}$
by a computer calculation and found that it is $1$-dimensional,
generated by their class of degree $7$.
Metoki re-computed this cohomology group,
again with the aid of a computer program, and constructed an explicit (but complicated) cocycle for the
GKF class. His cocycle is not divisible by $\omega$, and no
cocycle divisible by $\omega$ was previously known.
Now it can be checked that the homomorphism
$$
\wedge\omega\colon C^*_{GF}(\mathfrak{ham}_2^0,\mathfrak{sp}(2,{\mathbb R}))_{10}
\ {\longrightarrow}\
C^{*+2}_{GF}(\mathfrak{ham}_2,\mathfrak{sp}(2,{\mathbb R}))_{8}
$$
induces an embedding of cochain complexes which shifts the
degree by $2$ and the weight by $-2$.
Our purpose is to prove that it induces an isomorphism in cohomology.
By an explicit computation, we determined a system of generators
for the first chain complex
$C_{GF}^{*}(\mathfrak{ham}_2^0,\mathfrak{sp}(2,{\mathbb R}))_{10}$
as shown in Table \ref{tab:4}.
\begin{table}[h]
\caption{generators for $C_{GF}^{*}(\mathfrak{ham}_2^0,\mathfrak{sp}(2,{\mathbb R}))_{10}$}
\begin{center}
\begin{tabular}{|l|c|c|}
\noalign{\hrule height0.8pt}
\hfil ${}$ & $\text{dim}$ & $\text{generators}$ \\
\hline
$C_{GF}^{2}(\mathfrak{ham}_2^0,\mathfrak{sp}(2,{\mathbb R}))_{10}$
& $1$ & $(7^2)$ \\
\hline
$C_{GF}^{3}(\mathfrak{ham}_2^0,\mathfrak{sp}(2,{\mathbb R}))_{10}$ & $3$
& $(358) (367) (457)$\\
\hline
$C_{GF}^{4}(\mathfrak{ham}_2^0,\mathfrak{sp}(2,{\mathbb R}))_{10}$ & $9$
& $(3^248) (3^257) (34^27)(3456)_4(35^3)(4^36)$\\
\hline
$C_{GF}^{5}(\mathfrak{ham}_2^0,\mathfrak{sp}(2,{\mathbb R}))_{10}$ & $12$
& $(3^347) (3^356) (3^24^26)_3(3^245^2)_4(34^35)_2(4^5)$\\
\hline
$C_{GF}^{6}(\mathfrak{ham}_2^0,\mathfrak{sp}(2,{\mathbb R}))_{10}$ & $4$
& $(3^45^2) (3^34^25)_2 (3^24^4)$\\
\noalign{\hrule height0.8pt}
\end{tabular}
\end{center}
\label{tab:4}
\end{table}
In general, there are three equivalent ways of expressing elements
in $C^*_{GF}(\mathfrak{ham}_{2n},\mathrm{Sp}(2n,{\mathbb R}))$.
The first one is in terms of (duals of)
$\mathrm{Sp}(2n,{\mathbb R})$-invariant tensors of Hamiltonian functions.
The second one is by means of vertex oriented graphs
which encode ways of
contraction of tensors of Hamiltonian functions
by applying the symplectic pairing
$H^{2n}_{\mathbb R}\otimes H^{2n}_{\mathbb R}{\rightarrow}{\mathbb R}$ along the edges.
The third one is in terms of tautological $1$-forms
$$
\delta^{i}_{j_1\cdots j_k}\in C^1_{GF}(\mathfrak{a}_{2n})
$$
(restricted to the Lie subalgebra $\mathfrak{ham}_{2n}$), defined by
$$
\delta^{i}_{j_1\cdots j_k}(X)=(-1)^k
\frac{\partial^k f_i}{\partial x_{j_1}\cdots\partial x_{j_k}}(0,\cdots, 0) \ ,
$$
where
$$
X=\sum_{i=1}^{2n} f_i \frac{\partial}{\partial x_i} \in \mathfrak{a}_{2n} \ ;
$$
see e.g. \cite{B}. For example, a generator of
$(\Lambda^2 S^3 H^*)^{\mathrm{Sp}(2,{\mathbb R})}$
in the first line of Table \ref{tab:1} can be given in either of the following three ways:
\begin{align*}
&\mathrm{(1)}\ (x^3\wedge y^3-3x^2y\wedge xy^2)^* \ ,\\
&\mathrm{(2)}\ \text{a graph with $2$ vertices and $3$ edges joining them} \ ,\\
&\mathrm{(3)}\ -\delta^1_{22}\wedge\delta^2_{11}-3\delta^1_{11}\wedge\delta^2_{22} \ .
\end{align*}
Metoki \cite{Metoki} gave an explicit basis for
$C^*_{GF}(\mathfrak{ham}_2,\mathfrak{sp}(2,{\mathbb R}))_{8}$
and computed the coboundary operators in terms of this basis.
Although he did not use the representation theory of $\mathrm{Sp}(2,{\mathbb R})$
explicitly, it turns out that our basis given in Table \ref{tab:4}
appears as a subbasis of his.
Therefore we can use his computation
(which we checked by our method).
In particular, the coboundary maps
$$
C^4(\mathfrak{ham}_2^0,\mathfrak{sp}(2,{\mathbb R}))_{10}
\overset{A}{{\longrightarrow}}
C^5(\mathfrak{ham}_2^0,\mathfrak{sp}(2,{\mathbb R}))_{10}
\overset{B}{{\longrightarrow}}
C^6(\mathfrak{ham}_2^0,\mathfrak{sp}(2,{\mathbb R}))_{10}
$$
are represented by the following
$(12,9)$-matrix
$$
\setcounter{MaxMatrixCols}{18}
A=
\begin{pmatrix}
45 & 18 & 9 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & -9 & 0 & 0 & 5 &6&6&0&0\\
-10 & 0 & -2 & 10 & -10 &-19&-33&0&1\\
-120 & 0 & 72 & 10 & -16 &-3&16&0&6\\
30 & 0 & -30 & 0 & -2 &-12&-21&0&-3\\
-8 & 50 & 0 & 10 & -32 &-48&-60&13&0\\
-1 & -2 & 0 & 10 & -4 &-15&-25&11&0\\
-15 & 18 & 0 & 20 & -34 &-57&-73&0&0\\
-70 & 16 & 0 & 0 & 0 &6&-14&20&0\\
0 & 0 & -9 & -20 & 20 & 39 & 52 & 0 & -1\\
0 & 0 & 57 & 10 & 4 &12&16&0&-3\\
0 & 0 & 0 & 0 & 0 &0&0&0&-140
\end{pmatrix}
$$
and the $(4,12)$-matrix
$$
\setcounter{MaxMatrixCols}{18}
B=
\begin{pmatrix}
0 & -56 & 0 & 0 & 0 & -10 & 10 & 0 & 1 & 0 & 0 & 0 \\
30 & -8 & 6 & 28 & 68 & -6 & -2 & -22 & 5 & -12 & -6 & 0 \\
-39 & -12 & 9 & -28 & -66 & 12 & 4 & 9 & -10 & 12 & 9 & 0 \\
0 & 0 & 0 & -14 & -56 & 0 & 0 & 0 & 0 & -14 & -14 & 1
\end{pmatrix} \ .
$$
Of course we have $BA=O$.
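These linear-algebra assertions are easily checked by machine. The following sketch (our own illustration, not part of the computations of \cite{GKF} or \cite{Metoki}) verifies the product $BA=O$ and evaluates $\dim H^5_{GF}(\mathfrak{ham}_2^0,\mathfrak{sp}(2,{\mathbb R}))_{10}=\dim\ker B-\mathrm{rank}\, A$:
\begin{verbatim}
# Sketch only: verify B A = O and compute dim H^5 = dim ker B - rank A.
import numpy as np

A = np.array([
    [45, 18, 9, 0, 0, 0, 0, 0, 0],
    [0, -9, 0, 0, 5, 6, 6, 0, 0],
    [-10, 0, -2, 10, -10, -19, -33, 0, 1],
    [-120, 0, 72, 10, -16, -3, 16, 0, 6],
    [30, 0, -30, 0, -2, -12, -21, 0, -3],
    [-8, 50, 0, 10, -32, -48, -60, 13, 0],
    [-1, -2, 0, 10, -4, -15, -25, 11, 0],
    [-15, 18, 0, 20, -34, -57, -73, 0, 0],
    [-70, 16, 0, 0, 0, 6, -14, 20, 0],
    [0, 0, -9, -20, 20, 39, 52, 0, -1],
    [0, 0, 57, 10, 4, 12, 16, 0, -3],
    [0, 0, 0, 0, 0, 0, 0, 0, -140],
])
B = np.array([
    [0, -56, 0, 0, 0, -10, 10, 0, 1, 0, 0, 0],
    [30, -8, 6, 28, 68, -6, -2, -22, 5, -12, -6, 0],
    [-39, -12, 9, -28, -66, 12, 4, 9, -10, 12, 9, 0],
    [0, 0, 0, -14, -56, 0, 0, 0, 0, -14, -14, 1],
])
assert not (B @ A).any()  # B A = O
rank_A = np.linalg.matrix_rank(A)
rank_B = np.linalg.matrix_rank(B)
# dim H^5 = (12 - rank B) - rank A; the theorem predicts 8 - 7 = 1
print(rank_A, rank_B, (12 - rank_B) - rank_A)
\end{verbatim}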
The corresponding coboundary maps in
$$
C^6(\mathfrak{ham}_2,\mathfrak{sp}(2,{\mathbb R}))_{8}
\overset{\widetilde{A}}{{\longrightarrow}}
C^7(\mathfrak{ham}_2,\mathfrak{sp}(2,{\mathbb R}))_{8}
\overset{\widetilde{B}}{{\longrightarrow}}
C^8(\mathfrak{ham}_2,\mathfrak{sp}(2,{\mathbb R}))_{8}
$$
are given by the following $(14,18)$-matrix $\widetilde{A}$
and $(4,14)$-matrix $\widetilde{B}$:
$$
\widetilde{A}=
\begin{pmatrix}
A & A_1\\
O & A_2
\end{pmatrix},
\quad
\widetilde{B}=
\begin{pmatrix}
B & B_1
\end{pmatrix}.
$$
Here $O$ denotes the zero matrix of size $(2,9)$ and $B_1$ is of size $(4,2)$;
this confirms the fact that
$$
C^*_{GF}(\mathfrak{ham}_2^0,\mathfrak{sp}(2,{\mathbb R}))_{10}
\ \subset\
C^{*+2}_{GF}(\mathfrak{ham}_2,\mathfrak{sp}(2,{\mathbb R}))_{8}
$$
is indeed a subcomplex.
Now an explicit computation shows that the above inclusion
induces an isomorphism in cohomology.
This completes the proof of Theorem \ref{th:main}.
\qed
\begin{remark}
The unique leaf cohomology class
$$
\eta\in H^5_{GF}(\mathfrak{ham}_2^0,\mathfrak{sp}(2,{\mathbb R}))_{10}
$$
such that $\eta\wedge{\omega}=GKF$
can be represented by an explicit cocycle in
$C^5_{GF}(\mathfrak{ham}_2^0,\mathfrak{sp}(2,{\mathbb R}))_{10}$
which is a linear combination of the
cochains of the forms
$(3^347), (3^24^26), (3^245^2)$
in Table \ref{tab:4}.
We omit the precise formula.
\end{remark}
\begin{proof}[Proof of Corollary \ref{cor:nt}]
We proved in \cite{KM03} (see also \cite{KM07})
that both the first Pontrjagin
class $p_1\in H^4(\mathrm{ESymp}^\delta(\Sigma_g);{\mathbb R})$
and its fiber integral
$e_1\in H^2(\mathrm{BSymp}^\delta(\Sigma_g);{\mathbb R})$,
which is the first Mumford-Morita-Miller class, are non-trivial.
More precisely, we proved the existence of
foliated ${\Sigma_g}$-bundles over closed oriented surfaces
such that the signatures of their total spaces are
non-zero, while their total holonomy groups are
contained in the group $\mathrm{Symp}({\Sigma_g})$ of
area-preserving diffeomorphisms of ${\Sigma_g}$ (with respect to some area form).
By Theorem \ref{th:main} the homomorphism
$$
\wedge\omega\colon
H^2_{GF}(\mathfrak{ham}_2^0,\mathfrak{sp}(2,{\mathbb R}))\cong{\mathbb R}
{\longrightarrow}
H^4_{GF}(\mathfrak{ham}_2,\mathfrak{sp}(2,{\mathbb R}))\cong{\mathbb R}
$$
is an isomorphism, where the target
is generated by the first Pontrjagin class $p_1$.
The result follows.
\end{proof}
\begin{proof}[Proof of Theorem \ref{th:nt2}]
We begin with the proof of the first statement.
On the one hand, the weight of the elements in $S^3 H_{\mathbb R}^{2n}$ is $1$
while that of ${\omega}^n$ is $-2n$. Hence the weights of
elements of $\mathrm{Im}\Phi$ restricted to the range
$*\leq 2n$ are non-positive.
By the result of Gel'fand-Kalinin-Fuks \cite{GKF} mentioned
in the Introduction, we can conclude that
$\mathrm{Im}\Phi$ is contained in the span of the
classes
$$
\omega^k p_1^{k_1}\cdots p_n^{k_n} \in
H^*_{GF}(\mathfrak{ham}_{2n},\mathrm{Sp}(2n,{\mathbb R}))
$$
with $k+k_1+2k_2\cdots +n k_n \leq n$.
On the other hand, any element in $\mathrm{Im}\Phi$ is
annihilated by taking the wedge product with a single ${\omega}$
because ${\omega}^{n+1}$ vanishes identically. Therefore
$\mathrm{Im}\Phi$ is contained in the span of the above classes
with the condition that
$k+k_1+2k_2\cdots +n k_n$ is precisely equal to $n$.
It remains to prove that all these classes are indeed
contained in $\mathrm{Im}\Phi$.
For this, we use the well-known formulae which
express ${\omega}$ and Pontrjagin classes in terms of the
tautological $1$-forms.
With respect to the standard basis
$x_1,\cdots,x_n,y_1,\cdots,y_n$ of the symplectic vector
space $H^{2n}_{\mathbb R}$ and the tautological $1$-forms
$$
\delta^{i}_{j_1\cdots j_k}\in C^1_{GF}(\mathfrak{ham}_{2n}),
$$
we have
$$
\omega=\delta^1\wedge\delta^{n+1}+\cdots+\delta^{n}\wedge\delta^{2n}.
$$
The universal curvature form $\Omega=(\Omega^i_j)$
can be written as
$$
\Omega^i_j=\sum_{k=1}^n \delta^i\wedge \delta^k_{jk} \ ,
$$
see e.g. \cite{B}, and the Pontrjagin classes $p_i\ (i=1,2,\cdots)$
are certain homogeneous polynomials
on $\Omega^i_j$ of degree $2i$. In terms of the duals of
Hamiltonian functions,
the tautological forms $\delta^i$ and $\delta^k_{jk}$ correspond
to elements of $H^*$ and $S^3 H^*$, respectively. We can now conclude that
$$
p_i\in ({\Lambda}^{2i} H^*\otimes {\Lambda}^{2i} S^3 H^*)^{\mathrm{Sp}(2n,{\mathbb R})}
\quad (i=1,2,\cdots).
$$
It follows that any element
$\omega^k p_1^{k_1}\cdots p_n^{k_n}$
with $k+k_1+2k_2+\cdots +n k_n = n$
is contained in
$$
({\Lambda}^{2n} H^*\otimes {\Lambda}^{2n-2k} S^3 H^*)^{\mathrm{Sp}(2n,{\mathbb R})}
\cong
{\omega}^n\wedge ({\Lambda}^{2n-2k} S^3 H^*)^{\mathrm{Sp}(2n,{\mathbb R})}
$$
because ${\Lambda}^{2n} H^*\cong{\mathbb R}$ generated by ${\omega}^n$.
Hence such elements are contained in $\mathrm{Im}\Phi$
proving the first part of the Theorem.
Next we prove the second part.
Let $\pi\colon E{\longrightarrow} X$ be a foliated ${\Sigma_g}$-bundle over a closed
oriented surface with non-vanishing signature such that
the total holonomy group is contained in the group
$\mathrm{Symp}({\Sigma_g})$. The existence of such bundles was
proved in our paper \cite{KM03}. The classifying map
$f\colon E{\longrightarrow} \mathrm{B\Gamma}_2^{\omega}$ of the transversely
symplectic foliation on $E$ of codimension $2$ has the property that
$f^*(p_1)\neq 0$. Now consider the manifold
${\mathbb C} P^{n-k}\times E^k$ equipped with transversely symplectic
foliation of codimension $2n$ which is induced from the
point foliation on ${\mathbb C} P^{n-k}$ and the above foliation on $E$.
Then it is easy to see that the characteristic class
${\omega}^{n-k}p_1^k$ of this foliation is non-trivial.
This completes the proof.
\end{proof}
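For concreteness, the classes $\omega^{k}p_{1}^{k_{1}}\cdots p_{n}^{k_{n}}$
with $k+k_{1}+2k_{2}+\cdots+nk_{n}=n$ appearing in the above proof are easily
enumerated for small $n$; the following elementary Python sketch (added here
purely as an illustration) lists the exponent tuples $(k,k_{1},\cdots,k_{n})$:
\begin{verbatim}
from itertools import product

def basis_classes(n):
    # tuples (k, k_1, ..., k_n) with k + k_1 + 2 k_2 + ... + n k_n = n
    out = []
    for ks in product(range(n + 1), repeat=n):
        rest = sum(i * ki for i, ki in enumerate(ks, start=1))
        if rest <= n:
            out.append((n - rest,) + ks)
    return out

for n in (1, 2, 3):
    print(n, basis_classes(n))
# n = 2 yields (2,0,0), (1,1,0), (0,2,0), (0,0,1), i.e. the four
# classes w^2, w p_1, p_1^2, p_2.
\end{verbatim}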
\begin{remark}
By the calculation in the proof, $p_1$ is divisible by $\omega$ only if $n=1$, although
in general $p_1^n$ is divisible by $\omega$.
The dimension of $({\Lambda}^{2i} H^*\otimes {\Lambda}^{2i} S^3 H^*)^{\mathrm{Sp}(2n,{\mathbb R})}$
is $1$ for $n=1$ and is $2$ for $n\geq 2$.
\end{remark}
\section{The Euler characteristic of
$H^*(\mathfrak{ham}^0_{2n},\mathrm{Sp}(2n,{\mathbb R}))$}
As we mentioned already,
Perchik \cite{P} gave a formula for the generating function
$$
\sum_{w=0}^\infty \chi(H^*(\mathfrak{ham}_{2n},\mathrm{Sp}(2n,{\mathbb R}))_w)t^w
$$
of the Euler characteristic of the relative cohomology of
$\mathfrak{ham}_{2n}$. In this section, we prove a similar
formula for
$H^*_{GF}(\mathfrak{ham}^0_{2n},\mathrm{Sp}(2n,{\mathbb R}))$.
Following \cite{P}, let us define rational functions
$p_i(n)\ (i=0,1,\cdots)$
in the $n+1$ variables $x_1,\cdots,x_n, t$
(polynomials with respect to $t$) as follows.
First consider
$$
a=(a_1,\cdots,a_n),\quad b=(b_1,\cdots,b_n)\quad (a_i,b_i\geq 0)
$$
and put
$$
|a+b|=\sum (a_i+b_i),\quad x^{a-b}=x_1^{a_1-b_1}\cdots x_n^{a_n-b_n}.
$$
Then define
\begin{align*}
p_0(n)&=\Pi_{|a+b|=2,\ a\not= b} (1-x^{a-b})\\
p_k(n)&=\Pi_{|a+b|=2+k} (1-t^{k}x^{a-b}) \ .
\end{align*}
\begin{theorem}
The constant term with respect to $x_i$ $(i=1,\cdots,n)$ of the infinite product
$\Pi_{i=0}^\infty\ p_i(n)$ is equal to
$$
n! 2^n \sum_{w=0}^\infty
\chi(H^*(\mathfrak{ham}^0_{2n},\mathrm{Sp}(2n,{\mathbb R}))_w)t^w \ ,
$$
where the subscript $w$ denotes the weight $w$ part of the cohomology.
\label{th:gf}
\end{theorem}
\begin{proof}
Perchik's formula was obtained by multiplying
the above infinite product
$\Pi_{i=0}^\infty\ p_i(n)$ with one more rational function
$$
p_{-1}(n)=\Pi_{|a+b|=1,\ a\not= b} (1-t^{-1}x^{a-b}).
$$
This part corresponds to the constant term
of $\mathfrak{ham}_{2n}$ which is isomorphic
to $H^{2n}_{\mathbb R}$ as a representation of $\mathrm{Sp}(2n,{\mathbb R})$
and whose weight is $-1$.
Since the relative cohomology
$H^*_{GF}(\mathfrak{ham}^0_{2n},\mathrm{Sp}(2n,{\mathbb R}))$
is defined by ignoring this part,
the proof follows by eliminating $p_{-1}(n)$
from the original formula of Perchik.
\end{proof}
\begin{remark}
In the case of $n=1$, a computer computation carried out with the help of
M.~Suzuki shows that $\frac{1}{2}$ times the above
constant term in low degrees in $t$ is given by
$$
1+ t^2-t^{10}+ t^{12}- t^{14}- t^{16}+ t^{18}-3 t^{24}+2 t^{26}+\cdots \ ,
$$
while the corresponding series for
$H^*_{GF}(\mathfrak{ham}^0_2,\mathfrak{sp}(2,{\mathbb R}))$ due to Perchik
is
$$
t^{-2}+2-t^{8}- t^{14}- t^{22}- t^{28}+ t^{30}- t^{32}+\cdots \ .
$$
The coefficient $-1$ of $t^{10}$ in the former series corresponds
to our leaf cohomology class $\eta$.
Observe also that the coefficient of $t^{16}$ is
$-1$. The corresponding coefficient of $t^{14}$ in the latter series
is also $-1$ which represents the Metoki class in
$H^{9}_{GF}(\mathfrak{ham}_2,\mathfrak{sp}(2,{\mathbb R}))_{14}$
(see \cite{Metoki}). Although the cocycle given by Metoki himself
is not divisible by $\omega$, it seems highly likely that
his class can also be decomposed as $\eta'\wedge\omega$
for some leaf cohomology class
$\eta'\in H^7_{GF}(\mathfrak{ham}^0_2,\mathfrak{sp}(2,{\mathbb R});{\mathbb R})_{16}$.
\end{remark}
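The constant-term extraction behind these series is elementary to reproduce.
As an illustration (our addition; the truncation order is an arbitrary
choice), the following self-contained Python sketch recomputes, for $n=1$,
the first series displayed in the remark above:
\begin{verbatim}
TMAX = 12  # truncation order in t (illustrative)

def mul(P, Q):
    # multiply Laurent polynomials {(x_exp, t_exp): coeff}, truncated in t
    R = {}
    for (xa, ta), ca in P.items():
        for (xb, tb), cb in Q.items():
            if ta + tb <= TMAX:
                key = (xa + xb, ta + tb)
                R[key] = R.get(key, 0) + ca * cb
    return {k: c for k, c in R.items() if c != 0}

def p_factor(k):
    # p_k(1): product over a + b = 2 + k (with a != b when k = 0)
    P = {(0, 0): 1}
    for a in range(2 + k + 1):
        b = 2 + k - a
        if not (k == 0 and a == b):
            P = mul(P, {(0, 0): 1, (a - b, k): -1})
    return P

total = {(0, 0): 1}
for k in range(TMAX + 1):  # p_k first contributes at order t^k
    total = mul(total, p_factor(k))

# constant term in x, divided by n! 2^n = 2; integer division is safe
# since the theorem guarantees divisibility
series = sorted((te, c // 2) for (xe, te), c in total.items() if xe == 0)
print(series)  # expected: [(0, 1), (2, 1), (10, -1), (12, 1)]
\end{verbatim}
Up to order $t^{12}$ this reproduces $1+t^{2}-t^{10}+t^{12}$, in agreement
with the series above.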
\section{Concluding remarks}
It is easy to see that the relative cohomology
$$
H^*_{GF}(\mathfrak{ham}^0_{2n},\mathrm{Sp}(2n,{\mathbb R}))_w
\cong
H^*_{GF}(\mathfrak{ham}^1_{2n})_w^{\mathrm{Sp}(2n,{\mathbb R})}
$$
stabilizes as $n$ goes to infinity. In fact, the limit
cohomology is nothing but one of Kontsevich's
theories of graph cohomologies developed in
\cite{Kontsevich93}, \cite{Kontsevich94},
more precisely the {\it commutative} case
(see \cite{K}).
As before, the abelianization homomorphism
$$
\mathfrak{ham}_{2n}^1{\longrightarrow} S^3 H^{2n}_{\mathbb R}
$$
induces a homomorphism
$$
\Phi_n\colon H^*(S^3 H^{2n}_{\mathbb R})^{\mathrm{Sp}(2n,{\mathbb R})}
{\longrightarrow} H^*_{GF}(\mathfrak{ham}^1_{2n})^{\mathrm{Sp}(2n,{\mathbb R})}.
$$
However, the stable cohomology
$$
\lim_{n\to\infty} H^*(S^3 H^{2n}_{\mathbb R})^{\mathrm{Sp}(2n,{\mathbb R})}
$$
is isomorphic to
$$
{\mathbb R}[\text{vertex oriented connected trivalent graph}]/(\text{AS}) \ ,
$$
where $\mathrm{AS}$ denotes the antisymmetry relation.
If we add another relation, called the $IHX$ relation,
to the above, we obtain the algebra
$$
\mathcal{A}(\phi)=
{\mathbb R}[\text{vertex oriented connected trivalent graph}]/(\text{AS, IHX}).
$$
This algebra plays a fundamental role in the theory of
finite type invariants for homology $3$-spheres
due to Ohtsuki \cite{O}, who extended the foundational theory of Vassiliev
for knots, and developed by
Le, Murakami and Ohtsuki (see \cite{LMO}).
Garoufalidis and Nakamura proved the following result:
\begin{theorem}[{\bf Garoufalidis and Nakamura \cite{GN}}]
The ideal $(S^4 H^{2n}_{\mathbb R})$ of $\Lambda^* S^3H^{2n}_{\mathbb R}$
generated by $S^4 H^{2n}_{\mathbb R}\subset \Lambda^2 S^3 H^{2n}_{\mathbb R}$
corresponds exactly to the $IHX$-relation so that
there is an isomorphism
$$
\mathcal{A}(\phi)\cong
(\Lambda^* S^3H^{2n}_{\mathbb R}/(S^4 H^{2n}_{\mathbb R}))^{\mathrm{Sp}(2n,{\mathbb R})}.
$$
\end{theorem}
Since it can be seen that $\mathrm{Ker}\ \Phi_\infty$
coincides with the $\mathrm{Sp}$-invariant part
$\left((S^4 H^{2n}_{\mathbb R})\right)^{\mathrm{Sp}(2n,{\mathbb R})}$
of the above ideal, we conclude that
$$
\mathrm{Image}\ \Phi_\infty\cong
\mathcal{A}(\phi) \ .
$$
Thus it is a very important problem to determine
$\mathrm{Coker}\ \Phi_\infty$. We have tried to determine
whether our leaf cohomology class
$\eta\in H^5_{GF}(\mathfrak{ham}_2^0,\mathfrak{sp}(2,{\mathbb R}))_{10}$
survives in the {\it stable} cohomology
$$
\lim_{n\to\infty} H^5_{GF}(\mathfrak{ham}^0_{2n},\mathfrak{sp}(2n,{\mathbb R}))_{10} \ ,
$$
or not. We have the same problem for other unstable
leaf cohomology classes.
So far this attempt has remained unsuccessful.
One method of attacking this problem would be to
compute the generating function $c(t)$ for the
commutative graph cohomology by making use of
Theorem~\ref{th:gf}. More precisely,
one is faced with the important problem of computing
$$
c(t)=
\lim_{n\to\infty}
\sum_{w=0}^\infty
\chi(H^*(\mathfrak{ham}^0_{2n},\mathrm{Sp}(2n,{\mathbb R}))_w)t^w \ ,
$$
which is the limit as $n\to\infty$ of the
formula given in Theorem~\ref{th:gf}.
Our computations so far imply
$$
c(t)=1+t^2+2t^4+3t^6+6t^8+\cdots \ .
$$
Recall here that the algebra $\mathcal{A}(\phi)$ is known
(see \cite{O2}) to be
a polynomial algebra whose numbers of generators
are $1, 1, 1, 2, 2, 3, \cdots $ in degrees $2,4,6,8,10,12,\cdots$
so that the generating function for this algebra is
$$
1+t^2+2t^4+3t^6+6t^8+9t^{10}+16 t^{12}+\cdots.
$$
It would be interesting to know how these two generating
functions differ from each other.
\subsection*{Acknowledgements}
The authors would like to thank T.~Sakasai and M.~Suzuki
for help with computer computations using LiE and Mathematica.
They would also like to thank T.~Tsuboi for information
about the thesis of Metoki \cite{Metoki}.
\bibliographystyle{amsplain}
\section{Introduction}
In recent years, theoretical and experimental investigation has been focused
on the engineering of highly nonclassical, non-Gaussian states of the radiation
field (for a review, see e.g. \cite{PhysRep}).
Interest in the production of non-Gaussian optical states is due to their
strongly nonclassical properties, such as entanglement and negativity
of the quasi-probability phase-space distributions,
that are important for the efficient implementation of quantum information and
communication protocols \cite{PhysRep,KimBS,KitagawaPhotsub,DodonovDisplnumb,Cerf}, and for quantum estimation tasks \cite{QEstimNoi}.
Several schemes for the generation of non-Gaussian states, both single-mode and two-mode,
have already been proposed \cite{CxKerrKorolkova,AgarTara,DeGauss1,DeGauss2,DeGauss3,DeGauss4,DeGauss5},
and many successful and encouraging experimental realizations have been reported recently
\cite{ZavattaScience,ExpdeGauss1,ExpdeGauss2,Grangier,BelliniProbing,GrangierCats}.
At the level of general principles, a very important result is the rigorous proof that various
nonclassical properties are minimized by Gaussian states \cite{ExtremalGaussian}.
Therefore, it is reasonable to expect that the use of non-Gaussian resources
may improve significantly the performance of quantum information protocols.
In particular, concerning quantum teleportation with continuous variables (CV),
it has been shown that the success probability of teleportation can be greatly increased
by using entangled non-Gaussian resources in the framework of the ideal Braunstein-Kimble (B-K)
protocol \cite{KitagawaPhotsub,Opatrny,Cochrane,Olivares,CVTelepNoi,YangLi}.
Indeed, it has been shown that some specific two-mode non-Gaussian states,
dubbed squeezed Bell-like states (that include as subcases photon-added and
photon-subtracted de-Gaussified states \cite{CVTelepNoi}), when used as entangled
resources provide a significant increase in the teleportation fidelity
of single-mode input states under the ideal protocol \cite{CVTelepNoi}.
Such an enhancement is due to a balancing of three different features \cite{CVTelepNoi}:
The entanglement content of the resources, their (appropriately defined)
degree of affinity with the two-mode squeezed vacuum, and their (suitably measured)
amount of non-Gaussianity. For the precise definition of the last two quantities, see Refs.~\cite{CVTelepNoi,GenoniNonGaussy}. It has been suggested \cite{CVTelepNoi}
that such states can be produced by combining simultaneous phase-matched multiphoton
processes and conditional measurements.
The analysis of Ref.~\cite{CVTelepNoi} has been later extended to consider other
classes of non-Gaussian resources, such as two-mode squeezed symmetric superpositions
of Fock states and of squeezed cat-like states, that allow high levels of performance
in the teleportation of single-mode input states \cite{CVTelepNoisyNoi}.
A partial preliminary analysis of non-ideal cases has also been performed
\cite{CVTelepNoisyNoi} by considering simple superpositions of independently
generated fields converging on a common spatial volume, such as superpositions
of a two-mode pure non-Gaussian resource and a two-mode thermal state \cite{Glauber}.
In this elementary instance, mixed non-Gaussian entangled states remain preferred
resources for teleportation when compared to mixed twin-beam Gaussian states \cite{CVTelepNoisyNoi}.
\\
In this work, using the formalism of the characteristic function, we study in full generality
the Braunstein-Kimble protocol for CV teleportation in realistic conditions and with
non-Gaussian entangled resources. We include in our investigation the main sources of
decoherence that lead to the degradation of the transferred quantum information,
such as losses due to imperfect homodyne measurements, and damping due to the
propagation of the optical fields in lossy fibers. The effects of these inefficiencies
have already been considered, among others, in Refs.~\cite{VukicsnonidealTelep,TelepChizhov}.
In particular Ref.~\cite{VukicsnonidealTelep} is concerned with the study of imperfect
Bell measurements, while in Ref.~\cite{TelepChizhov} the authors investigate
the limits of quantum teleportation due to photon absorption during propagation in fibers.
Besides considering each problem separately, these and related works are always restricted
to the use of Gaussian resources. The main object of the present work is to investigate the
effect of the simultaneous presence of all sources of imperfection on the performance of
CV teleportation protocols with non-Gaussian resources, and their robustness against decoherence.
A general and exhaustive analysis turns out to be possible in the framework
of the characteristic function representation. This method has been
discussed in full generality for the description of ideal CV teleportation
\cite{MarianCVTelep}, and applied first to the case of Gaussian \cite{MarianCVTelep}
and non-Gaussian resources \cite{CVTelepNoi,CVTelepNoisyNoi}.
We will then extend the formalism to include the description of nonideal CV teleportation,
including realistic Bell measurements and decoherence due to propagation in noisy channels.
In order to investigate different optimization strategies of the nonideal protocol,
we will discuss optimization over the free parameters of the non-Gaussian resources
as well as over the gain factor associated with the transmitted
classical information \cite{TelepGainBowen} (for various strategies of gain tuning and
of optimal gain see also Refs.~\cite{TelepChizhov,TelepIde,TelepTailored}).
Indeed, in the instance of non-Gaussian resources, the gain can be considered
as a further free parameter suitable for optimization.
The paper is organized as follows.
In section~\ref{SecCharFuncTelep} we extend the characteristic function formalism
to include the case of realistic CV quantum teleportation.
In section~\ref{SecEntangRes} we introduce and discuss the main properties
of some classes of non-Gaussian entangled resources.
In section~\ref{SecUnityGain} we study the efficiency of the quantum teleportation protocol
in the instance of fixed given values of the gain.
In section~\ref{SecNonunityGain} we carry out an optimization procedure of the protocol
over the entangled resources and the gain parameter.
Finally, in section \ref{secConclusions} we draw our conclusions and discuss some
outlook on current and future research.
\section{Nonideal CV teleportation protocol in the characteristic function formalism}
\label{SecCharFuncTelep}
In this section, we describe the realistic B-K CV teleportation protocol
in the formalism of the characteristic function. Although several alternative formalisms
are available for the description of the B-K CV teleportation protocol
\cite{BraunsteinKimble,TelepFormal1,TelepFormal2,TelepFormal3,FuruRep,vanLoockTelep},
the characteristic function representation proves to be particularly convenient
when considering the nonideal case and non-Gaussian resources.
The description of nonideal teleportation requires the introduction
of mechanisms of loss and inefficiency in the main steps of the protocol.
A schematic description of the nonideal protocol is depicted in Fig.~\ref{FigRealQuantTel}.
\begin{figure}[t]
\centering
\includegraphics*[width=8.5cm]{RealTelepPr.eps}
\caption{(Color online) Pictorial representation of the nonideal B-K CV quantum teleportation
protocol: In the first step, the input mode is mixed by Alice with one
of the two beams (modes) of the entangled resource; the ensuing state
is then subject to a realistic Bell measurement. The result of the measure
is communicated to Bob through a classical channel. In the second step,
a unitary transformation, determined by the previous measurement, is applied to the second
mode of the entangled resource, that is affected by decoherence during the propagation
in a noisy channel, e.g. a lossy fiber. The ensuing output state is the final
teleported state.}
\label{FigRealQuantTel}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics*[width=8.5cm]{RealBellMeas.eps}
\caption{(Color online) Model scheme of a realistic Bell measurement.
The model takes into account the non-unity efficiency of
the detectors $D$ performing the homodyne measurement.
In the depicted scheme, such inefficiency is simulated by the
introduction of two fictitious beam splitters $BS_{2}$ and $BS_{3}$,
with equal transmissivity $\eta$.}
\label{FigRealQuantTel2}
\end{figure}
The input state and the entangled resource states are assumed to be initially pure.
This is not a serious limitation because one can always map the case of
a nonideal teleportation protocol with noisy (mixed) inputs and resources
to an equivalent protocol with pure inputs and resources but with a
correspondingly larger amount of noise affecting the protocol. A simple
illustrative example of this equivalence will be discussed later on
in the present work.
The single-mode input state ($in$) is mixed to mode $1$
of the entangled resource at a beam splitter. At the first user's
(Alice) location, a Bell measurement, consisting in homodyne detections,
is performed on the obtained state of mode $in$ and mode $1$.
In order to describe a nonideal measurement, one needs
to model/simulate the inefficiencies of the photo-detectors.
A realistic detector can be modeled by placing a fictitious beam splitter,
i.e. a partly transmitting mirror, in front of an ideal detector \cite{LeonhardtRealHomoMeasur}.
Based on such a prescription, a scheme describing a realistic Bell measurement
is shown in Fig.~\ref{FigRealQuantTel2}.
After a realistic Bell measurement,
the result is transmitted to the receiver (Bob) through a classical channel.
The mode $2$ of the entangled resource propagates in a noisy channel, as a lossy
fiber, to Bob's location, where it undergoes a unitary displacement
according to the result of the Bell measurement. It is important to note that the
generation of the entangled resource can take place close to the sender, and
typically very far away from the receiver, as one of the main tasks of quantum
teleportation is the transfer of quantum information across long distances.
It is then legitimate to assume that the radiation field associated
to mode $1$ is not affected by losses due to propagation, while the field
associated to mode $2$, which usually has to propagate over much longer distances,
can be strongly affected by decoherence. The degradation of quantum information
is caused by the propagation of field mode $2$ in a noisy channel.
Therefore, the output teleported state depends both on the
inefficiency of the homodyne detectors and the decoherence rate
of the noisy channel. \\
We can formalize the effects of the above-described dynamics as follows.
Let us denote by
$\rho_{in} \,=\, |\phi\rangle_{in}\,_{in}\langle \phi|$
and $\rho_{res} \,=\, |\psi\rangle_{12}\,_{12}\langle \psi|$
the density matrices associated, respectively, with the
single-mode pure input state and with the two-mode pure
entangled resource. The single-mode input state is initially
disentangled from the two-mode entangled resource, so that
the initial three-mode field is
$\rho_{0} \,=\, \rho_{in} \otimes \rho_{res}$,
and the initial global characteristic function reads
\begin{eqnarray}
&&\chi_{0}(\alpha_{in};\alpha_{1};\alpha_{2}) \,=\,
Tr[\rho_{0} \; D_{in}(\alpha_{in})\,
D_{1}(\alpha_{1})\, D_{2}(\alpha_{2})] \nonumber \\
&& \nonumber \\
&&\,=\, \chi_{in}(\alpha_{in})\; \chi_{res}(\alpha_{1};\alpha_{2}) \,,
\label{globcharfuncinitial}
\end{eqnarray}
where $Tr$ denotes the trace operation, $D_{j}(\alpha_{j})$ denotes the
displacement operator for the mode $j$ ($j=in,1,2$), $\chi_{in}$ is the
characteristic function of the input state, and $\chi_{res}$ is the
characteristic function of the entangled resource.
By defining the quadrature operators $X_{j} \,=\,
\frac{1}{\sqrt{2}}(a_{j}+ a_{j}^{\dag})$ and $P_{j} \,=\,
\frac{i}{\sqrt{2}}(a_{j}^{\dag}- a_{j})$ $(j=in,1,2)$, and the associated
phase-space real variables $x_{j} \,=\,
\frac{1}{\sqrt{2}}(\alpha_{j}+ \alpha_{j}^{*})$ and $p_{j} \,=\,
\frac{i}{\sqrt{2}}(\alpha_{j}^{*}- \alpha_{j})$,
the characteristic function can be written in terms of $x_{j}$, $p_{j}$, i.e.
$\chi_{0}(\alpha_{in};\alpha_{1};\alpha_{2}) \equiv \chi_{0}(x_{in},p_{in};x_{1},p_{1};x_{2},p_{2})$.
\\
The first step of the protocol consists in the Bell measurement at Alice's location,
that is the homodyne measurements of the first quadrature of the mode $1$
and of the second quadrature of the mode $in$, with results $\tilde{x}$ and $\tilde{p}$, respectively.
After such nonideal Bell measurement, the remaining mode $2$ is left in a mixed state described by
the corresponding single-mode characteristic function $\chi_{Bm}(x_{2},p_{2})$. In appendix \ref{AppendixRealTel} we show in detail how to compute it in full generality for arbitrary single-mode inputs and arbitrary two-mode entangled resources. Here we report only the final expression:
\begin{eqnarray}
&&\chi_{Bm}(x_{2},p_{2}) = \frac{\mathcal{P}^{-1}(\tilde{p},\tilde{x})}{(2\pi)^{2}}
\int d\xi d\upsilon \; e^{i \xi\tilde{p}-i \tilde{x} \upsilon} \times \nonumber \\
&& \nonumber \\
&& \chi_{in}\left(\frac{T \xi}{\sqrt{2}}\,,\frac{T \upsilon}{\sqrt{2}} \right)
\chi_{res}\left(\frac{T \xi}{\sqrt{2}}\,,-\frac{T \upsilon}{\sqrt{2}};x_{2},p_{2}\right)\times
\nonumber \\
&& \nonumber \\
&&\exp\left\{-\frac{R^{2}}{4}( \xi^{2}+ \upsilon^{2}) \right\} \,,
\label{chiBellMeasurement}
\end{eqnarray}
where $T^2$ and $R^2=(1-T^{2})$ denote, respectively, the transmissivity and reflectivity
of the beam splitters that model the losses.
The function $\mathcal{P}(\tilde{p},\tilde{x})$ is the distribution of the measurement outcomes
$\tilde{p}$ and $\tilde{x}$ (see appendix \ref{AppendixRealTel}).
Note that the Gaussian exponential in Eq.~(\ref{chiBellMeasurement})
is related to the vacua entering the input ports of the fictitious beam splitters.
\\
Afterwards, mode $2$ propagates in a damping channel, such as a lossy fiber,
before it reaches Bob's location. The Markovian dynamics of a system subject to
damping is described, in the interaction picture, by the following master equation
for the density operator $\rho$ \cite{WallsMilburn,DecohReview}:
\begin{equation}
\partial_{t} \rho \,=\, \frac{\Upsilon}{2}
\left\{ n_{th} L[a_{2}^{\dag}] \rho + (n_{th}+1) L[a_{2}] \rho \right\} \,,
\label{MasterEq}
\end{equation}
where the Lindblad superoperators are defined as $L[\mathcal{O}]
\rho \equiv 2 \mathcal{O} \rho \mathcal{O^{\dag}}
- \mathcal{O^{\dag}} \mathcal{O} \rho - \rho \mathcal{O^{\dag}} \mathcal{O}$,
$\Upsilon$ is the mode damping rate, and $n_{th}$ is the number of thermal photons.
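For later convenience, we also recall the well-known solution of
Eq.~(\ref{MasterEq}) at the level of the (symmetrically ordered)
characteristic function (see, e.g., Ref.~\cite{WallsMilburn}):
\begin{equation}
\chi(\alpha;t) \,=\, \chi\left(e^{-\frac{\tau}{2}}\alpha \,;0\right)
\exp\left\{-\left(\frac{1}{2}+n_{th}\right)\left(1-e^{-\tau}\right)|\alpha|^{2}\right\} \,,
\end{equation}
with $\tau=\Upsilon t$. This relation is the origin both of the damping
factors $e^{-\frac{\tau}{2}}$ and of the thermal contribution to the effective
covariance appearing in the final expressions below.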
Finally, at Bob's location, a displacement $\lambda= g(\tilde{x}+i \tilde{p})$
is performed on mode $2$. The real parameter $g$ is the so-called gain factor
\cite{vanLoockTelep}.
The combined effect of propagation in a damping channel and unitary displacement
determines the characteristic function $\chi_{out}(x_{2},p_{2})$ of the final
output state of the teleportation protocol (see appendix \ref{AppendixRealTel}
for details):
\begin{eqnarray}
&&\chi_{out}(x_{2},p_{2}) \,=\,
\chi_{in}\left(g T x_{2}\,,g T p_{2} \right) \times \nonumber \\
&& \nonumber \\
&& \chi_{res}\left(g T x_{2}\,,-g T p_{2};e^{-\frac{\tau}{2}}x_{2},e^{-\frac{\tau}{2}}p_{2}\right) \times \nonumber \\
&& \nonumber \\
&&\exp\left\{-\frac{1}{2} \Gamma_{\tau,R}(x_{2}^{2}+p_{2}^{2})\right\} ,
\label{chioutfinale}
\end{eqnarray}
where $\tau=\Upsilon t$, and the thermal ``renormalized'' phase-space
covariance $\Gamma_{\tau,R}$ is defined as:
\begin{equation}
\Gamma_{\tau,R} \,=\, (1-e^{-\tau})\left(\frac{1}{2}+n_{th}\right)+g^{2}R^{2} \,.
\label{Gammadef}
\end{equation}
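As a concrete numerical illustration, for the values $\tau=0.3$, $n_{th}=0$,
$R^{2}=0.05$ considered below, and for the choice $g=1/T$ adopted in the next
section, one finds
\begin{equation}
\Gamma_{\tau,R} \,=\, \frac{1-e^{-0.3}}{2}+\frac{0.05}{0.95}
\,\approx\, 0.130+0.053 \,\approx\, 0.182 \,.
\end{equation}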
The form of Eq.~(\ref{chioutfinale}) highlights the different roles played
by the two sources of noise introduced in the teleportation protocol, associated,
respectively, to the damping rate $\Upsilon$ and the reflectivity $R^2$.
The two decoherence mechanisms act separately but also in combination, as
one can see from the Gaussian exponential factor in Eq.~(\ref{chioutfinale}),
which in fact is nonvanishing for $R \neq 0$ and/or $\tau \neq 0$.
The effect of the imperfect Bell measurement is expressed also
by the presence of the scale factor $T$ in the arguments of the input and
resource characteristic functions $\chi_{in}$ and $\chi_{res}$. Vice versa,
decoherence due to noisy propagation affects obviously only mode $2$
by means of the exponentially decreasing weight $e^{-\frac{\tau}{2}}$
in the arguments of $\chi_{res}$. The factorized form of the output
characteristic function, holding for the ideal protocol \cite{MarianCVTelep},
\begin{equation}
\chi_{out}(x_{2},p_{2})=\chi_{in}(x_{2},p_{2})\,\chi_{res}(x_{2},-p_{2};x_{2},p_{2}),
\label{MarianFormula}
\end{equation}
is recovered, as expected, from Eq.~(\ref{chioutfinale}) when $R=0$ $(T=1)$, $\Upsilon=0$ $(\tau=0)$,
and $g=1$.
\section{Entangled resources: Two classes of optimized non-Gaussian states}
\label{SecEntangRes}
Given the general description of the nonideal protocol in terms of the characteristic functions,
in this section we analyze the performance of two classes of non-Gaussian entangled
resources for the teleportation of input coherent states, respectively, the two-mode
squeezed Bell-like states $|\psi\rangle_{SB}$ and the two-mode squeezed cat-like
states $|\psi\rangle_{SC}$:
\begin{equation}
|\psi\rangle_{SB} = S_{12}(\zeta)
\{\cos\delta |0,0 \rangle + e^{i \theta} \sin\delta
|1,1 \rangle \} ,
\label{squeezBell}
\end{equation}
\begin{eqnarray}
&&|\psi\rangle_{SC} = \mathcal{N}_{SC} S_{12}(\zeta)
\{\cos\delta |0,0 \rangle + e^{i \theta} \sin\delta
|\gamma,\gamma \rangle \} , \nonumber \\
\label{squeezCat}
\end{eqnarray}
where $S_{12}(\zeta) = e^{ -\zeta a_{1}^{\dag}a_{2}^{\dag} + \zeta^{*}
a_{1}a_{2}}$ is the two-mode squeezing operator, $\zeta=r e^{i\phi}$,
$|m \, , n \rangle \equiv |m \rangle_{1} \otimes |n \rangle_{2}$
is a two-mode Fock state (of modes 1 and 2),
$|\gamma,\gamma \rangle\equiv |\gamma\rangle_{1}\otimes |\gamma\rangle_{2}$
is a symmetric two-mode coherent state with complex amplitude $\gamma =|\gamma| e^{i \varphi}$,
and the normalization factor $\mathcal{N}_{SC}$ is
$\mathcal{N}_{SC}=\{1+ e^{-|\gamma|^{2}}\sin 2\delta \cos\theta\}^{-1/2}$.
In order to obtain a maximization of the teleportation fidelity, it is necessary
to perform a simultaneous balanced optimization, on the free parameters
($\delta$, $\theta$, $\gamma$), of some partially competing properties
\cite{CVTelepNoi,CVTelepNoisyNoi}. These include the entanglement content,
the amount of non-Gaussianity of the state \cite{GenoniNonGaussy}, and a
squeezed-vacuum-affinity $\mathcal{G}$. For a generic pure state $|\psi\rangle$,
the latter is defined as \cite{CVTelepNoi,CVTelepNoisyNoi}:
\begin{equation}
\mathcal{G} = \sup_{r} |\langle -r|\psi\rangle|^{2} \, ,
\end{equation}
with $|-r\rangle = S_{12}(-r)|0,0\rangle$.
Indeed, the optimal non-Gaussian resources (\ref{squeezBell}) and (\ref{squeezCat})
exhibit a sufficient squeezed-vacuum-affinity, which then appears to be a crucially
needed property in order to select efficient and highly performing non-Gaussian resources.
For instance, one could as well consider a different form of squeezed Bell-like state,
the so-called "Buridan donkey" or "Hamlet" state $|\psi\rangle_{SB2}$, that is
obtained from the singlet Bell state as follows:
\begin{equation}
|\psi\rangle_{SB2} = S_{12}(\zeta)\{\cos\delta |0,1 \rangle + e^{i \theta} \sin\delta|1,0 \rangle \} .
\label{squeezBell2}
\end{equation}
It is indeed simple to verify that, although such a state, at fixed squeezing,
is more entangled than a Gaussian twin beam, it performs less efficiently both
in the ideal and in the realistic teleportation protocol.
This fact can be understood if one looks at the behavior of the squeezed vacuum affinity \cite{CVTelepNoi}.
Namely, the Buridan donkey state Eq.~(\ref{squeezBell2}) does not contain the fundamental Gaussian
contribution coming from the squeezed vacuum.
It is thus ``unbalanced'', in the sense that it is less affine to the squeezed vacuum,
and excessively non-Gaussian compared to the optimized Bell-like state $|\psi\rangle_{SB}$.
As a consequence, the fine interplay among these three quantities,
i.e. the entanglement, the degree of non-Gaussianity, and, in particular, the squeezed vacuum affinity,
cannot be realized in the non-Gaussian resource (\ref{squeezBell2}) \cite{CVTelepNoi}.
The crucial role played by the squeezed vacuum affinity for the performance of
different non-Gaussian resources has been studied in Ref.~\cite{CVTelepNoisyNoi}.
In particular, it has been shown that, in the ideal teleportation protocol,
the two-mode squeezed symmetric superposition of Fock states,
i.e. $S_{12}(\zeta)\sum_{k=0}^{2}c_{k}|k,k \rangle$,
when optimized for the teleportation of both input coherent states and single-photon states,
reduces to a squeezed truncated twin beam,
i.e. $S_{12}(-r)\sum_{k=0}^{2}\tanh^{k} s|k,k \rangle$.
In the same paper, it has been also shown that, in the ideal teleportation protocol,
the optimized two-mode squeezed cat-like states, i.e. Eq.~(\ref{squeezCat}),
possess a high amount of squeezed vacuum affinity.
Therefore, besides a certain amount of entanglement, also a sufficient degree of
squeezed vacuum affinity appears to be necessary for a non-Gaussian resource to
be optimal for a B-K teleportation protocol.
Clearly, the performance of B-K teleportation protocols depends strongly
on the structure of the second-order correlations in the entangled resources.
In this sense the B-K protocol, with its structure of homodyne measurements,
is particularly tailored to the use of Gaussian resources.
Therefore, a non-Gaussian resource may improve on the performance
of a corresponding Gaussian one only if the fundamental Gaussian
contribution coming from the squeezed vacuum is subject to a not
too drastic modification. A large value of the affinity assures
that the non-Gaussian resource satisfies such a requirement. The
interplay of the affinity with the non-Gaussianity and the degree
of entanglement allows to single out those non-Gaussian resources
possessing higher-order correlations that add to the leading Gaussian structure
of the two-mode entangled resource, thus enhancing further the protocol
efficiency, and lacking those non-Gaussian contributions that are incompatible
with the structure of the B-K protocol.
Indeed, a very interesting question
open for future investigation is the inverse of the one studied in the present
paper. Here we are analyzing the problem of optimizing non-Gaussian resources given the
B-K protocol. The inverse question would be that of adapting the protocol to the resources.
Namely, given a certain class of non-Gaussian squeezed resources with some
given properties, one asks how the B-K protocol would
have to be modified in order to optimize the fidelity of teleportation.
We now proceed to determine the general expression of the fidelity of teleportation
in terms of the characteristic function for the three different non-Gaussian entangled
resources $|\psi\rangle_{SB}$, $|\psi\rangle_{SC}$, $|\psi\rangle_{SB2}$.
For instance, the two-mode characteristic function $\chi_{SB}$ of
the entangled resource (\ref{squeezBell}) reads
\begin{equation}
\chi_{SB}(\alpha_{1},\,\alpha_{2})=Tr[|\psi\rangle_{SB}\,_{SB}\langle\psi| \, D_{1}(\alpha_{1})D_{2}(\alpha_{2})] \, ,
\label{chiSB}
\end{equation}
and analogous expressions hold for $\chi_{SC}$ and $\chi_{SB2}$.
The corresponding explicit expression is obtained
using the two-mode Bogoliubov transformations
\begin{eqnarray}
&&S_{12}^{\dag}(\zeta)\, a_{i} \, S_{12}(\zeta)=\cosh r \, a_{i}
-e^{i\phi}\sinh r \, a_{j}^{\dag}, \, \nonumber \\
&& (i\neq j=1,2) \,,
\label{BogoliubovT}
\end{eqnarray}
and the relation
\begin{equation}
\langle m| D(\alpha) |n \rangle \,=\,
\left(\frac{n!}{m!}\right)^{1/2}\alpha^{m-n}e^{-\frac{1}{2}|\alpha|^{2}}
L_{n}^{(m-n)}(|\alpha|^{2}) \,,
\label{LaguerreFormula}
\end{equation}
where $L_{n}^{(m-n)}(\cdot)$ denotes the associated Laguerre polynomial of order $n$.
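As an elementary illustration of this procedure, setting $\delta=0$ in
Eq.~(\ref{squeezBell}), i.e. for the Gaussian twin beam
$S_{12}(\zeta)|0,0\rangle$ with $\phi=\pi$, one recovers the familiar Gaussian
form
\begin{equation}
\chi_{TwB}(\alpha_{1},\alpha_{2}) \,=\,
\exp\left\{-\frac{\cosh 2r}{2}\left(|\alpha_{1}|^{2}+|\alpha_{2}|^{2}\right)
+\sinh 2r \, \mathrm{Re}\left(\alpha_{1}\alpha_{2}\right)\right\} \,,
\end{equation}
which constitutes the Gaussian building block of $\chi_{SB}$ and $\chi_{SC}$.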
The quantity measuring the success probability of a teleportation
protocol is the fidelity of teleportation $\mathcal{F} \, = \,
Tr[\rho_{in}\rho_{out}]$. In the formalism of the characteristic function,
the fidelity reads
\begin{eqnarray}
\mathcal{F} =&& \frac{1}{\pi} \int d^{2}\alpha \;
\chi_{in}(\alpha) \chi_{out}(-\alpha) \,, \nonumber \\
&& \nonumber \\
&&\frac{1}{2\pi} \int dx_2 dp_2 \;
\chi_{in}(x_2, p_2) \chi_{out}(-x_2, -p_2) \,,
\label{Fidelitychi}
\end{eqnarray}
where $\alpha= \frac{1}{\sqrt{2}}(x_2 +i p_2 )$, $d^{2}\alpha = \frac{1}{2} dx_2 dp_2$,
and $\chi_{out}(\alpha)\equiv\chi_{out}(x_{2},p_{2})$ is given by Eq.~(\ref{chioutfinale}).
In the case of input coherent states $\rho_{in}=|\beta\rangle_{in}\,_{in}\langle\beta|$
with complex amplitude $\beta$, that we will always consider in the following,
the characteristic function of the input $\chi_{in}(\alpha)$ reads:
\begin{equation}
\chi_{in}(\alpha) \,=\,
e^{-\frac{1}{2}|\alpha|^{2}+(\alpha\beta^{*}-\alpha^{*}\beta)}
\; .
\label{chiCohin}
\end{equation}
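As a consistency check of the formalism, inserting Eq.~(\ref{chiCohin}) and
the twin-beam characteristic function recalled above into
Eqs.~(\ref{MarianFormula}) and (\ref{Fidelitychi}), a straightforward Gaussian
integration yields the well-known ideal-protocol result
\begin{equation}
\mathcal{F}_{TwB}^{(ideal)} \,=\, \frac{1}{1+e^{-2r}} \,,
\end{equation}
independent of $\beta$, which interpolates between the classical value $1/2$
at $r=0$ and unit fidelity in the limit $r\to\infty$.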
Eq.~(\ref{Fidelitychi}) is the fundamental quantity that measures
the efficiency of a CV teleportation protocol (ideal or nonideal).
At fixed squeezing, the optimization procedure consists
in the maximization of the teleportation fidelity (\ref{Fidelitychi})
over the free parameters of the non-Gaussian entangled resources,
Eqs.~(\ref{squeezBell}), (\ref{squeezCat}), and (\ref{squeezBell2}).
Since it can be verified explicitly that the optimal choices for the
phases $\phi$, $\theta$, and $\varphi$ are
$\phi=\pi$ and $\theta=\varphi=0$, the squeezed Bell-like state $|\psi_{SB}\rangle$
and the Buridan donkey state $|\psi_{SB2}\rangle$ have a unique available
free parameter $\delta$. On the other hand, the squeezed cat-like state
$|\psi_{SC}\rangle$ has two free parameters, the angle $\delta$ and the
modulus $|\gamma|$. The analytical expressions of the teleportation fidelities
of input coherent states corresponding to the three different classes of non-Gaussian resources,
respectively $\mathcal{F}_{SB}^{(g)}(r,\delta)$, $\mathcal{F}_{SC}^{(g)}(r,\delta,|\gamma|)$,
and $\mathcal{F}_{SB2}^{(g)}(r,\delta)$ are reported in appendix~\ref{AppendixFid},
Eqs.~(\ref{FidelitySqBell}), (\ref{FidelitySqCat}), and (\ref{FidelitySqBell2}).
Let us notice that we have introduced the superscript $(g)$ to explicitly indicate
the dependence on the gain $g$. It is simple to verify that, for arbitrary $g$,
the fidelities are explicitly dependent on the amplitude $\beta$ of the input
coherent states. In the next sections, the (numerical) optimization procedures
of the fidelity $\mathcal{F}$ will be implemented following two different routes.
In section \ref{SecUnityGain} we operate at a specific value of the gain $g = 1/T$,
for a fixed value of the transmissivity $T^2$. This is the only choice that makes
the fidelity independent of $\beta$. In section \ref{SecNonunityGain} we adopt a
more general approach by letting $g$ be a fully free parameter and performing
appropriate optimization procedures. \\
In both cases, the maximization is carried out
at fixed (finite) squeezing $r$, and at fixed $\tau$, $n_{th}$, and $R$.
From an operational point of view, fixing these parameters is equivalent
to assume control on the characteristics of the experimental apparatus,
including the inefficiency of the photo-detectors and the length and damping
rate of the noisy channel. Finally, concerning the experimental realization
of the two-mode non-Gaussian resources, Eqs.~(\ref{squeezBell}) and (\ref{squeezCat}),
a detailed theoretical proposal is put forward in Ref.~\cite{CVTelepNoi}. The experimental
realization of the single-mode version of the state $|\psi\rangle_{SC}$ has been
reported in Ref.~\cite{GrangierCats}.
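We finally remark that Eqs.~(\ref{chioutfinale}) and (\ref{Fidelitychi}) are
also straightforward to evaluate numerically. The following minimal Python
sketch (our illustration, assuming the numpy library; the grid parameters are
arbitrary choices, and only the Gaussian twin-beam resource is implemented)
computes the fidelity by direct quadrature and reproduces the ideal closed
form recalled above:
\begin{verbatim}
import numpy as np

def chi_in(x, p, beta):
    # coherent-state characteristic function, Eq. (chiCohin)
    bx, bp = np.sqrt(2) * beta.real, np.sqrt(2) * beta.imag
    return np.exp(-0.25 * (x**2 + p**2) + 1j * (p * bx - x * bp))

def chi_twb(x1, p1, x2, p2, r):
    # two-mode squeezed vacuum with phi = pi (see above)
    c2, s2 = np.cosh(2 * r), np.sinh(2 * r)
    return np.exp(-0.25 * c2 * (x1**2 + p1**2 + x2**2 + p2**2)
                  + 0.5 * s2 * (x1 * x2 - p1 * p2))

def fidelity(beta, r, g=1.0, T=1.0, tau=0.0, nth=0.0, L=6.0, N=201):
    Gamma = (1 - np.exp(-tau)) * (0.5 + nth) + g**2 * (1 - T**2)
    s = np.linspace(-L, L, N)
    X, P = np.meshgrid(s, s)
    e = np.exp(-tau / 2)

    def chi_out(x, p):  # Eq. (chioutfinale) for the twin-beam resource
        return (chi_in(g * T * x, g * T * p, beta)
                * chi_twb(g * T * x, -g * T * p, e * x, e * p, r)
                * np.exp(-0.5 * Gamma * (x**2 + p**2)))

    dA = (s[1] - s[0])**2
    return (dA / (2 * np.pi)) * np.sum(chi_in(X, P, beta)
                                       * chi_out(-X, -P)).real

print(fidelity(0j, r=0.8))                 # ideal: 1/(1+e^{-1.6}) ~ 0.832
print(fidelity(1 + 1j, r=0.8, tau=0.3,     # nonideal case with gT = 1
               g=1 / np.sqrt(0.95), T=np.sqrt(0.95)))
\end{verbatim}
For the ideal parameters the first output agrees with
$\mathcal{F}_{TwB}^{(ideal)}$ to the accuracy of the grid.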
\section{$\beta$-independent optimal fidelity}
\label{SecUnityGain}
In this section, we analyze the success probability of quantum teleportation
for the gain $g$ fixed at $g \, = \, 1/T$. It is straightforward to verify that
with this choice, the fidelity becomes $\beta$-independent (see appendix \ref{AppendixFid}).
This choice allows to assume no knowledge about the alphabet of input coherent states,
while in the next section we will assume a partial knowledge on the input states over an
interval of values of $\beta$. For $g\,=\, 1/T$, the expressions for the fidelities
$\mathcal{F}_{SB}(r,\delta)$, $\mathcal{F}_{SC}(r,\delta,|\gamma|)$,
and $\mathcal{F}_{SB2}(r,\delta)$
(where the superscript $(g)$ has been removed) greatly simplify.
At fixed $r$, $\tau$, $n_{th}$, and $R$, the optimal fidelities of teleportation
are defined as:
\begin{eqnarray}
&& \mathcal{F}_{opt}^{(SB)} \,=\, \max_{\delta} \, \mathcal{F}_{SB}(r,\delta) \,,
\label{FidSBoptg1} \\
&& \mathcal{F}_{opt}^{(SB2)} \,=\, \max_{\delta} \, \mathcal{F}_{SB2}(r,\delta) \,,
\label{FidSB2optg1} \\
&& \mathcal{F}_{opt}^{(SC)} \,=\, \max_{\delta,|\gamma|} \, \mathcal{F}_{SC}(r,\delta,|\gamma|) \,.
\label{FidSCoptg1}
\end{eqnarray}
In Fig.~\ref{FigTelFigGain1} we plot the optimal fidelities,
corresponding to the three classes of non-Gaussian resources,
as functions of the squeezing $r$ at different values of the
parameters $\tau$, $n_{th}$, and $R=\sqrt{1-T^2}$.
For comparison, the fidelity associated with the Gaussian
squeezed vacuum (twin beam) is reported as well.
In order to understand the separate effects of the two
different sources of decoherence on the degradation
of the fidelity, we consider two cases:
$(i)$ decoherence due to imperfect Bell measurements alone, i.e. $R>0$ and $\tau=0$
(see Fig.~\ref{FigTelFigGain1} panel I);
$(ii)$ decoherence due to propagation in noisy channels alone, i.e. $R=0$ and $\tau >0$
(see Fig.~\ref{FigTelFigGain1} panel II).
In the first case, the fidelities grow monotonically, with increasing $r$, tending towards an
asymptotic saturation value. This behavior is equivalent to that observed in the instance of
an ideal protocol with noisy resources \cite{CVTelepNoisyNoi}. Indeed, the case of a nonideal
teleportation protocol with noisy (mixed) resources is equivalent to the case of a nonideal
protocol with pure resources but with a larger amount of noise.
In the second case, as $r$ increases, the fidelity first increases up to
a $\tau$-dependent maximum $r_{max}(\tau)$, and then decreases for larger values of $r$.
This behavior can be explained observing that there are two competing
effects associated to increasing the degree of squeezing. The first effect
is constructive and is due to the enhanced affinity of the entangled resource
with an EPR state for increasing $r$. This constructive effect is contrasted
by a disruptive one due to the optical photons generated by the squeezing that
add to the thermal photons of the channel (initially set to zero). For
not too large values of $r$, the first effect dominates, until a maximum
is reached at $r = r_{max}(\tau)$. For $r > r_{max}(\tau)$ the disruptive
effect becomes dominant, and the increasingly large number of optical photons
amplifies the decoherence, leading to a strong suppression of the fidelity.
The interplay between squeezing $r$ and channel decay rate $\tau$ can be
understood quantitatively by investigating the structure of the output
characteristic function $\chi_{out}$ Eq.~(\ref{chioutfinale}) that enters
in the expression of the fidelity (\ref{Fidelitychi}).
For $g T = 1$, $\chi_{out}$ takes the form
\begin{eqnarray}
\chi_{out}(x_{2},p_{2})= &&e^{-\frac{1}{2}\Gamma_{\tau,R}(x_{2}^{2}+p_{2}^{2})} \; \chi_{in}(x_{2},p_{2}) \times \nonumber \\ && \nonumber \\
&&\chi_{res}\left(x_{2},-p_{2};e^{-\frac{\tau}{2}}x_{2},e^{-\frac{\tau}{2}}p_{2}\right) \nonumber \,,
\end{eqnarray}
where $\Gamma_{\tau,R}$ is given by Eq.~(\ref{Gammadef}).
We see that, if $\tau \neq 0$, the exponential weights $e^{-\tau/2}$
introduce an asymmetry between the two modes of the resource in the expression of the
characteristic function $\chi_{res}$. This asymmetry is responsible for the decrease
of the fidelity for $r >r_{max}(\tau)$ at $\tau\neq 0$. The important ensuing conclusion
is that it is in fact detrimental to increase the squeezing too much when the
losses cannot be strongly reduced. Therefore, the primary experimental goal should
always be to reduce the losses rather than to increase the squeezing.
\begin{figure}[h]
\centering
\includegraphics*[width=9cm]{TelepFidelgT.eps}
\caption{(Color online) Optimal fidelities of teleportation $\mathcal{F}_{opt}$, as functions of the squeezing parameter $r$, for different values of the parameters $\tau$, $n_{th}$, and $R$. The fidelities correspond to the teleportation of single-mode input coherent states $|\beta\rangle$ using two-mode squeezed Bell-like states (full line) or two-mode squeezed cat-like states (dashed line) as entangled resources.
The fidelities associated with the two-mode squeezed vacuum (dotted line) and to the Buridan donkey states (long-dotted line) are reported for comparison. In panel I, $\tau=0$, $n_{th}=0$, and the reflectivity
is fixed at the values $R^{2}=0,\,0.05,\,0.1,\,0.15$. For each entangled resource (associated with a specific plot style), the corresponding curves are ordered from top to bottom with increasing $R^{2}$.
In panel II, $n_{th}=0$, $R=0$, and the reduced time is fixed at the values $\tau=0,\,0.1,\,0.2,\,0.3$.
For each entangled resource (associated to a specific plot style) the corresponding curves are ordered from top to bottom with increasing $\tau$.}
\label{FigTelFigGain1}
\end{figure}
Moreover, Fig.~\ref{FigTelFigGain1} (Panel II) shows that indeed at current experimentally
attainable values of the squeezing, i.e. $r \lesssim 1.5$, the nonideal
teleportation protocol operates already in the regime of best efficiency,
and both the squeezed Bell-like resources (\ref{squeezBell})
and the squeezed cat-like resources (\ref{squeezCat}) perform
much better than the corresponding (i.e. at the same squeezing) Gaussian resources.
This result generalizes and confirms the analogous behavior observed in the instance of ideal
protocols \cite{CVTelepNoi,CVTelepNoisyNoi}. On the contrary, as already anticipated in the
previous section, the Buridan donkey resources allow for teleportation fidelities
even worse than those associated with the Gaussian twin beam. Indeed, from Eq.~(\ref{FidelitySqBell2})
it follows that the optimal $\beta$-independent fidelity $\mathcal{F}_{opt}^{(SB2)}$ is obtained (with $g=1/T$) by letting $\delta=0$. In this case, the Buridan donkey state trivially reduces to a two-mode squeezed Fock state.
It is worth noting that, at $g T = 1$ and any fixed value of $\tau$, all the non-Gaussian resources
share with the Gaussian one the same maximum value of the fidelity, obtained at the same
value $r_{max}(\tau)$ of the squeezing parameter. This implies that, at given $\tau$, one
can determine the value $r_{max}(\tau)$ for all the various resources by just considering the
simple Gaussian instance. A straightforward computation then yields
\begin{equation}
\exp\{2\;r_{max}\} = \sqrt{\frac{\cosh (\tau/2) + 1}{\cosh (\tau/2) - 1}} \,.
\end{equation}
We see that, for increasing $\tau$, the range $[0,r_{max}(\tau)]$ of best efficiency reduces.
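For instance, for the strongest damping considered in panel II of
Fig.~\ref{FigTelFigGain1}, $\tau=0.3$, one finds
\begin{equation}
e^{2 r_{max}} \,=\, \sqrt{\frac{\cosh 0.15 +1}{\cosh 0.15 -1}}
\,\approx\, \sqrt{178.4} \,\approx\, 13.4 \,,
\qquad r_{max} \,\approx\, 1.30 \,,
\end{equation}
so that, at this damping level, increasing the squeezing beyond
$r \approx 1.3$ becomes counterproductive.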
\begin{figure}[h]
\centering
\includegraphics*[width=8.5cm]{TelepFidelgTtwo.eps}
\caption{(Color online) Optimal fidelities of teleportation $\mathcal{F}_{opt}$,
as functions of the squeezing parameter $r$,
with $\tau=0.3$, $n_{th}=0$, and $R^{2}=0.05$.
The different lines represent the fidelity of teleportation
of input coherent states $|\beta\rangle$, corresponding, respectively,
to the following entangled resources:
Squeezed Bell-like state (full line),
squeezed cat-like state (dashed line),
squeezed vacuum (dotted line),
Buridan donkey state (long-dotted line),
and photon-subtracted squeezed state (dot-dashed line).}
\label{FigTelFigGain1due}
\end{figure}
Finally, we consider the combined effect of the two decoherence mechanisms.
In Fig.~\ref{FigTelFigGain1due} we plot the optimal fidelities associated to
the different classes of non-Gaussian resources with the experimental parameters
fixed at the values $\tau=0.3$, $n_{th} = 0$, and $R^{2}=0.05$. In
this case, the simultaneous presence of the two effects leads to
a strong suppression of the fidelity both with respect to the ideal case and
to each of the two nonideal cases taken separately. A regime of best efficiency
is still present, but significantly reduced.
In Fig.~\ref{FigTelFigGain1due}, we also report the fidelity of teleportation
associated with the two-mode photon-subtracted squeezed states:
\begin{eqnarray}
&&|\psi\rangle_{PSS}= \mathcal{N} a_{1}a_{2}S_{12}(\zeta)|0,0\rangle \nonumber \\
&& \nonumber \\
&&=\mathcal{N}e^{i\phi} S_{12}(\zeta) \left\{-|0,0\rangle + e^{i\phi}\tanh r |1,1\rangle\right\} \,,
\label{PhotSubtractsqueez}
\end{eqnarray}
where $\mathcal{N}=(1+\tanh^{2}r)^{-1/2}$ is the normalization \cite{CVTelepNoi}.
This non-Gaussian resource belongs, as a particular subcase,
to the class of the squeezed Bell-like resources. Indeed,
Eq.~(\ref{squeezBell}) reduces to Eq.~(\ref{PhotSubtractsqueez}) for $\delta=\arctan (\tanh r)$
(with $\phi=\pi$ and $\theta=0$).
The interest in the resources (\ref{PhotSubtractsqueez}) is due to the fact that
such states have been already produced in the laboratory \cite{ExpdeGauss2,Grangier}.
Furthermore, the corresponding de-Gaussification scheme can be easily integrated
in the B-K teleportation protocol \cite{Opatrny}.
As in the ideal instance \cite{CVTelepNoi}, also in the realistic case,
see Fig.~\ref{FigTelFigGain1due},
the performance of the non-Gaussian resource (\ref{PhotSubtractsqueez})
is intermediate between that of the Gaussian twin beam and of the
squeezed Bell-like states for $0\leq r \lesssim 1$,
while, for $r \gtrsim 1$, it degrades much faster.
The affinity to the two-mode squeezed vacuum of
state (\ref{PhotSubtractsqueez}) decreases for growing $r$;
correspondingly, the resource becomes more and more non-Gaussian.
Moreover, for some intervals of values of the squeezing parameter
the photon-subtracted squeezed states behave better than the
squeezed cat-like states. It is worth noticing that there exists
a "crossing" value of $r$ at which the optimal squeezed Bell-like
state reduces to the photon-subtracted squeezed state and thus the
corresponding fidelities of teleportation coincide,
see Fig.~\ref{FigTelFigGain1due}.
In this section we have considered always the case $g T = 1$.
The scenario changes dramatically if $g T \neq 1$. Indeed, in this case
the analytical expressions of the fidelities (\ref{FidelitySqBell}),
(\ref{FidelitySqCat}), (\ref{FidelityTwB}), (\ref{FidelitySqBell2}),
depend on the coherent amplitude $\beta$. This dependence
affects quite significantly the behavior of the fidelities as we will
see in the next section.
\section{Average optimal fidelity and one-shot fidelity}
\label{SecNonunityGain}
In order to investigate possible improvements in the efficiency of the teleportation protocol,
in this section we aim at optimizing the success probability of teleportation
assuming $g$ as a further free optimization parameter.
The fidelity of teleportation as a function of the gain is studied
for several input states in Ref.~\cite{TelepIde}, while displacement strategies
are considered in Ref.~\cite{TelepTailored}, in order to improve the
output quality for a reduced alphabet of possible input states.
These two important works are concerned with the study of the ideal protocol
implemented using Gaussian resources. The effect of absorption due to propagation
in fibers is studied in Ref.~\cite{TelepChizhov}, where, for the case of Gaussian
resources, it is shown that the gain-optimized fidelity of teleportation
is strongly suppressed. \\
Let us now describe the optimization procedure applied to the instance of non-Gaussian resources.
Following the approach of Refs.~\cite{TelepChizhov,TelepTailored},
we define the average fidelity $\mathcal{\overline{F}}$ by averaging
the $\beta$-dependent fidelity $\mathcal{F}(\beta)$ over the
set of input coherent states $|\beta\rangle$ as follows:
\begin{eqnarray}
&&\mathcal{\overline{F}} \,=\, \int d^{2}\beta \; \mathcal{F}(\beta) \,
p(\beta) \,,
\label{averfid} \\
&& \nonumber \\
&&p(\beta)=(\pi \sigma)^{-1}\exp\{-\sigma^{-1} \, |\beta|^{2}\} \,,
\label{weight}
\end{eqnarray}
where the function $p(\beta)$ (\ref{weight}) is a Gaussian distribution centered at $\beta=0$.
The variance parameter $\sigma$ determines the cutoff of the amplitude $\beta$,
and thus the reduced alphabet that one considers. We will compare our results
with the quantum benchmark for the storage and transmission of coherent states
distributed according to Eq.~(\ref{weight}) \cite{Benchmark}. This benchmark
is equivalent to the upper bound $\mathcal{F}_{class}$ achievable with any
classical strategy, and satisfying the inequality \cite{Benchmark}:
\begin{equation}
\mathcal{F}_{class} \,\leq\, \frac{\sigma+1}{2\sigma+1} \,.
\label{BenchmarkCoh}
\end{equation}
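For the two values of the variance employed below, Eq.~(\ref{BenchmarkCoh})
gives
\begin{equation}
\mathcal{F}_{class}(\sigma=10) \,\leq\, \frac{11}{21} \,\approx\, 0.523 \,, \qquad
\mathcal{F}_{class}(\sigma=100) \,\leq\, \frac{101}{201} \,\approx\, 0.502 \,.
\end{equation}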
The first step of our optimization procedure, employing Eqs.~(\ref{FidelitySqBell}) and (\ref{FidelitySqCat}), is to determine the average fidelities $\mathcal{\overline{F}}_{SB}^{(g)}(r,\delta)$
and $\mathcal{\overline{F}}_{SC}^{(g)}(r,\delta,|\gamma|)$, whose expressions
we do not report because their long and cumbersome structure is not particularly illuminating.
Further, we do not apply the optimization procedure to the teleportation fidelity associated with
the Buridan donkey resource, as we have already shown that no enhancement can be obtained compared
to the Gaussian twin beam resource.
At fixed squeezing $r$ and experimental parameters $\tau$, $n_{th}$, and $R$,
we define the optimal values $g_{opt}$, $\delta_{opt}$, and $|\gamma_{opt}|$ of the free parameters
as those that maximize the fidelities averaged over the values of $\beta$ weighted according
to the normal distribution $p(\beta)$ (\ref{weight}):
\begin{equation}
\mathcal{\overline{F}}_{SB}^{(g_{opt})}(r,\delta_{opt}) \doteq \max_{\{g,\delta\}} \mathcal{\overline{F}}_{SB}^{(g)}(r,\delta) ,
\label{optimparamSB}
\end{equation}
\begin{equation}
\mathcal{\overline{F}}_{SC}^{(g_{opt})}(r,\delta_{opt},|\gamma_{opt}|) \doteq \max_{\{g,\delta,|\gamma|\}} \mathcal{\overline{F}}_{SC}^{(g)}(r,\delta,|\gamma|) .
\label{optimparamSC}
\end{equation}
The optimal values of the parameters are determined numerically.
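To make the procedure fully explicit, the following minimal Python sketch (an
illustration added here, assuming the numpy library; the grid ranges are
arbitrary, and \texttt{fidelity\_sb} stands for a hypothetical routine
implementing Eq.~(\ref{FidelitySqBell})) shows one possible organization of
the search defined by Eqs.~(\ref{averfid}) and (\ref{optimparamSB}):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def average_fidelity(F, sigma, samples=2000):
    # Monte-Carlo estimate of Eq. (averfid): for p(beta) as in Eq. (weight),
    # Re(beta) and Im(beta) are i.i.d. normal with variance sigma / 2
    u = rng.normal(scale=np.sqrt(sigma / 2), size=samples)
    v = rng.normal(scale=np.sqrt(sigma / 2), size=samples)
    return np.mean([F(bu + 1j * bv) for bu, bv in zip(u, v)])

def optimize_sb(fidelity_sb, r, sigma,
                gains=np.linspace(0.8, 1.2, 41),
                deltas=np.linspace(0.0, np.pi / 2, 46)):
    # grid search for Eq. (optimparamSB); fidelity_sb(beta, r, delta, g)
    # is a hypothetical callable implementing Eq. (FidelitySqBell)
    best = (None, None, -1.0)
    for g in gains:
        for d in deltas:
            f = average_fidelity(lambda b: fidelity_sb(b, r, d, g), sigma)
            if f > best[2]:
                best = (g, d, f)
    return best  # (g_opt, delta_opt, optimized average fidelity)
\end{verbatim}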
Next, we introduce the one-shot fidelities $\mathcal{F}_{1s}$
as the non-averaged fidelities evaluated at the optimal values
of the parameters and a fixed value of $\beta$:
\begin{equation}
\mathcal{F}_{1s}^{(SB)}(\beta,r) \, \doteq \, \mathcal{F}_{SB}^{(g_{opt})}(\beta,r,\delta_{opt}) \,,
\label{avoptFidSB}
\end{equation}
\begin{equation}
\mathcal{F}_{1s}^{(SC)}(\beta,r) \, \doteq \, \mathcal{F}_{SC}^{(g_{opt})}(\beta,r,\delta_{opt},|\gamma_{opt}|) \,.
\label{avoptFidSC}
\end{equation}
For each possible value of $\beta$ one can estimate the success probability
of teleportation associated with a specific event.
The functions $\mathcal{F}_{1s}(\beta,r)$ yield the teleportation fidelities
at given squeezing $r$ and for an input coherent state with specific amplitude $\beta$.
Partial information about the alphabet of input states, quantified by the choice of the
variance $\sigma$ in the distribution (\ref{weight}), can be exploited to obtain a refinement
of the optimization procedure.
Indeed, we expect that smaller values of $\sigma$, corresponding to a better knowledge
of the alphabet of input states, will lead to higher values of the one-shot fidelities.
\subsection{Fidelities: Variable $r$, fixed $\tau$}
In Fig.~\ref{FigTelnonunityGain}, at the same fixed parameters of Fig.~\ref{FigTelFigGain1due},
$\tau=0.3$, $n_{th}\approx 0$, $R^{2}=0.05$,
we plot the one-shot fidelity $\mathcal{F}_{1s}$ as a function of $r$,
both for the non-Gaussian resources and for the Gaussian twin beam.
In panel I we plot the one-shot fidelities for $\sigma=10$ and
$\beta=1,2,3$; in panel II we plot the one-shot fidelities for $\sigma=100$ and
$\beta=3,5,10$ (representative amplitudes should satisfy $|\beta|^{2}\lesssim\sigma$).
Let us notice that according to Eq.~(\ref{BenchmarkCoh}),
the quantum benchmarks for the two choices of $\sigma$ are:
$\mathcal{F}_{class}(\sigma=10) \approx 0.523$ and
$\mathcal{F}_{class}(\sigma=100) \approx 0.502$.
\begin{figure}[h]
\centering
\includegraphics*[width=9cm]{OneshotFidr.eps}
\caption{(Color online) One-shot fidelity of teleportation $\mathcal{F}_{1s}$,
for input coherent states $|\beta\rangle$,
as a function of the squeezing parameter $r$,
with $\tau=0.3$, $n_{th}=0$, $R^{2}=0.05$,
for the following resources: Squeezed Bell-like state (full line);
squeezed cat-like state (dashed line); and a squeezed vacuum (dotted line).
In panel I $\beta=1,\,2,\,3$ and $\sigma=10$. In panel II $\beta=3,\,5,\,10$
and $\sigma=100$. For each entangled resource (associated with a specific plot style),
the corresponding curves are ordered from top to bottom with increasing $\beta$.}
\label{FigTelnonunityGain}
\end{figure}
Panel I of Fig.~\ref{FigTelnonunityGain}, corresponding to $\sigma=10$
and small values of $|\beta|$, shows that, as soon as $r$ is different from zero,
all the resources yield fidelities above the quantum benchmark.
Moreover, one observes a significant enhancement of the fidelities with respect
to the $\beta$-independent ones of Fig.~\ref{FigTelFigGain1due} (corresponding
to $g = 1/T$). Indeed, while the $\beta$-independent fidelities in Fig.~\ref{FigTelFigGain1due}
are well below the value $0.8$, all the one-shot fidelities of panel I of Fig.~\ref{FigTelnonunityGain}
lie above this value for squeezing $r$ ranging from about $0.8$ to about $1.2$, depending on the resource
being considered. Also in this case the non-Gaussian resources perform better than the Gaussian ones.
Panel II of Fig.~\ref{FigTelnonunityGain} shows that for $\sigma=100$ and larger values
of $|\beta|$, the enhancement of the fidelity is quite modest compared to
the $\beta$-independent fidelity, reported in Fig.~\ref{FigTelFigGain1due}.
This result is not surprising because a variance $\sigma=100$ obviously
allows less knowledge on the alphabet of input states with respect to the case $\sigma=10$.
It is important to remark that the curves corresponding to the same entangled resource
but different values of $\beta$ become effectively distinguishable only for $r \gtrsim 1$.
\subsection{Fidelities: Variable $\tau$, fixed $r$}
We now study the fidelities as functions of the reduced time,
or effective length of the fiber $\tau$, at fixed squeezing $r$.
To this aim, fixing the squeezing parameter at the intermediate value $r=0.8$,
with $n_{th} = 0$, $R^{2}=0.05$, we investigate the behavior of the one-shot fidelities.
In Fig.~\ref{FigTelnonunityGain2}, choosing the same values of $\beta$ and $\sigma$
as in Fig.~\ref{FigTelnonunityGain}, we plot $\mathcal{F}_{1s}$ as a function of $\tau$.
\begin{figure}[h]
\centering
\includegraphics*[width=9cm]{OneshotFidt.eps}
\caption{(Color online) One-shot fidelity of teleportation $\mathcal{F}_{1s}$,
for input coherent states $|\beta\rangle$,
as a function of the reduced time $\tau$,
with $r=0.8$, $n_{th}=0$, $R^{2}=0.05$, for
the following entangled resources:
Squeezed Bell-like state (full line);
squeezed cat-like state (dashed line);
and squeezed vacuum (dotted line).
In panel I $\beta=1,\,2,\,3$ and $\sigma=10$.
In panel II $\beta=3,\,5,\,10$ and $\sigma=100$.
For each entangled resource (associated to a specific plot style),
the corresponding curves are ordered from top to bottom
with increasing $\beta$.}
\label{FigTelnonunityGain2}
\end{figure}
Fig.~\ref{FigTelnonunityGain2} shows that the teleportation fidelity
remains above the classical threshold up to significantly large values of $\tau$.
At $\sigma=10$ and for small values of the coherent amplitude $\beta$ (see panel I),
the one-shot fidelities associated with the same resource
but with different values of $\beta$ are distinguishable. Vice versa,
at $\sigma=100$ and for larger values of $\beta$ (see panel II),
the fidelities associated with the same resource but different values
of $\beta$ are virtually indistinguishable, and thus effectively
$\beta$-independent. Comparing the performance of the different
resources in the time domain, one can distinguish three regimes:
For short and long times the one-shot fidelities associated with non-Gaussian
resources exhibit a significant enhancement compared to the Gaussian instance,
while for intermediate times there is a substantial equivalence in the
performance of Gaussian and non-Gaussian resources.
\section{Conclusions}
\label{secConclusions}
In this paper we have investigated the performance of non-Gaussian entangled resources
in nonideal protocols of quantum teleportation of input coherent states.
We have resorted to the characteristic function formalism
for the description of protocols affected by decoherence.
We have discussed how decoherence stemming from
photon losses in the fiber and imperfect Bell measurements
affects the success probability of teleportation. In particular,
we have established that, while the fidelities associated with
different resources remain above the classical benchmark for
quite long times, the non-Gaussian resources always perform better
than the Gaussian ones, in the ideal as well as in the nonideal
quantum teleportation protocol.
The present analysis should be extended to include teleportation
of two-mode states using multimode non-Gaussian resources \cite{twomodeinputTelep}.
It would also be interesting to consider the optimization of teleportation protocols
with non-Gaussian entangled resources with respect to local properties (such as
single-mode squeezing), by extending the existing schemes for the local optimization
of Gaussian resources \cite{GaussianOptimization}.
\acknowledgments
This work has been realized in the framework of the FP7 STREP Project
HIP (Hybrid Information Processing). We acknowledge financial support also from
MIUR under FARB Funds 2007 and 2008, from INFN under Iniziativa Specifica
PG62, and CNR-INFM Coherentia. F. I. acknowledges financial support from
the ISI Foundation for Scientific Interchange.
\section{Introduction}
\label{1} The statistical mechanics of equilibrium phenomena is a very useful theoretical
framework for understanding the thermodynamic properties of many--particle systems from a microscopical
point of view. However, in nature, most systems evolve under
out-of-equilibrium conditions, and there is not yet
a suitable general framework to study them comparable to that available for equilibrium systems.
Nevertheless, some progress has been achieved in understanding far-from-equilibrium behavior by means
of simple models, capable of capturing the essential physics of non-equilibrium processes.\\
In this context, we introduce a very simple model, derived from the Ising model,
driven out of equilibrium by an external field that mimics the effects of a uniform shear profile \cite{B94}.
This model evolves with a non-conserved dynamics, corresponding to model A in the classification of Hohenberg
and Halperin \cite{HH}, and it was already used by Cavagna \textsl{et al.} \cite{CBT00} and by
Cirillo \textsl{et al.} \cite{ciri} to study phase separation. \\
We will focus on the study of phase transition properties in this
model. Typical configurations observed in our simulations are
displayed in figure \ref{snap}. At low temperatures the system
appears ordered with elongated domains directed along the field
direction. At high temperatures the system exhibits a gas-like
appearance with disordered patterns. Similar ordered and disordered
phases, also experimentally found \cite{larson}, generally
characterize the behavior of sheared binary systems. As is usual in
systems with an applied external field, the transition point is a
function of the magnitude of the driving field \cite{onuki}.
\begin{figure}[H]
\centering
\includegraphics[height=7cm,width=11cm,clip=,angle=0]{fig1.eps}
\caption{Snapshot configurations corresponding to the two phases of
the Ising model with shear. On the left panel we observe a
typical stripe-like configuration at $T<T_c$. On the right panel a configuration at $T>T_c$ is displayed.
The external field is applied along the horizontal axis.}\label{snap}
\end{figure}
Previous theoretical and experimental studies have shown that
sheared binary systems undergo a second order phase transition at a
critical temperature $T_c$ \cite{onuki}. In diffusive systems the
effect of the external driving field is to inhibit fluctuations so
that the critical temperature is expected to increase with the
magnitude of the driving. In a continuum model with non--conserved
dynamics, in the large-$N$ analytical approximation, it has been
found that the value of the critical temperature depends on the
driving field following a power law at small field magnitudes
\cite{gonepeli}. In previous Monte Carlo studies on the critical
behavior of sheared Ising models \cite{chan}, it was not possible to
extract information about the critical temperature, due to numerical
uncertainties and finite size effects \cite{prl2}. In view of this,
we revisit this issue, in order to determine the critical
temperature of the model as a function of the magnitude of the
external field, and to compute for the first time the critical
exponents in this model. For the sake of comparison, we will
contrast the obtained values with those computed for the 2d driven
lattice gas model (DLG) \cite{kls} that will be briefly described in
the next section.
To study the phase transition in the model, the critical dynamical
behavior will be investigated by monitoring the time evolution of some observables before
the system reaches nonequilibrium steady states (NESS). This technique, generally called
\textsl{short time dynamics} \cite{JSS, zhe2} is an alternative convenient way to
obtain both the critical temperature and exponents precisely, with less computational
cost than other methods commonly used, such as the finite size scaling applied to the
specific heat and response functions, that require a considerably amount of simulation time
in order to reach NESS. Furthermore, since the measurements are carried out
in the first steps of evolution, the short time dynamic approach is free of the critical slowing down.\\
The manuscript is organized as follows: In Section \ref{2}, the Ising model with the external shear field is introduced. The technique used to study the phase transition is described in Section \ref{3}. The simulation results are presented in Section \ref{4}, and finally the conclusions are stated in Section \ref{5}.
\section{The model}
\label{2}
We will consider the nearest--neighbor two--dimensional
Ising model with a single--spin--flip thermalization dynamics,
e.g.\ the Metropolis dynamics \cite{metro}. The driving field will be defined
in order to mimic the convective velocity shear profile
\begin{equation}
\label{sh} v_x(y)=\dot{\gamma} y \qquad v_y=0
\end{equation}
where the parameter $\dot{\gamma}$ is called the {\em shear
rate} and represents the shear field magnitude. If the system is imagined as a sequence of layers
labelled by $y$, then $\dot{\gamma} y$ is the displacement of the
layer $y$ in a unit of time. If $L_y$ is the vertical size and
$v_{\textrm{max}}$ is the speed of the fastest layer, then
$\dot{\gamma} L_y=v_{\textrm{max}}$.
The model is defined on a square lattice $\Lambda$ of horizontal and vertical sizes $L_x$, $L_y$ respectively, with periodic boundary conditions in the $L_x$ direction and free in the $L_y$ direction. More precisely, let $\Omega=\{-1,+1\}^\Lambda$ be the space of configurations and, for $\sigma\in\Omega$, let $\sigma_{x,y}$ be the value of the spin associated to the site
$(x,y)\in\Lambda$. Then the Hamiltonian of the model is
\begin{equation}
\label{ham} H_\Lambda(\sigma)=
-J\sum_{y=1}^{L_{y}}\sum_{x=1}^{L_{x}}\sigma_{x,y}\sigma_{x+1,y}-
J\sum_{x=1}^{L_{x}}\sum_{y=1}^{L_{y}-1}\sigma_{x,y}\sigma_{x,y+1}
\end{equation}
with $\sigma_{L_{x}+1,y}=\sigma_{1,y}$ for all $y=1,\dots,L_{y}$,
and $J$ is a positive real coupling constant, which means that the
interactions are ferromagnetic. We will combine the thermalization
dynamics with an algorithm introducing the shear in the system. The
shear is superimposed to the thermalization dynamics with typical
rates not depending on the thermalization phenomenon, but fixed a
priori. This has been done in different ways in \cite{CBT00, chan,
okabe}. In this paper we use a very flexible generalization of those
dynamics, aiming to introduce the shear effects in a way that
competes with the thermalization process.
Notice that our dynamics results from the combination of two steps:
i) a thermalization step which would bring the system to the
usual equilibrium; ii) a shear step which changes the
configurations of the system, preventing it from reaching equilibrium.
Therefore, all together, our algorithm does not satisfy
local detailed balance expressed in terms of standard
equilibrium probabilities of configurations. Similar models
have been also used in different context of non-equilibrium studies \cite{crooks}.\\
Let the {\em
time unit} be the time needed for a full thermal update of the
entire lattice, e.g.\ a full sweep of the Metropolis algorithm. The
shear algorithm is parametrized with a submultiple $\tau$ of
$L_{y}L_{x}$ (the period of the shear procedure), a positive integer
$\lambda\le L_{x}/2$ (the number of unit cells that a row is shifted when the shear is
performed), and a non--negative real $\nu \le1/L_{y}$. The dynamics
of the model that we study in this paper is defined in a precise way
via the following algorithm:
\begin{enumerate}
\item \label{i:tre.1}
set $t=0$, choose $\sigma_0\in\Omega$, and set $n=0$;
\item \label{i:tre.2}
increase by 1 the index $n$, and
choose at random with uniform probability $1/L_{x}L_{y}$ a site
of the lattice and
perform the elementary single--site step of the thermalization dynamics;
\item \label{i:tre.3}
if $n$ is a multiple of $\tau$, a layer is
randomly chosen with uniform probability $1/L_{y}$. Then, if $\bar y$ is the chosen layer,
all the layers with $y\ge\bar y$ are shifted by $\lambda$ lattice
spacings to the right with probability $\nu L_{y}$;
\item \label{i:tre.3.1}
if $n<L_{x}L_{y}$ goto~\ref{i:tre.2}, else denote by $\sigma_{t+1}$ the configuration of the system;
\item \label{i:tre.4}
set $t=t+1$, set $n=0$, and goto~\ref{i:tre.2}.
\end{enumerate}
\par\noindent
We note that if $\nu=1/L_{y}$ the shift at step \ref{i:tre.3} is always performed; this
case will later be referred to as {\it full shear}. The smoothness of the shear field, eq.
(\ref{sh}), is ensured by the random choice of the layer $\bar{y}$ in step 3.
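For concreteness, the following is a minimal Python sketch of one time unit of the dynamics defined by steps 1--5 above (the original simulations were not necessarily written this way; variable names and the use of NumPy are our own choices). The temperature \texttt{T} is in units of $J/k_B$, and \texttt{tau}, \texttt{lam}, \texttt{nu} correspond to $\tau$, $\lambda$, $\nu$, with full shear recovered for \texttt{nu} $=1/L_y$:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def sweep_with_shear(sigma, T, tau, lam, nu, J=1.0):
    # One time unit: Lx*Ly elementary Metropolis steps (step 2),
    # interleaved with a shear event every tau steps (step 3).
    Ly, Lx = sigma.shape
    for n in range(1, Lx * Ly + 1):
        # step 2: Metropolis update of a randomly chosen site;
        # periodic boundary in x, free boundary in y, as in the Hamiltonian
        x, y = rng.integers(Lx), rng.integers(Ly)
        nn = sigma[y, (x + 1) % Lx] + sigma[y, (x - 1) % Lx]
        if y + 1 < Ly: nn += sigma[y + 1, x]
        if y - 1 >= 0: nn += sigma[y - 1, x]
        dE = 2.0 * J * sigma[y, x] * nn
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            sigma[y, x] *= -1
        # step 3: every tau steps pick a layer ybar uniformly and,
        # with probability nu*Ly, shift all rows y >= ybar by lam
        # sites to the right (giving shear rate Lx*Ly*nu*lam/tau)
        if n % tau == 0:
            ybar = rng.integers(Ly)
            if rng.random() < nu * Ly:
                sigma[ybar:, :] = np.roll(sigma[ybar:, :], lam, axis=1)
    return sigma
\end{verbatim}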
Now we want to express the shear rate $\dot{\gamma}$, introduced in equation (\ref{sh}), in terms of the
parameters of our dynamics. We have to estimate the typical displacement per unit of time of
the row labelled by $y$. Such a row is involved in a shear event, step~\ref{i:tre.3} of the
algorithm above, if and only if the extracted row $\bar y$ is such that $\bar y\le y$, and
this happens with probability $y/L_{y}$. Since the shear event results in a shift with
probability $\nu L_{y}$, the probability that during a shear event the row $y$ does shift is
given by
\begin{displaymath}
\frac{y}{L_{y}}\times\nu L_{y}=\nu y.
\end{displaymath}
By noting that the number of shift events per unit of time is
equal to $L_{x}L_{y}/\tau$ and recalling that the shift amplitude
is $\lambda$, we have that the typical shift of the row $y$ per
unit of time is given by
\begin{displaymath}
\frac{L_{x}L_{y}}{\tau}\times\lambda\times\nu y.
\end{displaymath}
By using definition (\ref{sh}) we finally get $ \dot{\gamma}=L_{x}L_{y}\nu \lambda/\tau$,
which becomes $\dot{\gamma}=L_{x}\lambda/\tau$ in the case of full shear. \\
It is important to remark that there exists a large variety of
models that evolve under non equilibrium states by the action of an
external field. An example is the driven
lattice gas (DLG) where the driving field is not
superimposed to the thermalization dynamics, but it is rather inserted in the Metropolis
transition rates, that become anisotropic, and biases the movement of particles along
its direction \cite{kls}.
Furthermore, it exhibits a second order phase transition at particle density $\rho=1/2$, between an \textsl{ordered} phase at low temperatures characterized by regions of low and high particle density,
called stripes, oriented along the field direction, and a \textsl{disordered} phase
at high temperatures with the appearance of a lattice gas. Both the ordered and disordered phases are similar in appearance to those exhibited in figure \ref{snap}.
The critical temperature
increases with the magnitude of the external field, saturating at $T_c\sim1.41 T_c(0)$
in the limit of infinite field \cite{kls}. Here, $T_c(0)=2.269$ $J/k_B$ is the critical temperature of the
2d Ising model ($J$ is the coupling constant and $k_B$ is the Boltzmann constant).
\section{Dynamical Critical Behavior}
\label{3} It is known that, for systems exhibiting critical behavior, the relevant observables
measured in equilibrium stationary states can be written in terms of power laws, with
characteristic critical exponents due to the divergence of both the spatial correlations and the correlation time.
In recent years, however, the attention has been also focused to the \textsl{early stages of the
evolution} of the system towards the critical state, that is,
to a microscopic time regime where the spatial correlation length is small compared with
the system size \cite{JSS}. Within this regime, it is possible to
measure scaling laws of the observable quantities \cite{zhe2, zhe}.
This new method to study second-order phase transitions, called
\textsl{short time dynamics}, allows one to estimate the critical
temperature and to compute the critical exponents of the transition
relatively quickly, and avoids the shortcomings that more usual
techniques present in the study of critical behavior. Furthermore, the
short time dynamics has been applied to investigate the critical
behavior of a wide range of systems of different nature, such as
models showing criticality under equilibrium conditions,
e.g.\ the XY systems \cite{trimper}, the 2d 3-state Potts model
\cite{zhe21}, the Ising magnet under different lattice geometries
\cite{zhe3, bab}, and of nonequilibrium critical models such as the
driven diffusive lattice gas (DLG) \cite{alsa, repalsa, saal}, etc.
In these
last three works, a detailed analysis of the second-order \cite{alsa, repalsa}
and first-order \cite{saal} non-equilibrium phase transition was
performed by using the short time critical dynamic methodology. For
the second-order phase transition, the excellent agreement between
critical exponents evaluated using the standard (stationary) and
dynamical (short time) approaches strongly support the robustness of
this method. Encouraged by this success, our goal is to extend the
short time dynamics concept to this model, basing our ideas
on the already developed short time dynamics method for the DLG model in ref. \cite{alsa}.\\
The above mentioned scaling laws can be observed employing two different initial configurations,
namely: 1) Fully disordered configurations (FDC's), which means that the system is initially
placed in a thermal bath at $T \rightarrow \infty$ , and the system configuration is similar to that exhibited on the right panel of figure \ref{snap}; 2) Completely ordered configurations or
ground state configurations (GSC's) as expected for $T = 0$. In our model, based on the fact
that the equilibrium Ising model has all the spins pointing in the same direction (i.e.\
magnetization $M=1$ or $-1$) at this temperature, we will adopt this configuration as the
ground state for testing the short time dynamic behavior. \\
The shear field introduces anisotropic effects, which generate
anisotropic correlations in the system. As a consequence of this,
there will be two correlation lengths, namely: 1) A
\textit{parallel} or longitudinal correlation length
$\xi_{\parallel}$
along the external field direction, and 2) A \textit{perpendicular} or transverse
correlation length $\xi_{\perp}$ perpendicular to $\xi_{\parallel}$. Whatever
the initial condition is used to start the system, both spatial correlation lengths are
quite small or zero at the beginning of the dynamic process, and near the critical temperature $T_c$ they increase dynamically as a power law $\xi_{\parallel(\perp)}\propto t^{1/z_{\parallel(\perp)}}$, where $z_{\parallel(\perp)}$ is the dynamic critical exponent in the respective directions. \\
Before we start to describe the theoretical basis of the technique
applied to this model, we set the external driving field along the
horizontal direction, i.e. the $x$ axis. Also, we need to define
quantities that are relevant in the critical behavior of the model.
Based on the morphological appearance of typical configurations
present in the
system (see figure \ref{snap}), we will consider a variant of the order parameter $OP$ employed
in the critical study of the DLG model \cite{alsa}:
\begin{equation}
\label{op}
OP=\frac{1}{L_y}\sum_{y=1}^{L_{y}} |P(y)|,
\end{equation}
\noindent where
$P(y)=\frac{1}{L_x}\sum_{x=1}^{L_{x}} \sigma_{x,y}$ is the average
of the spin profile in the shear field axis.
\noindent The order parameter defined in this way can take into account the
small ordering that appears at the early
stages of the evolution.\\
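As an illustration, a minimal Python sketch of this observable is given below (our own illustration, not code from the original study); it also shows the $L_x^{-1/2}$ scale of the initial fluctuations discussed in the next subsection. Note that, unlike this sketch, the actual FDC initial conditions fix the total number of up spins exactly:
\begin{verbatim}
import numpy as np

def order_parameter(sigma):
    # OP: mean over rows of |P(y)|, where P(y) is the spin
    # profile averaged along the field (x) axis
    P_y = sigma.mean(axis=1)
    return np.abs(P_y).mean()

# for disordered configurations the residual row magnetizations
# are of order Lx^(-1/2) (central limit theorem), so
# OP * sqrt(Lx) is roughly constant:
rng = np.random.default_rng(1)
for Lx in (128, 512, 2048):
    sigma = rng.choice([-1, 1], size=(256, Lx))
    print(Lx, order_parameter(sigma) * np.sqrt(Lx))
\end{verbatim}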
There is one more point to take into account before we start to
expose the method applied to this system. In all formulas below we will assume, and demonstrate later,
that only the parallel correlation length is relevant in the short time critical evolution of the system.
In fact at $T\simeq T_c(\dot{\gamma})$ and at early times of evolution, parallel and perpendicular
correlations begin to increase. However, domains of perpendicularly
correlated spins are broken by the shear, and assume a characteristic elongated shape,
also observed in many experimental studies of sheared systems.
As a consequence of
this, transversal correlations grow slower than parallel correlations, so they do not take part in
the dynamic critical behavior of the model at short times. This effect
was also shown for the DLG model \cite{alsa, repalsa}. Since
this happens independently of the initial
configuration, we will take $z=z_{\parallel}$ in every expression below.
Furthermore, the scaling laws must contain the anisotropic finite size dependence in order to match the usual anisotropic scaling forms in the NESS regime.\\
Starting with FDC's, the scaling law proposed for the order parameter ($OP$) reads \cite{alsa},
\begin{equation}
\label{2op}
OP(t,\phi,L_x,L_y)=b^{-\beta/\nu_{\parallel}}
OP^{*}(b^{-z}t,b^{1/\nu_{\parallel}}\phi, b^{-1}L_{x},b^{-\nu_{\perp}/\nu_{\parallel}}L_y) ,
\end{equation}
\noindent where $t$ is the time, $b$ is the spatial rescaling factor, $\beta$ is the
critical exponent of the order parameter, $\nu_{\parallel(\perp)}$
are the correlation length critical exponents in the $x$($y$)
axis ($\xi_{\parallel(\perp)}\propto\phi^{-\nu_{\parallel(\perp)}}$), $OP^{*}$ is a scaling
function, $z$ is the already mentioned dynamic exponent of the longitudinal correlation length,
and $\phi=\frac{T-T_{c}}{T_{c}}$ . Notice also that $L_y$ is
rescaled by $b^{-\nu_{\perp}/\nu_{\parallel}}$ to include possible shape effects
in the dynamic critical behavior \cite{kls}.\\
To generate the FDC initial conditions, the lattice is filled
at random with exactly $\rho_0L_xL_y$ particles, where $\rho_0=1/2$ is the
density of up spins. However, the number of particles on each row parallel to the field axis is not the same for all rows. This generates tiny density fluctuations along this direction, which are of the order of $L_{x}^{-\frac{1}{2}}$, in agreement with the central limit theorem. According to equation (\ref{2op}), these fluctuations add up, and the amplitude of $OP$ depends on $L_x^{-1/2}$\footnote{These fluctuations are equivalent to the initial
magnetization $m_0$ in the original
formulation of short time dynamics \cite{zhe}.}.
We have to
take into account this expression for the final form of the time evolution of $OP$. Setting
$b\simeq t^{1/z}$, eq. (\ref{2op}) becomes:
\begin{equation}
OP(t, \phi, L_x,L_y)=t^{-\beta/\nu_{\parallel}z} OP^{*}(1,t^{1/\nu_{\parallel}z}\phi, t^{-1/z}L_{x},t^{-\nu_{\perp}/\nu_{\parallel}z}L_y ).
\label{preop}
\end{equation}
\noindent Then, if $t^{-1/z}L_{x}$ is extracted out of the scaling function in Eq. (\ref{preop}), we have:
\begin{equation}
OP(t, \phi, L_x,L_y)=t^{-\beta/\nu_{\parallel}z} (
t^{-1/z}L_{x})^{x}OP^{**}(1,t^{1/\nu_{\parallel}z}\phi,t^{-\nu_{\perp}/\nu_{\parallel}z}L_y),
\end{equation}
\noindent but since $OP \simeq 1/L_{x}^{\frac{1}{2}}$, then $x=-1/2$, so the final expression for $OP(t)$ is the following:
\begin{equation}
OP(t,\phi,L_x)= L_x^{-1/2}t^{c_{1}}
OP^{**}(t^{1/\nu_{\parallel}z}\phi)\qquad L_x,L_y\rightarrow \infty, \qquad
\label{fdc}
\end{equation}
with
$c_{1}=(1-2\beta/\nu_{\parallel})/2z$ \cite{alsa}. \\
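Explicitly, with $x=-1/2$ the prefactors combine as
\begin{displaymath}
t^{-\beta/\nu_{\parallel}z}\,\big(t^{-1/z}L_{x}\big)^{-1/2}
= L_{x}^{-1/2}\; t^{\,\frac{1}{2z}-\frac{\beta}{\nu_{\parallel}z}}
= L_{x}^{-1/2}\; t^{\,(1-2\beta/\nu_{\parallel})/2z},
\end{displaymath}
which identifies the exponent $c_{1}$ in Eq. (\ref{fdc}).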
Furthermore, it is easy to show that the logarithmic derivative of
$OP$ with respect to $\phi$, given by Eq. (\ref{fdc}) at
criticality, behaves as
\begin{equation}
\partial_{\phi} \ln OP(t,\phi) \propto t^{c_{2}} , \qquad \\
\label{derfdc}
\end{equation}
\noindent where the exponent is $c_{2} = 1/\nu_{\parallel}z$. \\
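In practice the derivative with respect to $\phi$ is not measured directly; a common approach (a sketch of one possible procedure, not necessarily the one used here) is a symmetric finite difference between time series simulated at $T_c\pm\Delta T$:
\begin{verbatim}
import numpy as np

def log_derivative(op_plus, op_minus, Tc, dT):
    # finite-difference estimate of d(ln OP)/d(phi) at phi = 0,
    # from OP(t) series measured at T = Tc + dT and T = Tc - dT;
    # phi = (T - Tc)/Tc, hence d(phi) = dT/Tc
    dphi = dT / Tc
    return (np.log(op_plus) - np.log(op_minus)) / (2.0 * dphi)
\end{verbatim}
The slope of this quantity versus $t$ in a double logarithmic plot then gives $c_{2}=1/\nu_{\parallel}z$.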
On the other hand, starting
the system from the GSC configuration described above, and according
to
the scaling behavior proposed in \cite{alsa}, we have
\begin{equation}
OP(t,\phi,L_x,L_y)=b^{-\beta/\nu_{\perp}}
OP^{***}(b^{-z}t,b^{1/\nu_{\perp}}\phi, b^{-1}L_{x},b^{-\nu_{\parallel}/\nu_{\perp}}L_{y}) ,
\label{gsc}
\end{equation}
where $OP^{***}$ is another scaling function. Here we have also included the shape scaling factor $b^{-\nu_{\parallel}/\nu_{\perp}}L_{y}$. Proceeding in the same way as in the above case, we have, taking $b\simeq t^{1/z}$ in (\ref{gsc}) at
criticality, the following expression for $OP$:
\begin{equation}
OP(t)\propto t^{-c_{3}}\qquad L_x,L_y\rightarrow \infty, \qquad
\label{gsc1}
\end{equation}
with an exponent $c_{3} = \beta/\nu_{\perp}z$.
Moreover, the derivative of Eq. (\ref{gsc1}) with respect to $\phi$ at criticality is given by
\begin{equation}
\partial_{\phi} OP(t) \propto t^{c_4} , \qquad
\label{derigsc1}
\end{equation}
\noindent where the exponent is $c_{4} = (1 -\beta)/\nu_{\perp}z$.\\
\section{Simulation Results}
\label{4}
\subsection{Critical Temperature}
In this work we used rectangular and square lattices of different sizes
$L_{x},L_{y}$, in the range $128\leq L_x, L_y\leq 10000$ lattice units. The critical dynamics of the model was investigated as a function of the shear magnitude, in the range $1/32\leq\dot{\gamma}\leq50$. The temperatures were measured in units of $J/k_{B}$, $k_B$ being the Boltzmann constant, and the time is measured in Monte Carlo steps (mcs), where one unit consists of $L_x L_y$ attempts for spin updates. The time evolution was sampled from 100 to 1000 realizations of the system, according to each initial condition and temperature.\\
We begin showing our results by considering the critical dynamic evolution of the system
when it is started from FDC configurations, and then coupled with a thermal bath at $T\simeq T_c$. In figure \ref{fdcgd5}, the time evolution of $OP(t)$ is shown for a system where a shear field is applied with shear rate $\dot{\gamma}=5$. The best power law behavior is obtained at $T=T_c=2.660$ before $OP(t)$ reaches a saturation value due to finite size effects. On the other hand, if $T<T_c$ or $T>T_c$, $OP$ deviates from the power law behavior as equation (\ref{fdc}) states, and the curve shows an upward or
downward bending, respectively (see the curves corresponding to $T=2.655$ and $T=2.665$ in the same figure).
\begin{figure}[H]
\centering
\includegraphics[height=15cm,width=11cm,clip=,angle=270]{op_fdc_allgds2c.ps}
\caption{Log-log plot of the short time evolution of $OP(t)$
starting from FDC configurations, in a system with $L_x=L_y=512$ and
$\dot{\gamma}=5$. The best fit of the raw data gives a power law
behavior at $T_c=2.660$, as it is indicated by the dashed straight
line. Upward and downwards deviations from this behavior can also be
observed for $T=2.655$ and $T=2.665$, respectively.}
\label{fdcgd5}
\end{figure}
The critical dynamic evolution was also investigated by performing simulations on rectangular lattices. The plots in figure \ref{fdcs} display the dynamic evolutions of $OP(t)$ at $T=T_c$ for two shear field magnitudes, $\dot{\gamma}=1/2$ and $\dot{\gamma}=10$, in panels a) and b), respectively. The lattice sizes used are indicated in the legend of each plot by the notation $L_x$ $\times$ $L_y$. In the main plots of each figure, all early-time evolutions exhibit the same power-law behavior with similar values of the exponent $c_1$ (equation (\ref{fdc})). Then, a saturation value $OP_{sat}$ is reached that depends only on $L_x$, as can be observed in lattices with longitudinal sizes $L_x=500$ and $L_x=1000$ for $\dot{\gamma}=1/2$ and $L_x=500$ for $\dot{\gamma}=10$, respectively. These plots thus show that the early-time critical evolution of the system is free from lattice shape effects \cite{kls}, because the ratio between the longitudinal and transversal sizes is different for each studied case. The same was observed in the short-time critical evolution of the DLG model \cite{alsa}. In addition, the power law behaviors can be collapsed by rescaling $OP(t)$ by $L_{x}^{1/2}$, as proposed in equation (\ref{fdc}). This is shown in the insets of both figures.
\begin{figure}[H]
\centering
\includegraphics[height=9cm,width=7cm,clip=,angle=-90]{opvst_gd05_fdc.ps}
\includegraphics[height=9cm,width=7cm,clip=,angle=-90]{opvst_gd10_fdc.ps}
\caption{Log-log plots of the time evolutions of $OP(t)$ at $T=T_c$, for the system in
rectangular and square lattices of sizes $L_x \times L_y$, as indicated in the legends.
In a) the external field has magnitude $\dot{\gamma}=1/2$ while in b) the shear
field magnitude is $\dot{\gamma}=10$. The straight lines are least-square fits of the data.
The insets in each figure show the collapse of the power law behaviors when $OP(t)$ is multiplied
by $L_x^{1/2}$.
}
\label{fdcs}
\end{figure}
Then, the critical points of the system at several values of the
shear field magnitudes $\dot{\gamma}$ were found. Figure
\ref{fdcgds} shows that the power-law behavior of $OP$ collapses for
large values of $\dot{\gamma}$, i.e. $\dot{\gamma}=5, 10$ and $50$,
and occurs at approximately the same critical temperature for each of these
shear rates. These temperatures are larger than that estimated for the
equilibrium Ising model, $T_c(\dot{\gamma}=0)=2.269$ (see table 1).
However, the time evolution of $OP$ depends on the shear field if
$\dot{\gamma}$ is small, as it can be seen for the cases
$\dot{\gamma}=1/32$ and $\dot{\gamma}=1/10$. Although these
magnitudes are quite small, they are enough to raise the critical
temperature to $T_c(\dot{\gamma}=1/32)=2.29$ and
$T_c(\dot{\gamma}=1/10)=2.395$, both close to, but greater than, the
critical temperature of the Ising model. This confirms that the
critical temperature depends on the shear field magnitude, as found
by theoretical studies \cite{gonepeli}.
\begin{figure}[H]
\centering
\includegraphics[height=15cm,width=11cm,clip=,angle=-90]{op_fdc_allgds2d.ps}
\caption{Log-log plots of the time evolutions of $OP(t)$ at $T=T_c$
corresponding to systems with different shear fields.
The magnitudes $\dot{\gamma}$ are indicated in the legend.}
\label{fdcgds}
\end{figure}
Once the critical temperatures $T_c$ corresponding to the
different $\dot{\gamma}$'s were collected, a diagram of critical
temperature versus $\dot{\gamma}$ can be constructed. Figure
\ref{diag_fase} shows that two regimes can be distinguished. In the
first regime, the critical temperature grows with $\dot{\gamma}$ as
a power law, i. e. $T_c(\dot{\gamma})/T_c(0)-1$ $\propto$
$\dot{\gamma}^\psi$.
The value of the exponent
was estimated as $\psi=0.52(3)$, which is consistent with that calculated theoretically in
\cite{gonepeli}. In that work, the critical transition in the
$\varphi^3 \rightarrow \langle\varphi^2\rangle\varphi$ approximation was studied in a scalar field model based on a convection-diffusion equation with Landau-Ginzburg free energy, with the average $\langle\varphi^{2}\rangle$ self-consistently determined. Above the lower critical dimension, the exponent $\psi$ was evaluated to be $1/2$ and $1/4$ for the cases with non-conserved and conserved order parameter respectively.
Then, $T_c$ crosses over to a saturation regime at larger $\dot{\gamma}$'s,
saturating at $T_c(\dot{\gamma})\backsim 1.18 T_c(0)$, where $T_c(0)\approxeq2.269$ is the
critical temperature of the 2D Ising model. The increase of the critical temperature under the action of the external field, followed by a saturation regime, was also observed in the DLG model \cite{kls}. We also remark that in real fluids there is a negative contribution to the shift of the critical temperature coming from hydrodynamic interactions \cite{ok}. The total shift of $T_c$ turns out to be negative for fluids with low molecular weight \cite{ok}, as was also found in experiments \cite{bg}. A review for other systems is given in \cite{onuki}.
\begin{figure}[H]
\centering
\includegraphics[height=15cm,width=11cm,clip=,angle=270]{DTc_vs_gd_semII.ps}
\caption{Diagram of critical temperatures
$T_c(\dot{\gamma})/T_c(0)-1$ versus $\dot{\gamma}$, in log-log
scale. The fit of the data points in the growing regime gives a
slope $\psi=0.52(3)$.}\label{diag_fase}
\end{figure}
On the other hand, if the system is started from the GSC initial condition (i.e. magnetization equal to 1),
and then it is left to evolve at the working temperature $T\simeq T_c$, $OP(t)$ decreases, and
follows a power law behavior at $T=T_c$. Upward or downward deviations are observed
depending on whether $T<T_c$ or $T>T_c$, respectively. Figure \ref{gscgd5} exhibits the evolution of
$OP(t)$ for the same parameters of figure \ref{fdcgd5}. A clear power law behavior can be observed at $T=2.66$, which is exactly the same temperature found when the system was started from FDC configurations. This is precisely the signature of a second-order phase transition in the model. In addition, the expected deviations from the power law behavior at $T=2.65$ and $T=2.67$ are also observed.
\begin{figure}[H]
\centering
\includegraphics[height=15cm,width=11cm,clip=,angle=270]{gsc_gds_all_3c.ps}
\caption{Log-log plot of the time evolutions of $OP(t)$
corresponding to a system set in a square lattice of size $L_x=L_y=512$
with shear rate $\dot{\gamma}=5$. The working temperatures of the
thermal bath are indicated in the legend. A power law behavior is
observed for $T_c=2.660$ $J/k_B$. The dashed line is a fit of the
numerical data.}\label{gscgd5}
\end{figure}
Proceeding in the same way as we did for the
evolutions started from FDC's, the critical behavior
of the system was investigated when it is started from GSC configurations in
rectangular lattices. Figures \ref{gscls} a) and b) show the critical
time evolution of $OP$ when the system is initiated from a
GSC configuration, corresponding to $\dot{\gamma}=1/2$ and
$\dot{\gamma}=10$, respectively. It is important to remark that the best power law behavior was obtained when the systems evolved at the same critical temperatures found when they were initiated from FDC configurations. According to the results,
the transversal size $L_y$ does not play a relevant role in the
critical behavior of the system as it is shown in the evolutions in
lattices with $L_x=500$ (figure \ref{gscls} a) and b)) and also with
$L_x=2000$ (figure \ref{gscls} b)), but rather the longitudinal size
$L_x$ is relevant. This is in agreement with the results previously
exhibited in figure \ref{fdcs}, and allows us to conclude that the
critical evolution of the system is independent of lattice shape
effects for both initial configurations.
\begin{figure}[H]
\centering
\includegraphics[height=9cm,width=7cm,clip=,angle=-90]{opvst_gd05_gsc.ps}
\includegraphics[height=9cm,width=7cm,clip=,angle=-90]{opvst_gd10_gsc.ps}
\caption{Log-log plots of the time evolutions of $OP(t)$ at $T=T_c$, for the system set in
rectangular and square lattices of sizes $L_x \times L_y$, as
indicated in the respective legends. In a) the external field has magnitude $\dot{\gamma}=1/2$
while in b) the shear field magnitude is $\dot{\gamma}=10$. The dashed lines displaced are
least-square fits of the obtained data.}
\label{gscls}
\end{figure}
\subsection{Critical Exponents}
Focusing our attention on the power law behavior at $T=T_c$, the exponents $c_1$ and $c_2$ corresponding
to the dynamic evolution of $OP$ (equation (\ref{fdc})) and its
logarithmic derivative $\partial_{\phi} \ln OP(t,\phi)$ (equation
(\ref{derfdc})) can be estimated. Table \ref{tablac} lists the
obtained values of $T_c$ and the mentioned exponents for small and
large values of $\dot{\gamma}$. The values of $c_1$ and $c_2$ for
the case of $\dot{\gamma}=1/32$ are different from
those at larger $\dot{\gamma}$, as was already observed in figure \ref{fdcgds}. Furthermore, the
value of $c_1$ for $\dot{\gamma}=50$ is also slightly different from
the corresponding cases of $\dot{\gamma}=5$ and $\dot{\gamma}=10$.
This behavior is somewhat unexpected, since there exists a saturation
regime for the critical temperature at these values of
$\dot{\gamma}$ (figure \ref{diag_fase}), which would lead one to
think that the dynamic behavior of the system is independent of the
field magnitude in this limit. In contrast, the estimated
values of $c_2$ are quite similar in the limit of large
$\dot{\gamma}$'s.
\begin{table}[H]
\centering
\begin{tabular}{|c|c|c|c|}
\hline
$\dot{\gamma}$ & $T_c$ &$c_1$&$c_2$ \\
\hline
\hline
1/32& 2.29&0.180(8)&0.99(2) \\
\hline
5 &2.66&0.239(1)&0.84(1)\\
\hline
10 &2.675&0.238(1)&0.87(1)\\
\hline
50 &2.675&0.224(1)&0.85(1) \\
\hline
\hline
\end{tabular}
\caption{Exponents $c_1$ and $c_2$, obtained from FDC initial conditions, corresponding to the values of $\dot{\gamma}$ listed in the first column.
}\label{tablac}
\end{table}
\noindent Table \ref{tablac2} lists the values of the
exponents $c_3$ and $c_4$ that were obtained from least-squares fits of the critical evolution of the system when it is initiated from the GSC configurations (see equations (\ref{gsc1}) and (\ref{derigsc1})). Here, the same situation that happened for $c_1$ is found for $c_3$.
In fact, the value of $c_3$ for $\dot{\gamma}=1/32$ is different from the rest of
the corresponding values at larger $\dot{\gamma}$'s,
and the value of $c_3$ for $\dot{\gamma}=50$ is also different from the values
estimated for $\dot{\gamma}=5$ and $\dot{\gamma}=10$. On the other hand, the values
of $c_4$ are similar for all the reported cases of $\dot{\gamma}$'s.
\begin{table}[H]
\centering
\begin{tabular}{|c|c|c|c|}
\hline
$\dot{\gamma}$ & $T_c$ &$c_3$&$c_4$ \\
\hline
\hline
1/32& 2.29&0.076(1)&0.65(2) \\
\hline
5 &2.66&0.401(1)&0.63(1)\\
\hline
10 &2.675&0.405(5)&0.62(1)\\
\hline
50 &2.675&0.360(7)&0.62(5) \\
\hline
\hline
\end{tabular}
\caption{Exponents $c_3$ and $c_4$ obtained from the GSC configuration, corresponding to the values of $\dot{\gamma}$ listed in the first column.
}\label{tablac2}
\end{table}
\noindent According to the equations developed in section \ref{3}, the critical exponents of the second-order phase transition are obtained by combining the estimated exponents $c_1$, $c_2$, $c_3$, and $c_4$ listed above, corresponding to each case of $\dot{\gamma}$ investigated. Table \ref{tablacc} summarizes the obtained results and includes the critical exponents of both the Ising and DLG models for the sake of comparison. Since this issue for the case of the DLG model still remains an open problem, the theoretical values from the proposed theories of refs. \cite{kls} and \cite{gallegos1} are included.
\begin{table}[h]
\centering
\begin{tabular}{|c|c|c|c|c|}
\hline
$\dot{\gamma}$ & $\beta$&$z$&$\nu_{\perp}$&$\nu_\parallel$ \\
\hline
\hline
1/32& 0.105(1)&1.77(7)&0.78(1)&0.57(5) \\
\hline
5 &0.39(1)&0.88(1)&1.10(1)&1.35(2)\\
\hline
10&0.39(1)&0.86(1)&1.14(1)&1.34(1)\\
\hline
50 &0.37(1)&0.94(1)&1.10(1)&1.30(6)\\
\hline
Ising 2d &0.125 &2.16&1&1\\
\hline
DLG(E=50) (ref. \cite{kls}) &1/2&$\approx$4/3&$\approx$3/2&$\approx$1/2\\
\hline
DLG (E=50) (ref. \cite{gallegos1}) &$\approx$0.33&$\approx$1.998&$\approx$1.22&$\approx$0.63\\ \hline
\hline
\end{tabular}
\caption{Table of the calculated critical exponents for each case of
$\dot{\gamma}$ employed. The critical exponents corresponding to the theories of the critical phase transition of the Ising and the DLG models are also included for comparison. Since there is no anisotropy in the Ising model, one should read $\nu_\bot=\nu_\parallel=\nu$ there. The results of this table do not fit the relation $\psi=1/\nu z$ suggested in \cite{ok}. See, however, in Sect. 5, the discussion about our results for small shear rates.}\label{tablacc}
\end{table}
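As a consistency check, the four relations defining $c_1$--$c_4$ can be inverted in closed form for $\beta$, $z$, $\nu_{\parallel}$ and $\nu_{\perp}$. The following sketch (our own illustration) reproduces, e.g., the $\dot{\gamma}=5$ row of this table from the values in Tables \ref{tablac} and \ref{tablac2}:
\begin{verbatim}
def exponents(c1, c2, c3, c4):
    # invert c1 = (1 - 2*beta/nu_par)/(2z),  c2 = 1/(nu_par*z),
    #        c3 = beta/(nu_perp*z),          c4 = (1 - beta)/(nu_perp*z)
    beta = c3 / (c3 + c4)               # from c3/c4 = beta/(1 - beta)
    z = 1.0 / (2.0 * (c1 + beta * c2))  # from c1 with nu_par*z = 1/c2
    nu_par = 1.0 / (c2 * z)
    nu_perp = beta / (c3 * z)
    return beta, z, nu_par, nu_perp

# gamma_dot = 5: c1 = 0.239, c2 = 0.84, c3 = 0.401, c4 = 0.63
print(exponents(0.239, 0.84, 0.401, 0.63))
# -> approximately (0.39, 0.88, 1.35, 1.10), matching the table
\end{verbatim}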
\noindent An overview of this table deserves some comments. First, the
calculated value of $\beta$ at $\dot{\gamma}=1/32$ is close to
$\beta=1/8$ calculated for the 2d Ising model. This similarity could
lead one to think that the effects of the shear field are
negligible, but this is not the case, as it is evidenced by the
values of the rest of the critical exponents. In fact, anisotropy
effects are important, even if a small external field,
in this case $\dot{\gamma}=1/32$, is applied. The dynamic exponent $z(\dot{\gamma}=1/32)=1.77(7)$
indicates that the correlation length (in the longitudinal direction, see Sect. \ref{lcorr})
grows faster with time than the corresponding one in the Ising model, where $z=2.16$, and the difference between
$\nu_\bot(\dot{\gamma}=1/32)=0.78(1)$ and $\nu_\parallel(\dot{\gamma}=1/32)=0.57(5)$ reveals an anisotropic
critical behavior even at small shear rate values. \\
The situation is different for the critical exponents at the largest values of $\dot{\gamma}$ investigated.
The values of the exponents are similar between each other, suggesting that the critical behavior does not depend of the applied field. This fact is also present in figure \ref{diag_fase},
where the critical temperature is approximately the same for the largest values of $\dot{\gamma}$ used.
Furthermore, we also noticed that $\nu_\bot>\nu_\parallel$ for $\dot{\gamma}=1/32$, while the opposite
holds at large $\dot{\gamma}$. At the moment, a reasonable explanation for this issue is not possible due to the lack of a theoretical framework for the critical behavior of this model.\\
To end this section, one final comment is appropriate. In view of
the values exposed in table 3, the computed critical exponents do
not belong to the universality classes of the Ising or of the DLG
models respectively. This fact is not surprising since, as we have
seen, the shear rate affects the critical behavior of the model by
inducing anisotropic effects in the equilibrium model that changes
its behavior. The case of the DLG model is different. Although both
models have a similar phase behavior, the particle dynamics defined
for the DLG model conserves the number of particles while our model
does not. This difference will probably affect the values of the
critical exponents, and in consequence there is no reason to expect
that both models will belong to the same universality class.
\subsection{Longitudinal Correlation}
\label{lcorr}
In section \ref{3}, it is assumed that the dynamic increase of the longitudinal correlation length $\xi_{\parallel}$ and the breakage of the corresponding transversal one $\xi_{\perp}$ at $T=T_c$ are due to the anisotropy effects induced by the external shear field. As a result, the short-time critical dynamic evolution of the system initiated from either the FDC or GSC configurations depends only on the dynamic critical exponent $z_{\parallel}=z$ that describes the critical dynamic increase of $\xi_{\parallel}$. \\
To test our hypothesis, we performed a scaling of the
whole curve of the $OP(t)$ for the cases of $\dot{\gamma}=1/2$
(small driving field) and $\dot{\gamma}=10$ (large driving field).
We propose a phenomenological scaling in the spirit of the scaling
form used by Family and Vicsek to describe the roughness growth of
interfaces \cite{famvis} (obviously in a different context not related to ours).
This is given by the following expression
\begin{equation}
OP=L_x^{-\omega_{i}}f(t/L_x^{z}), \label{f-v}
\end{equation}
\noindent where $L_x$ is the longitudinal size, according to the
results exhibited in figures \ref{fdcs} and \ref{gscls} a) and b),
respectively. The exponent $\omega_{i}$, $i=$FDC or GSC, is the
exponent that has into account the finite-size critical behavior of
$OP$, and $z$ is the relevant dynamic critical exponent. The idea of
this scaling form is simple: if all the curves can be collapsed by
using the same dynamic exponent $z$ independently of the initial
condition used to start the simulations, it can be demonstrated
numerically that only one correlation is relevant in the critical
dynamic behavior. Furthermore, equation (\ref{f-v}) must contain
both the critical dynamic behavior of $OP$ according to equations
(\ref{preop}) and (\ref{gsc1}) at the early times of evolution, and
the
finite size behavior in the limit of large times ($t\rightarrow\infty$)
where the correlation length is comparable to $L_x$. Therefore we have that $f(u)$
must behave as $f\propto u^{c_1}$ or $f\propto u^{-\beta/\nu_{\perp}z}$ at early times $u\ll1$, depending on whether the initial condition is FDC or GSC, respectively. Matching to equations (\ref{fdc}) and (\ref{gsc1}) then fixes the finite-size exponent $\omega_i$ to $\omega_{FDC}=\beta/\nu_{\parallel}$ or $\omega_{GSC}=\beta/\nu_{\perp}$, according to the initial configuration used to start the simulations.\\
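A sketch of how such a collapse can be performed in practice (our own illustration; the plotting details are arbitrary):
\begin{verbatim}
import numpy as np
import matplotlib.pyplot as plt

def collapse(curves, omega, z):
    # curves: list of (Lx, t, OP) with t and OP arrays measured
    # at T = Tc for several longitudinal sizes Lx; with the right
    # (omega, z), all rescaled curves fall on one master curve f(u)
    for Lx, t, op in curves:
        plt.loglog(t / Lx**z, op * Lx**omega, label="Lx = %d" % Lx)
    plt.xlabel("t / Lx^z")
    plt.ylabel("Lx^omega * OP")
    plt.legend()
    plt.show()
\end{verbatim}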
Figures \ref{satgd05} and \ref{satgd10} a) and b) exhibit both the
finite-size dependence of $OP(t)$ with the longitudinal size $L_x$
(insets), and the scaling function $f(t/L_x^z)$ (main plots), for
the small and large external fields, represented by
$\dot{\gamma}=1/2$ (figures \ref{satgd05}) and $\dot{\gamma}=10$
(figures \ref{satgd10}) respectively. In all plots, the finite-size
dependence is obtained by calculating the saturated value of
$OP(t)$, $OP_{sat}$, from figures \ref{fdcs} and \ref{gscls}, which
were plotted versus $L_x$.
\begin{figure}[H]
\centering
\includegraphics[height=9cm,width=7cm,clip=,angle=270]{gd05_fdc_scal.ps}
\includegraphics[height=9cm,width=7cm,clip=,angle=270]{gd05_gsc_scal.ps}
\caption{Log-log plots of the scaling functions $f(t/L^z)$ (see equation (\ref{f-v})) of the time evolution of $OP(t)$
when the system is started from FDC and GSC configurations (main plot of figure a) and b) respectively) corresponding to a system with external field of magnitude $\dot{\gamma}=1/2$.
The insets of both figures show the finite-size dependence and the
corresponding power-law fit in a double logarithmic scale. The
values of the corresponding fits are $\omega_{FDC}=0.50(1)$ (inset
of a) ) and $\omega_{GSC}=0.53(1)$ (inset of b)), respectively.
}\label{satgd05}
\end{figure}
As can be observed in the insets of the plots in figures \ref{satgd05} a) and b), the size dependence of $OP$ in the long time regime can be well fitted by a power law, as proposed in equation (\ref{f-v}). The estimated exponents $\omega_{FDC}=0.50(1)$ and $\omega_{GSC}=0.53(1)$ were not consistent with the expected values $\omega_{FDC}=\beta/\nu_{\parallel}=0.263(3)$ or $\omega_{GSC}=\beta/\nu_{\perp}=0.242(2)$. However, the good collapses obtained with the same $z(\dot{\gamma}=1/2)=1.45(3)$, exhibited in the main plots of both figures, clearly evidence that only longitudinal correlations are relevant in the critical evolution of the system at short times.\\
On the other hand, the insets of figures \ref{satgd10} a) and b) show that the finite-size behavior of $OP(t)$ is also a power law for the case of large shear field magnitudes, represented by $\dot{\gamma}=10$. In contrast to the case with $\dot{\gamma}=1/2$, the estimated values $\omega_{FDC}=0.29(2)$ and $\omega_{GSC}=0.34(3)$ were in agreement with the expected values $\omega_{FDC}=\beta/\nu_{\parallel}=0.291(2)$ or $\omega_{GSC}=\beta/\nu_{\perp}=0.342(2)$ calculated from Table 3. The good collapses performed with $z(\dot{\gamma}=10)$ displayed in the main plots of the figures allow us to conclude that the same behavior observed for small $\dot{\gamma}$'s is also exhibited by systems with large values of the external field.\\
\begin{figure}[H]
\centering
\includegraphics[height=9cm,width=7cm,clip=,angle=270]{gd10_fdc_scal.ps}
\includegraphics[height=9cm,width=7cm,clip=,angle=270]{gd10_gsc_scal.ps}
\caption{Log-log plots of the scaling functions $f(t/L^z)$ (see equation (\ref{f-v})) of the time evolution of $OP(t)$
when the system is started from FDC's and GSC's configurations
(main plot of figure a) and b) respectively) corresponding to a system with external field of magnitude $\dot{\gamma}=10$.
The insets of both figures show the finite-size dependence and the
corresponding power-law fit in a double logarithmic scale. The
values of the corresponding fits are $\omega_{FDC}=0.29(2)$ (inset
of a)) and $\omega_{GSC}=0.34(3)$ (inset of b)), respectively.}
\label{satgd10}
\end{figure}
To summarize, we have shown that the critical dynamic evolution of
the system, started from either FDC or GSC initial configurations,
can be scaled with the dynamic critical exponent $z$ proposed in
Section \ref{3}. Based on the arguments exposed in Section \ref{3},
we can conclude that, in the short time limit of the critical
evolution, the correlations along the field axis (longitudinal) are
more relevant than transverse (perpendicular) correlations.
Furthermore, the obtained finite-size exponents $\omega_i$ are only
in accordance with those calculated using the critical exponents
listed in Table 3 for $\dot{\gamma}=10$, while at
$\dot{\gamma}=1/2$ they differ by a factor of nearly 2.
\section{Discussion and Conclusions}
\label{5}
In this work, the second-order phase
transition in the 2d non-conserved Ising model under the action of an
external driving shear field was investigated by studying the critical evolution of the system in the short-time regime.
In order to apply this method, the dynamic evolution of the system at $T\simeq T_c(\dot{\gamma})$ was monitored when it is initiated from fully disordered initial configurations (FDC), and from the completely ordered configuration (GSC).
\noindent Starting the system from FDC configurations, the time evolution of the order
parameter $OP$ follows a power law behavior at the critical temperature $T_c$,
while at slightly different values of $T$ the power law is modulated by a scaling function that
bends upwards or
downwards depending if the temperature is less or greater than $T_c$,
respectively. The critical evolution was studied on square and
rectangular lattices of different sizes $L_x$ and $L_y$, and the
results indicate that the short-time critical evolution is free of
shape effects. Furthermore, the saturation value reached by $OP$
depends only on $L_x$.
\noindent The critical evolution started from FDC configurations was studied
for different values of $\dot{\gamma}$, and the diagram of reduced
temperatures $T_c(\dot{\gamma})/T_c(0)-1$
versus $\dot{\gamma}$, where $T_c(0)=2.269$ $J/k_B$, was drawn. As a first observation,
all the values found for $T_c(\dot{\gamma})$
are always greater than the 2d critical temperature of the Ising model,
which is typical of models driven out of equilibrium by an external field.
Furthermore, two
regimes can be distinguished: 1) a \textsl{growing regime} where
$T_c(\dot{\gamma})/T_c(0)-1$
$\propto$ $\dot{\gamma}^\psi$. The exponent $\psi$ was calculated by means of
a linear regression fit,
giving $\psi=0.52(3)$, which is consistent with theoretical predictions in ref.
\cite{gonepeli};
2) a \textsl{saturation regime}, where $T_c(\dot{\gamma})$ does not
change appreciably with
$\dot{\gamma}$. In this regime $T_c(\dot{\gamma})\sim 1.18 T_c(0)$.
A similar diagram
was already observed in the DLG model \cite{kls}, where
the temperature grows with the magnitude of the driving field and then
saturates at large values.\\
On the other hand, the critical dynamic behavior of the model was
also investigated when it is initiated from the ground state
configuration (GSC). A decreasing power law is observed for $OP(t)$
at the same $T_c(\dot{\gamma})$ found when the system was started
from FDC configurations. This evidences that the model experiences a
second order phase transition, as expected. Also in this case, the
critical evolution of the system was simulated on rectangular and
square lattices of different sizes. It was found that the critical
evolution is independent of the shape of the lattice and the
saturation value of $OP$ only depends on the longitudinal size
$L_x$, as for evolutions initiated from FDC configurations. So, it
is concluded as a general result that in the short-time scale, the
system critical evolution is free of shape effects, as it is also
observed for the critical evolution of the DLG model in the same time interval \cite{alsa}.
\noindent Then, the quantities $c_i$ ($i=1,\dots,4$), defined in sect. 3, were
studied in order to calculate the critical exponents of the
transition. Starting from FDC's initial configurations, the dynamic
critical behavior at small $\dot{\gamma}$ is slower than at larger
$\dot{\gamma}$'s. As can be observed in figure \ref{fdcgds} and
in Table 1, the order parameter exponent $c_1$ is smaller at
$\dot{\gamma}=1/32$ than
at larger $\dot{\gamma}$'s. Also, its logarithmic derivative
exponent $c_2$ is different for this case. On the other hand, the
values of $c_1$ and $c_2$ are more stable for larger
$\dot{\gamma}$'s, although $c_1$ at $\dot{\gamma}=50$ is slightly
smaller than the estimated values for $\dot{\gamma}=5, 10$. A
similar scenario was found for the order parameter $c_3$ and its
derivative $c_4$ exponents starting from the GSC configuration. \\
By combining these exponents, the static and dynamic critical
exponents $\beta$, $\nu_{\bot}$, $\nu_{\parallel}$ and $z$ were
calculated and
are enlisted in table 3. The order parameter critical exponent $\beta$ for
$\dot{\gamma}=1/32$ is similar to the value calculated for the Ising
model, but the values of $z$, $\nu_{\parallel}$ and $\nu_{\bot}$
show that the anisotropy introduced by such a small external field
is relevant. At large $\dot{\gamma}$'s, all the exponents are
similar within a small range, suggesting that the critical behavior
of the model is practically independent of the magnitude of the
field in this regime. Furthermore, Table 3 also shows that
the critical exponents of the
sheared model do not belong to the universality class of the Ising
or of the DLG model, even if this model shows a similar phase
behavior.\\
\noindent Finally, the critical exponents summarized above were computed based
on the fact that only the longitudinal correlation length is
relevant for the dynamic critical behavior of the model,
independently of the initial configuration. In order to check this,
a finite-size scaling of the dynamic evolution of $OP$ was performed
with the aid of equation (\ref{f-v}).
This equation must contain
both the critical dynamic behavior of equations (\ref{preop}) and
(\ref{gsc1}) at early times of evolution, and also the finite-size
critical behavior at long times. As a consequence, the finite-size
exponents must be $\omega_{FDC}=\beta/\nu_{\parallel}$ and
$\omega_{GSC}=\beta/\nu_{\perp}$ for both initial
conditions respectively. By measuring the saturated values of $OP$,
$OP_{sat}$, and computing
the exponents $\omega_{FDC}$ and $\omega_{GSC}$
the time series of $OP$ were collapsed for the system initiated from ordered and disordered
configurations as the main plots of figures \ref{satgd05} and
\ref{satgd10} show. Therefore, it is concluded
that only the longitudinal correlation length
$\xi_{\parallel}$ takes part in the critical evolution of the Ising
model when an external shear field is applied. Furthermore,
the finite-size exponents
$\omega_{FDC}$ and $\omega_{GSC}$ were not consistent with the ratios
$\beta/\nu_{\parallel}$ and $\beta/\nu_{\perp}$ for the cases
corresponding to $\dot{\gamma}=1/2$, while they are in good agreement
for a shear field of magnitude $\dot{\gamma}=10$.
This discrepancy between the predicted and measured critical
exponents for the case $\dot{\gamma}=1/2$, together with the fact
that the values of the critical exponents estimated for smaller
$\dot{\gamma}$ are not similar with those corresponding to larger
values of $\dot{\gamma}$ (see Table \ref{tablacc}), may be explained
by conjecturing that, at such small values of the shear rate, the
system is less perturbed by the external driving. This means that
the growth of transverse critical correlations is less affected by
the shear field, and may become relevant in the short time regime.
If this is so, our scaling assumptions will no longer be valid, and
both $\xi$'s need to be considered in order to propose scaling forms
for the dynamic critical behavior of the model in this regime. In order to study this, new simulations of the model with $\dot{\gamma}=1/32$ were performed,
but they demanded a lot of computational time, especially when the system is started
from the GSC configurations, because larger lattice sizes and longer evolution time intervals are needed in order to obtain good power laws and saturation regimes ($L\geq 10000$, evolution time intervals of the order of $10^6$ MCS or larger), so we did not obtain reliable results. As a consequence, this interesting subject will be left for further research. In contrast, at larger $\dot{\gamma}$'s, the good agreement between $\beta/\nu_{\parallel}$ and $\beta/\nu_{\perp}$, calculated from the exponents in Table \ref{tablacc}, and the estimated values of $\omega_{FDC}$ and $\omega_{GSC}$, respectively, suggests that the critical behavior is different from both the cases with smaller $\dot{\gamma}$'s, and also from the Ising and DLG models at large external fields \cite{alsa, repalsa}.
\section{Acknowledgements}
GPS wants to thank CONICET and the ANPCyT.
\vskip 0.3cm\noindent
\section{Introduction}
The beyond the standard model (BSM) of supersymmetry (SUSY) has
been explored theoretically~\cite{bib:susyprimer} for several
decades. While no experimental evidence has been found, searches
for supersymmetry continue to be an important part of the
physics programs at high energy colliders.
The spectrum of sparticle masses is determined by the SUSY
model and the choice of parameters. However, in many variations
(e.g. mSUGRA) the lightest sparticles may be charginos and
neutralinos, mixtures of the wino and bino which are the
superpartners of the $W$ and $Z$ bosons. If the lightest
neutralino \mbox{$\tilde{\chi}_1^0$}\ is also the lightest
supersymmetric particle (LSP) and $R$-parity is conserved,
then the \mbox{$\tilde{\chi}_1^0$}\ escapes detection in collider experiments
and appears only as a contribution to missing transverse
energy. Therefore, pair production of \mbox{$\tilde{\chi}_1^0$}\ becomes
difficult to observe. A more interesting process is associated
production of a chargino and the second lightest
neutralino ($p\bar{p} \to \tilde{\chi}_1^\pm \tilde{\chi}_2^0$)
where some of the decay products of the $\tilde{\chi}_1^\pm$
and $\tilde{\chi}_2^0$ can be observed.
The D0 experiment at the Fermilab Tevatron has collected data
on $p\bar{p}$ collisions at $\sqrt{s}=1.96$ TeV since 2001. The
detector consists of an inner tracking system (with solenoid
magnet), a liquid argon calorimeter, and an outer muon
spectrometer. A full description is available in
Ref.~\cite{bib:d0detector}.
\section{Trileptons}
A traditional search for associated chargino and neutralino
production involves looking for events with three leptons
plus missing transverse energy~\cite{bib:trilepton}.
In this case, the chargino
decays via $\tilde{\chi}_1^\pm \to \ell^\pm \nu \tilde{\chi}_1^0$
while the neutralino decays via
$\tilde{\chi}_2^0 \to \ell^{\prime\pm} \ell^{\prime\mp} \tilde{\chi}_1^0$
(Fig.~\ref{fig:trilepfeyn}).
Here, $\ell$ and $\ell^\prime$ may or may not be the same
lepton flavor while the neutrinos and lightest neutralinos escape
undetected.
\begin{figure*}
\centering
\includegraphics[width=135mm]{figure1.eps}
\caption{Feynman diagrams for the associated production of a chargino
and a neutralino with decay into the trilepton final state.}
\label{fig:trilepfeyn}
\end{figure*}
A final state with six particles means that, some of the time,
one or more of the charged leptons has low \mbox{$p_T$}. This is particularly
true for certain sparticle mass relationships.
Figure~\ref{fig:trilep_pt} shows the \mbox{$p_T$}\ of the three charged
leptons (in order of decreasing \mbox{$p_T$}) for one model point. Therefore,
techniques have been developed to include low \mbox{$p_T$}\ leptons in
the analysis. In this case, we allow the third lepton to be
identified as an isolated track with low \mbox{$p_T$}. This recovers
part of the acceptance that would be lost by requiring it to
be identified as a high-quality lepton. Another technique (not
used here) is to search for like-sign leptons.
\begin{figure}
\centering
\includegraphics[width=80mm]{figure2.eps}
\caption{Distribution of charged lepton \mbox{$p_T$}\ for the trilepton
final state. The red histogram is the highest \mbox{$p_T$}\ lepton, the
green the second highest \mbox{$p_T$}\ lepton, and the blue is the third
highest \mbox{$p_T$}\ lepton.} \label{fig:trilep_pt}
\end{figure}
One reason this channel is considered a ``golden channel'' is
that there are very few standard model processes that may
produce trilepton events. The primary source of events with
three real leptons is via diboson production (e.g. $WZ$) which
has a small cross section. Single boson production ($W$ or
$Z$) contributes only through a fake third isolated track
and/or fake missing transverse energy.
\subsection{Event selection}
Event selection criteria are optimized in two regions,
one ``low-\mbox{$p_T$}" and one ``high-\mbox{$p_T$}", to take advantage of the
different kinematics in different regions of mSUGRA
parameter space. The data is searched in four final states:
(1) di-electron plus lepton ($ee\ell$), (2) di-muon plus
lepton ($\mu\mu\ell$), (3) electron plus muon plus lepton
($e\mu\ell$), and (4) muon plus tau ($\mu\tau$). The third
lepton is identified as an isolated track without using the
lepton identification criteria. Details on the selection
criteria are given in Ref.~\cite{bib:trilepton}.
The inclusion of the $\mu\tau$ channel is new for the trilepton
analysis. Here, hadronic decays of the tau are considered.
Leptonic decays and single hadron decays ($\tau\to\pi^\pm\nu$)
had previously been included when they passed
other lepton requirements. The inclusion of this final state
allows for greater sensitivity at higher values
of $\tan\beta$. The $\mu\tau$ channel is broken down into
two subsets: $\mu\tau\tau$ and $\mu\tau\ell$ where the difference
is whether a second reconstructed tau or an isolated track
is required. The $\mu\tau$ channels are not optimized
separately for low and high \mbox{$p_T$}.
\subsection{Results}
Comparisons of data and expected background for the various channels
are given in Table~\ref{tab:trilepton_results}. Good agreement is
observed. From this we set limits on the parameters of the
mSUGRA model. Figure~\ref{fig:trilepton_plane} shows the limits
in the m$_{1/2}$ versus m$_0$ plane. Figure~\ref{fig:trilepton_beta}
shows the limit on the cross section times branching ratio as a
function of $\tan\beta$.
\begin{table}[h]
\begin{center}
\caption{Numbers of events for data and expected background for the
four final states and both
the low-\mbox{$p_T$}\ and high-\mbox{$p_T$}\ optimizations in the trilepton search.}
\begin{tabular}{|l|c|c|} \hline
& & \textbf{Expected} \\
& ~~~\textbf{Data}~~~ & \textbf{~~~Background~~~} \\ \hline
$ee\ell$ & & \\
~~low-\mbox{$p_T$} & 2 & 1.8 $\pm$ 0.2 \\
~~high-\mbox{$p_T$}~~~~ & 0 & ~~~0.8 $\pm$ 0.1~~~ \\ \hline
$e\mu\ell$ & & \\
~~low-\mbox{$p_T$} & 2 & 0.8 $\pm$ 0.2 \\
~~high-\mbox{$p_T$} & 0 & 0.5 $\pm$ 0.1 \\ \hline
$\mu\mu\ell$ & & \\
~~low-\mbox{$p_T$} & 4 & 1.2 $\pm$ 0.2 \\
~~high-\mbox{$p_T$} & 4 & 2.0 $\pm$ 0.3 \\ \hline
$\mu\tau$ & & \\
~~$\mu\tau\tau$ & 1 & 0.8 $\pm$ 0.2 \\
~~$\mu\tau\ell$ & 0 & 0.8 $\pm$ 0.1 \\ \hline
\end{tabular}
\label{tab:trilepton_results}
\end{center}
\end{table}
\begin{figure}
\centering
\includegraphics[width=80mm]{figure3.eps}
\caption{95\% C.L. limits on the mSUGRA model in the m$_{1/2}$ versus
m$_0$ plane. The orange indicates the expected limits while the
green shows the observed limits. Previously published results
from LEP and CDF are also shown.} \label{fig:trilepton_plane}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=80mm]{figure4.eps}
\caption{95\% C.L. limits on the cross section times branching
ratio as a function of $\tan\beta$ for a chargino mass of
130 GeV and $m(\tilde{\tau})-m(\tilde{\chi}^0_2) = 1$ GeV. The
cross section for the mSUGRA model is shown as the blue line.}
\label{fig:trilepton_beta}
\end{figure}
\section{Dark photons}
Recent experimental evidence for an excess of positrons and/or
electrons within the cosmic ray spectrum~\cite{bib:pamela,bib:atic}
has inspired a new model of a $\sim$1 TeV dark matter candidate that
can annihilate with itself. This annihilation can create two light ($<$ 3 GeV)
gauge bosons called dark photons that are force carriers of a hidden valley
sector~\cite{ArkaniHamed:2008qn,bib:strassler}.
These gauge bosons can decay via mixing
with the standard model photon to produce pairs of standard model
fermions. The branching ratios into fermion types depends upon
the mass of the dark photon.
D0 has searched for evidence of dark photons through pairs of
electrons or muons with very small opening angle
using 4.1 fb$^{-1}$ of data~\cite{bib:d0darkphoton}.
Figure~\ref{fig:darkphoton_feyn} shows a Feynman diagram
for the production and decay of dark photons at the Tevatron.
In this scenario, the production still creates an associated
chargino and neutralino (as in the previous search), but
the observable final state is significantly different.
\begin{figure}
\centering
\includegraphics[width=80mm]{figure5.eps}
\caption{Feynman diagram for the associated production of a chargino
and a neutralino with decays into a dark sector.} \label{fig:darkphoton_feyn}
\end{figure}
\subsection{Selection criteria}
Events are selected by requiring a photon with \mbox{$p_T$}\ $>$ 30 GeV and
missing transverse energy $>$ 20 GeV. Pairs of oppositely signed
tracks with ${\cal R} < 0.2$ (where ${\cal R} = \sqrt{(\Delta\eta)^2 +
(\Delta\phi)^2}$) and $\Delta z_{vertex} < 2$ cm are dark photon
candidates. The leading (second-leading) track must have momentum
greater than 10 (5) GeV.
Previous analyses likely would have missed such a signal
due to isolation criteria. Here, isolation variables are
calculated after accounting for the second nearby particle.
Dark photon candidates are divided into two types: electron or
muon. In the electron case, the tracks must match a single EM
cluster (since the two electrons overlap in the calorimeter).
In the muon case, one of the tracks must be matched to a
reconstructed muon.
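The candidate-pairing logic is simple enough to sketch in code. The
following Python fragment is purely illustrative: the track fields
(\texttt{pt}, \texttt{eta}, \texttt{phi}, \texttt{q}, \texttt{z}) are
hypothetical names, and transverse momentum is used as a stand-in for
the quoted track momentum thresholds.
\begin{verbatim}
import math

def dphi_wrapped(phi1, phi2):
    # wrap the azimuthal difference into [-pi, pi)
    return (phi1 - phi2 + math.pi) % (2.0 * math.pi) - math.pi

def dark_photon_candidates(tracks):
    # tracks: list of dicts with keys pt [GeV], eta, phi,
    # q (charge sign), z (vertex z in cm)
    cands = []
    for i in range(len(tracks)):
        for j in range(i + 1, len(tracks)):
            t1, t2 = tracks[i], tracks[j]
            if t1["q"] * t2["q"] >= 0:         # opposite charges
                continue
            if abs(t1["z"] - t2["z"]) >= 2.0:  # Delta z_vertex < 2 cm
                continue
            deta = t1["eta"] - t2["eta"]
            R = math.hypot(deta, dphi_wrapped(t1["phi"], t2["phi"]))
            if R >= 0.2:                       # small opening angle
                continue
            lead, sub = sorted((t1["pt"], t2["pt"]), reverse=True)
            if lead > 10.0 and sub > 5.0:      # momentum thresholds
                cands.append((t1, t2))
    return cands
\end{verbatim}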
\subsection{Results}
The invariant mass distribution for the electron or muon pairs
is studied for evidence of a dark photon resonance. The
background estimate is created by combining three data samples
with one or more selection criteria inverted.
Figure~\ref{fig:darkphoton_mass} shows the data, background
estimate, and simulated signal. No evidence for a narrow
resonance is seen.
\begin{figure}
\centering
\includegraphics[width=80mm]{figure6.eps}
\caption{Distribution of the invariant mass of dark photon
candidates. The left distribution shows the dimuon spectrum
while the right shows the dielectron spectrum.
The background (filled band) is estimated by
combining three samples with inverted selection criteria. An
example signal ($m_{\gamma_D} = 1.4$ GeV) added to the background
is shown as the open histogram.} \label{fig:darkphoton_mass}
\end{figure}
Limits on the production cross section are extracted from the
invariant mass distributions (Fig.~\ref{fig:darkphoton_limit0}).
Figure~\ref{fig:darkphoton_limit1}
shows the limits on the dark photon mass as a function of the
chargino mass for a branching ratio of $\tilde{\chi}_1^0 \to
\gamma_D \tilde{X}$ of 0.5. Figure~\ref{fig:darkphoton_limit2}
shows the limit on the chargino mass as a function of this
branching ratio for three different dark photon masses.
\begin{figure}
\centering
\includegraphics[width=80mm]{figure7.eps}
\caption{Limit on the dark photon production cross section as a
function of the dark photon mass.}
\label{fig:darkphoton_limit0}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=80mm]{figure8.eps}
\caption{Limit on the dark photon mass versus chargino mass for a
neutralino to dark photon branching ratio of 0.5. The expected limit
is shown by the dash-dotted line.} \label{fig:darkphoton_limit1}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=80mm]{figure9.eps}
\caption{Limit on the chargino mass as a function of the neutralino
to dark photon branching ratio for three different dark photon masses.
The limit from a previous diphoton search~\cite{bib:d0diphoton} is
shown as the black contour.} \label{fig:darkphoton_limit2}
\end{figure}
\section{Summary}
The D0 experiment has recently completed two searches for
production of charginos and neutralinos in the Tevatron Run II
data set. The first was a traditional search in the trilepton
final state including $ee\ell$, $e\mu\ell$, $\mu\mu\ell$ and
$\mu\tau$ channels. The second was a novel search for spatially
close lepton pairs as a signature of a dark photon resulting from
a neutralino decay. Neither search observed an excess of data
over expectation and limits on the production cross section and
model parameters were set.
\bigskip
\section{Introduction}
\label{sec:intro}
The Hubble constant ($H_0$, measured in units of $\rm{\, km\, s^{-1}\, Mpc^{-1}}$) is one of the key
cosmological parameters since it sets
the present age, size, and critical density of the Universe.
Methods for measuring
the Hubble constant include Type Ia supernovae (SNe Ia)
\citep[e.g.][]{Tammann79, RiessEtal09}, the Sunyaev-Zel'dovich
effect \citep[e.g.][]{SunyaevZeldovich80, BonamenteEtal06},
the expanding photosphere method for Type II supernovae
\citep[e.g.][]{KirshnerKwan74,SchmidtEtal94},
and maser distances \citep[e.g.][]{Herrnstein99, MacriEtal06}.
However, perhaps the two most well-known recent measurements come from
the \textit{Hubble Space Telescope} ({\it HST}{}) Key Project (KP)
\citep{FreedmanEtal01} and the Wilkinson Microwave Anisotropy Probe (WMAP)
observations of the cosmic microwave background (CMB)
\citep[e.g.][]{KomatsuEtal09}. The
{\it HST}{}\ KP measurement of $H_0$ is based on secondary distance
indicators (including Type Ia supernovae, Tully-Fisher, surface
brightness fluctuations, Type II supernovae, and the fundamental
plane) that are calibrated using Cepheid distances to nearby galaxies
with a zero point in the Large Magellanic Cloud. The resulting
Hubble constant is $72\pm8\,\rm{\, km\, s^{-1}\, Mpc^{-1}}$ \citep{FreedmanEtal01}.
We note that the largest contributor to the systematic error from the
distance ladder of which this measurement depends is the metallicity
dependence of the Cepheid period-luminosity relation.
More recently, \citet{RiessEtal09} addressed some of these systematic effects
with an improved differential distance ladder
using Cepheids, SNe Ia, and the maser galaxy NGC 4258, finding
$H_0=74.2 \pm 3.6 \rm{\, km\, s^{-1}\, Mpc^{-1}}$, a 5\% local
measurement of Hubble's constant.
The five year measurement made using WMAP temperature and
polarization data is $H_0=71.9^{+2.6}_{-2.7} \rm{\, km\, s^{-1}\, Mpc^{-1}}$ \citep{DunkleyEtal09},
under the assumption that the Universe
is flat and that the dark energy is described by a cosmological constant
(with equation of state parameter $w=-1$).
The uncertainty in $H_0$ increases markedly if either
of these two assumptions is relaxed, due to degeneracies
with other cosmological parameters. For example, WMAP
gives $H_0\sim50\rm{\, km\, s^{-1}\, Mpc^{-1}}$ without the flatness assumption, and
$H_0 = 74^{+15}_{-14}\rm{\, km\, s^{-1}\, Mpc^{-1}}$ for a flat Universe with time-independent $w$
not fixed at $w=-1$.
As $H_0$ is such
an important parameter, it is essential to measure it using multiple
methods.
In this paper, we use a single strong gravitational
lens as an independent
probe of $H_0$, and explore its systematic errors and relations with other
cosmological parameters to provide guidance for future studies. We will show
that the constraints from this single lens are competitive with those
from the best current cosmographic probes.
Given the current progress in measuring time delays
\citep[e.g.,][]{VuissozEtal07, VuissozEtal08, ParaficzEtal09}, the
methodology in this paper should lead to substantial advances when
applied to samples of gravitational lenses.
Strong gravitational lensing occurs when a source galaxy is lensed
into multiple images by a galaxy lying along its line of
sight. The principle of using strong gravitational lens systems with
time-variable sources to measure the Hubble
constant is well understood (e.g. \citeauthor{Refsdal64}~\citeyear{Refsdal64},
\citeauthor*{SchneiderEtal06}~\citeyear{SchneiderEtal06}).
The relative time delays between the multiple images are inversely
proportional to $H_0$ via a combination of angular diameter distances
and depend on the lens potential (mass) distribution. We refer to the
combination of angular diameter distances as the ``time-delay
distance''. By measuring the time delays and modeling the lens
potential, one can infer the value for the time-delay distance; this
distance-like quantity is primarily sensitive to $H_0$ but depends
also on other cosmological parameters which must be factored into the
analysis. The direct measurement of the time-delay distance
means that gravitational lensing is independent of distance ladders.
Despite being an elegant method, gravitational lensing has its
limitations. Perhaps the most well-known is the ``mass-sheet
degeneracy'' between $H_0$ and external convergence
\citep*{FalcoEtal85}.
There is also a
degeneracy between $H_0$ and the slope of the lens mass distribution,
especially for lenses where the configuration is nearly symmetric
\citep[e.g.][]{Wucknitz02}. In such
cases, the image positions are at approximately the same radial
distance from the lens center and so the slope is poorly
constrained.
In both cases the remedy is to provide more information. Modeling the mass
environment of the lens can, in principle,
independently constrain the external convergence (e.g.,
\citeauthor{KeetonZabludoff04}~\citeyear{KeetonZabludoff04};
\citeauthor{FassnachtEtal06}~\citeyear{FassnachtEtal06};
Blandford et al.~in preparation); likewise, lens galaxy stellar velocity
dispersion measurements
\citep[e.g.,][]{GroginNarayan96a, GroginNarayan96b, TonryFranx99,
KoopmansTreu02, TreuKoopmans02, BarnabeKoopmans07, McKeanEtal09}
and analysis of any extended images
\citep[e.g.,][]{DyeWarren05, DyeEtal08}
can constrain the
mass distribution slope.
A measurement of $H_0$ to better than a few percent precision would
provide
the single most useful complement to results obtained from studies of
the CMB for dark energy studies \citep[e.g.][]{Hu05, RiessEtal09}.
Dark energy has been used to explain the accelerating Universe,
discovered using luminosity distances to SNe Ia \citep{RiessEtal98,
PerlmutterEtal99}. Efforts in studying dark energy often
characterize it by a constant equation of state parameter $w$ (where
$w=-1$ corresponds to a cosmological constant) and assume a flat
Universe. These include \citet{PerlmutterEtal99}, who in their Figure
10 constrained $w\lesssim -0.65$ for present day matter density values
of $\Omega_{\rm m}\ge0.2$, and \citet{EisensteinEtal05}, who combined their
angular diameter distance measurement to $z=0.35$ from Baryon Acoustic
Oscillations (BAO) with WMAP data \citep{SpergelEtal07} to obtain
$w=-0.80\pm0.18$. Recently, \citet{KomatsuEtal09} measured
$w=-0.992^{+0.061}_{-0.062}$ by combining WMAP 5-year results (WMAP5) with
observations of SNe Ia \citep{KowalskiEtal08} and BAO
\citep{PercivalEtal07}. \citet{KomatsuEtal09} also explored more
general dark energy descriptions. In our study, we combine the
time-delay distance measurement from B1608$+$656{}\ with WMAP data to derive
a constraint on $w$, and compare the constraining power of B1608$+$656{}\
to that of other cosmographic probes.
In this paper, we present an accurate measurement of $H_0$ from the
gravitational lens B1608$+$656{}. A comprehensive lensing analysis of the
lens system is in a companion paper (Paper~I{}; \citeauthor{SuyuEtal09}~\citeyear{SuyuEtal09}).
Using the results from
Paper I, we focus in this paper on techniques required to break the
mass-sheet degeneracy in order to infer a value of $H_0$ with well-understood
uncertainty. We then explore the influence of this measurement on other
cosmological parameters.
The organization of the paper is as follows. In Section
\ref{sec:H0theory}, we briefly review the theory behind using
gravitational lenses to measure $H_0$, include a description of the
mass-sheet degeneracy, and describe the dynamics modeling for the
measured velocity dispersion. In Section \ref{sec:H0ProbTheory}, we
outline the probability theory for combining various data sets and for
including cosmological priors. In Section \ref{sec:LensModel}, we
present the gravitational lens B1608$+$656{}\ as a candidate for measuring
$H_0$, and show the lens modeling results.
We present the new velocity dispersion measurement and the
stellar dynamics modeling in Section \ref{sec:StellDyn}. The study of
the convergence accumulated along the line of sight to B1608$+$656{}\
is discussed in Section
\ref{sec:LensEnv}. The priors for our model parameters are described
in Section \ref{sec:Priors}. Finally, in Section \ref{sec:H0} we combine the
lensing, dynamics and external convergence analyses
to break the mass-sheet
degeneracy and infer $H_0$ from the B1608$+$656{}\ data set. We then
show how B1608$+$656{}\ aids in constraining flatness and
measuring $w$ when combined with WMAP, before
concluding in Section~\ref{sec:conc}.
Throughout this paper, we assume a $w$-CDM universe where dark energy
is described by a time-independent equation of state with parameter
$w=P/\rho c^2$ with
present day dark energy density $\Omega_{\rm \Lambda}$, and the present day matter
density is $\Omega_{\rm m}$. Each quoted parameter estimate is the median of the
appropriate one-dimensional marginalized posterior
probability density function (PDF), with the quoted
uncertainties showing, unless otherwise stated, the $16^{\rm th}$ and $84^{\rm
th}$ percentiles (that is, the bounds of a 68\% confidence interval).
\section{Measuring $H_0$ using lensing, stellar dynamics, and lens environment studies}
\label{sec:H0theory}
In this section we briefly review
the theory of gravitational lensing for $H_0$ measurement
(Section~\ref{sec:H0theory:Lensing}),
describe the
mass-sheet degeneracy (Section \ref{sec:H0theory:MSD}), and present the
dynamics modeling (Section \ref{sec:H0theory:Dynamics}). Readers
familiar with these subjects can proceed directly to Section
\ref{sec:H0ProbTheory}.
\subsection{Theory of gravitational lensing}
\label{sec:H0theory:Lensing}
For a strong lens system in an otherwise homogeneous Robertson-Walker
universe, the excess time delay of an image at angular position
$\vec{\theta}=(\theta_1,\theta_2)$ with corresponding source position
$\vec{\beta}=(\beta_1,\beta_2)$ relative to the case of no lensing is
\begin{equation} \label{eq:T}
t(\vec{\theta},\vec{\beta}) = \frac {1}{c} \frac{D_{\rm d} D_{\rm s}}{D_{\rm ds}} (1+z_{\rm d})\, \phi(\vec{\theta},\vec{\beta}),
\end{equation}
where $z_{\rm d}$ is the redshift of the lens, $\phi(\vec{\theta},\vec{\beta})$ is
the so-called Fermat potential, and $D_{\rm d}$, $D_{\rm s}$, and $D_{\rm ds}$
are, respectively, the angular diameter distance from us to the lens, from us
to the source, and from the lens to the source. The Fermat potential is
defined as
\begin{equation} \label{eq:FP}
\phi(\vec{\theta},\vec{\beta})\equiv \left[\frac{(\vec{\theta}-\vec{\beta})^2}{2}-\psi(\vec{\theta}) \right],
\end{equation}
where the first term comes from the geometric path difference as a result of
the strong lens deflection, and the second term is the gravitational delay
described by the lens potential $\psi(\vec{\theta})$. The scaled deflection
angle of a light ray is $\vec{\alpha}(\vec{\theta}) = \vec{\nabla}
\psi(\vec{\theta})$, and the lens equation that governs the deflection
of light rays is $\vec{\beta} = \vec{\theta} - \vec{\alpha}(\vec{\theta})$.
The projected dimensionless surface mass density $\kappa(\vec{\theta})$ is
\begin{equation} \label{eq:psiKappaDiffRelan}
\kappa(\vec{\theta})=\frac{1}{2}\nabla^2\psi(\vec{\theta}),
\end{equation}
where
\begin{equation} \label{eq:kappa}
\kappa(\vec{\theta}) = \frac {\Sigma(D_{\rm d} \vec{\theta})} {\Sigma_{\rm cr}} \qquad \mathrm{with} \qquad \Sigma_{\rm cr} = \frac{c^2 D_{\rm s}}{4 \pi G D_{\rm d} D_{\rm ds}},
\end{equation}
and $\Sigma(D_{\rm d} \vec{\theta})$ is the physical projected surface mass density.
The constant coefficient in Equation (\ref{eq:T}) is proportional to
the angular diameter distance and hence inversely proportional to the
Hubble constant.
We can thus simplify Equation (\ref{eq:T}) to the
following:
\begin{eqnarray}
\label{eq:Tsimp}
t(\vec{\theta},\vec{\beta}) & = & \frac{D_{\rm \Delta t}}{c}\,
\phi(\vec{\theta},\vec{\beta}) \\
\label{eq:TsimpH0}
& \propto & \frac{1}{H_0} \phi(\vec{\theta},\vec{\beta}),
\end{eqnarray}
where $D_{\rm \Delta t} \equiv (1+z_{\rm d}) D_{\rm d} D_{\rm s}/D_{\rm ds}$ is referred
to as the time-delay distance.
Therefore, by modeling the lens potential ($\psi(\vec{\theta})$)
and the source position ($\vec{\beta}$), we
can use time-delay lens systems to deduce the value of the Hubble
constant,
and indeed the other cosmological parameters
that appear in $D_{\rm \Delta t}$. In this way, strong lensing can be
seen as a kinematic probe of the universal expansion, in the same general
category as SNe Ia and BAO.
Since the principal dependence of $D_{\rm \Delta t}$ is on $H_0$, we continue
to discuss lenses as a probe of this one parameter; however,
we shall see that the other cosmological parameters play an important role in
the analysis.
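To make the cosmology dependence of the time-delay distance concrete,
the following Python sketch evaluates
$D_{\rm \Delta t} = (1+z_{\rm d}) D_{\rm d} D_{\rm s}/D_{\rm ds}$ by direct
numerical integration. It assumes a flat $w$CDM model (so that comoving
distances between redshifts simply subtract) and is meant only as an
illustration, not as the pipeline used in this work.
\begin{verbatim}
from scipy.integrate import quad

C_KM_S = 299792.458  # speed of light [km/s]

def E(z, Om, w):
    # dimensionless expansion rate for flat wCDM
    return (Om * (1 + z)**3
            + (1 - Om) * (1 + z)**(3 * (1 + w)))**0.5

def D_ang(z1, z2, H0, Om, w):
    # angular diameter distance between z1 < z2 [Mpc], flat case
    I, _ = quad(lambda z: 1.0 / E(z, Om, w), z1, z2)
    return (C_KM_S / H0) * I / (1.0 + z2)

def D_dt(zd, zs, H0, Om, w):
    Dd = D_ang(0.0, zd, H0, Om, w)
    Ds = D_ang(0.0, zs, H0, Om, w)
    Dds = D_ang(zd, zs, H0, Om, w)
    return (1.0 + zd) * Dd * Ds / Dds  # time-delay distance [Mpc]

# B1608+656 redshifts; H0 in km/s/Mpc:
print(D_dt(0.6304, 1.394, 70.0, 0.3, -1.0))
\end{verbatim}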
Gravitational lens systems with spatially extended source surface brightness
distributions are of special interest since they provide additional
constraints on the lens potential. However, in this case, simultaneous
determinations of the source surface brightness and the lens potential are
required.
\subsection{Mass-sheet degeneracy}
\label{sec:H0theory:MSD}
We now
briefly describe the mass-sheet degeneracy and its relevance to this
research \citep[see e.g.][for details]{FalcoEtal85,SchneiderEtal06}.
As its name suggests, this is a degeneracy in the mass modeling
corresponding to the addition of a
mass sheet that contributes a convergence and zero shear (and a matching
scaling of the original mass distribution) which leaves the predicted
image positions unchanged. A circularly symmetric surface mass
density distribution that is uniform interior to the line of sight is
one example of such a lens. Suppose we have a lens
model $\kappa_{\rm model}(\vec{\theta})$ that fits the observables of a
lens system (i.e., image positions, flux ratios for point sources, and the
image shapes for extended sources).
A new model described by the
transformation $\kappa_{\rm trans}(\vec{\theta}) = \lambda + (1-\lambda)
\kappa_{\rm model}(\vec{\theta})$, where $\lambda$ is a constant,
would also fit the lensing observables equally well.
The parameter $\lambda$ corresponds physically to the convergence of
the sheet.
Since we might think of including exactly such a parameter to account for
additional physical mass lying along the line of sight, or in the lens plane
to model a nearby group or cluster, it is clear that
the mass-sheet degeneracy corresponds to a degeneracy between this
external convergence ($\kappa_{\rm ext}$) and the mass normalization of the lens
galaxy.\footnote{To be specific, the prescription that we adopt for
combining the effects of many mass sheets at redshifts $z_i$ with
surface mass densities $\Sigma_i$ is $\kappa_{\rm ext}=\frac{4\pi
G}{c^2}\displaystyle\sum_i\frac{\Sigma_i(D_i\vec\theta)D_i D_{i{\rm s}}}{D_{\rm s}}$.}
Despite the invariance of the image positions, shapes and relative fluxes
under a mass-sheet transformation, the relative Fermat potential between the
images changes according to $\Delta \phi_{\rm trans}(\vec{\theta},
\vec{\beta}_{\rm trans}) = (1-\lambda) \Delta \phi_{\rm
model}(\vec{\theta},\vec{\beta}_{\rm model})$. Therefore, given measured
relative time delays $\Delta t$, which are inversely proportional to $H_0$ and
proportional to the relative Fermat potential (Equation \ref{eq:TsimpH0}), the
transformed model $\kappa_{\rm trans}$ would lead to an $H_0$ that is a factor
$(1-\lambda)$ lower than that of the initial $\kappa_{\rm model}$ (for fixed
$\Omega_{\rm m}$, $\Omega_{\rm \Lambda}$, and $w$). In other words, if there is physically any external
convergence $\kappa_{\rm ext}$ due to the lens' local environment or mass structure along the
line of sight
to the lens system that is not incorporated in the lens
modeling, then
\begin{equation}
\label{eq:MassSheet:H0bias}
H_0^{\rm{true}}=(1-\kappa_{\rm ext}) H_0^{\rm{model}}.
\end{equation}
This degeneracy is present because lensing observations only deliver
relative positions and fluxes. The degeneracy can be broken, allowing us to
measure $H_0$, if (i) we
know the magnitude or angular size of the source in absence of lensing,
(ii) we have information on
the mass normalization of the lens,
or (iii) we can compare the measured shear in the lens with the
observed distribution of mass to calibrate $\kappa_{\rm ext}$.
For most of the strong lens systems including B1608$+$656{},
case (i) does not apply, so
circumventing the mass-sheet degeneracy requires the input of more
information, either about the lensing galaxy itself, or its three-dimensional
environment.
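A small numerical illustration may help; all numbers below are
hypothetical and chosen only to give plausible magnitudes. Because a
mass-sheet transformation rescales every relative Fermat potential by
$(1-\lambda)$, the time-delay distance (and hence $H_0$) inferred from
a fixed measured delay shifts by the same factor.
\begin{verbatim}
import math

ARCSEC = math.pi / (180.0 * 3600.0)  # rad per arcsec
C_MPC_PER_DAY = 299792.458 * 86400.0 / 3.0857e19

def infer_Ddt(dt_days, dphi_arcsec2):
    # time-delay distance implied by one measured delay [Mpc]
    return C_MPC_PER_DAY * dt_days / (dphi_arcsec2 * ARCSEC**2)

dt = 31.5      # days (the measured AB delay of B1608+656)
dphi0 = 0.22   # arcsec^2, hypothetical model Fermat potential
for kext in (0.0, 0.1, 0.2):
    # correcting the model for a sheet of convergence kext
    # rescales dphi by (1 - kext): D_dt grows, H0 shrinks
    print(kext, infer_Ddt(dt, (1.0 - kext) * dphi0))
\end{verbatim}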
We distinguish between two kinds of mass sheets: \textit{internal}
and \textit{external}.
Internal mass
sheets, which are physically associated with the lens galaxy, are due
to nearby, physically associated galaxies,
groups or clusters which, crucially,
affect the stellar dynamics of the
lens galaxy. External mass sheets describe mass
distributions that are not physically associated with the
lens galaxy and, by definition, do not affect the stellar dynamics.
Typically these will
lie along the line of sight to the lens \citep{FassnachtEtal06}.
We identify $\kappa_{\rm ext}$ as the net convergence of this external mass sheet.
Two methods for breaking the mass-sheet degeneracy are then:
\renewcommand{\theenumi}{\roman{enumi}}
\begin{enumerate}
\item \textit{Stellar dynamics of the lens galaxy.} Stellar dynamics
can be used jointly with lensing to break the internal mass-sheet degeneracy
by providing an estimate of the enclosed mass at a radius different
from the Einstein radius, which is approximately the radius of the
lensed images from the lens galaxy
\citep[e.g.,][]{GroginNarayan96a, GroginNarayan96b, TonryFranx99,
KoopmansTreu02, TreuKoopmans02, BarnabeKoopmans07}. We note that
for a given stellar velocity dispersion, there is a degeneracy in
the mass and the stellar orbit anisotropy (which characterizes the amount of
tangential velocity dispersion relative to radial dispersion). Nonetheless,
the mass-isotropy degeneracy is nearly orthogonal to the mass-sheet
degeneracy, so a combination of the mass within the effective radius
(from the stellar velocity dispersion) and the mass within the Einstein
radius (from lensing) effectively breaks both the mass-isotropy and
the internal mass-sheet degeneracies. We describe how this works within the
context of our chosen mass model in Section~\ref{sec:H0theory:Dynamics}
below.
\item \textit{Studying the environment and the line of sight to the lens galaxy.}
Observations of the field around lens galaxies allow a rough picture of the
projected mass distribution to be built up. Many lens galaxies lie in galaxy
groups, which can be identified either by their spectra or, more cheaply
(but less accurately), by their colors and magnitudes. By modeling the
mass distribution of the groups and galaxies in the lens plane and along the
line of sight to the lens galaxy, one can estimate the external
convergence~$\kappa_{\rm ext}$ at the redshift of the lens \citep[e.g.][and references
therein]{MomchevaEtal06, FassnachtEtal06, AugerEtal07}. The group modeling
requires (i) identification of the galaxies that belong to the group of the
lens galaxy, and (ii) estimates of the group centroid and velocity
dispersion. A number of recipes can be followed. For example,
\citet{KeetonZabludoff04} considered two extremes: (i) the group is
described by a single smooth mass distribution, and (ii) the masses are
associated with individual galaxy group members with no common halo. The
realistic mass distribution for a galaxy group should be somewhere between
these two extremes.
The experience to date is that
modeling lens environments accurately is very difficult, with uncertainties
of 100\% typical \citep[e.g.][]{MomchevaEtal06, FassnachtEtal06}.
In Section \ref{sec:LensEnv}, we describe an
alternative approach for quantifying the external convergence in a
statistical manner: ray-tracing through numerical simulations of large-scale
structure \citep{HilbertEtal07}.
In this section
we also present a first attempt at tailoring the ray-tracing results
to our one line of sight, using the relative galaxy number counts in the
field.
\end{enumerate}
We emphasize that the mass-sheet degeneracy is simply one of the
several parameter degeneracies in the lens modeling that has been
given a special name. When power-laws ($\kappa \sim b R^{1-\gamma'}$,
where $R$ is the radial distance from the lens center, $b$ is the
normalization of the lens, and $\gamma'$ is the radial slope in the
mass profile) are used to describe the lens mass distribution, one
often finds a $H_0$-$\gamma'$ degeneracy in addition to the
$H_0$-$b$-$\kappa_{\rm ext}$ (mass-sheet) degeneracy (for fixed $\Omega_{\rm m}$, $\Omega_{\rm \Lambda}$ and
$w$; more generally, $D_{\rm \Delta t}$ would be in place of $H_0$). These two
degeneracies are of course related via $H_0$. The $H_0$-$\gamma'$
degeneracy primarily occurs in lens systems with symmetric
configurations due to a lack of information on $\gamma'$. In contrast,
lens systems with images spanning a range of radii or with extended
images provide information on $\gamma'$
\citep[e.g.][]{WucknitzEtal04,DyeEtal08}, and so the $H_0$-$\gamma'$
degeneracy is broken. Nonetheless, the $H_0$-$b$-$\kappa_{\rm ext}$ degeneracy
is still present unless we provide information from dynamics and lens
environment studies.
\subsection{Stellar dynamics modeling}
\label{sec:H0theory:Dynamics}
In order to model the velocity dispersion of the stars in the lens galaxy, we
need a model for the local gravitational potential well in which those stars
are orbiting. This potential is due to both
the mass distribution of the lens
galaxy, and also the ``internal mass sheet'' due to
neighboring groups and galaxies physically associated with the lens, as described in the
previous subsection.
Recent studies such as the Sloan Lens ACS Survey
(SLACS) and hydrostatic X-ray analyses found that the sum of these internal components can be well-described
by a power law \citep[e.g.][]{TreuEtal06, KoopmansEtal06,
GavazziEtal07, KoopmansEtal09, HumphreyBuote09}.
With this in mind, we assume that the total (lens plus sheet) mass density distribution is
spherically symmetric and of the form
\begin{equation}
\label{eq:localdensity}
\rho_{\rm local} = \rho_0 \left(\frac{r_0}{r}\right)^{\gamma'},
\end{equation}
where $\gamma'$ is the logarithmic slope of the effective lens density
profile, and $\rho_0 r_0^{\gamma'}$ is the normalization of the mass
distribution that is determined quite precisely by the lensing, up to
a small offset contributed by the external convergence $\kappa_{\rm ext}$. This
normalization can be expressed in terms of observable or inferrable
quantities as we show below.
By integrating $\rho_{\rm local}$
within a cylinder with radius
given by the Einstein radius $R_{\rm{Ein}}$, we find
\begin{eqnarray}
M_{\rm local} & = & 4\pi \int_0^\infty dz \int_0^{R_{\rm{Ein}}} \rho_0 r_0^{\gamma'} \frac{s\, {\rm d}s}{(s^2 + z^2)^{\gamma'/2}} \\
\label{eq:Mlocal}
& = & - \rho_0 r_0^{\gamma'} \frac{\pi^{3/2}
\Gamma(\frac{\gamma'-3}{2})
R_{\rm{Ein}}^{3-\gamma'}}{\Gamma(\frac{\gamma'}{2})}.
\end{eqnarray}
However, the mass responsible for creating an Einstein ring is a combination
of this local mass and the external mass contributed along the line of sight,
so the mass contained within the Einstein ring is
\begin{equation}
\label{eq:Meq}
M_{\rm Ein} = M_{\rm local} + M_{\rm{ext}}
\end{equation}
where $M_{\rm Ein}$ is the mass enclosed within the Einstein radius
$R_{\rm{Ein}}$ that would be inferred from lensing,\footnote{By definition,
$R_{\rm{Ein}}$ is the radius within which the total mean convergence is unity.}
given by
\begin{equation}
\label{eq:MEin}
M_{\rm Ein} = \pi R_{\rm{Ein}}^2 \Sigma_{\rm cr},
\end{equation}
and $M_{\rm ext}$ is the mass contribution from $\kappa_{\rm ext}$,
\begin{equation}
\label{eq:Mkext}
M_{\rm{ext}} = \pi R_{\rm{Ein}}^2 \kappa_{\rm ext} \Sigma_{\rm cr}.
\end{equation}
Combining Equations (\ref{eq:Mlocal}), (\ref{eq:Meq}),
(\ref{eq:MEin}), and (\ref{eq:Mkext}), we find
\begin{equation}
\rho_0 r_0^{\gamma'} = (\kappa_{\rm ext} - 1)
\Sigma_{\rm cr} R_{\rm{Ein}}^{\gamma'-1} \frac{\Gamma(\frac{\gamma'}{2})}{\pi^{1/2}
\Gamma(\frac{\gamma'-3}{2})}.
\end{equation}
Substituting this in Equation (\ref{eq:localdensity}), we obtain
\begin{equation}
\rho_{\rm local}=(\kappa_{\rm ext} - 1)
\Sigma_{\rm cr} R_{\rm{Ein}}^{\gamma'-1} \frac{\Gamma(\frac{\gamma'}{2})}{\pi^{1/2}
\Gamma(\frac{\gamma'-3}{2})} \frac{1}{r^{\gamma'}}.
\end{equation}
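As a sanity check on the gamma-function expression for $M_{\rm local}$,
one can compare it with a direct numerical evaluation of the cylinder
integral. The sketch below sets $\rho_0 r_0^{\gamma'}=1$ and
$R_{\rm{Ein}}=1$ and is illustrative only.
\begin{verbatim}
import numpy as np
from scipy.integrate import dblquad
from scipy.special import gamma

gp = 2.05   # slope gamma' (1 < gamma' < 3 for convergence)

# numerical: 4 pi int_0^inf dz int_0^1 s ds / (s^2+z^2)^(gp/2)
num, err = dblquad(lambda s, z: s / (s**2 + z**2)**(gp / 2.0),
                   0.0, np.inf, lambda z: 0.0, lambda z: 1.0)
num *= 4.0 * np.pi

# analytic: -pi^(3/2) Gamma((gp-3)/2) / Gamma(gp/2)
ana = -np.pi**1.5 * gamma((gp - 3.0) / 2.0) / gamma(gp / 2.0)
print(num, ana)   # the two values should agree
\end{verbatim}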
Spherical Jeans modeling can then be employed to infer the
line-of-sight velocity dispersion, $\sigma^{\rm P}(\gamma', \kappa_{\rm ext},
\beta_{\rm ani}, \Omega_{\rm m}, \Omega_{\rm \Lambda}, w)$, from $\rho_{\rm local}$
by assuming a model for the
stellar distribution $\rho_*$ \citep[e.g.,][]{BinneyTremaine87}.
Here, $\beta_{\rm ani}$ is a general anisotropy term that can be expressed
in terms of an anisotropy radius parameters for the stellar velocity
ellipsoid, $r_{\rm ani}$, in the Osipkov-Merritt formulation
\citep{Osipkov79, Merritt85}:
\begin{equation}
\beta_{\rm ani} = \frac{r^2}{r_{\rm ani}^2 + r^2},
\end{equation}
where $r_{\rm ani}=0$ is pure radial orbits and
$r_{\rm ani}\rightarrow\infty$ is isotropic with equal radial and
tangential velocity dispersions.
The dependence of $\sigma^{\rm P}$
on $\Omega_{\rm m}$, $\Omega_{\rm \Lambda}$, and $w$ enters through $\Sigma_{\rm cr}$ and the
physical scale radius of the stellar distribution, but the dependence
on $H_0$ drops out.
We now follow \citet{BinneyTremaine87} to show how the model velocity dispersion is calculated. The three-dimensional radial velocity dispersion $\sigma_{\rm r}$ is found by solving the spherical Jeans equation
\begin{equation}
\label{eq:Jeansequation}
\frac{1}{\rho_*}\frac{d(\rho_*\sigma_{\rm r}^2)}{dr} + 2\frac{\beta_{\rm ani} \sigma_{\rm r}^2}{r} = -\frac{GM(r)}{r^2},
\end{equation}
where $M(r)$ is the mass enclosed within a radius $r$ for the total density profile given by Equation (\ref{eq:localdensity}) and with $\rho_*$ given by the Hernquist profile \citep{Hernquist90}
\begin{equation}
\rho_*(r) = \frac{{\rm{I_0}} a}{2\pi r (r + a)^3},
\end{equation}
where the scale radius $a$ is related to the effective radius $r_{\rm eff}$ by $a = 0.551r_{\rm eff}$ and ${\rm{I_0}}$ is a normalization term. The solution to Equation (\ref{eq:Jeansequation}) is
\begin{eqnarray}
\sigma_{\rm r}^2 & = & \frac{4\pi G a^{-\gamma'} \rho_0 r_0^{\gamma'}}{3-\gamma'} \frac{r(r+a)^3}{r^2 + r_{\rm ani}^2} \cdot \nonumber\\
& & \left( \frac{r_{\rm ani}^2}{a^2} \frac{{\rm _2F_1}[2+\gamma',\gamma'; 3+\gamma'; \frac{1}{1+r/a}]}{(2+\gamma') (r/a+1)^{2+\gamma'}} + \right. \nonumber\\
& & \left. \frac{{\rm _2F_1}[3, \gamma'; 1+\gamma'; -a/r]}{\gamma' (r/a)^{\gamma'}} \right),
\end{eqnarray}
where ${\rm _2F_1}$ is a hypergeometric function. The model luminosity-weighted velocity dispersion within an aperture $\mathcal{A}$ is then
\begin{equation}
(\sigma^{\rm P})^2 = \frac{\int_{\mathcal{A}} [ I_{\rm
H}(R)\sigma_{\rm s}^2 * \mathcal{P} ]\, R \,{\rm d}R \,{\rm
d}\theta}{\int_{\mathcal{A}} [ I_{\rm H}(R) * \mathcal{P} ]\, R
\,{\rm d}R \,{\rm d}\theta},
\end{equation}
where $I_{\rm H}(R)$ is the projected Hernquist distribution \citep{Hernquist90}, both integrands are convolved with the seeing $\mathcal{P}$ as indicated, and the theoretical (that is, before convolution and integration over the spectrograph aperture) luminosity-weighted projected velocity dispersion $\sigma_{\rm s}$ is given by
\begin{equation}
I_{\rm H}(R)\sigma_{\rm s}^2 = 2\displaystyle \int_R^\infty (1 -
\beta_{\rm ani} \frac{R^2}{r^2})\frac{\rho_*\sigma_{\rm r}^2 r \,{\rm d}r}{\sqrt{r^2 - R^2}}.
\end{equation}
The use of a \citet{Jaffe83} stellar distribution function follows the same derivation.
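For reference, this closed-form solution is straightforward to evaluate
numerically. The sketch below is illustrative only, with the constant
$4\pi G \rho_0 r_0^{\gamma'}$ absorbed into a single normalization
\texttt{norm}; lengths are in units of the effective radius.
\begin{verbatim}
from scipy.special import hyp2f1

def sigma_r2(r, a, gp, norm, r_ani):
    # radial velocity dispersion^2 from the Jeans solution above;
    # a = 0.551 r_eff is the Hernquist scale radius
    pre = (norm * a**(-gp) / (3.0 - gp)
           * r * (r + a)**3 / (r**2 + r_ani**2))
    u = 1.0 / (1.0 + r / a)
    t1 = ((r_ani**2 / a**2)
          * hyp2f1(2.0 + gp, gp, 3.0 + gp, u)
          / ((2.0 + gp) * (r / a + 1.0)**(2.0 + gp)))
    t2 = hyp2f1(3.0, gp, 1.0 + gp, -a / r) / (gp * (r / a)**gp)
    return pre * (t1 + t2)

# e.g. at r = r_eff, with gamma' = 2.05 and r_ani = 2 r_eff:
print(sigma_r2(1.0, 0.551, 2.05, 1.0, 2.0))
\end{verbatim}
The seeing convolution and aperture integration are then carried out
numerically on top of this.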
In the next section, we present the probability theory
for obtaining posterior probability distribution of $H_0$ by combining
the lensing, dynamics and lens environment studies.
\section{Probability Theory}
\label{sec:H0ProbTheory}
We aim to obtain an expression for the posterior probability
distribution of cosmological parameters $H_0$, $\Omega_{\rm m}$, $\Omega_{\rm \Lambda}$, and $w$
given the various independent data sets of B1608$+$656{}.
\subsection{Notations for joint modeling of data sets}
\label{sec:H0ProbTheory:Notation}
We introduce notations for the observed data and the model parameters that
will be used throughout the rest of this paper.
We have three independent data sets for B1608$+$656{}: the time delay
measurements from the radio observations of the four lensed images A,
B, C and D \citep{FassnachtEtal99,
FassnachtEtal02}, {\it HST}{}\ Advanced Camera for Surveys (ACS)
observations associated with
program 10158 (PI:Fassnacht; \citeauthor{SuyuEtal09}~\citeyear{SuyuEtal09}),
and the stellar velocity dispersion measurement of the primary lens
galaxy G1 (see Section \ref{sec:StellDyn}). Let $\boldsymbol{\Delta t}$ be the
time delay measurements of images A, C and D relative to image B, $\boldsymbol{d}$
be the data vector of the lensed image surface brightness measurements
of the gravitational lensed image, and $\sigma$ be the stellar
velocity dispersion measurement of the lens galaxy.
As shown in Section~\ref{sec:H0theory:Lensing},
information on $H_0$, $\Omega_{\rm m}$, $\Omega_{\rm \Lambda}$, and $w$ comes primarily from the relative
time delays between the images, which is a product of the
time-delay distance $D_{\rm \Delta t}$
and the Fermat potential difference. The Fermat potential is
determined by the lens potential and the source position that is given by the
lens equation. Therefore, the first step is to model the lens system using the
observed lensed image $\boldsymbol{d}$. In order to model the lens mass
distribution using the extended source information, we need to model the
point-spread function (PSF) $\boldsymbol{\mathsf{B}}$, image covariance matrix $\boldsymbol{\mathsf{C}}_{\mathrm{D}}$, lens
galaxy light $\boldsymbol{l}$, and dust $\boldsymbol{\mathsf{K}}$ (if present)
\citep[e.g.][]{SuyuEtal09}. We collectively denote these discrete
models associated
with the lensed image processing as
$\boldsymbol{M}_{\rm D}=\{\boldsymbol{\mathsf{B}},\boldsymbol{\mathsf{C}}_{\mathrm{D}},\boldsymbol{l},\boldsymbol{\mathsf{K}}\}$.
We explored a representative subspace of models
$\boldsymbol{M}_{\rm D}$ in Paper~I{}, using the Bayesian evidence from the ACS data
analysis to quantify the appropriateness of each model tested.
Given a particular image processing model,
we can infer the parameters of the lens
potential and the source surface brightness distribution
from the ACS data $\boldsymbol{d}$.
The data models are denoted by $\boldsymbol{M}_j =
\boldsymbol{M}_2,\ldots,\boldsymbol{M}_{11}$ for Models 2--11 in
Paper~I{}.
The lens potential can be simply parametrized by, for example, a
singular power-law ellipsoid (SPLE) with surface mass density
\begin{equation}
\label{eq:sple}
\kappa(\theta_1,\theta_2) = b \left[\theta_1^2+\frac{\theta_2^2}{q^2} \right]^{(1-\gamma')/2},
\end{equation}
where $q$ is the axis ratio, $b$ is the lens strength that determines
the Einstein radius ($R_{\rm Ein}$), and $\gamma'$ is the radial slope
\citep[e.g.][]{Barkana98,KoopmansEtal03}.
The distribution is then translated (with two parameters for the
centroid position) and rotated by the position angle parameter.
There is no need to include an
external convergence parameter in the mass
distribution during the lens modeling
since we cannot determine it due to the mass-sheet
degeneracy.
Instead, we explicitly
incorporate the external convergence in
the Fermat potential later on, taking into account the interplay among this
parameter, the slope, and the normalization of the effective lens mass
distribution. We collectively label all the
parameters of the simply-parametrized model by $\boldsymbol{\eta}$, except
for the radial slope $\gamma'$.
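For concreteness, the SPLE convergence of Equation (\ref{eq:sple}) can
be evaluated directly; the minimal sketch below assumes the centroid
translation and position-angle rotation have already been applied.
\begin{verbatim}
import numpy as np

def kappa_sple(x1, x2, b, q, gp):
    # dimensionless surface density of the SPLE model
    return b * (x1**2 + x2**2 / q**2)**((1.0 - gp) / 2.0)

# convergence along the major axis for a near-isothermal lens:
print(kappa_sple(np.array([0.5, 1.0, 2.0]), 0.0,
                 b=1.0, q=0.8, gp=2.05))
\end{verbatim}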
Alternatively, the
lens potential can be described on a grid of pixels, especially when
the source galaxy is spatially extended (which provides additional
constraints on the lens potential). We focus on this case; in
particular, we decompose the lens potential into an initial
simply-parametrized SPLE model $\psi_0(\gamma',\boldsymbol{\eta})$ and
grid-based potential corrections denoted by the vector $\boldsymbol{\delta \psi}$. The
final potential, which is on the same grid of pixels as the
corrections, is $\boldsymbol{\psi} = \boldsymbol{\psi_0}(\gamma',\boldsymbol{\eta}) + \boldsymbol{\delta \psi}$,
where $\boldsymbol{\psi_0}(\gamma',\boldsymbol{\eta})$ is the vector of initial potential
values evaluated at the grid points. Furthermore, we also describe the
extended source surface brightness distribution on a (different) grid of pixels by the
vector $\boldsymbol{s}$. The determination of the source surface brightness
distribution given the lens potential model is a regularized linear
inversion. The strength and form of the regularization are denoted by
$\lambda$ and $\boldsymbol{\mathsf{g}}$, respectively. The procedure for obtaining
the pixelated potential corrections and the corresponding extended
source surface brightness distribution is iterative and is described in detail
in Paper~I{}. We highlight that the resulting (iterated) pixelated lens
potential model is not limited by the parametrization of the initial
SPLE model -- tests of this method in Paper~I{}\ showed that when the
iterative procedure converged, the true potential was reconstructed
irrespective of the initial model.
The resulting lens potential allows us to compute the Fermat potential
$\phi$ at each image position, up to a factor of $(1-\kappa_{\rm ext})$.
Combining the Fermat potential with a value of $D_{\rm \Delta t}$ computed given the
cosmological parameters \{$H_0$,
$\Omega_{\rm m}$, $\Omega_{\rm \Lambda}$, $w$\} provides us with
predicted values of the image time delays, $\Delta t^{\rm P}$.
The dynamics modeling of the galaxy is performed following
Section \ref{sec:H0theory:Dynamics}.
By construction, the power-law profile for the dynamics modeling with
slope $\gamma'$ matches the radial profile of the SPLE. Although
spherical symmetry is assumed for the dynamics modeling, a suitably
defined Einstein radius from the lens modeling leads to $R_{\rm Ein}$
and $M_{\rm Ein}$ that are independent of $q$ and are directly
applicable to the spherical dynamics modeling
\citep[e.g.][]{KoopmansEtal06}. Furthermore, the results from SLACS
based on spherical dynamics modeling \citep{KoopmansEtal09} agree with
those from a more sophisticated two-dimensional kinematic analysis of
six SLACS lenses \citep{CzoskeEtal08, BarnabeEtal09}, indicating that
spherical dynamics modeling for B1608+656 is sufficient. The
predicted velocity dispersion is dependent on six parameters:
1) the effective lens mass distribution profile slope $\gamma'$,
2) the external convergence $\kappa_{\rm ext}$,
3) the anisotropy radius $r_{\rm ani}$, and then the cosmological parameters 4)
$\Omega_{\rm m}$, 5) $\Omega_{\rm \Lambda}$, and 6) $w$.
By combining lensing, dynamics, and lens environment studies, we can break
the $D_{\rm \Delta t}$-$\kappa_{\rm ext}$ degeneracy to obtain a probability distribution for the
cosmological parameters \{$H_0$, $\Omega_{\rm m}$, $\Omega_{\rm \Lambda}$, $w$\}
given the data sets. In the inference, we assume that the redshifts of the
lens and source galaxies are known exactly for the computation of $D_{\rm \Delta t}$.
This is approximately true for B1608$+$656{}, which has spectroscopic measurements
for the redshifts \citep{MyersEtal95, FassnachtEtal96} --- an
uncertainty of $0.0003$ on the redshifts translates to $<0.2\%$
in time-delay distance, and hence $H_0$ for fixed $\Omega_{\rm m}$, $\Omega_{\rm \Lambda}$ and $w$.
By imposing sensible
priors on \{$H_0$, $\Omega_{\rm m}$, $\Omega_{\rm \Lambda}$, $w$\} from other independent experiments such as
WMAP5, we can marginalize the distribution to obtain the
posterior probability distribution for $H_0$.
\subsection{Constraining cosmological parameters}
\label{sec:H0ProbTheory:CosParamConstraint}
In this section, we describe the probability theory for inferring
cosmological parameters from the B1608$+$656{}\ data sets.
Readable introductions to this type of analysis can be found in the books by
\citet{sivia} and \citet{mackay}; we use notation consistent
with that in Paper~I{}.
Our goal is to obtain the posterior PDF
for the model parameters $\boldsymbol{\xi}$ given the three independent data sets
\{$\boldsymbol{\Delta t}$, $\boldsymbol{d}$, $\sigma$\}:
\begin{equation}
\label{eq:ParsPosterior:sec}
P(\boldsymbol{\xi}|\boldsymbol{\Delta t},\boldsymbol{d},\sigma) \propto P(\boldsymbol{\Delta t}|\boldsymbol{\xi})P(\boldsymbol{d}|\boldsymbol{\xi})P(\sigma|\boldsymbol{\xi})P(\boldsymbol{\xi}),
\end{equation}
where the parameters $\boldsymbol{\xi}$ consist of all the model parameters
for obtaining the predicted data sets described in Section
\ref{sec:H0ProbTheory:Notation}: $\gamma'$, $\kappa_{\rm ext}$, $\boldsymbol{\eta}$, $\boldsymbol{\delta \psi}$,
$\boldsymbol{s}$, $\boldsymbol{M}_{\rm D}$, $r_{\rm ani}$, $H_0$, $\Omega_{\rm m}$,
$\Omega_{\rm \Lambda}$, $w$. For notational simplicity, we denote the cosmological parameters as
$\boldsymbol{\pi}=\{H_0, \Omega_{\rm m}, \Omega_{\rm \Lambda}, w\}$. In Equation
(\ref{eq:ParsPosterior:sec}), the dependence on $z_{\rm s}$ and $z_{\rm d}$ are implicit.
To obtain the PDF of cosmological parameters $\boldsymbol{\pi}$, we
marginalize Equation (\ref{eq:ParsPosterior:sec}) over all parameters
apart from $\boldsymbol{\pi}$:
\begin{eqnarray}
\label{eq:CosmoparsPosterior:sec}
P(\boldsymbol{\pi}|\boldsymbol{\Delta t},\boldsymbol{d},\sigma) &\propto& \int {\rm d}\gamma'\ {\rm d}\kappa_{\rm ext}\ {\rm d}\boldsymbol{\eta}\ {\rm d}\boldsymbol{\delta \psi}\ {\rm d}\boldsymbol{s}\ {\rm d}\boldsymbol{M}_{\rm D}\ {\rm d}r_{\rm ani} \cdot \nonumber\\
& & \overbrace{P(\boldsymbol{\Delta t}|\boldsymbol{\xi})P(\boldsymbol{d}|\boldsymbol{\xi})P(\sigma|\boldsymbol{\xi})}^{\rm likelihood} \cdot \nonumber\\
& & \overbrace{P(\boldsymbol{\pi},\gamma',\kappa_{\rm ext},\boldsymbol{\eta},\boldsymbol{\delta \psi},\boldsymbol{s} ,\boldsymbol{M}_{\rm D},r_{\rm ani})}^{\rm prior}.
\end{eqnarray}
In the following subsection, we discuss each of the three terms in the joint
likelihood function in turn.
\subsection{Likelihoods}
\label{sec:H0ProbTheory:Like}
Each of the three
likelihoods in Equation (\ref{eq:CosmoparsPosterior:sec}) generally
depends only on a subset of the parameters $\boldsymbol{\xi}$. Specifically,
dropping independences, we have
$P(\boldsymbol{\Delta t}|\boldsymbol{\xi})=P(\boldsymbol{\Delta t}|\boldsymbol{\pi},\gamma',\kappa_{\rm ext},\boldsymbol{\eta},\boldsymbol{\delta \psi},\boldsymbol{M}_{\rm D})$,
$P(\boldsymbol{d}|\boldsymbol{\xi})=P(\boldsymbol{d}|\gamma',\boldsymbol{\eta},\boldsymbol{\delta \psi},\boldsymbol{s},\boldsymbol{M}_{\rm D})$,
and $P(\sigma|\boldsymbol{\xi})=P(\sigma|\Omega_{\rm m},\Omega_{\rm \Lambda},w,\gamma',\kappa_{\rm ext},r_{\rm ani})$.
For B1608$+$656{}, we can simplify and drop independences further in the time
delay likelihood $P(\boldsymbol{\Delta t}|\boldsymbol{\xi})$ by expressing the relative
Fermat potential (relative to image B for the images A, C
and D) as
\begin{equation}
\label{eq:fprelation:sec}
\Delta\phi(\gamma', \kappa_{\rm ext}, \boldsymbol{M}_{\rm D}) = (1-\kappa_{\rm ext}) q(\gamma',\boldsymbol{M}_{\rm D}),
\end{equation}
and writing the $i^{\rm th}$ (AB, CB or DB) predicted time delay as
\begin{equation}
\label{eq:tddef:sec}
\Delta t_i^{\rm P} = \frac{1}{c}D_{\rm \Delta t}(z_{\rm d},z_{\rm s},\boldsymbol{\pi}) \cdot \Delta\phi_i(\gamma',\kappa_{\rm ext}, \boldsymbol{M}_{\rm D})
\end{equation}
(see Appendix~\ref{app:H0ProbTheory} for details).
The resulting likelihood is
\begin{eqnarray}
\label{eq:tdelayLikeSimp:sec}
\lefteqn{ P(\boldsymbol{\Delta t} | z_{\rm d},z_{\rm s}, \boldsymbol{\pi}, \gamma',\kappa_{\rm ext},\boldsymbol{M}_{\rm D})} \nonumber\\
& & = \prod_{i=1}^{3} P(\Delta t_i | z_{\rm d},z_{\rm s}, \boldsymbol{\pi}, \gamma',\kappa_{\rm ext}, \boldsymbol{M}_{\rm D}),
\end{eqnarray}
where we assume that the three time delay measurements are independent, and
that each
$P(\Delta t_i | z_{\rm d},z_{\rm s}, \boldsymbol{\pi}, \gamma',\kappa_{\rm ext}, \boldsymbol{M}_{\rm D})$
is given by the PDF in \citet{FassnachtEtal02}.
The pixelated lens potential and source surface brightness
reconstruction allows us to compute
\begin{eqnarray}
\label{eq:SrEvidDef:sec}
P(\boldsymbol{d} | \gamma', \boldsymbol{\eta}, \boldsymbol{\delta \psi}_{\mathrm{MP}}, \boldsymbol{M}_{\rm D}) &= \int &{\rm d}\boldsymbol{s}\ P(\boldsymbol{d} | \gamma', \boldsymbol{\eta}, \boldsymbol{\delta \psi}_{\mathrm{MP}}, \boldsymbol{s}, \boldsymbol{M}_{\rm D}) \cdot \nonumber\\
& & P(\boldsymbol{s} | \lambda, \boldsymbol{\mathsf{g}}),
\end{eqnarray}
by marginalizing out the source surface brightness $\boldsymbol{s}$. The
most probable potential correction, $\boldsymbol{\delta \psi}_{\mathrm{MP}}$, is the result of the
pixelated potential reconstruction method.
The likelihood for the lens parameters,
$P(\boldsymbol{d} | \gamma', \boldsymbol{\eta}, \boldsymbol{\delta \psi}_{\mathrm{MP}}, \boldsymbol{M}_{\rm D})$, is also
the Bayesian evidence of the source surface brightness reconstruction;
the analytic expression for this likelihood is given by Equation (19)
in \citet{SuyuEtal06}. Part of the
marginalization in Equation (\ref{eq:CosmoparsPosterior:sec}) can be
simplified via
\begin{eqnarray}
& & \int {\rm d}\boldsymbol{\delta \psi}\ {\rm d}\boldsymbol{s}\ {\rm d}\boldsymbol{M}_{\rm D}\ P(\boldsymbol{d} | \gamma', \boldsymbol{\eta}, \boldsymbol{\delta \psi}, \boldsymbol{s}, \boldsymbol{M}_{\rm D})\cdot\nonumber\\
& & \ \ \ \ \ \ P(\boldsymbol{s} | \lambda, \boldsymbol{\mathsf{g}}) P(\boldsymbol{M}_{\rm D}) P(\boldsymbol{\delta \psi})\nonumber \\
\label{eq:CosmoparsPosteriorSimp1:sec}
& \propto & \sim P(\boldsymbol{d} | \gamma', \boldsymbol{\eta}, \boldsymbol{M}_{\rm D}=\boldsymbol{M}_{5}),
\end{eqnarray}
under various assumptions stated in
Appendix~\ref{app:H0ProbTheory} that are either justified
in Paper I or will be shown to be valid in Section \ref{sec:LensModel:Result}.
In essence, we find that the ACS data models that give acceptable fits are all
equally probable within their errors, making conditioning on
$\boldsymbol{M}_{5}$ (i.e., setting $\boldsymbol{M}_{\rm D}=\boldsymbol{M}_{5}$, where
$\boldsymbol{M}_{5}$ is Model 5 in Paper~I{}\ for the lensed image processing)
approximately equivalent to marginalizing over all models $\boldsymbol{M}_{\rm D}$.
Furthermore, we can
marginalize out the parameters of the smooth lens model~$\boldsymbol{\eta}$
separately:
\begin{eqnarray}
\label{eq:MargSlopePosterior:sec}
P(\gamma' | \boldsymbol{d}, \boldsymbol{M}_{\rm D}=\boldsymbol{M}_{5}) &\propto& \int {\rm d}\boldsymbol{\eta}\ P(\boldsymbol{d} | \gamma', \boldsymbol{\eta}, \boldsymbol{M}_{\rm D}=\boldsymbol{M}_{5})\cdot\nonumber\\
& & \ \ P_{\rm no\, ACS}(\gamma')\ P(\boldsymbol{\eta}).
\end{eqnarray}
(See Appendix~\ref{app:H0ProbTheory} for details of the assumptions involved.)
We see that the resulting PDF,
$P(\gamma' | \boldsymbol{d}, \boldsymbol{M}_{\rm D}=\boldsymbol{M}_{5})$, can itself be treated as
a prior on the slope $\gamma'$. Without the ACS data $\boldsymbol{d}$, this
distribution will default to the lower level prior~$P_{\rm no\,
ACS}(\gamma')$. For the rest of
this section we refer only to the generic prior~$P(\gamma')$, keeping in mind
that this distribution may or may not include the information from the ACS
data. This will allow us to isolate the influence of the ACS
data on the final results, when we compare the PDF in
Equation (\ref{eq:MargSlopePosterior:sec})
with some alternative choices of $P(\gamma')$.
For the velocity dispersion likelihood, the predicted velocity dispersion
$\sigma^{\rm P}$ as a function of the parameters described in
Section \ref{sec:H0ProbTheory:Notation} is
\begin{equation}
\sigma^{\rm P} = \sigma^{\rm
P}(\Omega_{\rm m},\Omega_{\rm \Lambda},w,\gamma',\kappa_{\rm ext},r_{\rm ani}|z_{\rm d},z_{\rm s}, r_{\rm eff},R_{\rm Ein}),
\end{equation}
where the effective radius, $r_{\rm eff}$, the
Einstein radius, $R_{\rm Ein}$, and the mass enclosed within the
Einstein radius, $M_{\rm Ein}$, are fixed. The effective radius is
fixed by observations, and $R_{\rm Ein}$ and $M_{\rm Ein}$ are the
quantities that lensing delivers robustly. The uncertainty in the dynamics
modeling due to the error associated with $r_{\rm eff}$, $R_{\rm Ein}$
and $M_{\rm Ein}$ is negligible compared to the uncertainties
associated with $\kappa_{\rm ext}$. The likelihood
function for $\sigma$ is a Gaussian:
\begin{eqnarray}
\label{eq:vdispLikelihood:sec}
\lefteqn{P(\sigma | \Omega_{\rm m},\Omega_{\rm \Lambda},w,\gamma',\kappa_{\rm ext},r_{\rm ani})} \nonumber \\
& & = \frac{1}{\sqrt{2\pi\sigma_{\sigma}^2}} \exp{\left[-\frac{(\sigma -
\sigma^{\rm P})^2}{2\sigma_{\sigma}^2}\right]}.
\end{eqnarray}
Finally then, we have the following simplified version of Equation (\ref{eq:CosmoparsPosterior:sec}),
where the posterior PDF has been successfully compartmentalized into
manageable pieces:
\begin{eqnarray}
\label{eq:CosmoparsPosteriorFullSimp2:sec}
P(\boldsymbol{\pi}|\boldsymbol{\Delta t},\boldsymbol{d},\sigma) & \propto & \int {\rm d}\gamma'\, {\rm d}\kappa_{\rm ext}\, {\rm d}r_{\rm ani}\cdot\nonumber\\
& & \ \ \ \ P(\boldsymbol{\Delta t} | z_{\rm d}, z_{\rm s}, \boldsymbol{\pi}, \gamma',\kappa_{\rm ext},\boldsymbol{M}_{\rm D}=\boldsymbol{M}_{5})\cdot\nonumber\\
& & \ \ \ \ P(\sigma | \Omega_{\rm m},\Omega_{\rm \Lambda},w,\gamma',\kappa_{\rm ext},r_{\rm ani})\cdot\nonumber\\
& & \ \ \ \ P(\gamma')\, P(\kappa_{\rm ext})\, P(r_{\rm ani})\, P(\boldsymbol{\pi}).
\end{eqnarray}
Sections 4 to 7 address the specific forms of the likelihoods and the
priors in Equation (\ref{eq:CosmoparsPosteriorFullSimp2:sec}). In
particular, in the next section, we focus on the lens modeling of
B1608$+$656{}\ which will justify the assumptions mentioned above
and provide both the time delay likelihood
and the ACS $P(\gamma')$ prior.
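The marginalization in Equation
(\ref{eq:CosmoparsPosteriorFullSimp2:sec}) is low-dimensional enough to
be carried out by brute force on a grid. The self-contained Python
sketch below shows only the bookkeeping: the Gaussian stand-ins and all
numerical values are hypothetical, and in the real analysis they are
replaced by the likelihoods and priors constructed in Sections 4--7
(with the cosmology varied beyond $H_0$).
\begin{verbatim}
import numpy as np

def lnlike_dt(H0, gp, kext):
    # toy: the delays constrain the combination H0/(1 - kext)
    return -0.5 * ((H0 / (1.0 - kext) - 78.0) / 3.0)**2

def lnlike_sigma(gp, kext, rani):
    # toy: the dispersion pins down gp and kext
    return (-0.5 * ((gp - 2.08) / 0.03)**2
            - 0.5 * ((kext - 0.10) / 0.05)**2)

H0s = np.linspace(50.0, 100.0, 201)
gps = np.linspace(1.9, 2.3, 41)
kexts = np.linspace(-0.05, 0.30, 36)
ranis = np.linspace(0.5, 5.0, 10)
G, K, R = np.meshgrid(gps, kexts, ranis, indexing="ij")

# flat priors over the grid ranges; summing = marginalizing
post = np.array([np.exp(lnlike_dt(H0, G, K)
                        + lnlike_sigma(G, K, R)).sum()
                 for H0 in H0s])
post /= np.trapz(post, H0s)
print(H0s[np.argmax(post)])  # peaks near 78*(1-0.10) ~ 70
\end{verbatim}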
\section{Lens model of B1608$+$656{}}
\label{sec:LensModel}
The quadruple-image gravitational lens B1608$+$656{}\ was discovered in the
Cosmic Lens All-Sky Survey (CLASS) \citep{MyersEtal95, BrowneEtal03,
MyersEtal03}.
Figure \ref{fig:B1608color} is an image of B1608$+$656{}, showing the
spatially extended source surface brightness
distribution (with lensed images
labeled by A, B, C, and D) and two interacting galaxy lenses (labeled
by G1 and G2). The redshifts of
the source and the lens galaxies are, respectively, $z_{\rm s}= 1.394$
\citep{FassnachtEtal96} and $z_{\rm d}= 0.6304$
\citep{MyersEtal95}.\footnote{We assume that the redshift of G2 is the
same as that of G1.} We note that the lens galaxies are in a group, with all
galaxy members lying within $\pm300\rm{\, km\, s^{-1}}$ of the mean
redshift \citep{FassnachtEtal06}. Thus, even a conservative
limit of $300\rm{\, km\, s^{-1}}$ for the peculiar velocity of B1608$+$656{}\ relative to
the Hubble flow would only change $D_{\rm \Delta t}$ by $0.5\%$.
As we will see, this is not significant compared to the
systematic error associated with $\kappa_{\rm ext}$.
This system is special in that
the three relative time delays between the four images were measured
accurately with errors of only a few percent: $\Delta t_{\rm
AB}=31.5^{+2.0}_{-1.0} \rm{\ days}$, $\Delta t_{\rm CB}= 36.0^{+1.5}_{-1.5}
\rm{\ days}$, and $\Delta t_{\rm DB}= 77.0^{+2.0}_{-1.0} \rm{\ days}$
\citep{FassnachtEtal99, FassnachtEtal02}. The additional constraints
on the lens potential from the extended source analysis
and the accurately measured time delays between
the images make B1608$+$656{}\ a good candidate to measure $H_0$ with
few-percent precision. However, the presence of dust and interacting
galaxy lenses (visible in Figure \ref{fig:B1608color}) complicates this
system. In Paper~I{}, we presented a comprehensive analysis
that took into account the extended source surface
brightness distribution, interacting galaxy lenses, and the presence of
dust for reconstructing the lens potential. In the following
subsections, we summarize the data analysis and lens modeling from
Paper~I{}, and present the resulting Bayesian evidence values (needed in
Equation (\ref{eq:MargSlopePosterior:sec})) from the lens modeling.
\begin{figure}
\begin{center}
\includegraphics[width=75mm]{fig1.eps}
\caption[{\it HST}{}\ ACS image of B1608$+$656{}]{\label{fig:B1608color}
{\it HST}{}\ ACS image of B1608$+$656{}\ from 11 orbits in F814W and 9 orbits in
F606W. North is up and east is left. The lensed images of the source galaxy are labeled by A,
B, C, and D, and the two lens galaxies are G1 and G2. 1 arcsec corresponds to
approximately 7~kpc at the redshift of the lens.}
\end{center}
\end{figure}
\subsection{Summary of observations, data analysis, and lens modeling in Paper
I}
\label{sec:LensModel:ObsAnal}
Deep {\it HST}{}\ ACS observations on B1608$+$656{}\ in F606W and F814W filters
were taken specifically to
obtain high signal-to-noise ratio images of the lensed source emission.
In Paper I, we investigated a representative sample of PSF, dust, and lens
galaxy light models in order to extract the Einstein ring for the lens
modeling. Table~\ref{tab:EvidSPLE1D} lists the various PSF and dust
models, and we refer the readers to Paper I for details of each model.
The resulting dust-corrected, galaxy-subtracted F814W image
allowed us to model both the lens potential and source surface brightness
on grids of pixels based on an iterative and perturbative
potential reconstruction scheme. This method requires an initial
guess potential model that would ideally be close to the true model.
In Paper I, we adopted the SPLE1+D (isotropic) model from
\citet{KoopmansEtal03} as the initial model, which is the most
up-to-date, simply-parametrized model combining both lensing and stellar
dynamics. In the current paper, we
additionally investigate the dependence on the initial model by
describing the lens galaxies as SPLE models for a range of slopes
($\gamma'=1.5, 1.6, \ldots, 2.5$). Contrary to the SPLE1+D (isotropic)
model, the parameters for the SPLE models with variable slopes
are constrained by lensing data only, without the velocity dispersion
measurement.
The source reconstruction
provides a value for the Bayesian evidence, $P(\boldsymbol{d} | \gamma', \boldsymbol{\eta},
\boldsymbol{\delta \psi}, \boldsymbol{M}_{\rm D})$, which can be used for model comparison (where
model refers to the PSF, dust, lens galaxy light, and lens potential
model). The reconstructed lens potential (after the
pixelated corrections $\boldsymbol{\delta \psi}$) for each data model
(PSF, dust, lens galaxy light) leads to three estimates of the Fermat
potential differences between the image positions. These are
presented in the next subsection for the representative set of PSF, dust,
lens galaxy light, and pixelated potential models.
\subsection{Lens modeling results}
\label{sec:LensModel:Result}
In Paper~I{}, we successfully used a pixelated
reconstruction method to model small deviations from a smooth
lens potential model of B1608$+$656{}.
The resulting source surface brightness distribution is well-localized, and the
most probable
potential correction~$\boldsymbol{\delta \psi}_{\mathrm{MP}}$ has angular structure approximately following a $\cos \phi$
mode with amplitude $\sim2\%$. The $\cos 2\phi$ mode, which could mimic an
additional external shear or lens mass distribution
ellipticity, has a lower amplitude still, indicating that the
smooth model of \citet{KoopmansEtal03} --- which includes an external shear of
$\simeq 0.08$ --- is giving an adequate account of the
extended image light distribution. This was the main result of Paper~I{}.
The key ingredient in the ACS prior for the lens density profile slope
parameter~$\gamma'$ (Equation (\ref{eq:MargSlopePosterior:sec})) coming from this
analysis is the
likelihood $P(\boldsymbol{d} | \gamma', \boldsymbol{M}_{\rm D})$. For a particular choice of
slope~$\gamma'$ and data model~$\boldsymbol{M}_{\rm D}$,
this is just the evidence value resulting from the Paper~I{}\ reconstruction.
In this section, our objective is to use the results of this analysis to
obtain $P(\gamma' | \boldsymbol{d})$ and $\Delta\phi(\gamma', \kappa_{\rm ext})$,
marginalizing over a
representative sample of data models.
\subsubsection{Marginalization of the data model}
Table~\ref{tab:EvidSPLE1D} shows the
results of the pixelated potential reconstruction at fixed density
slope in the initial smooth
lens potential model, for various data models $\boldsymbol{M}_{\rm D}$.
Specifically, we used the SPLE1+D (isotropic) model in
\citet{KoopmansEtal03} with $\gamma'=2.05$.
The uncertainties in the log evidence in Table \ref{tab:EvidSPLE1D} were
estimated as $\sim0.03\times10^4$ for the log evidence values before
potential correction, and $\sim0.05\times10^4$ for the log evidence values
after potential correction.
We see a clear division between models with high and low evidence values, the
two groups being separated by a very large factor in probability.
Assuming that all the data models $\boldsymbol{M}_{\rm D}$ are equally probable {\it a
priori}, the contribution to the marginalized distribution
$P(\boldsymbol{\pi}|\boldsymbol{\Delta t},\boldsymbol{d},\sigma)$
(Equation (\ref{eq:CosmoparsPosterior:sec})) from these lower-evidence
models will be negligible.
The physical difference between these evidence-ranked data models is
in the dust correction: the 2-band dust models are found to be less
probable than the 3-band dust models. It is useful to quantify the
systematic error that would occur with the use of 2-band dust models
(an error that the evidence ranking allowed us to avoid) in terms of the $H_0$
value implied by the system. For this simple error estimation we use
Equation (\ref{eq:Tsimp}) and adopt $\Omega_{\rm m}=0.3$, $\Omega_{\rm \Lambda}=0.7$, $w=-1$
and zero external convergence,
as a fiducial reference cosmology \citep{KoopmansEtal03}.
The implied Hubble
constants are shown in the final four columns of Table~\ref{tab:EvidSPLE1D}.
We see that the disfavored use of the 2-band dust maps would have led to
values of $H_0$ some 15\% lower than that inferred from the 3-band maps.
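As an illustration of this conversion, the sketch below (ours, assuming the
\texttt{astropy} package is available) computes the $H_0$ implied by a single
measured delay and the corresponding model Fermat potential difference for
zero external convergence, exploiting the fact that the predicted delay
scales as $\Delta\phi/H_0$ at fixed $\Omega_{\rm m}$, $\Omega_{\rm \Lambda}$ and $w$:
\begin{verbatim}
import numpy as np
from astropy import units as u
from astropy.constants import c
from astropy.cosmology import FlatLambdaCDM

def implied_H0(dt_days, dphi_arcsec2, zd=0.6304, zs=1.394,
               H0_ref=70.0, Om0=0.3):
    """H0 implied by one delay, for kappa_ext = 0 and w = -1."""
    cosmo = FlatLambdaCDM(H0=H0_ref, Om0=Om0)
    Dd  = cosmo.angular_diameter_distance(zd)
    Ds  = cosmo.angular_diameter_distance(zs)
    Dds = cosmo.angular_diameter_distance_z1z2(zd, zs)
    Ddt = (1.0 + zd) * Dd * Ds / Dds          # time-delay distance
    dphi = dphi_arcsec2 * (u.arcsec.to(u.rad))**2
    dt_pred = (Ddt * dphi / c).to(u.day).value
    return H0_ref * dt_pred / dt_days         # delay scales as 1/H0

# implied_H0(31.5, 0.244)  # cf. the AB columns of Table 1
\end{verbatim}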
We note that the evidence values of each of the
3-band dust map models $\boldsymbol{M}_{\rm D}$ are the same within their uncertainties.
We can also see that for good data models, specifically
$\boldsymbol{M}_{\rm D}=\boldsymbol{M}_{5}$, the three $H_0$ values have low scatter: these
lens models are internally self-consistent. Furthermore, the scatter between
the values for the different good data models is also low: the high evidence
data models consistently return the same Hubble constant. This is the basis
for the approximations (in Section \ref{sec:H0ProbTheory:Like} and Appendix
\ref{app:H0ProbTheory}) that the likelihood
$P(\boldsymbol{\Delta t}|\boldsymbol{\xi})$ is effectively constant with the 3-band
dust map models $\boldsymbol{M}_{\rm D}$. Assuming
that we have indeed obtained the optimal set of $\boldsymbol{M}_{\rm D}$, we can
approximate the likelihoods in Equations (\ref{eq:MargSlopePosterior:sec})
and (\ref{eq:CosmoparsPosteriorFullSimp2:sec}) as being evaluated for model
$\boldsymbol{M}_{5}$.
\subsubsection{Effects of the potential corrections}
Having approximately marginalized out $\boldsymbol{M}_{\rm D}$ by conditioning on
$\boldsymbol{M}_{5}$, we now consider the impact of the potential corrections
discussed in Paper~I{}. In particular, we seek the likelihood for the density
profile slope parameter~$\gamma'$, $P(\boldsymbol{d} | \gamma'=\gamma'_i, \boldsymbol{\eta},
\boldsymbol{\delta \psi}_{\mathrm{MP}},\boldsymbol{M}_{\rm D}=\boldsymbol{M}_{5})$. We characterize this function on a
grid of slope values $\gamma'=1.5,1.6,\ldots,2.5$, first
re-optimizing the parameters of the smooth lens model, and then computing the
source reconstruction evidences both with and without potential correction.
These are
tabulated in Table~\ref{tab:slopeEvid}. We again compute the Fermat
potential differences and implied Hubble constant values as before.
The spread of the three implied $H_0$ values at fixed density slope is again
small: we conclude that the internal self-consistency of the lens model
depends on the data model but not~$\gamma'$. The table also shows that the smooth
SPLE model provides a good estimate of the relative Fermat potentials. Indeed,
this was the principal conclusion of Paper~I{}. The relative thickness of the
arcs is sensitive to the SPLE density profile slope~$\gamma'$, as can be seen
in the first two columns of Table~\ref{tab:slopeEvid}: the evidence clearly
favors $\gamma' \simeq 2.05$, as previously found by \citet{KoopmansEtal03}.
Indeed, exponentiating this gives quite a sharply peaked function, which we
return to below.
How is the potential correction then affecting the model? In
Table~\ref{tab:slopeEvid} we can see that the corrected potential leads to
nearly the same evidence value ($P(\boldsymbol{d} | \gamma'=\gamma'_i, \boldsymbol{\eta},
\boldsymbol{\delta \psi}_{\mathrm{MP}},\boldsymbol{M}_{\rm D}=\boldsymbol{M}_{5})$) for a wide range of underlying density
slopes, and yet barely changes the relative Fermat potential values.
The unchanging nature of the Fermat potential is due to the curvature-type
regularization of the potential corrections, which suppresses the addition of
mass within the potential reconstruction annulus.
From \citet{Kochanek02},
the relative Fermat potential depends only on the mean surface mass density
enclosed in the annulus between the images, to first order in
$\delta R/\langle R\rangle$, where $\delta R$ is the difference in the radial
distance of the image locations from the effective center of the lens galaxies
and $\langle R\rangle$ is the mean radius of
the images. The mean surface mass density depends on the slope of the initial
SPLE model (hence the trend we see in relative Fermat potential in the
left-hand side of Table~\ref{tab:slopeEvid}), but not on the potential
corrections due to the curvature regularization imposed. Therefore, to
first order in $\delta R/\langle R\rangle$, the Fermat potential depends
only indirectly on $\gamma'$ via the mean surface mass density.
The second order term is very small
--- it has a prefactor of 1/12 and for B1608$+$656{}, $(\delta R/\langle R\rangle)^2
\sim 0.1$. Therefore, for good and self-consistent data models, the potential
corrections $\boldsymbol{\delta \psi}_{\mathrm{MP}}$ do not change the Fermat potential significantly.
The right-hand side of Table~\ref{tab:slopeEvid},
where a wide range of initial slope values
provide good fits to the data, is therefore
effectively a manifestation of the
mass-sheet degeneracy. One can understand the effect of the potential
corrections as making {\it local corrections to the effective density profile
slope in order
to fit the ACS data}.
A local change in the effective slope would by itself create a
deficit/surplus of mass in the annulus; the pixelated potential
corrections compensate by subtracting/adding a constant mass sheet within
the annulus, so as to (i) enforce the prior (no net addition of mass
within the annulus) and (ii) continue to fit the arcs equally well.
We conclude that the value of the potential correction analysis is in
demonstrating that the double SPLE model for B1608$+$656{}\ is, despite the
system's complexity, a good model for the high fidelity {\it HST}{}\ data.
The corrections are small in magnitude ($\simeq 2\%$ relative to the
initial SPLE model), and the inclusion of
the $\boldsymbol{\delta \psi}$ neither
significantly reduces the dispersion in implied $H_0$ values between the image
pairs, nor alters the rank order of the data models.
We therefore
use the information on the slope of the initial SPLE model
from the ACS data {\it without potential corrections},
thus exploiting the information on the relative thickness of the lensed
extended images that is clearly present in the data.
How we derive our estimate for $P(\boldsymbol{d} | \gamma', \boldsymbol{M}_{\rm D})$
from
column 2 of Table~\ref{tab:slopeEvid} is described next.
\begin{table*}
\begin{center}
\caption{\label{tab:EvidSPLE1D} log evidence values and relative Fermat potential values before and after the pixelated potential reconstruction for various data models with the SPLE1+D (isotropic) initial model}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|c|c|}
\hline
\multicolumn{3}{|c|}{Data Model} & \multicolumn{1}{|c|}{Initial Potential} & \multicolumn{8}{|c|}{Corrected Potential} \\
\hline
Model & PSF & dust & log P & log P & $\Delta\phi^{\rm{AB}}$ & $\Delta\phi^{\rm{CB}}$ & $\Delta\phi^{\rm{DB}}$ & $H_0^{\rm{AB}}$ & $H_0^{\rm{CB}}$ & $H_0^{\rm{DB}}$ & $\bar{H_0}$ \\
& & & $(\times10^4)$ & $(\times10^4)$& & & & & & & \\
\hline
5 & B1 & 3-band & $1.56$ & $1.77$ & 0.244 & 0.279 & 0.575 & 78.1 & 78.1 & 75.1 & $77.1 \pm 1.7$ \\
9 & C & B1/3-band & $1.56$ & $1.76$ & 0.240 & 0.280 & 0.563 & 76.7 & 78.3 & 73.5 & $76.2 \pm 2.4$ \\
3 & C & 3-band & $1.60$ & $1.76$ & 0.243 & 0.277 & 0.570 & 77.6 & 77.5 & 74.4 & $76.5 \pm 1.8$ \\
2 & drz& 3-band & $1.48$ & $1.75$ & 0.238 & 0.278 & 0.548 & 76.0 & 77.7 & 71.6 & $75.1 \pm 3.1$ \\
7 & B2 & 3-band & $1.55$ & $1.75$ & 0.237 & 0.274 & 0.571 & 75.7 & 76.7 & 74.6 & $75.7 \pm 1.0$ \\
\hline
11& B1 & no dust & $1.27$ & $1.72$ & 0.229 & 0.263 & 0.576 & 73.2 & 73.6 & 75.3 & $74.0 \pm 1.1$ \\
10& B1 & C/2-band & $1.36$ & $1.61$ & 0.193 & 0.227 & 0.565 & 61.8 & 63.5 & 73.8 & $66.4 \pm 6.4$ \\
4 & C & 2-band & $1.40$ & $1.58$ & 0.199 & 0.234 & 0.560 & 63.6 & 65.6 & 73.1 & $67.4 \pm 5.0$ \\
6 & B1 & 2-band & $1.10$ & $1.41$ & 0.196 & 0.226 & 0.559 & 62.5 & 63.2 & 73.0 & $66.2 \pm 5.8$ \\
8 & B2 & 2-band & $1.23$ & $1.40$ & 0.201 & 0.234 & 0.556 & 64.3 & 65.4 & 72.7 & $67.4 \pm 4.5$ \\
\hline
\hline
& & & & \multicolumn{8}{|c|}{$\Delta\phi$ and $H_0$ values from initial SPLE1+D (isotropic)} \\
\hline
& & & & & 0.243 & 0.271 & 0.575 & 77.7 & 75.8 & 75.1 & $76.2 \pm 1.3$ \\
\hline
\end{tabular}
\end{center}
Notes --- The uncertainties in the log evidence before and after the potential
corrections are $\sim0.03\times10^4$ and $\sim0.05\times10^4$, respectively.
The relative Fermat potentials are in units of ${\rm arcsec}^2$, and the
$H_0$ values are in units of $\rm{\, km\, s^{-1}\, Mpc^{-1}}$.
The $\bar{H_0}$ values are the
mean and standard deviation from the mean of the three estimates obtained
using the initial/corrected potential and the three time delays, without
taking into account the uncertainties associated with the time delays.
These $H_0$ values assume $\Omega_{\rm m}=0.3$, $\Omega_{\rm \Lambda}=0.7$ and $w=-1$, and are listed
purely to aid the digestion of the $\Delta\phi$ values.
The full analysis for obtaining the probability distribution for the
cosmological parameters is described in Section \ref{sec:H0}.
\end{table*}
\begin{table*}
\begin{center}
\caption{\label{tab:slopeEvid} log evidence values before and after the pixelated potential reconstruction for initial models with various slopes using PSF-B1 and the 3-band dust map ($\boldsymbol{M}_{\rm D}$=Model 5)}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|}
\hline
& \multicolumn{8}{|c|}{Initial Potential} & \multicolumn{8}{|c|}{Corrected Potential} \\
\hline
$\gamma'$ & log P & $\Delta\phi^{\rm{AB}}$ & $\Delta\phi^{\rm{CB}}$ & $\Delta\phi^{\rm{DB}}$ & $H_0^{\rm{AB}}$ & $H_0^{\rm{CB}}$ & $H_0^{\rm{DB}}$ & $\bar{H_0}$ & log P & $\Delta\phi^{\rm{AB}}$ & $\Delta\phi^{\rm{CB}}$ & $\Delta\phi^{\rm{DB}}$ & $H_0^{\rm{AB}}$ & $H_0^{\rm{CB}}$ & $H_0^{\rm{DB}}$ & $\bar{H_0}$ \\
& ($\times10^4$) & & & & & & & & ($\times10^4$) & & & & & & & \\
\hline
1.5 & 1.38 & 0.125 & 0.139 & 0.287 & 40.2 & 39.0 & 37.6 & $38.9 \pm 1.3$ & 1.73 & 0.130 & 0.143 & 0.290 & 41.7 & 40.2 & 38.0 & $39.9 \pm 1.9$\\
1.6 & 1.48 & 0.147 & 0.163 & 0.338 & 47.2 & 45.8 & 44.3 & $45.7 \pm 1.4$ & 1.77 & 0.150 & 0.170 & 0.349 & 48.1 & 47.6 & 45.6 & $47.1 \pm 1.3$\\
1.7 & 1.52 & 0.174 & 0.193 & 0.403 & 55.5 & 54.0 & 52.7 & $54.0 \pm 1.4$ & 1.75 & 0.178 & 0.201 & 0.417 & 57.0 & 56.2 & 54.5 & $55.9 \pm 1.3$\\
1.8 & 1.54 & 0.190 & 0.211 & 0.442 & 60.8 & 59.1 & 57.7 & $59.2 \pm 1.5$ & 1.77 & 0.194 & 0.215 & 0.457 & 61.9 & 60.2 & 59.7 & $60.7 \pm 1.2$\\
1.9 & 1.58 & 0.210 & 0.234 & 0.491 & 67.1 & 65.4 & 64.1 & $65.6 \pm 1.4$ & 1.76 & 0.210 & 0.237 & 0.510 & 67.3 & 66.4 & 66.6 & $66.8 \pm 0.5$\\
2.0 & 1.60 & 0.229 & 0.256 & 0.540 & 73.3 & 71.6 & 70.5 & $71.8 \pm 1.3$ & 1.79 & 0.231 & 0.261 & 0.549 & 73.8 & 73.0 & 71.7 & $72.9 \pm 1.1$\\
2.1 & 1.60 & 0.247 & 0.276 & 0.586 & 79.0 & 77.3 & 76.6 & $77.6 \pm 1.2$ & 1.79 & 0.250 & 0.287 & 0.606 & 80.0 & 80.1 & 79.1 & $79.8 \pm 0.5$\\
2.2 & 1.58 & 0.264 & 0.296 & 0.632 & 84.5 & 82.8 & 82.6 & $83.3 \pm 1.0$ & 1.77 & 0.258 & 0.299 & 0.648 & 82.5 & 83.7 & 84.6 & $83.7 \pm 1.1$\\
2.3 & 1.57 & 0.281 & 0.315 & 0.676 & 89.8 & 88.0 & 88.3 & $88.7 \pm 0.9$ & 1.79 & 0.267 & 0.311 & 0.678 & 85.3 & 86.9 & 88.5 & $86.9 \pm 1.6$\\
2.4 & 1.55 & 0.297 & 0.332 & 0.720 & 94.8 & 92.8 & 94.0 & $93.9 \pm 1.0$ & 1.79 & 0.299 & 0.344 & 0.738 & 95.6 & 96.3 & 96.4 & $96.2 \pm 0.4$\\
2.5 & 1.49 & 0.312 & 0.348 & 0.763 & 99.8 & 97.4 & 99.6 & $98.9 \pm 1.3$ & 1.78 & 0.311 & 0.357 & 0.759 & 99.4 & 99.7 & 99.1 & $99.5 \pm 0.3$\\
\hline
\end{tabular}
\end{center}
Notes --- notation and uncertainties are the same as those described in the
notes for Table~\ref{tab:EvidSPLE1D}.
\end{table*}
\subsubsection{The ACS posterior PDF for $\gamma'$}
\label{sec:LensModel:Result:slope}
In the previous section, we explored the {\it HST}{}\ data constraints on the
slope parameter, optimizing the other parameters of the SPLE lens
model at each step. To characterize properly $P(\gamma'|\boldsymbol{d},
\boldsymbol{M}_{\rm D}=\boldsymbol{M}_{5})$ in Equation
(\ref{eq:MargSlopePosterior:sec}), we would need to marginalize
over all lens parameters $\boldsymbol{\eta}$ instead. However, as we shall
now see, this optimization approximation is actually a good one and is
certainly the most tractable solution due to the high dimensionality
of the problem (16 parameters to describe G1, G2 and external shear).
Direct sampling in the 16-dimensional parameter space
of $P(\boldsymbol{d} | \gamma', \boldsymbol{\eta},
\boldsymbol{M}_{\rm D}=\boldsymbol{M}_{5}) \, P_{\rm no\, ACS}(\gamma')\, P(\boldsymbol{\eta})$ in Equation
(\ref{eq:MargSlopePosterior:sec}) via, for example, Markov chain Monte
Carlo (MCMC) techniques using the extended source information is not
feasible on a reasonable time scale. Importance sampling of the prior
PDF from the radio data of image positions and fluxes ($P_{\rm no\,ACS}(\gamma',
\boldsymbol{\eta}) = P_{\rm no\,ACS}(\gamma', \boldsymbol{\eta} | {\rm radio})$) by weighting the
samples by $P(\boldsymbol{d} | \gamma', \boldsymbol{\eta}, \boldsymbol{M}_{\rm D}=\boldsymbol{M}_{5})$
is difficult since $\gamma'$ is effectively unconstrained by the radio
data (the $\chi^2$ changes by $\lesssim 1$ in the slope range between
1.5 and 2.5).\footnote{We set $\gamma'_{\rm G2}=\gamma'_{\rm G1}=\gamma'$ since
the slope of G2 is ill-constrained \citep{KoopmansEtal03}.}
It is precisely the unconstrained nature of the $\gamma'$ parameter that makes
the optimization approximation so good.
The ``tube'' of
$\gamma'$-degeneracy traversing the 16-dimensional parameter space dominates
the uncertainties in the parameters. We thus
assume that the tube of
$\gamma'$-degeneracy has negligible thickness (a degeneracy curve),
and use $P(\boldsymbol{d} | \gamma', \boldsymbol{\eta},
\boldsymbol{M}_{\rm D}=\boldsymbol{M}_{5})$ to break the degeneracy. Specifically, we
use the radio observations, {\it HST}{}\ Near Infrared Camera and
Multi-Object Spectrometer 1 (NICMOS) images (Proposal 7422;
PI:Readhead), and time delay data to
obtain the best-fitting $\hat{\boldsymbol{\eta}}$ for a given
$\gamma'$=$\gamma'_i$ (assuming $\Omega_{\rm m}=0.3$, $\Omega_{\rm \Lambda}=0.7$ and $w=-1$ in using
the time delay data), and compute the corresponding $P(\boldsymbol{d} |
\gamma'_i, \hat{\boldsymbol{\eta}}, \boldsymbol{M}_{\rm D}=\boldsymbol{M}_{5})$. These are the
listed evidence values in the second column of Table
\ref{tab:slopeEvid} for the various $\gamma'_i$ values. The time delay
data are included because the predicted relative Fermat potentials
among the image pairs using the radio and NICMOS data are otherwise
inconsistent with one another. The optimized parameters from only the
radio and NICMOS data lead to $\chi^2\sim600$ for just the time delay
data; including the time delay data reduces the time delay $\chi^2$ to
$\sim 1$ with only a mild increase in the radio and NICMOS $\chi^2$ of
$\sim 6$. We
``undo'' the inclusion of the time delay data (so that we do not use
the time delay data twice in the importance sampling of Equation
(\ref{eq:CosmoparsPosteriorFullSimp2:sec})) by subtracting the log
likelihood of the time delay from the log likelihood of $\boldsymbol{d}$;
the effect is negligible since the latter is $\sim10^4$ higher in
magnitude.
Our thin degeneracy tube assumption implies that
$P(\boldsymbol{d} | \gamma') \simeq P(\boldsymbol{d} | \gamma', \hat{\boldsymbol{\eta}})$, such that
the posterior PDF for the slope is
$P(\gamma' | \boldsymbol{d}) \propto P(\boldsymbol{d} | \gamma')\ P_{\rm no\,ACS}(\gamma')$.
Assigning a uniform prior (i.e., $P_{\rm no\,ACS}(\gamma')$ is constant),
we arrive at the result that our desired PDF is
just the exponentiation of the log evidence in column 2 of
Table~\ref{tab:slopeEvid}.
Fitting these log evidences with the following quadratic function,
\begin{equation}
\label{eq:logPfit}
\log P(\gamma'|\boldsymbol{d}) = C - \frac{(\gamma' - \gamma'_0)^2}{2\sigma_{\gamma'}^2},
\end{equation}
we obtain the following best-fit parameter
values: $\gamma'_0=2.081\pm0.027$,
$\sigma_{\gamma'}=0.0091\pm0.0008$, and $C=(1.60\pm0.01)\times10^4$.
While the PDF width
$\sigma_{\gamma'}$ is very small, the centroid is not well
determined. Adding $\sigma_{\gamma'}$ and the uncertainty in
$\gamma'_0$ in quadrature, we finally approximate $P(\gamma' | \boldsymbol{d})$
with a Gaussian centered on $2.08$ with
standard deviation $0.03$. This provides the prior on $\gamma'$ from
the ACS data (in Equation (\ref{eq:CosmoparsPosteriorFullSimp2:sec})).
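The fit itself is elementary; a minimal sketch (ours), with the slope grid
and log evidence values transcribed from the first two columns of
Table~\ref{tab:slopeEvid}:
\begin{verbatim}
import numpy as np

gam  = np.arange(1.5, 2.51, 0.1)
logP = 1.0e4 * np.array([1.38, 1.48, 1.52, 1.54, 1.58, 1.60,
                         1.60, 1.58, 1.57, 1.55, 1.49])

# logP = a*g**2 + b*g + c0, the quadratic form of the fit above
(a, b, c0), cov = np.polyfit(gam, logP, 2, cov=True)
gamma_0 = -b / (2.0 * a)              # centroid gamma'_0
sigma_g = np.sqrt(-1.0 / (2.0 * a))   # width sigma_gamma' (a < 0)
\end{verbatim}
The uncertainty in $\gamma'_0$ entering the quadrature sum above follows
from propagating the fit covariance \texttt{cov}.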
The deep ACS data therefore allow a significant improvement to the
previous measurement in \citet{KoopmansEtal03} of $\gamma' =
1.99\pm0.20$, which was based on the radio data and the NICMOS ring.
Coincidentally, our $\gamma'=2.08\pm0.03$ is identical, apart from the
spread, to the measurement from SLACS of
$\gamma'=2.08\pm0.2$ that was based on a sample of massive elliptical
lenses \citep{KoopmansEtal09}.
The spread of 0.2 in the SLACS measurement is the intrinsic scatter of
slope values in the sample, and is larger than the typical
uncertainty of $\sim 0.15$ associated with individual systems in the
sample.
We note that our measurement
is not the first percent-level determination of a
strong lens density profile slope. \citet{WucknitzEtal04} used high
precision astrometric measurements from VLBI data to constrain the
$\gamma'$ parameter in B0218$+$357 to be $1.96 \pm 0.02$ (where we have
transformed their $\beta$ into our notation). However, they did not use
exactly the same model as we do here (instead working with
combinations of isothermal elliptical potentials and neglecting
external convergence). \citet{DyeWarren05} measured the power-law
slope of the lens galaxy in the Einstein ring system 0047-2808 to be
$\gamma' = 2.11\pm0.04$ based on the extended image constraints. More
recently, \citet{DyeEtal08} determined the power-law slope of the
extremely massive and luminous lens galaxy in the Cosmic Horseshoe
Einstein ring system J1148+1930 to be $\gamma'=1.96\pm0.02$.
\subsubsection{Predicted relative Fermat potentials}
In order to be able to calculate the time delay likelihood function,
$P(\boldsymbol{\Delta t} | z_{\rm d},z_{\rm s}, \boldsymbol{\pi}, \gamma',\kappa_{\rm ext},\boldsymbol{M}_{\rm D})$, at any value
of the slope~$\gamma'$, we need to interpolate the Fermat potential differences
given in Table~\ref{tab:slopeEvid}. In fact, these data give us the
function $q(\gamma')$ to insert into
Equation~(\ref{eq:fprelation:sec}): we can do the interpolation at $\kappa_{\rm ext} = 0.0$
and then rescale by $(1-\kappa_{\rm ext})$ without loss of generality.
For each of the image pairs, we fit the relative Fermat potential difference
as a third-order polynomial function of $\gamma'$ using the values we have at
the discrete points $\gamma'_i$ for the SPLE models in the table. Recall that
the SPLE model provides an unbiased estimate of the relative Fermat potential,
and that the various top data models $\boldsymbol{M}_{\rm D}$ gave consistent estimates.
Thus, the polynomial fit gives the function $q(\gamma',\boldsymbol{M}_{\rm D})$ in
Equation (\ref{eq:fprelation:sec}). The third-order polynomial fit leads to
residuals ($=(\Delta\phi_i-\Delta\phi^{\rm poly})/(\Delta\phi_i)$) of $<1\%$ for all
image pairs at all slope points in Table \ref{tab:slopeEvid} except for
$\gamma'_i=1.7$, which has residuals of $\sim2\%$.
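A sketch of this interpolation for the AB image pair (ours), with the
$\Delta\phi^{\rm AB}$ values transcribed from the initial-potential columns
of Table~\ref{tab:slopeEvid}:
\begin{verbatim}
import numpy as np

gam     = np.arange(1.5, 2.51, 0.1)
dphi_AB = np.array([0.125, 0.147, 0.174, 0.190, 0.210, 0.229,
                    0.247, 0.264, 0.281, 0.297, 0.312])  # arcsec^2

coef = np.polyfit(gam, dphi_AB, 3)    # third-order polynomial fit

def q_AB(gamma_p, kappa_ext=0.0):
    """Fermat potential difference for the AB pair, rescaled by
    the external convergence factor (1 - kappa_ext)."""
    return (1.0 - kappa_ext) * np.polyval(coef, gamma_p)
\end{verbatim}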
\section{Breaking the Mass-Sheet Degeneracy: Stellar Dynamics}
\label{sec:StellDyn}
In this section, we present the observations and data reduction for measuring
the velocity dispersion~$\sigma$ of G1 in B1608$+$656{}. This measurement appears
as the likelihood function given in Equation~(\ref{eq:vdispLikelihood:sec})
above.
\subsection{Observations}
\label{sec:StellDyn:Obs}
We have obtained a high signal-to-noise spectrum of B1608$+$656{}\ using
the Low-Resolution Imaging Spectrometer
(LRIS; \citeauthor{OkeEtal95}~\citeyear{OkeEtal95})
on Keck 1. The data were obtained from the red side of the
spectrograph on 12 June 2007 using the 831/8200 grating with the D680 dichroic
in place. A slit mask was employed to obtain simultaneously spectra for two
additional strong lenses in the field \citep{FassnachtEtal06p2} and to
continue to probe the structure along the line of sight to the lens
\citep{FassnachtEtal06}. The night was clear with a nominal seeing of
0\farcs9, and 10 exposures of 1800s and one exposure of 600s were obtained for
a total exposure time of 18600s.
Each exposure was reduced individually using a custom pipeline \citep[see][for
details]{AugerEtal08} that performs a single resampling of the spectra onto a
constant wavelength grid; the same wavelength grid was used for all exposures
to avoid resampling the spectra when combining them, and an output pixel scale
of 0.915 \AA\ was used to match the dispersion of the 831/8200 grating.
Individual spectra were extracted from an aperture 0\farcs84 wide
(corresponding to 4 pixels on the LRIS red side) centered on the peak of the
flux of the lensing galaxy G1. The size of the aperture was chosen to avoid
contamination from the spectrum of G2 while maximizing the total flux for an
improved signal-to-noise ratio. The extracted spectra were combined by
clipping the extreme points at each wavelength and taking the
variance-weighted sum of the remaining data points. The same extraction and
coaddition scheme was performed for a sky aperture to determine the resolution
of the output co-added spectrum; we find the resolution to be ${\rm R} =
2560$, corresponding to $\sigma_{\rm obs} = 49.7\, {\rm km\, s^{-1}}$. The
signal-to-noise ratio per pixel of the final spectrum is $\sim 60$.
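A minimal sketch (ours) of one plausible reading of this clipped,
variance-weighted coaddition; the exact clipping scheme in the
\citet{AugerEtal08} pipeline may differ:
\begin{verbatim}
import numpy as np

def coadd(flux, var):
    """flux, var: (n_exposures, n_pixels) arrays on the common
    wavelength grid. Drop the extreme exposures at each
    wavelength, then take the variance-weighted sum."""
    cols = np.arange(flux.shape[1])
    keep = np.ones(flux.shape, dtype=bool)
    keep[np.argmin(flux, axis=0), cols] = False
    keep[np.argmax(flux, axis=0), cols] = False
    w = keep / var                      # inverse-variance weights
    stack   = np.sum(w * flux, axis=0) / np.sum(w, axis=0)
    var_out = 1.0 / np.sum(w, axis=0)
    return stack, var_out
\end{verbatim}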
\subsection{Velocity dispersion measurement}
\label{sec:StellDyn:Model}
We use a Python-based implementation of the velocity-dispersion code from
\citet{vanderMarel94}, with one important modification. Our implementation
allows for a linear sum of template spectra to be modeled using a bounded
variable least squares solver with the constraint that each template must have
a non-negative coefficient. We use a set of templates from the INDO-US stellar
library containing spectra for a set of seven K and G giants with a variety of
temperatures and spectra for an F2 and an A0 giant. These templates of
early-type stars are particularly important for B1608$+$656{}, which has a
post-starburst spectrum \citep{MyersEtal95}.
We perform our modeling over a wide range of wavelength intervals and find a
stable solution over a variety of spectral features; we therefore choose to use
the rest-frame range from 4200~\AA\ to 4900~\AA\ for our fit. The INDO-US
templates have a constant-wavelength resolution of 1.2~\AA\ which
corresponds to $\sigma_{\rm template} = 33.6\rm{\, km\, s^{-1}}$ over this wavelength range.
We iterate over a range of template combinations and polynomial continuum
orders and find a variety of solutions that vary around
$260\, {\rm km\, s^{-1}}$ with a spread of about $13\, {\rm km\, s^{-1}}$ and
statistical uncertainties of $7.7\, {\rm km\, s^{-1}}$ (see Figure
\ref{fig:LRISvelocitydispersion}). We therefore adopt a
velocity dispersion of $\sigma = 260 \pm 15 \rm{\, km\, s^{-1}}$, with the error
incorporating the systematic template mismatch and the statistical error for
the models. This agrees with the previous measurement of $\sigma_{\rm
ap}=247\pm35 \rm{\, km\, s^{-1}}$ by \citet{KoopmansEtal03} with a significant
reduction in the uncertainties, though we note that the two velocity
dispersions have been measured in slightly different apertures.
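Schematically, the fit at a trial dispersion reduces to a bounded-variable
linear least squares problem. The following sketch is our own stand-in for
the \citet{vanderMarel94} machinery, assuming all spectra have been
resampled onto a common grid of pixel size \texttt{dloglam} in
$\ln\lambda$, so that velocity broadening is a convolution:
\begin{verbatim}
import numpy as np
from scipy.ndimage import gaussian_filter1d
from scipy.optimize import lsq_linear

C_KMS = 2.998e5

def chi2_of_sigma(sigma, galaxy, noise, templates, dloglam,
                  npoly=9, sig_templ=33.6):
    """Non-negative sum of broadened templates plus a free-sign
    polynomial continuum; returns chi^2 at trial dispersion sigma."""
    ker = np.sqrt(max(sigma**2 - sig_templ**2, 1e-6))
    T = np.array([gaussian_filter1d(t, ker / (C_KMS * dloglam))
                  for t in templates]).T
    P = np.vander(np.linspace(-1.0, 1.0, galaxy.size), npoly + 1)
    A = np.hstack([T, P]) / noise[:, None]
    b = galaxy / noise
    lo = np.r_[np.zeros(T.shape[1]), -np.inf * np.ones(npoly + 1)]
    fit = lsq_linear(A, b, bounds=(lo, np.inf))
    return np.sum((A @ fit.x - b)**2)
\end{verbatim}
Minimizing \texttt{chi2\_of\_sigma} over a grid of trial dispersions then
yields the best-fitting $\sigma$ for a given template set and continuum
order.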
\begin{figure}
\centering\includegraphics[width=75mm,clip]{fig2.eps}
\caption{\label{fig:LRISvelocitydispersion} The LRIS spectrum of B1608$+$656{}\
(black line) with a model generated from all 9 INDO-US templates and
a 9th order continuum overplotted (red line).
The gray shaded areas were not included in the fit, and the lower panel shows
the fit residuals. The spectrum and our modeling suggest a central velocity
dispersion of $\sigma = 260 \pm 15\, {\rm km\, s^{-1}}$, including systematic
errors. }
\end{figure}
\section{Breaking the Mass-Sheet Degeneracy: Lens Environment}
\label{sec:LensEnv}
In this section, we outline two approaches for quantifying
the prior probability distributions of the external mass sheet~$\kappa_{\rm ext}$.
Computing this quantity such that Equation (\ref{eq:MassSheet:H0bias}) holds true
is not a trivial matter.
The non-linearity of strong lensing means that the
surface mass density at a given angular position in successive redshift planes
between the observer and the source cannot simply be scaled by the appropriate
distance ratios and summed: rather, the deflection angles (which can be large)
need to be taken into account when calculating the distortion matrices (which
contain and define the external convergence and shear),
leading us towards a ray-tracing approach
\citep{HilbertEtal09}. Detailed investigation of the ray paths down the
B1608$+$656{}\ light cone is beyond the scope of this paper, and we defer it to a
later work (Blandford et al.~in preparation). In this section we
use the statistics of B1608$+$656{}-like fields in numerical simulations to derive a
PDF for~$\kappa_{\rm ext}$.
\subsection{Ray-tracing through the Millennium Simulation}
\label{sec:LensEnv:MS}
Following
\citet{HilbertEtal07}, we use the multiple-lens-plane algorithm to
trace rays through the Millennium Simulation
\citep[MS;][]{SpringelEtal05}, one of the largest N-body simulations
of cosmic structure formation.\footnote{ The details of the
ray-tracing algorithm are described in \citet{HilbertEtal09}. The
methods for sampling lines of sight, identifying strong lensing
events, and calculating the convergence are described in
\citet{HilbertEtal07}. Note that we also include a stellar component
in the ray-tracing as described in \citet{HilbertEtal08}.
}
We then identify lines of sight where strong lensing by matter
structures at $z_{\rm d}=0.63$ occurs for sources at $z_{\rm s}=1.39$. The
convergence along these lines of sight is estimated by summing the
projected matter density on the lens planes along the ray trajectory,
weighted appropriately for a source at $z_{\rm s}=1.39$.
By excluding the primary lens plane at
$z_{\rm d}=0.63$ that causes the strong lensing, the constructed convergence
is truly external to the lens and is due to the line-of-sight
contributions only. By sampling many lines of sight, we obtain an
estimate for the probability density function of $\kappa_{\rm ext}$ from
simulations. We denote this as the ``MS'' prior on $\kappa_{\rm ext}$.
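Schematically, the resulting samples define the prior via a density
estimate, as in this sketch (ours; \texttt{kappa\_los} is a hypothetical
array of the per-sightline external convergences from the ray tracing):
\begin{verbatim}
import numpy as np
from scipy.stats import gaussian_kde

P_MS = gaussian_kde(kappa_los)        # smooth estimate of P(kappa_ext)
kext_draws = P_MS.resample(96000)[0]  # prior samples for Section 7
\end{verbatim}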
\begin{figure}
\includegraphics[width=75mm]{fig3.eps}
\caption{
\label{fig:HilbertKext}
Probability distribution for the external convergence $\kappa_\mathrm{ext}$
along strongly lensed lines of sight from the Millennium Simulation for the
lens redshift $z_\mathrm{L}$ and source redshift $z_\mathrm{S}$ of B1608$+$656{}
(solid line) compared to the convergence distribution for all lines of sight
(dotted line).
}
\end{figure}
Figure \ref{fig:HilbertKext} shows the predicted
amount of external convergence constructed using
$6.4 \times 10^8$ lines of sight (with and without strong
lenses) to sources at $z_{\rm s}=1.39$: of these, $8.0\times 10^3$ lines
of sight contain strong lenses.
For both curves, the mean $\kappa_{\rm ext}$
is consistent with zero with a spread of $\sim 0.04$.
How should we interpret this distribution? According to its
definition, $\kappa_{\rm ext}$ {\it could} have
contributions from galaxies on the primary lens plane that do not affect the
dynamics. Neglecting these contributions (effectively assuming that the lens
is an isolated galaxy) might lead to an underestimate of $\kappa_{\rm ext}$, since most
lenses are massive galaxies that often live in over-dense environments like
galaxy groups and
clusters.\footnote{
\label{ftnote:MS_no_splitting}
It is beyond the scope of this paper to quantify this contribution from our
ray-tracing simulations. This would require modeling the lenses and their
environment in a way that allows one to split the mass distribution into a
part that is accounted for by the lens model (and constrained by lensing and
dynamics data) and a part that acts as external convergence.
}
However, if the local contribution to the external convergence is accounted
for in the lensing plus dynamics modeling \citep[as discussed
in][]{FassnachtEtal06}, then the MS PDF will give an accurate uncertainty in
the inferred Hubble constant after marginalization.
Indeed, what the MS PDF also verifies is that {\it on average} the contribution
to the external convergence at a strong lens from line-of-sight structures is
almost the same as that for a random line of sight, namely zero.
The
MS prior therefore suggests that ensembles of {\it isolated} strong lenses
will yield estimates of cosmological parameters that are not strongly biased
by line-of-sight structures.
The PDF in Figure~\ref{fig:HilbertKext} gives us an idea of by how much
individual lenses' line-of-sight $\kappa_{\rm ext}$ values vary, and hence an estimate
of the uncertainty on $H_0$ due to this structure. In the absence of any other
information, we can assign the Millennium Simulation PDF as a prior
on~$\kappa_{\rm ext}$ in order to limit the possible values of external convergence to
those likely to occur. This assignment
has the effect of introducing an additional
uncertainty of~$\sim 0.04$ in~$\kappa_{\rm ext}$, with no systematic shift in~$\kappa_{\rm ext}$.
\subsection{Combining galaxy density observations with ray-tracing
simulations}
\label{sec:LensEnv:OBS}
The prior discussed in the preceding section does not take into account any
information about the environment of B1608$+$656{}. Here, we combine knowledge of
the lens environment with ray-tracing to obtain a more informative prior on
the external convergence.
\citet[][]{FassnachtEtal09} compared galaxy number counts in fields
around strong galaxy lenses, including B1608$+$656{}, with number counts in
random fields and in the COSMOS field. Among other measures, they used
the number of galaxies with apparent magnitude $18.5 \le
m_\mathrm{F814W} < 24.5$ in the F814W filter band in apertures of
$45\,\arcsec$ radius
(300~kpc at the redshift of B1608$+$656{})
to quantify the galaxy number density
$n_\mathrm{gal}$ projected along lines of sight. They found that the
distribution of $n_\mathrm{gal}$ for lines of sight containing strong
lenses is not very different from that for random lines of
sight. However, B1608$+$656{}\ lies along a line of sight with a galaxy
density $n_\mathrm{gal}$ that is about twice the mean over random
lines of sight, $\langle n_\mathrm{gal} \rangle$.
A positive $\kappa_{\rm ext}$ bias can arise through Poissonian fluctuations that are
present in the number of groups along the line of sight in the {\it observed}
sample of strong lenses.
\begin{figure}
\includegraphics[width=75mm]{fig4.eps}
\caption{\label{fig:HilbertKext2}
Probability distribution for the external convergence $\kappa_\mathrm{ext}$
obtained from combining results of galaxy number counts around B1608$+$656{}\ with
results from ray-tracing through the Millennium Simulation. Compared are the
distribution along lines of sight with a relative galaxy number density
$n_\mathrm{gal}/\langle n_\mathrm{gal}\rangle = 2.00\pm 0.05$ (solid line) to
the distribution along all lines of sight (dotted line).
}
\end{figure}
We can use this measurement of galaxy number density in the B1608$+$656{}\ field to
generate a more informative prior PDF for~$\kappa_{\rm ext}$.
As for the MS prior in the previous section,
we use the ray-tracing through the MS together with the semi-analytic
galaxy model of \citet{DeLuciaBlaizot2007} to quantify the expected
external convergence $\kappa_{\rm ext}$ for lines of sight with a
given {\it relative overdensity} $n_\mathrm{gal}/\langle n_\mathrm{gal}
\rangle$.
Dividing out the absolute number of galaxies in the field
accounts for differences due to the particular set of
cosmological parameters used by the Millennium
Simulation and inaccuracies in the galaxy model: we assume that differences in
the relative
overdensity between the MS cosmology and the true one are small.
We generate 32 simulated fields of
$4\times4\,\mathrm{deg}^2$ on the sky containing the positions and
apparent magnitudes\footnote{
The model galaxy catalogs do not provide
F814W magnitudes. We simply approximate $m_\mathrm{F814W}$ by
combining SDSS $i$-band and $z$-band magnitudes to get
$m_\mathrm{F814W} = x_i m_i + (1-x_i) m_z$ with $x_i = 0.5$.
We have checked that our results do not depend strongly on $x_i\in[0,1]$.
} of the model
galaxies at redshifts $0<z<5.2$ together with maps of the convergence
$\kappa$ to source redshift $z_{\rm s}=1.39$. The galaxy positions
and magnitudes in the simulated fields are converted into maps of the
galaxy density $n_\mathrm{gal}$. We then select all lines of sight
with relative overdensity $1.95 \le n_\mathrm{gal}/\langle
n_\mathrm{gal} \rangle < 2.05$ and compute the distribution of the
convergence along these lines of sight. The resulting convergence
distribution (shown in Figure~\ref{fig:HilbertKext2}) is then used as
prior distribution for the external convergence $\kappa_\mathrm{ext}$,
which we denote as the ``OBS'' (observations and MS) prior.
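A sketch of the selection (ours; \texttt{n\_gal} and \texttt{kappa} are
hypothetical maps of the galaxy number density and of the convergence to
$z_{\rm s}=1.39$ on the same pixel grid):
\begin{verbatim}
import numpy as np

rel = n_gal / n_gal.mean()            # relative overdensity
sel = (rel >= 1.95) & (rel < 2.05)    # B1608+656-like sightlines
kappa_obs = kappa[sel]                # samples defining the OBS prior
\end{verbatim}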
The convergence computed in this way is not, strictly speaking,
external convergence, since (i) we do not subtract the contribution of any
primary strong lens, and (ii) we take all lines of sight and not just those to
strong lenses. We are instead building on one of the results
of the previous section and assume that the distribution of external convergences is
very similar to the distribution of convergences along random lines of sight.
This approach becomes inappropriate when a ray passes close to a galaxy
center and is hence associated with a very large convergence: treating such a line of sight
as foreground/background for a strong lens galaxy essentially creates a lens system
with two or more strong deflectors.
These sightlines correspond to compound lenses such as {SDSS\ J0946+1006}
\citep{Gav++08}, but not to B1608$+$656{}.
However, the tail of high convergence values does not pose a problem here:
as we will see in Section~\ref{sec:H0:nuisance}
below, the high external convergence is rejected by the
dynamics modeling. We expect the mean and width
of the PDF in Figure~\ref{fig:HilbertKext2} to represent well the possible
values of $\kappa_{\rm ext}$ for a field that is over-dense in galaxy number by a factor
of two.
Our OBS $\kappa_{\rm ext}$ distribution agrees with earlier estimates from
\citet{FassnachtEtal06}, who identified and modeled the 4 groups along
the line of sight to B1608$+$656{}\ using various mass assignment recipes.
In both approaches, we and \citet{FassnachtEtal06} are concerned
primarily with extracting information on the external convergence and
not the external shear.
If we were to estimate the external convergence by assigning masses
and redshifts to
all objects in the B1608$+$656{}\ field, and then ray tracing through the resulting
model mass distribution, the external shear as required in the strong lens
modeling would serve as an important calibrator for the external convergence.
Such a procedure is beyond the scope of this
paper, and we defer it to a future publication (Blandford et al., in
preparation). However, we do find
(by computing the distribution of external shears
in MS fields with different external convergences) that
the magnitude of the external shear required by the
strong lens modeling ($\gamma_{\rm ext} \simeq 0.075$) is consistent
with the external shear amplitude predicted in the OBS scenario for the
B1608$+$656{}\ field.
\subsection{The influence on lens modeling}
\label{sec:Lensmodel:OBS}
As already remarked, the description of ray propagation in an inhomogeneous
cosmology is quite subtle. The matter (dark plus baryonic) density is
partitioned between virialized structures (galaxies, groups and clusters) and
a depleted background medium. Any structures sufficiently close to the line of
sight will imprint convergence and shear onto a ray congruence. Meanwhile the
background medium will contribute less Ricci focusing than would be present in
a homogeneous, flat universe and will diminish the net convergence.
As the foregoing discussion makes clear, the line of sight to B1608$+$656{}\ is
unusual and we know quite a lot about the photometry and redshifts of the
intervening galaxies. It is therefore possible, in principle, to make a
refined estimate of the external convergence and shear and to compare the
former with the simulations discussed above and the latter with the shear
inferred in the lens model described in Paper~I{}. In this way, the shear, again
in principle, can be used to calibrate $\kappa_{\rm ext}$.
There is a second complication that must be addressed. Matter inhomogeneities
in front of G1 and G2 distort the image of
the primary lens as well as the multiple images of the source. Inhomogeneities
behind the lens contribute further distortion in the images of the source.
In a more accurate approach, these effects should be taken into account explicitly in the construction of the lens model, while
here we are subsuming them in a single correction factor $\kappa_{\rm ext}$. The way that
the resulting corrections affect the inference of a value for $H_0$ turns out to be
quite complex. However, it appears that in
the particular case of B1608$+$656{}, the error that is incurred does not
contribute significantly to our quoted errors.
These matters will be discussed in a forthcoming publication.
\section{Priors for model parameters $\boldsymbol{\xi}$}
\label{sec:Priors}
A key goal of this work is to quantify the impact of the most serious
systematic errors associated with using time-delay lenses for cosmography. Our
approach is to characterize these errors as nuisance parameters, and then
investigate the effects of various choices of prior PDF on the inference of
cosmological parameters. To this end, we use either well motivated priors
based on the results of Section \ref{sec:LensModel}, Section
\ref{sec:LensEnv} and other independent studies, or, for contrast,
uniform (maximally ignorant) prior PDFs.
We now describe our choices for each parameter in turn.
\begin{itemize}
\item $P(\boldsymbol{\pi})$. We consider a set of four cosmological parameters,
$\boldsymbol{\pi} = \{H_0, \Omega_{\rm m}, \Omega_{\rm \Lambda}, w\}$. We then assign the following
four different joint prior PDFs:
\begin{description}
\item{\bf K03:} uniform prior on $H_0$ between $0$ and $150\ \rm{\, km\, s^{-1}\, Mpc^{-1}}$,
$\Omega_{\rm m}=0.3$, $\Omega_{\rm \Lambda}=0.7$, and $w=-1$. This is the cosmology that was
assumed in \citet{KoopmansEtal03} (the most recent $H_0$
measurement from B1608$+$656{}\ before this work), and is the cosmology
that is typically assumed in the literature for measuring $H_0$
from time-delay lenses. This form of prior allows us to compare
our $H_0$ to earlier work.
\item{\bf UNIFORM} priors on all four cosmological parameters, with
either the $w=-1$ or the flatness ($\Omega_{\rm m} = 1 - \Omega_{\rm \Lambda}$) constraint
imposed. These priors allow us to
quantify the information in the B1608$+$656{}\ data set as
conservatively as possible.
\item{\bf WMAP5:} WMAP 5 year data set posterior PDF for $\{H_0,
\Omega_{\rm m}, \Omega_{\rm \Lambda}, w\}$, assuming either $w=-1$ or a flat geometry. This
allows us to constrain either flatness or $w$
by combining B1608$+$656{}\ with WMAP.
\item{\bf WBS:} Joint posterior PDF for
$\{H_0, \Omega_{\rm \Lambda}, w\}$ with a flat geometry, given the WMAP5 data in combination
with compendia of BAO and supernovae (SN) data
sets. This allows us to quantify the
gain in precision made when incorporating B1608$+$656{}\ into the current global
analysis.
\end{description}
The last two priors are defined by the Markov chains provided by the WMAP
team\footnote{\texttt{http://lambda.gsfc.nasa.gov}} based on the analysis
performed by \citet{DunkleyEtal09} and \citet{KomatsuEtal09}.
The BAO data incorporated were taken from \citet{PercivalEtal07}; the
SN sample used is the ``union'' sample of
\citet{KowalskiEtal08}. While the BAO and SN data sets are
continually improving \citep[e.g.][]{HickenEtAl09}, this particular
well-defined snapshot is sufficient for
us to explore the
relative information content of our data set compared with other, well-known
cosmological data sets. We also note that
the publication of Markov chain representations of posterior PDFs
makes further joint analyses like the one we present here very
straightforward indeed.
\item $P(\gamma')$. We
consider three different prior PDFs for the density profile slope.
In the first two priors, we
ignore the B1608$+$656{}\ ACS data (i.e., dropping $P(\boldsymbol{d} | \gamma',
\boldsymbol{\eta}, \boldsymbol{M}_{\rm D}=\boldsymbol{M}_{5})$ in
Equation (\ref{eq:MargSlopePosterior:sec})); these first two are controls,
to allow the assessment of the amount of
information contained in the ACS data.
\begin{description}
\item{\bf Uniform:} a maximally ignorant prior PDF, defined in the range
$1.5 \leq \gamma' \leq 2.5$.
\item{\bf SLACS:} This is a Gaussian prior based on the
result from the SLACS project:
$\gamma'=2.08\pm0.2$
\citep{KoopmansEtal09}.
This was derived from a sample of low-redshift massive elliptical lenses,
studied with combined strong lens and stellar dynamics modeling.
We note that this was obtained without considering the presence of
any external convergence $\kappa_{\rm ext}$. However, \citet{TreuEtal09} find
that the environmental effects in the SLACS lenses are smaller than
their measurement errors and are typically undetected. Since SLACS
lenses do not require an external shear in the modeling, typical
$\kappa_{\rm ext}$ values for these lenses are expected to be small. Only in a
few extreme cases does the $\kappa_{\rm ext}$ reach values of order
$0.05$--$0.10$. Therefore, we take directly the prior on the slope
from SLACS lenses without corrections for $\kappa_{\rm ext}$.
\item{\bf ACS:} This prior is the PDF
$P(\gamma' | \boldsymbol{d}, \boldsymbol{M}_{\rm D}=\boldsymbol{M}_{5})$ obtained from the
analysis of
the ACS image of B1608$+$656{}\ in Section~\ref{sec:LensModel:Result}.
This is the most informative of the three priors on $\gamma'$, as it
is determined directly from the B1608+656 data, independent of
external priors from samples of galaxies (e.g.~SLACS).
\end{description}
\item $P(\boldsymbol{\eta})$. As described in
Section~\ref{sec:LensModel:Result},
we use the radio observations and the NICMOS F160W images
of B1608$+$656{}\ to constrain the smooth lens model parameters
$\boldsymbol{\eta}$ for a given slope
$\gamma'$. The posterior PDF from this analysis forms the prior PDF for the
current work.
\item $P(\kappa_{\rm ext})$. We consider three forms of prior for the external convergence:
\begin{description}
\item{\bf Uniform} between $-0.25$ and $+0.25$: a
maximally ignorant prior, again included to provide contrast.
\item{\bf MS:} from the strong lenses in the MS, discussed in
Section~\ref{sec:LensEnv:MS}.
\item{\bf OBS:} from the galaxy number counts in the field of
B1608$+$656{}\ and the MS, discussed in Section~\ref{sec:LensEnv:OBS}.
\end{description}
\item $P(r_{\rm ani})$. For the lens galaxy stellar orbit radial anisotropy
parameter~$r_{\rm ani}$, we simply assign a uniform prior between $0.5 r_{\rm
eff}$ and $5 r_{\rm eff}$, where $r_{\rm eff}$ is the effective
radius that is determined from the photometry to be $0.58''\pm0.06''$ \citep{KoopmansEtal03} for
the velocity dispersion measurement. The uncertainty in $r_{\rm
eff}$ has negligible impact on the model velocity dispersion.
The inner cutoff of $r_{\rm ani}$ is motivated by
observations \citep[e.g.,][]{KronawitterEtal00} and radial instability
arguments \citep[e.g.,][]{MerrittAguilar85,StiavelliSparke91},
while the outer cutoff is
for computational simplicity (the model velocity dispersion changes by a
negligible amount between $r_{\rm ani} = 5 r_{\rm eff}$ and
$r_{\rm ani}\rightarrow\infty$). These boundaries are consistent with
those in \citet{GebhardtEtal03}.
\end{itemize}
These priors are summarized in Table~\ref{tab:priors}.
\section{Inference of $H_0$ and dark energy parameters from B1608$+$656{}}
\label{sec:H0}
In this section we present the results of the analysis outlined in
Section~\ref{sec:H0ProbTheory}, putting together all the likelihood
functions and prior PDFs described in Sections
\ref{sec:LensModel} to \ref{sec:Priors}. We obtain
$P(\boldsymbol{\pi}|\boldsymbol{\Delta t},\boldsymbol{d},\sigma)$ by importance sampling, using the
two likelihoods in Equation~(\ref{eq:CosmoparsPosteriorFullSimp2:sec}) as the
weights for the
various priors on $\gamma'$, $\kappa_{\rm ext}$, $r_{\rm ani}$, and $\boldsymbol{\pi}$ listed in
Table~\ref{tab:priors} (see Appendix~\ref{app:H0ProbTheory:ImpSamp}
for details). By using the likelihood functions of our B1608$+$656{}\
data sets, we are incorporating the uncertainties
associated with these measurements.
We expect and indeed find that the data are relatively
insensitive to $r_{\rm ani}$ and do not constrain it.
Focusing first on the systematic errors now quantified as the
nuisance
parameters $\gamma'$ and~$\kappa_{\rm ext}$, we gradually increase the complexity of
the cosmological model to probe the full space of parameters.
For each possible combination of the priors on the parameters in
Table~\ref{tab:priors}, we generate 96000 samples of $\gamma'$, $\kappa_{\rm ext}$,
$r_{\rm ani}$, and $\boldsymbol{\pi}$ to characterize the prior probability
distribution. We also have two types of stellar distribution functions,
Hernquist and Jaffe, for modeling the stellar velocity dispersion; we find
that the two different types of stellar distribution function produce nearly
identical PDFs for the cosmological parameters. Since the priors on the
parameters play a greater role than does the choice of stellar dynamics model,
we focus only on the Hernquist stellar distribution function for the remainder
of the section.
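A schematic of the importance-sampling step (ours; the two
\texttt{loglike\_*} functions are placeholders for the time-delay and
dispersion likelihoods described above, and the K03, ACS, and OBS priors
are used for illustration):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
N = 96000

gam  = rng.normal(2.08, 0.03, N)        # ACS prior on gamma'
kext = rng.choice(kappa_obs, size=N)    # OBS prior samples (Fig. 4;
                                        # cf. the sketch of Sec. 6.2)
rani = rng.uniform(0.5, 5.0, N)         # in units of r_eff
H0   = rng.uniform(0.0, 150.0, N)       # K03: uniform H0

logw = (loglike_dt(H0, gam, kext)           # time-delay term
        + loglike_vd(H0, gam, kext, rani))  # dispersion term
w = np.exp(logw - logw.max())           # importance weights

H0_post = np.average(H0, weights=w)     # weighted posterior summaries
\end{verbatim}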
\begin{table*}
\begin{center}
\caption{\label{tab:priors} Priors on the parameters}
\begin{tabular}{|l||c|c|c|}
\hline
$P(\gamma')$ & uniform ($1.5\leq\gamma'\leq2.5$) & SLACS ($\gamma'=2.08\pm0.2$) & ACS ($\gamma'=2.08\pm0.03$) \\
\hline
$P(\kappa_{\rm ext})$ & uniform ($-0.25\leq\kappa_{\rm ext}\leq0.25$) & MS (Millennium
Simulations; Figure \ref{fig:HilbertKext}) & OBS (Observations and MS; Figure
\ref{fig:HilbertKext2}) \\
\hline
$P(r_{\rm ani})$ & \multicolumn{3}{|c|}{uniform ($0.5 r_{\rm eff} \leq r_{\rm ani} \leq 5 r_{\rm eff}$)}\\
\hline
$P(\boldsymbol{\pi})$ & K03 ($\Omega_{\rm m}=0.3$, $\Omega_{\rm \Lambda}=0.7$, $w=-1$, & UNIFORMopen ($w=-1$, & UNIFORMw ($\Omega_{\rm \Lambda}=1-\Omega_{\rm m}$ uniform $\in [0,1]$, \\
& uniform $H_0 \in [0,150]\,\rm{\, km\, s^{-1}\, Mpc^{-1}}$) & $\Omega_{\rm m}$ and $\Omega_{\rm \Lambda}$ uniform $\in [0,1]$, & uniform $w \in [-2.5,0.5]$, \\
& & uniform $H_0 \in [0,150]\,\rm{\, km\, s^{-1}\, Mpc^{-1}}$) & uniform $H_0 \in [0,150]\,\rm{\, km\, s^{-1}\, Mpc^{-1}}$) \\
\cline{2-4}
& WMAPopen & WMAPw (WMAP5 with & WBSw (WMAP5 + BAO + SN with \\
& (WMAP5 with $w=-1$) & flatness and time-independent $w$) & flatness and time-independent $w$) \\
\hline
\end{tabular}
\end{center}
Notes --- The K03 entry for $P(\boldsymbol{\pi})$ is the same prior as
in \citet{KoopmansEtal03}. This is also the most common cosmology prior
assumed in previous studies of time-delay lenses.
\end{table*}
\subsection{Exploring the degeneracies among $H_0$, $\gamma'$ and~$\kappa_{\rm ext}$}
\label{sec:H0:nuisance}
To investigate the impact of our limited knowledge of the lens density profile
slope~$\gamma'$ and external convergence~$\kappa_{\rm ext}$, we first fix the cosmological
parameters $\Omega_{\rm m}$, $\Omega_{\rm \Lambda}$ and $w$ according to the K03 prior. This allows us a
simplified view of the problem, and also a comparison with previous work that
used this rather restrictive prior.
\begin{figure*}
\begin{minipage}{0.48\linewidth}
\centering\includegraphics[width=\linewidth]{fig5a.ps}
\end{minipage}\hfill
\begin{minipage}{0.48\linewidth}
\centering\includegraphics[width=\linewidth]{fig5b.ps}
\end{minipage}
\caption{\label{fig:H0-nuisance} Left:
the marginalized posterior PDF for $H_0$
assuming K03 cosmology and OBS $\kappa_{\rm ext}$ priors.
Right: the marginalized posterior PDF for $H_0$
assuming K03 cosmology and ACS $\gamma'$ priors. The prior
uncertainty in external convergence determines the precision of
the inferred Hubble constant.}
\end{figure*}
We first assign the OBS prior for $\kappa_{\rm ext}$,
and look at the effect of the various
choices of density profile slope priors. The left-hand panel in
Figure~\ref{fig:H0-nuisance} shows the
marginalized posterior PDF for $H_0$ for the three different priors for
$\gamma'$ given in Table~\ref{tab:priors}. From this graph, we see
that the SLACS prior gives an estimate of $H_0$ similar to that from the uniform
prior, with a negligible increase in precision.
The ACS prior lowers $H_0$ relative to that of the SLACS
and uniform priors, and improves the precision in $H_0$ to $4.4\%$.
Overall, the impact of the prior on $\gamma'$ is
relatively low in the sense that,
even with a uniform prior on~$\gamma'$,
$H_0$ is still constrained to $7\%$ (taking
$H_0=70.6$ as our reference value).
For the remainder of this paper, we assign the ACS prior.
As expected, the prior for $\kappa_{\rm ext}$ has a greater effect, shown in
the right-hand panel of
Figure~\ref{fig:H0-nuisance}. Taking the maximally informative OBS prior as our
default, we see that relaxing this to the MS prior causes an increase in
inferred $H_0$ value of some $6\rm{\, km\, s^{-1}\, Mpc^{-1}}$, and relaxing further to a uniform
prior increases it by $12\rm{\, km\, s^{-1}\, Mpc^{-1}}$. The precision in $H_0$ also degrades
by more than a factor of two in going from the OBS prior to the uniform prior.
Our knowledge of $\kappa_{\rm ext}$ is therefore limiting the inference of $H_0$.
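The sense of these shifts follows, to first approximation, from the mass-sheet
degeneracy: a uniform external convergence rescales the predicted time delays
by $(1-\kappa_{\rm ext})$, so that
\begin{equation}
H_0 \simeq \left(1-\kappa_{\rm ext}\right) H_0^{\rm model} ,
\end{equation}
where $H_0^{\rm model}$ denotes the value that would be inferred with
$\kappa_{\rm ext}=0$. A prior favoring larger $\kappa_{\rm ext}$, such as the
positively skewed OBS prior, therefore pulls the inferred $H_0$ down relative
to the MS and uniform priors.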
We note that the stellar dynamics contain a significant amount
of information on~$H_0$. The stellar dynamics effectively constrain $\kappa_{\rm ext}$
and $\gamma'$ to an approximately linear relation, where an increase in $\kappa_{\rm ext}$
requires a steepening of the slope in order to keep the predicted velocity
dispersion the same. Therefore, for a fixed range of $\gamma'$ values,
the modeling of the stellar dynamics would only permit a corresponding
range of $\kappa_{\rm ext}$
values. Specifically, without dynamics as constraints, we find
$H_0=68.1^{+3.7}_{-6.4}\rm{\, km\, s^{-1}\, Mpc^{-1}}$ for the ACS and
OBS priors.
The lower bound on $H_0$ is somewhat weakened
by the high tail of the OBS $\kappa_{\rm ext}$
distribution. On the
other hand, this high tail is rejected by the use of
the dynamics data. Therefore, our tight constraint on $H_0$ results
from the \textit{combination} of
all available data sets -- each data set constrains different parts
of the parameter space such that the joint distribution is tighter
than the individual ones.
To summarize, using all available information on B1608$+$656{}\ and the ACS
and OBS priors gives $H_0 = 70.6\pm{3.1} \rm{\, km\, s^{-1}\, Mpc^{-1}}$, a precision
of $4.4\%$. We interpret
Figure~\ref{fig:H0-nuisance} as evidence that we are approaching saturation in the information we have on the lens model for B1608$+$656{}:
the mass model is now so well constrained that the inference of cosmological
parameters from this system is limited by our knowledge of the lens
environment. We now explore this joint inference in more detail,
first putting it in some historical context.
\subsection{Comparison with other lensing $H_0$ results}
\label{sec:H0:litrev}
What improvement in the measurement of $H_0$ do we gain from our new
observations of B1608$+$656{}? The most recent
measurement before this work by \citet{KoopmansEtal03} was $H_0=75^{+7}_{-6}
\rm{\, km\, s^{-1}\, Mpc^{-1}}$. This result was based on a joint lensing and dynamics
modeling using the radio data, shape of the Einstein ring from the
NICMOS images and the earlier less precise velocity dispersion
measurement. Our improved analysis using the
deep ACS images and the newly measured velocity dispersion reduces the
uncertainty by more than a factor of two, even with the inclusion of the
systematic error due to the external convergence that was previously
neglected. We attribute our lower $H_0$ value to our incorporation of
the realistically skewed OBS $\kappa_{\rm ext}$ prior.
Let us now compare our $H_0$ measurement based on the K03 cosmology to several
recent measurements (within the past five years) from other time-delay
lenses. Most analyses assumed $\Omega_{\rm m}=0.3$ and $\Omega_{\rm \Lambda}=0.7$ --- we point out
explicitly the few that did not.
In B0218+357, \citet{WucknitzEtal04} measured $H_0=78 \pm
6\,\rm{\, km\, s^{-1}\, Mpc^{-1}}$ ($2 \sigma$) by modeling this two-image lens system with
isothermal elliptical potentials (and effectively measuring $\gamma'$, see
Section~\ref{sec:LensModel:Result:slope}) but neglecting external convergence.
\citet{YorkEtal05} refined this using the centroid position of the spiral
lens galaxy based on {\it HST}{}/ACS observations as a constraint;
depending on the spiral arm masking, they found $H_0=70 \pm 5\,\rm{\, km\, s^{-1}\, Mpc^{-1}}$
(unmasked) and $H_0=61\pm7\,\rm{\, km\, s^{-1}\, Mpc^{-1}}$ (masked) (both with $2 \sigma$ errors).
In the two-image {FBQ\,0951+2635},
\citet{JakobssonEtal05} obtained $H_0 = 60^{+9}_{-7}$ (random,
$1\sigma$) $\pm 2$ (systematic) $\rm{\, km\, s^{-1}\, Mpc^{-1}}$ for a singular isothermal
ellipsoid model and $H_0 = 63^{+9}_{-7}$ (random, $1 \sigma$) $\pm 1$
(systematic) $\rm{\, km\, s^{-1}\, Mpc^{-1}}$ for a constant mass-to-light ratio model,
again ignoring external convergence.
In the two-image quasar system SDSS\,J1650+4251, \citet{VuissozEtal07} found
$H_0=51.7^{+4.0}_{-3.0}\rm{\, km\, s^{-1}\, Mpc^{-1}}$ assuming a singular isothermal sphere and
constant external shear for the lens model. More general lens models
considered by these authors (e.g.\
including lens ellipticity, or using a de Vaucouleurs density profile)
were found to be underconstrained.
In the two-image quasar system SDSS\,J1206+4332,
\citet{ParaficzEtal09} found $H_0=73^{+3}_{-4}\rm{\, km\, s^{-1}\, Mpc^{-1}}$ using singular
isothermal ellipsoids or spheres to describe the three lens galaxies,
where photometry was used to place additional constraints on the lens
parameters.
Recently, \citet{FadelyEtal09} modeled the gravitational lens
Q0957+561 using four different dark matter density profiles, each with
a stellar component. The lens is embedded in a cluster, and the
authors constrained the corresponding
mass sheet using the results of a weak lensing analysis by
\citet{NakajimaEtal09}. Assuming a flat universe with $\Omega_{\rm m}=0.274$ and
cosmological constant $\Omega_{\rm \Lambda}=0.726$, they found
$H_0=85^{+14}_{-13}\rm{\, km\, s^{-1}\, Mpc^{-1}}$, where the principal uncertainties were
due to the weakly constrained stellar mass-to-light ratio (a
manifestation of the radial profile degeneracy in the lens model).
Imposing constraints from stellar population synthesis models led to
$H_0=79.3^{+6.7}_{-8.5}\rm{\, km\, s^{-1}\, Mpc^{-1}}$.\footnote{\label{ftnote:Hdiffcosmo}
The corresponding $H_0$ for the K03 cosmology is within $\sim0.1\%$ of
the listed values.}
In a nutshell, most of the recent
$H_0$ measurements from individual systems assumed isothermal
profiles, and neglected the effects of both $\gamma'$ and $\kappa_{\rm ext}$:
we interpret the significant variation
between the $H_0$ estimates in the recent literature as being due to these
model limitations.
In contrast, our B1608$+$656{}\ analysis explicitly
incorporates the uncertainties due to our lack of knowledge of
both $\gamma'$ and $\kappa_{\rm ext}$.
In fact, a spread of $\sim0.2$ in $\gamma'$ around
$2.0$ would give a spread of $\sim 40\%$ in $H_0$ for the cases where
isothermal lenses are assumed \citep{Wucknitz02}.
These model limitations are in turn set by a lack of information on the
systems: either only two images are formed, or the extended
source galaxy is not observed.
Other groups have looked to improve the constraints on $H_0$ by combining
several lenses together in a joint analysis.
Using a sample of 10 time-delay lenses, \citet{SahaEtal06} measured
$H_0=72^{+8}_{-11}\rm{\, km\, s^{-1}\, Mpc^{-1}}$ by modeling the lenses' convergence distributions
on a grid and using
the point image positions of the lenses as constraints
(the PixeLens method). \citet{Coles08} improved on the method and
obtained $H_0=71^{+6}_{-8}\rm{\, km\, s^{-1}\, Mpc^{-1}}$ while addressing more clearly their
prior assumptions.
\citet{Oguri07} used a sample of 16 time-delay
lenses to constrain $H_0=68\pm6{\rm(stat.)}\pm 8 {\rm(syst.)}
\rm{\, km\, s^{-1}\, Mpc^{-1}}$ (for $\Omega_{\rm m}=0.24$ and $\Omega_{\rm \Lambda}=0.76$; see footnote
\ref{ftnote:Hdiffcosmo}) by employing a statistical approach based
on the image configurations.
By simultaneously modeling
SDSS J1206+4332 with four other systems using PixeLens, \citet{ParaficzEtal09}
derived $H_0 = 61.5^{+8}_{-4}\rm{\, km\, s^{-1}\, Mpc^{-1}}$.
The larger quoted error bars on these ensemble estimates are perhaps a
reflection of the paucity of information available for each lens, as
discussed above. All four analyses effectively assume that the ensemble
external convergence
distribution has zero mean, which may not be accurate: for example,
\citet{Oguri07} constructed a sample for which external
convergence could be neglected, and then incorporated this into the systematic
error budget. Furthermore, \citet{Oguri07} imposed a Gaussian prior on the
slope of $\gamma'=2.00\pm0.15$, and the PixeLens method's priors on
$\kappa$ may well implicitly impose constraints on $\gamma'$ that are
similar to the prior in \citet{Oguri07}; these priors on the slope
may not be appropriate for individual systems in the ensembles.
In contrast, our measurement of $\gamma'$ from the ACS data means
that our results are independent of external priors on $\gamma'$. In
fact, our detailed study of the single well-observed lens B1608$+$656{},
even incorporating the effects of $\kappa_{\rm ext}$, constrains $H_0$ better
than the studies using ensembles of lenses. Our claim is that our
analysis of the systematic effects in B1608$+$656{}\ --- explicitly
including density profile slope and external convergence as nuisance
parameters --- is one of the most extensive on a single lens, and is
rewarded with one of the most accurate measurements of $H_0$ from
time-delay lenses.
\subsection{Relaxing the K03 prior}
\label{sec:H0:relaxcos}
As we described in Section \ref{sec:H0theory}, strong lens time delays enable
a measurement of a cosmological distance-like quantity, $D_{\rm \Delta t} \equiv
(1+z_{\rm d}) D_{\rm d} D_{\rm s}/D_{\rm ds}$. While there is some slight further
dependence on cosmology in the stellar dynamics modeling, we expect this
particular distance combination to be well constrained by the system. To
illustrate this, we plot in Figure~\ref{fig:Dtp} the PDF for $D_{\rm \Delta t}$ with
and without the constraints from B1608$+$656{}, for various choices of the cosmological
parameter prior PDF. Specifically, we show the effect of relaxing the prior on
$\Omega_{\rm m}$, $\Omega_{\rm \Lambda}$ and $w$ from the K03 delta function to the two types of uniform
distributions detailed in Table~\ref{tab:priors}:
``UNIFORMopen'' and ``UNIFORMw''.
We see that all of these distributions
predict the same uninformative prior for $D_{\rm \Delta t}$, and that the B1608$+$656{}\
posterior PDFs are correspondingly similar. With the OBS and ACS priors for
$\kappa_{\rm ext}$ and $\gamma'$, we estimate
$D_{\rm \Delta t} \simeq (5.16^{+0.29}_{-0.24})\times10^3\, {\mathrm{Mpc}}$, a precision of
$\sim5\%$.
The difference between the $D_{\rm \Delta t}$ estimates among the three priors
shown is $\lesssim2\%$.
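This insensitivity is expected, since all angular diameter distances scale
inversely with the Hubble constant; schematically,
\begin{equation}
D_{\rm \Delta t} \propto \frac{c}{H_0}\, f(z_{\rm d},z_{\rm s};\Omega_{\rm m},\Omega_{\rm \Lambda},w) ,
\end{equation}
where $f$ is a dimensionless function that varies only slowly over the
parameter ranges considered here. At fixed $\Omega_{\rm m}$, $\Omega_{\rm \Lambda}$ and $w$, the $\sim5\%$
constraint on $D_{\rm \Delta t}$ thus maps directly onto a comparable constraint on $H_0$.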
\begin{figure}[!ht]
\centering\includegraphics[width=0.95\linewidth]{fig6.ps}
\caption{\label{fig:Dtp} PDFs for $D_{\rm \Delta t}$,
showing the B1608$+$656{}\ posterior
constraints on $D_{\rm \Delta t}$ (solid) given
assorted uniform priors for the cosmological parameters (dotted, labeled).
See the text for a full description of these various priors.
In this figure we assign the ACS and OBS priors for
$\gamma'$ and~$\kappa_{\rm ext}$. B1608$+$656{}\ provides tight constraints on $D_{\rm \Delta t}$,
which translates into information about $\Omega_{\rm m}$, $\Omega_{\rm \Lambda}$ and
$w$ as well as $H_0$.}
\end{figure}
Figure~\ref{fig:Dtp} suggests that a shifted log normal
approximation (to take into account the skewness)
to the B1608$+$656{}\
likelihood function, marginalized over the OBS and ACS priors,
is an appropriate compression of our results. We find that
\begin{eqnarray}
\label{eq:DtpLikelihood}
\hspace{-0.3cm}\lefteqn{P(D_{\rm \Delta t}|H_0,\Omega_{\rm m},\Omega_{\rm \Lambda},w) \simeq }\nonumber\\
&& \hspace{-0.3cm} \frac{1}{\sqrt{2\pi} (x-\lambda_{\rm D}) \sigma_{\rm D}}
\exp{\left[-\frac{(\log(x-\lambda_{\rm D}) - \mu_{\rm D})^2}{2\sigma_{\rm D}^2}\right]},
\end{eqnarray}
where $x=D_{\rm \Delta t}/(1\, {\rm Mpc})$, $\lambda_{\rm D} = 4000.$, $\mu_{\rm
D}=7.053$ and $\sigma_{\rm D} = 0.2282$, accurately
reproduces the cosmological parameter inferences: for example,
Hubble's constant is recovered to $<0.7\%$ and its $16^{\rm th}$ and
$84^{\rm th}$ percentiles (68\% CL) are recovered to $<1.1\%$ for the
WMAP cosmologies we considered.
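As a simple check of this compression, the median and 68\% limits implied by
equation (\ref{eq:DtpLikelihood}) are
\begin{equation}
\lambda_{\rm D}+e^{\mu_{\rm D}} \simeq 5.16\times10^{3} , \qquad
\lambda_{\rm D}+e^{\mu_{\rm D}\pm\sigma_{\rm D}} \simeq (5.45,\,4.92)\times10^{3} ,
\end{equation}
in units of Mpc, in agreement with the direct estimate of $D_{\rm \Delta t}$
quoted above.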
\subsection{Constraints on $\Omega_{\rm m}$ and $\Omega_{\rm \Lambda}$}
\label{sec:H0:opencos}
\begin{figure*}
\centering\includegraphics[height=0.95\linewidth,angle=270]{fig7.ps}
\caption{\label{fig:cosuniform} The B1608$+$656{}\ marginalized posterior PDF for
$H_0$, $\Omega_{\rm m}$, $\Omega_{\rm \Lambda}$ and $\kappa_{\rm ext}$ in a $w=-1$
cosmological model and assuming ACS $\gamma'$ and OBS
$\kappa_{\rm ext}$ priors; contours are 68\% and 95\% confidence levels.
The three sets of colored contours correspond to three different
prior/data set combinations.
Blue: B1608$+$656{}\ constraints, given the UNIFORMopen prior; red: the prior
provided by the WMAP 5 year data set alone; black: the joint constraints from
combining WMAP and B1608$+$656{}. The blue contours in the $\Omega_{\rm m}$ and
$\Omega_{\rm \Lambda}$ columns are omitted since they would show almost no constraints,
as indicated by the diagonal panels.}
\end{figure*}
Based on the construction of $D_{\rm \Delta t}$, we expect strong lens time delays
to be more sensitive to $H_0$ than the other three cosmological parameters.
This is shown in Figure~\ref{fig:cosuniform}, where we consider
$w=-1$ uniform cosmological prior ``UNIFORMopen''
and plot the marginalized B1608$+$656{}\ posterior PDF
to show the influence of the
lensing data (blue lines). While there is a slight dependence on $\Omega_{\rm \Lambda}$, we
see that the B1608$+$656{}\ data do indeed primarily constrain $H_0$. In contrast,
we plot the posterior PDF from the analysis of the 5 year WMAP data set
\citep[red lines,][]{DunkleyEtal09}. With no constraint on the curvature of
space, the CMB data provides only a weak prior on $H_0$, which is highly
degenerate with $\Omega_{\rm m}$ and
$\Omega_{\rm \Lambda}$. Importance sampling the WMAP MCMC chains with the B1608$+$656{}\ likelihood,
we obtain the joint posterior PDF, plotted in black.
Strong lens time delays are an example of a kinematic cosmological probe,
i.e., one that is sensitive to the geometry and expansion rate of the
Universe, but not to dynamical assumptions about the growth of structure in the Universe. In
Table~\ref{tab:OkConstraint}, we compare the B1608$+$656{}\ data set to a number of
other kinematic probes from the literature. The WMAP data constrain the
angular diameter distance to the last scattering surface; these other data sets
effectively provide a second distance estimate that breaks the degeneracy
between $H_0$ and the curvature of space. In the B1608$+$656{}\ case, we constrain
$\Omega_{\rm k}$ to be $-0.005_{-0.026}^{+0.014}$ (95\% CL). We can see that in terms of
constraining the curvature parameter, B1608$+$656{}\ is more informative than
the {\it HST}{}\ Key Project $H_0$ measurement, and is comparable to the
current SNe Ia data set.
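Here and throughout, the curvature parameter is defined in the standard way,
\begin{equation}
\Omega_{\rm k} \equiv 1-\Omega_{\rm m}-\Omega_{\rm \Lambda} ,
\end{equation}
so that $\Omega_{\rm k}=0$ corresponds to a spatially flat universe.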
\begin{table}
\begin{center}
\caption{\label{tab:OkConstraint} Curvature parameter
constraints from WMAP5 combined
with various data sets assuming $w=-1$ (95\% CL).}
\begin{tabular}{lcc}
\hline
WMAP5$^{\rm a,b}$ & $-0.285 < \Omega_{\rm k} < 0.010$ & $15\%$ \\
WMAP5 + {\it HST}{}\, KP$^{\rm b,c}$ & $-0.052 < \Omega_{\rm k} < 0.013$ & $3.3\%$ \\
WMAP5 + SN$^{\rm b,d}$ & $-0.032 < \Omega_{\rm k} < 0.008$ & $2.0\%$ \\
WMAP5 + BAO$^{\rm b,e}$ & $-0.017 < \Omega_{\rm k} < 0.007$ & $1.2\%$ \\
{\bf WMAP5 + B1608{}} & $\mathbf{-0.031<\Omega_{\rm k}<0.009} $ & $\mathbf{2.0\%}$ \\
\hline
\end{tabular}
\end{center}
The third column
gives the ``precision,''
quantified as half the 95\% confidence interval in $(1.0 - \Omega_{\rm k})$,
as a percentage.$\;\;$
$^{\rm a}$ \texttt{http://lambda.gsfc.nasa.gov} $\;\;$
$^{\rm b}$ \citet{KomatsuEtal09}.$\;\;$
$^{\rm c}$ \citet{FreedmanEtal01}.$\;\;$
$^{\rm d}$ Based on the ``union'' SN samples compiled by \citet{KowalskiEtal08}.$\;\;$
$^{\rm e}$ \citet{PercivalEtal07}.$\;\;$
\end{table}
Figure~\ref{fig:cosuniform} also shows the primary nuisance parameter,
$\kappa_{\rm ext}$. When B1608$+$656{}\ and the WMAP data are combined, the PDF for $\kappa_{\rm ext}$
shifts and tightens very slightly, as we expect from the discussion in
Section~\ref{sec:H0:nuisance}. If we relax the OBS prior on $\kappa_{\rm ext}$ to
uniform, then we obtain $-0.032<\Omega_{\rm k}<0.021$ (95\% CL), which is still
tighter than the {\it HST}{}\ KP constraints.
\begin{figure*}[!ht]
\centering\includegraphics[height=0.95\linewidth,angle=270]{fig8.ps}
\caption{\label{fig:wuniform} The B1608$+$656{}\ marginalized posterior PDF for
$H_0$, $\Omega_{\rm \Lambda}$, $w$ and $\kappa_{\rm ext}$ in a flat
cosmological model, again assuming ACS $\gamma'$ and OBS
$\kappa_{\rm ext}$ priors; contours are 68\% and 95\% confidence levels.
The three sets of colored contours correspond to three different
prior/data set combinations.
Blue: B1608$+$656{}\ constraints, given the UNIFORMw prior; red: the prior
provided by the WMAP 5 year data set alone; black: the joint constraints from
combining WMAP and B1608$+$656{}. The blue contours in the $\Omega_{\rm \Lambda}$ and
$w$ columns are omitted since they would show almost no constraints,
as indicated by the diagonal panels.}
\end{figure*}
\subsection{Constraints on dark energy}
\label{sec:H0:DarkEnergy}
As noted by many authors \citep[e.g.][]{Hu05,KomatsuEtal09,RiessEtal09},
the degeneracy-breaking shown in the previous
subsection can be recast as a mechanism for constraining the equation of state
of dark energy,~$w$.
If we assert a precisely flat geometry for the Universe,
as motivated by the inflationary scenario, we can spend our available
information on constraining~$w$ instead. Figure~\ref{fig:wuniform} shows the
marginalized posterior PDF for the cosmological parameters $H_0$, $\Omega_{\rm \Lambda}=1-\Omega_{\rm m}$
and $w$, along with the nuisance parameter $\kappa_{\rm ext}$, again comparing the
B1608$+$656{}\ constraints with uniform and WMAP priors, and the WMAP constraints
alone. With the WMAP data alone, $w$ is strongly degenerate with $H_0$ and
$\Omega_{\rm \Lambda}$. Including B1608$+$656{}, which mainly provides constraints on $H_0$, the
$H_0$-$w$-$\Omega_{\rm \Lambda}$ degeneracy is partly broken. The resulting marginalized
distribution gives $w=-0.94^{+0.17}_{-0.19}$,
consistent with a cosmological constant. The corresponding value of
Hubble's constant is $H_0 = 69.7^{+4.9}_{-5.0} \rm{\, km\, s^{-1}\, Mpc^{-1}}$.
We summarize our inferences of $H_0$ and $w$ in
this variable-$w$ model in Table~\ref{tab:wConstraint}, comparing to a
similar set of alternative kinematic probes referred to in the previous section.
We see that, combining with the WMAP 5 year data set and
marginalizing over all other parameters, the B1608$+$656{}\ data set
provides a measurement of Hubble's constant with an uncertainty of 6.9\%, with the
equation of state parameter simultaneously constrained to 18\%.
This level of precision is better than that available from the {\it HST}{}\,
KP and is competitive with the current BAO measurements.
Our results are
consistent with the results from all the other probes listed. This is not a
trivial statement: combining each data set with the WMAP 5 year prior allows us
not only to quantify the relative constraining power of each one, it also
retains the possibility of detecting inconsistencies between data sets. As it
is, it appears that all the kinematic probes listed are in agreement within
their quoted uncertainties. Some tension might
be present if the supernovae and B1608$+$656{}\ were
considered separately from a combination of local {\it HST}{}\, $H_0$
measurements and
BAO constraints, but we have no compelling reason to make such a
division. As the statistical errors associated with each probe are decreased,
other inconsistencies may arise: we might
expect there to always be a need for
careful pairwise data set combinations.
Finally then, we incorporate B1608$+$656{}\ into a global analysis of
cosmological data sets. As an example, we importance sample from the
WBSw prior PDF; this is the posterior PDF from the joint analysis of
the WMAP5, BAO and SN data.
This prior
is already very tight,
characterized by a median and 68\% confidence limits of
$H_0=70.3^{+1.6}_{-1.5} \rm{\, km\, s^{-1}\, Mpc^{-1}}$.
When we include information from
B1608$+$656{}\ with the ACS $\gamma'$ and OBS $\kappa_{\rm ext}$ priors, we obtain
$H_0=70.4^{+1.5}_{-1.4} \rm{\, km\, s^{-1}\, Mpc^{-1}}$,
a slight shift in centroid and a 6\%
reduction in the confidence interval. This is reassuring, as it shows global
consistency among the WMAP5, BAO, SN and B1608$+$656{}\ data sets.
\begin{table*}
\begin{center}
\caption{\label{tab:wConstraint} Dark energy constraints from WMAP5 combined
with various data sets, assuming flat geometry.}
\begin{tabular}{lccccc}
\hline
& $H_0 / \rm{\, km\, s^{-1}\, Mpc^{-1}}$ & & & $w$ & \\
\hline
WMAP5$^{\rm a,b}$ & $74^{+15}_{-14}$ & $20\%$ & & $-1.06^{+0.41}_{-0.42}$ & $42\%$ \\
WMAP5+{\it HST}{}\, KP$^{\rm a,b,c}$ & $72.1^{+7.4}_{-7.6}$ & $10\%$ & & $-1.01^{+0.23}_{-0.22}$ & $23\%$ \\
WMAP5+SN$^{\rm a,b,d}$ & $69.4^{+1.6}_{-1.7}$ & $2.3\%$ & & $-0.977^{+0.065}_{-0.064}$ & $6.5\%$ \\
WMAP5+BAO$^{\rm a,b,e}$ & $73.9^{+4.7}_{-4.8}$ & $6.6\%$ & & $-1.15^{+0.21}_{-0.22}$ & $22\%$ \\
WMAP5+Riess$^{\rm f}$ & $74.2\pm3.6^{\rm g}$ & $5.0\%$ & & $-1.12\pm0.12$ & $12\%$ \\
{\bf WMAP5+B1608{}} & $\mathbf{69.7^{+4.9}_{-5.0}}$ & $\mathbf{6.9\%}$ & & $\mathbf{-0.94^{+0.17}_{-0.19}}$ & $\mathbf{18\%}$ \\
\hline
\end{tabular}
\end{center}
The ``precisions'' in the third and fifth columns are defined
as half the 68\% confidence interval,
as a percentage of either $72$ for~$H_0$ or $-1.0$ for~$w$. $\;\;$
$^{\rm a}$ \texttt{http://lambda.gsfc.nasa.gov} $\;\;$
$^{\rm b}$ \citet{KomatsuEtal09}. The $H_0$ estimate was taken from the previously listed website.$\;\;$
$^{\rm c}$ \citet{FreedmanEtal01}.$\;\;$
$^{\rm d}$ Based on the ``union'' SN samples compiled by \citet{KowalskiEtal08}.$\;\;$
$^{\rm e}$ \citet{PercivalEtal07}.$\;\;$
$^{\rm f}$ \citet{RiessEtal09}.$\;\;$
$^{\rm g}$ not marginalized over other cosmological parameters.
\end{table*}
\subsection{Future prospects}
\label{sec:H0:future}
In this paper, we have studied a single strong gravitational lens, B1608$+$656{},
investigating in depth the various model parameter degeneracies and
systematic effects. At present, B1608$+$656{}\ remains the only strong lens
system with (i) time delay measurements with errors of only a few percent, and
(ii) an extended source surface brightness distribution
suitable for accurate lens modeling; as we
have shown, these two properties together enable the careful study and the
resulting tight constraint on $H_0$.
Table~\ref{tab:wConstraint} shows that even this one system provides
competitive accuracy on $H_0$ and $w$ for a single kinematic probe, especially
when we consider that all the other experiments involved averaging together
many independent distance measurements. What should we expect from extending
this study to many more lenses? As we showed in Section~\ref{sec:H0:nuisance},
if the data are good enough to constrain the density profile slope to a few
percent, the accuracy of the cosmological parameter inference is limited, as
it is in B1608$+$656{}, by our knowledge of the lens environment,~$\kappa_{\rm ext}$.
However, we also outlined in Section~\ref{sec:LensEnv} how using information
from numerical simulations and the photometry in the field can be used to
constrain this nuisance parameter and yield an unbiased estimate of~$H_0$.
Furthermore, as discussed in Section~\ref{sec:H0:nuisance}, stellar
dynamics provides a significant amount of information on $\kappa_{\rm ext}$
by limiting its permissible range of values.
While we, and also \citet{TreuEtal09} and \citet{FassnachtEtal09},
discuss how the line-of-sight contributions to~$\kappa_{\rm ext}$ should average to zero
over many lens systems, lens galaxies --- like all massive galaxies ---
tend to live in locally overdense
environments, such that the local contribution to $\kappa_{\rm ext}$ would be non-zero.
Careful studies of the lens environments
(e.g. \citeauthor{MomchevaEtal06}~\citeyear{MomchevaEtal06};
\citeauthor{FassnachtEtal06}~\citeyear{FassnachtEtal06};
Blandford et al.~in preparation) and of N-body simulations with gas physics
to determine this local contribution to $\kappa_{\rm ext}$ will be crucial for
obtaining $H_0$ from a large sample of lenses. If we are able to average
together $N$ systems we should, in principle, be able to reduce our
uncertainty by~$\sqrt{N}$. In practice, the accuracy of the combination
procedure will sooner be limited by the systematic uncertainty in the shape
and centroid of the assumed $\kappa_{\rm ext}$ distribution: investigating the
properties of this distribution is perhaps the most urgent topic for further
work. Likewise, if the density profile slope cannot be constrained for each
time-delay lens individually, the details of the prior PDF assigned for~$\gamma'$
will become important as the ensemble grows.
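Schematically, combining $N$ comparably well-observed systems would give
\begin{equation}
\frac{\sigma_{H_0}}{H_0} \simeq \left[\frac{(4.4\%)^2}{N}+\sigma_{\rm sys}^2\right]^{1/2} ,
\end{equation}
where $\sigma_{\rm sys}$ denotes the systematic floor set by residual errors in
the assumed $\kappa_{\rm ext}$ and $\gamma'$ distributions. The statistical term alone
would reach the percent level for $N\approx20$ lenses, at which point the
control of $\sigma_{\rm sys}$ becomes the central issue.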
In the near future, cadenced surveys such as those planned with the Large
Synoptic Survey Telescope (LSST) and being undertaken by the
Panoramic Survey Telescope and Rapid Response System (Pan-STARRS) will
discover large numbers of time-delay lenses, prompting us to consider
performing analyses such as the one described here on hundreds of lens
systems. In practice, obtaining data of the quality we have presented here
for hundreds of suitable lenses will pose a significant observational
challenge. Nevertheless, \citet{DobkeEtal09}, \citet{CoeMoustakas09} and Oguri
\& Marshall (in preparation) investigate constraints on cosmological
parameters based on large samples of time-delay lenses. In particular,
\citet{CoeMoustakas09} suggest that, in terms of raw precision and in
combination with a prior PDF from Planck, an LSST ensemble could reach
sub-percent level precision in $H_0$, and constrain $w$ to 3\% or better,
provided that the systematic effects such as $\kappa_{\rm ext}$ are under
control. Our work has already addressed some of these systematic
effects, and will provide a basis for future analysis of large samples
of time-delay lenses and lens environment studies.
\section{Conclusions}
\label{sec:conc}
We have studied the well-observed gravitational lens B1608$+$656{}\ and used it to
infer the values of cosmological parameters;
we outlined and followed a Bayesian approach for combining three
data sets: {\it HST}{}/ACS imaging, stellar velocity dispersion measurement, and the
time delays between the multiple images. Diagnosing the principal systematic
effects, we included two nuisance parameters ($\gamma'$ and $\kappa_{\rm ext}$)
into the data model to account
for them, assigning well-motivated prior PDFs and marginalizing over them.
We draw the following conclusions:
\begin{itemize}
\item We find that the {\it HST}{}/ACS images constrain the density profile slope
parameter~$\gamma' = 2.08 \pm 0.03$, which we propagate through the
cosmological parameter inference as a prior PDF. Relaxing this prior to a
uniform distribution degrades the precision on $H_0$ from 4.4\% to 7.0\%;
the SLACS intrinsic profile slope parameter distribution is not
significantly more informative than the uniform prior.
\item With the ACS prior for $\gamma'$, we find that
the uncertainty in the inferred cosmological parameters is dominated by
that in the external convergence~$\kappa_{\rm ext}$. Ray-tracing through
the Millennium Simulation gives a PDF for~$\kappa_{\rm ext}$ due to line-of-sight
contributions that has zero mean and width~$\sim 0.04$, while using
the galaxy number counts in the B1608$+$656{}\ field in conjunction with
the MS gives $\kappa_{\rm ext}=0.10^{+0.08}_{-0.05}$.
\end{itemize}
Using our most informative priors on the two nuisance parameters, we
arrive at the following cosmographic inferences:
\begin{itemize}
\item In the K03 cosmology ($\Omega_{\rm m}=0.3$, $\Omega_{\rm \Lambda}=0.7$, $w=-1$, and uniform
$H_0$), we obtain from the B1608$+$656{}\ data set $H_0 = 70.6 \pm 3.1
\rm{\, km\, s^{-1}\, Mpc^{-1}}$ (68\% CL). The $4.4\%$ error includes both statistical
and dominant systematic uncertainties, through the marginalization described
above. This is a significant improvement over
the earlier measurement of $H_0=75^{+7}_{-6}\rm{\, km\, s^{-1}\, Mpc^{-1}}$ by
\citet{KoopmansEtal03}.
\item Time-delay lenses are sensitive primarily to $H_0$ but are
weakly dependent on other cosmological parameters; the
lensing measurement of $H_0$ is robust and useful
for studying dark energy when combined with other
cosmological probes. We find that for B1608$+$656{}\ the cosmographic
information
can be summarized as a shifted log normal
probability distribution for the time-delay distance~$D_{\rm \Delta t}$ in
units of Mpc, with the three parameters $\lambda_{\rm D}=4000.$,
$\mu_{\rm D}=7.053$ and $\sigma_{\rm D}=0.2282$.
\item In a $\Lambda$-CDM cosmology (with $w=-1$), the B1608$+$656{}\ data set
breaks the degeneracy between $\Omega_{\rm m}$ and $\Omega_{\rm \Lambda}$ in the WMAP 5 year data set,
and constrains the curvature parameter to be zero within $2.0\%$ (95\% CL),
a level of
precision similar to those afforded by the current Type Ia SNe sample.
\item B1608$+$656{}\ in combination with the WMAP 5 year data set,
assuming flatness and
allowing (a time-independent) $w$ to vary,
gives $H_0 = 69.7^{+4.9}_{-5.0} \rm{\, km\, s^{-1}\, Mpc^{-1}}$ and
$w=-0.94^{+0.17}_{-0.19}$ (68\% CL).
These are significant improvements
over the WMAP5-only constraints of
$H_0=74^{+15}_{-14}\,\rm{\, km\, s^{-1}\, Mpc^{-1}}$ and
$w=-1.06^{+0.41}_{-0.42}$.
B1608$+$656{}\ is as competitive as the current BAO data in determining
$w$ when combined with WMAP5.
\end{itemize}
Our detailed analysis of B1608$+$656{}\ provides the framework for using large
samples of time-delay lenses as cosmological probes in the near future. We
anticipate the local contribution to $\kappa_{\rm ext}$, which would not average away
with a large sample of lenses, to be the dominant residual systematic
error. Several lens environment studies to circumvent this are underway;
with the effects from $\kappa_{\rm ext}$ accurately modeled, future samples of time-delay
gravitational lenses should be a competitive cosmological probe.
\acknowledgments We thank M.~Brada{\v c}, J.~Hartlap, E.~Komatsu, J.~P.~McKean and
P.~Schneider for useful discussions. We are grateful to the anonymous
referee whose suggestions and comments helped clarify parts of the paper.
S.H.S. is supported in part through the Deutsche
Forschungsgemeinschaft under the project SCHN 342/7--1.
C.D.F. acknowledges support under the {\it HST}{}\
program \#GO-10158. Support for program \#GO-10158 was provided by
NASA through a grant from the Space Telescope Science Institute, which
is operated by the Association of Universities for Research in
Astronomy, Inc., under NASA contract NAS 5-26555. C.D.F.
acknowledges the support from the European Community's Sixth Framework
Marie Curie Research Training Network Programme, contract no.
MRTN-CT-2004-505183 ``ANGLES.'' R.D.B. acknowledges support through NSF
grant AST 05-07732. L.V.E.K. is supported in part through an
NWO-VIDI career grant (project number 639.042.505). T.T. acknowledges
support from the NSF through CAREER award NSF-0642621, by the Sloan
Foundation through a Sloan Research Fellowship, and by the Packard
Foundation through a Packard Fellowship. This work was supported in
part by the NSF under award AST-0444059, the TABASGO foundation in the
form of a research fellowship (P.J.M.), and by the US Department of
Energy under contract number DE-AC02-76SF00515. Based in part on
observations made with the NASA/ESA \textit{Hubble Space Telescope},
obtained at the Space Telescope Science Institute, which is operated
by the Association of Universities for Research in Astronomy, Inc.,
under NASA contract NAS 5-26555. These observations are associated
with program \#GO-10158.
\bibliographystyle{apj}
\newcommand{\sect}[1]{\section{#1}}
\def\a {\alpha} \def\b {\beta} \def\g {\gamma}
\def\G {\Gamma} \def\d {\delta} \def\D {\Delta}
\def\e {\epsilon} \def\ve {\varepsilon} \def\k {\kappa}
\def\l {\lambda} \def\L {\Lambda} \def\m {\mu}
\def\n {\nu} \def\s {\sigma} \def\S {\Sigma}
\def\t {\tau} \def\th {\theta} \def\Th {\Theta}
\def\vth {\vartheta} \def\w {\omega} \def\Om {\Omega}
\def\Up {\Upsilon}
\newcommand{\cala}{\mbox{${\cal A}$}} \newcommand{\calb}{\mbox{${\cal B}$}}
\newcommand{\calc}{\mbox{${\cal C}$}} \newcommand{\cald}{\mbox{${\cal D}$}}
\newcommand{\cale}{\mbox{${\cal E}$}} \newcommand{\calf}{\mbox{${\cal F}$}}
\newcommand{\calg}{\mbox{${\cal G}$}} \newcommand{\calh}{\mbox{${\cal H}$}}
\newcommand{\cali}{\mbox{${\cal I}$}} \newcommand{\calj}{\mbox{${\cal J}$}}
\newcommand{\calk}{\mbox{${\cal K}$}} \newcommand{\call}{\mbox{${\cal L}$}}
\newcommand{\calm}{\mbox{${\cal M}$}} \newcommand{\caln}{\mbox{${\cal N}$}}
\newcommand{\calo}{\mbox{${\cal O}$}} \newcommand{\calp}{\mbox{${\cal P}$}}
\newcommand{\calq}{\mbox{${\cal Q}$}} \newcommand{\calr}{\mbox{${\cal R}$}}
\newcommand{\cals}{\mbox{${\cal S}$}} \newcommand{\calt}{\mbox{${\cal T}$}}
\newcommand{\calu}{\mbox{${\cal U}$}} \newcommand{\calv}{\mbox{${\cal V}$}}
\newcommand{\calw}{\mbox{${\cal W}$}} \newcommand{\calx}{\mbox{${\cal X}$}}
\newcommand{\caly}{\mbox{${\cal Y}$}} \newcommand{\calz}{\mbox{${\cal Z}$}}
\newcommand{\scra}{\mbox{${\mathscr A}$}} \newcommand{\scrb}{\mbox{${\mathscr B}$}}
\newcommand{\scrc}{\mbox{${\mathscr C}$}} \newcommand{\scrd}{\mbox{${\mathscr D}$}}
\newcommand{\scre}{\mbox{${\mathscr E}$}} \newcommand{\scrf}{\mbox{${\mathscr F}$}}
\newcommand{\scrg}{\mbox{${\mathscr G}$}} \newcommand{\scrh}{\mbox{${\mathscr H}$}}
\newcommand{\scri}{\mbox{${\mathscr I}$}} \newcommand{\scrj}{\mbox{${\mathscr J}$}}
\newcommand{\scrk}{\mbox{${\mathscr K}$}} \newcommand{\scrl}{\mbox{${\mathscr L}$}}
\newcommand{\scrm}{\mbox{${\mathscr M}$}} \newcommand{\scrn}{\mbox{${\mathscr N}$}}
\newcommand{\scro}{\mbox{${\mathscr O}$}} \newcommand{\scrp}{\mbox{${\mathscr P}$}}
\newcommand{\scrq}{\mbox{${\mathscr Q}$}} \newcommand{\scrr}{\mbox{${\mathscr R}$}}
\newcommand{\scrs}{\mbox{${\mathscr S}$}} \newcommand{\scrt}{\mbox{${\mathscr T}$}}
\newcommand{\scru}{\mbox{${\mathscr U}$}} \newcommand{\scrv}{\mbox{${\mathscr V}$}}
\newcommand{\scrw}{\mbox{${\mathscr W}$}} \newcommand{\scrx}{\mbox{${\mathscr X}$}}
\newcommand{\scry}{\mbox{${\mathscr Y}$}} \newcommand{\scrz}{\mbox{${\mathscr Z}$}}
\newcommand{\bsa}{{\boldsymbol{a}}}
\newcommand{\bfa}{\mbox{\boldmath $a$}}
\newcommand{\mba}{{\mathbf{a}}}
\newcommand{\bPi}{\mbox{\boldmath $\Pi$}}
\newcommand{\bnabla}{\mbox{\boldmath $\nabla$}}
\newcommand{\bhA}{\mbox{\boldmath $\hat{A}$}}
\newcommand{\bhB}{\mbox{\boldmath $\hat{B}$}}
\newcommand{\bcalC}{\mbox{\boldmath ${\cal C}$}}
\newcommand{\bbe}[1]{\mbox{${\mathbb E}^{#1}$}}
\def\IR {{\hbox{{\rm I}\kern-.2em\hbox{\rm R}}}}
\def\IB {{\hbox{{\rm I}\kern-.2em\hbox{\rm B}}}}
\def\IN {{\hbox{{\rm I}\kern-.2em\hbox{\rm N}}}}
\def\IC {\,\,{\hbox{{\rm I}\kern-.59em\hbox{\bf C}}}}
\def\IZ {{\hbox{{\rm Z}\kern-.4em\hbox{\rm Z}}}}
\def\IP {{\hbox{{\rm I}\kern-.2em\hbox{\rm P}}}}
\def\IH {{\hbox{{\rm I}\kern-.4em\hbox{\rm H}}}}
\def\ID {{\hbox{{\rm I}\kern-.2em\hbox{\rm D}}}}
\def\be {\begin{equation}}
\def\ee {\end{equation}}
\def\bea {\begin{eqnarray}}
\def\eea {\end{eqnarray}}
\def\etal {{\it et al}.}
\def\half {\frac{1}{2}}
\def\third {\frac{1}{3}}
\def\fourth {\frac{1}{4}}
\newcommand{\inv}[1]{\frac{1}{#1}}
\def\ra {\rightarrow} \def\la {\leftarrow}
\newcommand{\dint}[2]{\int_{#1}^{#2}\!\!}
\def\dintt#1#2{\int\limits_{#1}^{#2}}
\newcommand{\rmd}{\mbox{${\mathrm{d}}$}}
\newcommand{\del}[1]{\partial_{#1}}
\def\pa {\partial}
\def\grad {\nabla}
\def\curl {\nabla\times}
\def\dive {\nabla\cdot}
\newcommand{\abs}[1]{\left| #1 \right|}
\newcommand{\bra}[1]{\mbox{$\langle #1 |$}}
\newcommand{\ket}[1]{\mbox{$| #1 \rangle$}}
\newcommand{\ke}[1]{{ #1}\rangle}
\newcommand{\braket}[2]{\mbox{$\langle #1 | #2 \rangle$}}
\newcommand{\ip}[2]{\langle{\rm #1}|{\rm #2}\rangle}
\newcommand{\brac}[1]{\langle #1 \rangle}
\newcommand{\proj}[1]{\ket{#1}\!\bra{#1}}
\newcommand{\op}[2]{\left|\left.{\rm #1\,}\right\rangle\right.\!\!
\left\langle\left.{\rm #2\,}\right|\right.}
\def\dg {{\dagger}}
\def\tr {{\rm tr}\,}
\def\Tr {{\rm tr}\,}
\def\Det {{\rm det}}
\renewcommand{\Re}{{\rm Re}}
\renewcommand{\Im}{{\rm Im}}
\def\sea {\setlength\arraycolsep{2pt}}
\def\nn {\nonumber}
\newcommand{\eg}{{\it e.g.}}
\newcommand{\ie}{{\it i.e.}}
\newcommand{\sgn}[1]{\mbox{sgn}(#1)}
\newcommand{\ads}[1]{\mbox{${AdS}_{#1}$}}
\newcommand{\adss}[2]{\mbox{$AdS_{#1}\times {S}^{#2}$}}
\newcommand{\gym}{g_{Y\!M}}
\newcommand{\Gu}[1]{\Gamma_{\underline{#1}}}
\def\Dfour {\textrm{D4}}
\def\Deight {\textrm{D8}}
\def\Deightbar {\overline{\textrm{D8}}}
\def\DDbar {\textrm{D8-}\overline{\textrm{D8}}}
\newcommand{\ukk}{U_{\rm KK}}
\newcommand{\mkk}{M_{\rm KK}}
\newcommand{\wt}{\widetilde}
\newcommand{\wh}{\widehat}
\newcommand{\calao}{\cala_{0}}
\newcommand{\dotcalao}{\dot{\cala_{0}}}
\newcommand{\dzcalao}{\partial_Z \cala_{0}}
\newcommand{\Bnmu}{{B^n_\mu}}
\newcommand{\Bnnu}{{B^n_\nu}}
\newcommand{\Bmum}{{B^{\mu m}}}
\newcommand{\Bnum}{{B^{\nu m}}}
\newcommand{\Bmul}{{B^{\mu l}}}
\newcommand{\Bnul}{{B^{\nu l}}}
\newcommand{\Bmz}{{B^m_0}}
\newcommand{\Bnz}{{B^n_0}}
\newcommand{\tBnz}{{{\tilde{B}^n_{0,{\mathrm const}}}}}
\newcommand{\tBmz}{{{\tilde{B}^m_{0,{\mathrm const}}}}}
\newcommand{\psim}{{\psi_{m}}}
\newcommand{\psin}{{\psi_{n}}}
\newcommand{\psil}{{\psi_{l}}}
\newcommand{\dmu}{{\partial_\mu}}
\newcommand{\dnu}{{\partial_\nu}}
\newcommand{\dmuup}{{\partial^\mu}}
\newcommand{\dnuup}{{\partial^\nu}}
\newcommand{\dpsim}{{\dot{\psi}_m}}
\newcommand{\dpsin}{{\dot{\psi}_n}}
\newcommand{\dpsil}{{\dot{\psi}_l}}
\newcommand{\dPsim}{{\dot{\Psi}_m}}
\newcommand{\dPsin}{{\dot{\Psi}_n}}
\newcommand{\phiz}{{\phi_0}}
\newcommand{\vz}{{{v}_0}}
\begin{document}
\begin{titlepage}
\vspace{0.5in}
\begin{center}
{\large \bf Dense QCD: a Holographic Dyonic Salt}\\
\vspace{10mm}
Mannque Rho$^{a,b}$, Sang-Jin Sin$^b$ and Ismail Zahed$^{c}$\\
\vspace{5mm}
{\it $^a$ Institut de Physique Th\'eorique, CEA Saclay, 91191 Gif-sur-Yvette, France\\
$^b$ Department of Physics, Hanyang University, {\it 133-791} Seoul, Korea\\
$^c$ Department of Physics and Astronomy, Stony Brook University, Stony Brook, NY 11794, USA}\\
\vspace{10mm}
{\tt \today}
\end{center}
\begin{abstract}
Dense QCD at zero temperature with a large number of colors is a crystal. We show that in the holographic dual
description, the crystal is made out of pairs of dyons with $e=g=\pm 1$ charges in a salt-like
arrangement. We argue that with increasing density the dyon masses and topological charges
equalize, turning the salt-like configuration to a bcc of half-instantons. The latter is dual to
a cubic crystal of half-skyrmions. We estimate the transition from an fcc crystal of instantons
to a bcc crystal of dyons to about 3 times nuclear matter density with a dyon binding energy
of about 180 MeV.
\end{abstract}
\end{titlepage}
\renewcommand{\thefootnote}{\arabic{footnote}}
\setcounter{footnote}{0}
\section{Introduction}
Dense QCD with a large number of colors $N_c$ is a crystal. The Coulomb-like ratio $\Gamma=V/K$,
measuring the interaction energy relative to the kinetic energy, is
large, ${\cal O}(N_c^2)$, since the baryon-baryon interaction is of order $N_c$ while the baryon kinetic energy is
of order $1/N_c$. Chiral skyrmions, which embody key aspects of QCD at
large $N_c$~\cite{SKYRMION}, crystallize~\cite{KLEBANOV} in an fcc configuration at low densities and in
a cubic crystal configuration with half skyrmion symmetry at large densities~\cite{CRYSTAL,crystal2}.
The details of the crystal rearrangement and the emergence of the half-skyrmion
symmetry have been numerically explored~\cite{crystal2,park-vento}.
Some aspects of dense QCD at large $N_c$ were recently discussed in~\cite{LARRY}.
QCD at large $N_c$ and large coupling $\lambda=g^2N_c$ is amenable to a
holographic chiral description with baryons as flavor instantons \cite{SAKAI}.
Holographic dense QCD is expected to be a crystal of instantons, and a
preliminary analysis of the crystal structure in the Wigner-Seitz approximation
shows a transition to a free massive fermion equation of state
$n_B^{5/3}$~\cite{WIGNER}.
Here too a full scale holographic description of a dense instanton crystal
appears to be numerically involved.
In this letter we argue that some generic aspects of the crystalline structure
in holography can be elucidated using instanton physics. In section 2 we
briefly recall the origin of flavor instantons for the description of holographic
baryons. In section 3, we show that a simple crystal arrangement of flavor
instantons is likely to split into an underlaid double lattice of BPS dyons
of opposite charges $e=g=\pm 1$ under the action of spatial holonomies
to order $N_c\lambda$. The arrangement is salt-like. To order
$N_c\lambda^0$ the instantons and their progeny dyons cease to be
BPS. In section 4, we argue that with increasing density, the dyon topological
charges and masses are equalized making the final dyon rearrangement
a bcc of half instantons. The latter is dual to a cubic crystal of half-skyrmions.
In section 5 we put forward some geometrical arguments in favor of the
restoration of chiral symmetry in the holographic salt configuration.
Our conclusions and prospects are in section 6.
\section{Holographic Baryons}
Baryons in holographic dual QCD (hQCD) are sourced by instantons in bulk~\cite{SAKAI,instanton-baryon}.
The pertinent brane embedding
consists of $N_c$ D4 branes that act as a gravitational source in 10 dimensions, and
a pair of $N_f$ D8 probe branes that support color charges in the fundamental
representation~\cite{SAKAI}. The source/probe condition is encoded in the limit
$N_c/N_f\ll 1$. The geometry at the boundary of D4 is $R^5\times S^4$, and the ensuing
gauge theory is SYM$_{1+4}$. By explicit compactification of $R^5\rightarrow R^4\times S^1$ with
antiperiodic boundary conditions for the boundary fermions, the boundary theory is
YM$_{1+3}$.
The open string excitations with end points on the probe branes turn
the boundary gauge theory to QCD with a mass scale defined by $S^1$ (Kaluza-Klein). This is referred to as holographic QCD (or hQCD for short).
hQCD is the dual of the boundary gauge theory in bulk as described by the sourced
gravity and the induced brane dynamics. This brane setup is characterized by a vacuum
that spontaneously breaks the rigid $U(N_f)\times U(N_f)$ symmetry, with massless pions (in the chiral limit) as Goldstone
bosons. Vector meson dynamics follows through the D8+$\overline{\rm D8}$ brane dynamics and is
found to follow the general lore of vector meson dominance~\cite{SAKAI,instanton-baryon,VDM}.
A single baryon in hQCD consists of a D4 wrapping around $S^4$ with a large mass $N_c\lambda$.
The RR flux through $S^4$ due to this wrapping is
\begin{equation}
\frac 1{2\pi} \int_{S^4} \, F_4=N_c
\label{1}
\end{equation}
which is equivalent to $N_c$ quarks. For antipodal D8+$\overline{\rm D8}$ separation along $S^1$, the
D4 wrapping sources a flavor instanton through the coupling of the RR flux on $S^4$ with
the Chern-Simons term in $R^6$ in D8+$\overline{\rm D8}$. The zero size instanton is BPS to leading
order. At next to leading order, the instanton size is of order $1/\sqrt{\lambda}$ and
follows from the minimum of the DBI action of the D8+$\overline{\rm D8}$ flavor probe
branes~\cite{instanton-baryon}.
The flavor instanton in hQCD is the core of the baryon in bulk, that acts as a strong source in
the holographic direction, with massless pions and vector mesons strongly coupled to it. The
ensuing baryon has an electromagnetic size of order $\lambda^0$. The magnetic currents are vector
meson mediated. The more vector mesons, the more the currents are localized around the core,
resulting into magnetic moments which are entirely carried by the core. This is a peculiar aspect
of the holographic baryon which is not present in the baryon of the Skyrme model~\cite{KIM}. A
related observation regarding the holographic magnetic currents and their consequences on
the magnetic form factor of the holographic baryon was recently pointed out in~\cite{COHEN}.
Both vector meson dominance~\cite{VDM} and the Cheshire Cat principle~\cite{CHESHIRE}
emerge naturally from holography. The dual skyrmion is just the holonomy of the flavor
instanton gauge field ${\bf A}$ along the conformal direction $Z$
\begin{equation}
U(x)=P{\rm exp}\left(\int_{-\infty}^{+\infty}{\bf A}_Z(x,Z)dZ\right)
\label{2}
\end{equation}
thereby dynamically realizing a prescient suggestion made long ago by Atiyah and Manton~\cite{MANTON}.
In leading order, the skyrmion mass is the BPS instanton mass $M=8\pi^2\kappa$ with
$\kappa=N_c\lambda/(216\pi^3)$ in hQCD~\cite{SAKAI}.
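Restoring the Kaluza-Klein scale $M_{\rm KK}$, this leading order mass reads
\begin{equation}
M = 8\pi^2\kappa\,M_{\rm KK} = \frac{\lambda N_c}{27\pi}\,M_{\rm KK} ,
\end{equation}
which is ${\cal O}(N_c\lambda)$, as expected for a soliton at strong coupling.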
\section{Dyonic Salt}
At low temperature, baryons are expected to crystalize at large $N_c$ irrespective
of the 't Hooft coupling $\lambda$ since the Coulomb-like ratio is
$\Gamma\approx N_c^2$. In holography, we need to consider a 3 dimensional
lattice of instantons on $R^3\times R$ or an instanton in
$T^3\times R$ with $R$ being the radial direction transverse
to $D4$ in the warped Sakai-Sugimoto (SS) model~\cite{SAKAI}.
For simplicity, let us first consider an instanton in flat space. The instanton on $T^1\times R^3$
with nontrivial holonomy was discussed in~\cite{leeyi,lee,baal,DIAKONOV}. The key observation is that the
holonomy causes the instanton to split into two electrically charged monopoles or dyons of opposite
charges. The holonomy fixes the size and charge of the monopole. Since this instanton splitting
mechanism is central to our construction below we now summarize it in flat space.
Following \cite{leeyi}, we consider a D4-D0 system with a stack of $N_f$
D4 extended along $x^1,x^2,x^3,x^4$, with $x^4$ compactified on a circle of length $2L$.
The instanton is D0 and is embedded in D4. The periodicity in $x^4$ makes
the $N_f=2$ instanton a caloron. We now define a T-duality along the $x^4$ direction, under which D0
becomes D1 along $x^4$ and D4 becomes D3 along $x^1,x^2,x^3$. If the holonomy along $x^4$ is
non-trivial, then D1 stretches between two Higgsed D3's.
Since we have two ways to connect D3's, we have two monopoles
as we show in Fig.~1. Therefore the instanton is mapped onto two monopoles
by T-duality with fractional topological charges $v$ and $1-v$ adding to 1.
These monopoles carry electric charges so they are dyons~\cite{baal}.
The masses and the contributions to the
instanton charge of the dyons are fixed by $v$,
the Higgs vev, or equivalently the strength of the holonomy. For BPS instantons and therefore BPS dyons,
the vev is arbitrary modulo $2\pi/2L$.
\begin{figure}[]
\begin{center}
\vskip 1.5cm
\includegraphics[width=10cm,height=4.5cm]{caloron.eps}
\caption{D0 instanton in D4 without holonomy (left) and with holonomy (right) by T-duality. The
holonomy splits the instanton into a pair of D1 monopoles. See text.}
\label{caloron}
\end{center}
\end{figure}
What will happen if we consider a 3 dimensional array of instantons along the $x^1,x^2,x^3$ directions, or an instanton on $T^3\times R$? For simplicity we set the periodicity to be $2L$, with an initial instanton arrangement in the fcc configuration.
The fcc arrangement at low density is energetically favored over the simple cubic arrangement~\cite{park-vento}. For $N_f=2$ and by choosing the T-duality along $x^3$,
we arrive at a polarized pair of dyons of charges $e=g=\pm 1$ (in units of $T_3$)
along $x^3$. The holonomy can be chosen along $x^1$ or $x^2$ with similar
conclusions. For a fixed holonomy in the flavor gauge field ${\bf A}_\mu$
\begin{equation}
\left<{\bf A}_{3}\right>=\frac{2\pi}{2L}\,v\,T_3
\label{HOLO}
\end{equation}
the dyon masses are $M_+=MB_+=Mv$ and $M_-=MB_-=M(1-v)$ -- where $v$ is the Higgs vev -- with $B_\pm$ their
topological charges respectively~\cite{leeyi,lee,baal}.
We recall that $B_++B_-=v+(1-v)=1$ is the instanton
number. Here $2L$ is the cell size of the
initial fcc instanton arrangement. To order $N_c\lambda\approx \kappa$
the instantons and dyons are BPS with an arbitrary value of the vev $0\leq v<1$. The dyonic
crystal is salt-like with intertwined lattices of topological charges $v$ and $(1-v)$ at the
vertices. In Fig.~2 we display the fcc instanton crystal (left) as it splits to a bcc
crystal of dyons under the action of the spatial holonomy along $x^3$.
The instantons cease to be BPS at next to leading order. We now suggest that,
due to the interactions of the non-BPS objects,
the vev $v$ will be fixed to $v=1/2$ to minimize the energy, turning the dyonic salt into a bcc crystal of half-instantons.
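Heuristically, the total BPS mass $M_++M_-=M$ is independent of $v$, while the
non-BPS interaction energy per cell must be symmetric under the exchange of the
two dyon species, $E(v)=E(1-v)$, so that
\begin{equation}
\left.\frac{\partial E}{\partial v}\right|_{v=1/2} = 0 ,
\end{equation}
and $v=1/2$ is the natural stationary point selected by the interactions. That
it is a minimum is supported by the balancing argument of the next section.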
\begin{figure}[]
\begin{center}
\vskip 0.5cm
\includegraphics[width=10cm,height=4.5cm]{salt1.eps}
\caption{fcc instanton crystal at low density (left);
bcc dyon crystal at high density (right) with $a=2L$.
The spatial holonomy splits each instanton to a pair of
dyons (+,+) (black) and $(-,-$) (green) turning the
fcc to a bcc crystal. See text.}
\label{LATTICE}
\end{center}
\end{figure}
\section{BCC Crystal of Half-Instantons}
Although the general instanton solution in warped hQCD is unknown,
to leading order in $\kappa$, the flavor instanton in hQCD is the flat space instanton with
zero size and BPS mass $M$. At next to leading order $N_c\lambda^0$ the instantons
acquire a size $\rho\approx 1/\sqrt{\lambda}$ and cease to be BPS as $M$ gets
corrected~\cite{SAKAI}. As a result, the instantons of hQCD interact at next to next to leading
order~\cite{NN}. The core interaction is repulsive at short distances (in units of $M_{KK}$)
\begin{equation}
V_\omega(r)\approx \frac{27\pi}{2}\left(\frac{N_c}{\lambda}\right)\,\frac 1{r^2}
\label{OMEGA}
\end{equation}
and pions dominate at large distances. Equation (\ref{OMEGA}) originates from the 4-dimensional
(topological) Coulomb repulsion. It is at the origin of the $n_B^{5/3}$ equation of state
observed in~\cite{WIGNER} for holographic instantons on $S^3$.
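The scaling is immediate: at baryon density $n_B$ the typical instanton
separation is $r\approx n_B^{-1/3}$, so that (\ref{OMEGA}) contributes
\begin{equation}
\frac{E}{N} \approx V_\omega\left(n_B^{-1/3}\right) \propto n_B^{2/3} ,
\qquad \frac{E}{V} \propto n_B^{5/3} ,
\end{equation}
where $E/V$ is the corresponding energy density.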
At next to leading order the instantons and their progeny dyons are non-BPS
and therefore interact. The nature of the dyon interactions is in general involved~\cite{leeyi}.
Fortunately, for our dyonic crystal the details of the dyonic interactions are not important in
the non-BPS regime. Indeed, once the instantons split into $e=g=\pm 1$ dyons as in
Fig.~1, the Coulomb nature of the underlying charges will cause them to
arrange in a salt-like configuration to maximally screen the $+$ and $-$ charges, and therefore
balance the Coulomb forces.
The value $0\leq v<1$ of the holonomy is thus far arbitrary. We now note that the topological
repulsion (\ref{OMEGA}) between instantons of unit charge triggers dyons of topological
charges $v$ and $(1-v)$. The balancing of this repulsion in conjunction with the balancing of the
Coulomb forces is simply realized for $v=(1-v)$ irrespective of the details of the dyon interactions.
As a result, all dyons have equal topological charges $v=1/2$, equal masses $M/2$ and
charges $e=g=\pm 1$. The arrangement is salt-like with a unit cell $2L$. This is a bcc crystal with
one half-instanton, or dyon, per cubic cell of side $L$. The instanton or baryon density is
\begin{equation}
n_B=\frac{1/2}{L^3}=\frac{4}{(2L)^3}
\label{DENS}
\end{equation}
which is commensurate with the initial density of fcc instantons, namely a unit cell of side $2L$
containing 4 instantons. Hence our initial choice of the fcc configuration for the instantons at low
density. Equation (\ref{DENS}) reflects the half-instanton symmetry of the bcc dyonic salt,
which is dual to the half-skyrmion symmetry on the boundary.
To estimate the density at which the transition from the fcc crystal of instantons to the bcc crystal
of dyons occurs, we note that for the flat space periodic instanton with fixed holonomy~\cite{baal,leeyi}, which we shall refer to as KvLL,
the separation distance $R_{+-}$ of the dyons is~\cite{baal}
\begin{equation}
R_{+-}=2\pi\,\frac{\rho^2}{2L}
\label{RPM}
\end{equation}
with $\rho$ the KvLL instanton size with zero holonomy.
We recall that our unit cell is $2L$ for the instantons in the fcc crystal.
$v=1/2$ yields $R_{+-}=L$, which is the nearest neighbor distance in the bcc configuration.
Thus $L=\sqrt{\pi}\rho$, or a critical density $n_B=1/(2(\sqrt{\pi}\rho)^3)$.
In hQCD, the size of the
flavor instanton is tied to its rotational inertia $\rho^2=2{\bf I}/M$~\cite{instanton-baryon,KIM}. The
moment of inertia parameter ${\bf I}$ follows by collectively quantizing the flavor instanton as a baryon.
Specifically ${\bf I}=3/(2\Delta)$ with $\Delta=M_\Delta-M_N$ the delta-nucleon mass splitting.
Empirically, $1/{\bf I}\approx 200$ MeV. Setting the instanton BPS mass to the nucleon mass
$M\approx 1$ GeV, it follows that $\rho\approx \sqrt{2/5}\,$ fm and $L\approx 1$
fm. So the fcc to bcc transition takes place at $n_B\approx 1/2\,{\rm fm}^{-3}$ or $n_B\approx 3\,n_{NM}$
with $n_{NM}$ the nuclear matter density. The value of $L\approx 1$ fm is amusingly close
to the numerical value of $L=1.08$ fm reported in~\cite{park-vento} for a half-skyrmion cubic crystal.
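For orientation, the numerical chain is short: with ${\bf I}=3/(2\Delta)$ and
$\Delta\approx0.3$ GeV,
\begin{equation}
\rho = \sqrt{\frac{2{\bf I}}{M}} = \sqrt{\frac{3}{M\Delta}}
\approx \sqrt{\frac{3}{(1\,{\rm GeV})(0.3\,{\rm GeV})}}
\approx 3.2\,{\rm GeV}^{-1} \approx 0.62\,{\rm fm} ,
\end{equation}
from which $L=\sqrt{\pi}\,\rho\approx1.1$ fm, consistent with the rounded
values quoted above.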
The energy gained in the fcc to bcc transition can be estimated using the Madelung
constant of salt~\cite{madelung}, ignoring for simplicity the contributions from the topological
repulsion and the mesonic cloud. The energy per instanton, which is dual to the energy per baryon, is $E/N=M-\Delta$,
with
\begin{equation}
\Delta=(e^2+g^2)\,(T^3)^2\,\left(\int_{-\rho}^{+\rho}\frac {dZ}{L^2+Z^2}\right)\,M_D=
(e^2+g^2)\,(T^3)^2\,\frac{2\,{\rm tan}^{-1}(\rho/L)}{L}\,M_D
\label{BIN}
\end{equation}
where $\Delta$ is the energy to bring a dyon into a bcc configuration. Here
$M_D=1.748$ is the Madelung constant for salt in units of the nearest neighbor
distance $L$. The $Z$-integration in (\ref{BIN}) is over the conformal direction,
since the crystal does not extend in this direction. It is cut off by the KvLL instanton size $\rho$
at zero holonomy.
For $e=g=1$, $\rho=L/\sqrt{\pi}$ and $L\approx 1$ fm, the energy to assemble the core dyons in
the bcc configuration is $\Delta\approx 180$ MeV. We expect this estimate to change somewhat by
including the core $\omega$ repulsion and the pion attraction, which tend to balance overall.
Our core estimate of $\Delta$ is surprisingly close to the $220$ MeV binding for the cubic crystal
of half-skyrmion symmetry in~\cite{park-vento}.
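For orientation, inserting $e=g=1$, $(T^3)^2=1/4$ and $\rho/L=1/\sqrt{\pi}$
into (\ref{BIN}) gives
\begin{equation}
\Delta = \frac{1}{2}\,\frac{2\,{\rm tan}^{-1}(1/\sqrt{\pi})}{L}\,M_D
\approx 0.90\,\frac{\hbar c}{L} \approx 180\ {\rm MeV}
\end{equation}
for $L\approx1$ fm, which is the core estimate quoted above.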
\section{Chiral Symmetry}
Does the dyonic salt configuration in bulk correspond to a chirally restored phase at high density?
This issue in the Skyrme model is rather elusive, as most numerical calculations
show only a restoration of chiral symmetry on the average per cell. In this section we suggest
a geometrical transition in bulk in support of the restoration of chiral symmetry following the splitting
of the instanton into two dyons.
In the spontaneously broken phase, the D8+$\overline{{\rm D}8}$ configuration is depicted in Fig.~3a.
The baryon vertex corresponds to a wrapping of a D4 brane around $S^4$. This D4 wrapping is analogous
to a point instanton in D8+$\overline{{\rm D}8}$ and is dual to a baryon on the boundary.
The baryon vertex is attached to $N_c$ strings. Pions as Goldstone
modes are fluctuations of the holonomy along the curve $C$, as also shown in Fig.~3a. The bare skyrmion with no meson cloud
is just the holonomy running through the core instanton. The cloud corresponds to fluctuations on the flavor brane and is tied to the core~\cite{KIM}.
Geometry alone makes it plausible that when the instanton splits into
two dyons, the geometry of Fig.~3a is replaced by the geometry of Fig.~3b, whereby the D8 and $\overline{\rm D 8}$ branes separate from each other, as we make explicit in Fig.~4 for a single instanton.
As a result, the baryon vertex fractionates with $N_c/2$ strings joining D8 and $N_c/2$ strings joining $\overline{{\rm D}8}$
as in Fig.~4b. This equal opportunity splitting of
D4 corresponds to two half instantons.
The essential question here is what happens to the compact D4 in D8, or the instanton wrapping $S^4$.
It is essentially similar to what happens to D0 in D4 for the caloron in Fig.~\ref{caloron}, as we discussed earlier.
To understand this, we recall that near the D8's, $S^4$ is compactified and can be ignored. Thus D8 is essentially
a D4 stretching in $x^1,x^2,x^3$ and the conformal direction $Z$, while the baryon vertex is a D0.
Notice that D8 is along $x^4$ near the tip; therefore the $Z$ direction is a point along the $x^4$ direction.
Taking the T-dual transform along $x^4$ turns D0 into D1 and D4 into D3.
Summarizing: we obtain the D3-D1 configuration by taking the T-duality along the $x^4$ direction,
thereby explaining the splitting mechanism into dyons as shown in Fig.~4c. Throughout, the compact $S^4$
is a spectator.
Under the splitting of D8 and $\overline{{\rm D}8}$, the right and left Wilson lines decouple
\begin{equation}
U^R_{1/2}(x)= P\exp\left(i\int_0^{+\infty} \,{\bf A}_Z(x,Z)\,dZ\right), \quad\quad
U^L_{1/2}(x)=P\exp \left(i\int^0_{-\infty}\, {\bf A}_Z(x,Z)\,dZ\right)
\end{equation}
with each sourced by $N_c/2$ strings. In each of the probe branes, the half-instantons or dyons are
repulsive through their leading core $\omega$ repulsion much like their mother instantons. They form two distinct
$L,R$ crystals. On the boundary, which is our world, the two half-instanton crystals superimpose in a maximally
repulsive configuration which is our bcc dyonic salt of half-instantons. We note that the $L,R$ crystalline structures are commensurate
with the $e=g=\pm 1$ dyonic structures as both are interchangeable by parity.
The current geometrical description is plausible but clearly not rigorous and awaits more detailed computational analysis in dense hQCD. The transition from the confined geometry to the deconfined geometry with half instantons is perhaps of the type discussed in~\cite{CHICAGO} whereby the coupling constant of the compactified boundary gauge theory is dialed to decrease in analogy with the running of the QCD coupling at high density.
\begin{figure}[]
\begin{center}
\includegraphics[width=9cm,angle=-90]{fig2.eps}
\vskip -3cm
\caption{Dense baryonic matter: (a) low density and (b) high density. See text.}
\label{DB1}
\end{center}
\end{figure}
\begin{figure}[]
\begin{center}
\includegraphics[width=9cm,angle=-90]{fig1.eps}
\end{center}
\vskip -1cm
\caption{Fractionalization of the bulk instanton into two dyons. See text.}
\label{DB2}
\end{figure}
\section{Conclusions}
In \cite{LR2}, an analogy was drawn between the half-skyrmion phase in dense baryonic matter and the half-skyrmion phase in the N\'eel magnet-VBS paramagnet transition in (2+1) D condensed matter physics~\cite{senthil}. This analogy could be better illustrated in terms of the instanton-dyon transition proposed in this paper. In the condensed matter case, the half-skyrmions confined in a skyrmion split into their constituents by the suppression of the ``monopoles" (i.e., the hedgehogs in 3D) of the emergent $CP^1$ gauge field, giving rise to a non-Ginzburg-Landau-Wilson phase that intervenes between the initial and final phases. In holography, the half-instantons or dyons confined into the original instanton split at higher density due to the emergence of strong holonomies in the spatial directions.
We have presented general arguments for why an fcc initial arrangement of instantons should split into a bcc crystal of half-instantons
or dyons. The Coulomb electric and magnetic forces between the dyons cause them to naturally reorganize in a salt-like configuration
for maximum screening. The net repulsive topological interaction between the individual dyons balances naturally when all dyons in
the salt-like configuration carry equal topological charges, and thus equal masses. The fcc initial arrangement, while optimal for the dyon
splitting, is not necessary in general. Indeed, once a crystal is formed at very low density, say an fcc, then by increasing
the density we increase the strength of the spatial holonomies, thereby causing the instantons to fractionate. The salt-like configuration of equal dyons follows by balancing the Coulomb and topological forces. Conversely, the bcc configuration at high
density smoothly converts to an fcc configuration at lower density through the matching of the densities as in Eq.~(\ref{DENS}), making it
energetically the most favorable.
Using the KvLL instanton to assess the dyon
separation in an instanton with a finite holonomy, we have shown that the transition from an fcc to a bcc configuration of dyons is
expected when the dyons are about 1 fm apart, or for baryon densities of the order of 3 times nuclear matter density.
This transition is also supported by a geometrical transition in bulk whereby the D4 baryon vertex on $S^4$ fractionates by $Z_2$. We have
presented plausible geometrical arguments in favor of the restoration of chiral symmetry upon the formation of the half-instanton
bcc crystal. The latter is dual to the simple cubic crystal of half-skyrmions.
While the arguments in favor of a skyrmion crystal of half-skyrmion symmetry are not new, we believe that our holographic
accounting of this phenomenon in bulk is. Moreover, the geometrical character of the transition we have suggested together
with the physical nature of the splitting of instantons into dyons provides the robustness needed for a topological phase
transition in QCD at large $N_c$. Our simple estimates for the density and energy per baryon for the occurrence of the bcc
dyonic crystal are surprisingly close to those obtained using detailed numerical calculations with skyrmions~\cite{park-vento}.
Most of our estimates involve the physics of the baryon cores which is uniquely described by instantons in holography. This
is in contrast to the Skyrme model, where the core physics is at the mercy of the choice of the stabilizing term (say the fourth
order derivative term for instance) which is not fixed by current algebra. As we mentioned earlier, we expect the core estimates to be
altered somewhat by the addition of the cloud contributions~\cite{KIM}, as they are likely to balance overall. The current arguments
complement the instanton holographic transition reported on $S^3$~\cite{WIGNER}.
We should stress that the phenomenon described in this paper does not address color deconfinement in QCD. The half-instantons we have described are hadrons in a color-confined phase, with, however, chiral symmetry restored, as is the case with the half-skyrmions in cc~\cite{crystal2,park-vento}. As suggested in \cite{LR2}, this could be identified with the quarkyonic phase conjectured for large $N_c$ QCD~\cite{LARRY}.
Finally, the geometrical arguments we have presented as well as our dyonic salt configuration may be of relevance
to the finite-temperature confinement-deconfinement transition in QCD with colored instead of flavored
instantons and antiinstantons~\cite{DIAKONOV,ARIEL}.
While at low temperature QCD may be viewed as an instanton-antiinstanton (caloron-anticaloron) liquid, it was suggested
in~\cite{DIAKONOV} that at a critical temperature the (KvLL) instantons and antiinstantons
may ionize to BPS dyons. The ionic phase may be a plasma of dyons for sufficiently
weak gauge coupling or high temperature~\cite{DIAKONOV,ARIEL,MAXIM}.
However, near the critical temperature, the gauge coupling is still strong enough
to warrant a liquid or perhaps even a weakly crystallized form of dyons. We will return to some of these issues next.
\section{Acknowledgments}
IZ thanks Hanyang University for their kind hospitality.
This work was supported by the WCU project of Korean Ministry of Education, Science and Technology (R33-2008-000-10087-0).
IZ was supported also in part by US-DOE grants DE-FG02-88ER40388 and DE-FG03-97ER4014.
\section{\label{sec:level1}Introduction}
Quantum fractional statistics has drawn considerable interest in condensed matter physics since the early theoretical contributions \cite{leinaas1977theory,wilczek1982quantum,wilczek1982magnetic,halperin1984statistics,haldane1991fractional,wu1984general,wu1984multiparticle} and because of its ability to describe physical phenomena such as fractional quantum Hall effect \cite{laughlin1983anomalous,laughlin1983quantized,halperin1984statistics}, spinor excitations in quantum antiferromagnets \cite{anderson1987resonating, haldane1991spinon}, high-temperature superconductivity \cite{laughlin1988rb}, quantum systems in low dimensions \cite{batchelor2006one,paredes2004tonks,kinoshita2004observation,jacqmin2011subpoissonian} and, more recently, its implications in the field of cosmology and dark matter.
Concerning the quantum physics of strongly interacting many-particle systems, in a seminal work, Haldane \cite{haldane1991fractional} introduced fractional exclusion (FE) statistics and the definition of the statistical exclusion parameter $g$, $0\leq g\leq 1$, with Bose-Einstein (BE) and Fermi-Dirac (FD) being the boundary statistics for $g=0$ and $g=1$, respectively. Later, Wu \cite{wu1994statistical} derived the statistical distribution for an ideal gas of fractional-statistics particles. These papers were a major contribution to the description of quantum systems in one and two dimensions, such as anyons in a strong magnetic field in the lowest Landau level \cite{wilczek1990fractional} and excitations in pure Laughlin liquids \cite{laughlin1983anomalous, arovas1984quantumhalleffect, camino2005realization}.
On the other hand, the classical statistical mechanics of interacting large particles of arbitrary size and shape is a relevant problem, since it is a major challenge to properly account for the generally complex entropic contribution to the free energy. Many physical systems, ranging from small polyatomics and alkanes to protein adlayers, exhibit these characteristics. The multisite occupancy problem has long been addressed, from the Flory-Huggins approximation \cite{flory1942thermodynamics,huggins1942some,huggins1942thermodynamic,huggins1942viscosity} for binary solutions to lattice gases of particles of arbitrary size and shape made of a number $k$ of linked units ($k$-mers) \cite{dimarzio1961statistics}, and it has been referred to as the prototype of the lattice problem \cite{lieb1974exactly}. Among the motivations we can also mention the modelling of Cooper and vortex pairs \cite{cooper1956bound, kosterlitz1972long}, cluster diffusion on regular surfaces \cite{tsong1980migration,lin1990diffusion} and the thermodynamics of polyatomic adlayers \cite{paserba2001,strange2016,lopatina2018}, which represents a current open problem in the statistical physics of gas-solid interfaces.
The FE statistics and Wu's distribution were already reinterpreted in the domain $g > 1$ to model the thermodynamics of ideal lattice gases of linear $k$-mers behaving statistically like ``superfermions'' \cite{riccardo2004fractional}, resulting in the exact one-dimensional (1D) solution for $g=k$ \cite{ramirez1999statistical}. As shown later, in 1D no effective correlations between states arise; however, they do in two or higher dimensions, as considered here.
This work addresses the statistical mechanics of identical particles in equilibrium occupying a set of spatially correlated states and obeying statistical exclusion in a confined region of space. We use the term multiple exclusion to refer to the fact that, because of spatial correlations, the states accessible to single particles can be simultaneously excluded by more than one particle in the system; it is not related to mutual exclusion as clearly defined by Haldane and Wu \cite{haldane1991fractional,wu1994statistical} to refer to exclusion statistics between different species within a space region.
A classical realization of multiple exclusion phenomena is given by the physical models of lattice gases of $k$-mers.
In what follows, we develop a statistics for systems of many particles with state exclusion between spatially correlated states, which reduces to Haldane-Wu's FE for statistically independent states (constant exclusion $g$) and, correspondingly, to the FD and BE ones.
Let us consider a system of volume $V$ containing $N$ identical particles having $G$ states accessible to a single particle. The canonical partition function is $Q(N,T,V)= \sum_{i} e^{-\beta H_{i}(N)}$, where $H_{i}(N)$ denotes the Hamiltonian of the $i^{th}$ state and $\beta=1/k_{b}T$ ($k_b$ is the Boltzmann constant). For the sake of simplicity, we address a homogeneous system of $N$ non-interacting identical particles in the volume $V$ (non-interacting other than the fact that the states they can occupy are not independent of each other). Defining $d_{N}$ as the number of states in $V$ accessible to the $N^{th}$ particle after $(N-1)$ have been added to $V$, we have $Q(N,T,V)= W(N)\, e^{-\beta N U_{o}}\, q_{i}^{N}$ with \cite{haldane1991fractional}
\begin{equation}
W(N)= \frac{(d_{N}+N-1)!}{N! \ (d_{N}-1)!}
\end{equation}
where $U_{o}$ and $q_{i}$ are the energy per particle and the internal partition function, respectively. In the limit $n=\lim_{N,G \to \infty} N/G$, the thermodynamic functions are
\begin{equation}{\label{eq.ftilde}}
\begin{aligned}
\beta \tilde{F}(n,T)&=\lim_{N,G \to \infty}\frac{\beta F(N,T,V)}{G}=-\lim_{N,G \to \infty}\frac{\ln Q(N,T,V)}{G} \\
&=\beta nU_{o}-[\tilde{d}(n)+n] \ln[\tilde{d}(n)+n] + \tilde{d}(n) \ln \tilde{d}(n)\\
& \ \ + n \ln n
\end{aligned}
\end{equation}
\begin{equation}{\label{eq.stilde}}
\begin{aligned}
\frac{\tilde{S}(n,T)}{k_{b}}&=\lim_{N,G \to \infty}\frac{S(N,T,V)}{k_{b}\,G} \\
&=[\tilde{d}(n)+n] \ln[\tilde{d}(n)+n] - \tilde{d}(n) \ln \tilde{d}(n) - n \ln n
\end{aligned}
\end{equation}
and the chemical potential, $\mu=\left(\frac{\partial \tilde{F}}{\partial n}\right)_{T,V}$, satisfies
\begin{equation}{\label{eqmu}}
K(T) \ e^{\beta \mu}= \frac{n \ \left[ \tilde{d}(n) \right]^{\tilde{d}'(n)}}{\left[ \tilde{d}(n)+n\right] ^{\tilde{d}'(n)+1 }},
\end{equation}
where $\tilde{d}(n)=\lim_{N,G \to \infty} d_{N}/G$, $\tilde{d}'(n)= d[\tilde{d}(n) ]/dn$ and $K(T)=e^{-\beta U_{o}} \ q_{i}$.
From Eq. \eqref{eqmu}, two related quantities are defined which will be later useful to fully interpret the state exclusion under spatial correlations. If the system of particles in $V$ is now assumed to exchange particles with a bath at chemical potential $\mu$ and temperature $T$, the time evolution of the state occupation $n$ is given by
\begin{equation}{\label{eq.kinetic}}
\frac{dn}{dt}= P_{o} \ W_{o \to \bullet}- P_{\bullet} \ W_{\bullet \to o},
\end{equation}
where $P_{o}$ ($P_{\bullet}$) is the average fraction of empty (occupied) states in $V$ and $W_{o \to \bullet}$ ($W_{\bullet \to o}$) the transition rate for an empty (occupied) state to become occupied (empty). In equilibrium, $dn/dt=0$, $W_{o \to \bullet}/W_{\bullet \to o}=P_{\bullet}/P_{o}=e^{\beta(\mu-U_{o})}$, with $P_{\bullet}=n$. From Eqs.~\eqref{eqmu} and \eqref{eq.kinetic}
\begin{equation}{\label{eq.Po}}
P_{o}(n)=P_{\bullet}(n) \ e^{-\beta (\mu-U_{o})}= \frac{\left[ \tilde{d}(n)+n\right] ^{\tilde{d}'(n)+1 }}{\left[ \tilde{d}(n) \right]^{\tilde{d}'(n)} }.
\end{equation}
In addition, we introduce a new useful quantity, namely the exclusion spectrum function $\mathcal{G}(n)$, being the average number of excluded states per particle at occupation $n$ \cite{jjriccardo2018tesislic}. Thus, $\mathcal{G}(n)=\left\langle \frac{1}{N} \sum_{i=1}^{G} e_{i} \right\rangle$, i.e.,
\begin{equation}{\label{eq.Gn}}
\begin{aligned}
\mathcal{G}(n)&=
\left\langle \frac{G}{N}\frac{1}{G} \sum_{i=1}^{G} e_{i} \right\rangle=\frac{1}{n}\left[ 1-P_{o}(n)\right]
=\frac{1}{n}-\frac{1}{e^{\beta(\mu-U_{o})}}
\end{aligned}
\end{equation}
where $e_{i}=1$ if the state $i$ out of $G$ is either occupied or excluded by any of the $N$ particles, or $e_{i}=0$ otherwise, and the average is assumed to be taken over the canonical ensemble. The identity $\left\langle\frac{1}{G}\sum_{i=1}^{G} e_{i} +P_{o}\right\rangle=1$ follows from the definition of $P_{o}$. $\mathcal{G}(n)$ characterizes the density dependence of the state exclusion for a spatially correlated many-particle system from zero-density to saturation.
It is worth noticing that the rightmost side of Eq. \eqref{eq.Gn} also provides an operational formula to infer the exclusion spectrum $\mathcal{G}(n)$ from experiments. For instance, for adsorbed species under equilibrium conditions ($\mu,T$), $n$ is related to the surface coverage (the so-called adsorption isotherm) and $U_{o}$ is obtained from the low-density regime of $n(\mu,T)$.
Spatially correlated states leading to multiple exclusion can be visualized, for instance, in the classical system of linear particles occupying sites on a square lattice (Fig. 1). Given the set of states for a single particle containing all its possible configurations on the lattice, clearly an isolated dimer ($C_{1}$) occupies one state and excludes six more states from being occupied by other particles. For a larger number of particles on the lattice, there exist configurations in which some states are excluded simultaneously by neighboring particles ($C_{2}$, $C_{3}$ and $C_{4}$). This is called here ``multiple exclusion'', arising from spatial correlation between states, and it has significant effects on the thermodynamics of the system.
\begin{figure}[h]
\centering
\includegraphics[width=1.00\columnwidth]{fig1.eps}
\caption{Local configurations of dimers on a square lattice.
$C_{1}$ shows the states (dashed) excluded by an isolated particle. $C_{2}$, $C_{3}$ and $C_{4}$ depict states (dashed) multiply excluded by neighboring dimers, 1, 2 and 6 for $C_{3}$, $C_{2}$ and $C_{4}$, respectively.}
\label{Fig.ejemplo.de.exclusion.k=2}
\end{figure}
It is known that the exact counting of configurations for an arbitrary number of particles on the lattice seems a hopeless task and it is still a relevant open problem in classical statistical mechanics. From here on, $d_{N}$ [$\tilde{d}(n)$] is obtained through an approximation extending Haldane-Wu's state-counting procedure to a system of correlated states, which determines the analytic multiple exclusion statistical distribution and the thermodynamics of the system. Given that the total number of states in $V$ is $G$, as we add particles from the $1^{st}$ to the $(N-1)^{th}$, the recursion relations can be written: $d_{1}=G$, $d_{2}=d_{1}-\mathcal{N}_{1},...,d_{N}=d_{N-1}-\mathcal{N}_{N-1}$, where $\mathcal{N}_{j}$ is the number of states occupied plus excluded only by the $j^{th}$ particle. Considering that the $j^{th}$ particle added to $V$ occupies one state and in addition excludes a yet undetermined number of states out of $G$, we write the relation $\mathcal{N}_{j}=1+\mathcal{G}_{cj}$, where $\mathcal{G}_{cj}$ is the number of states excluded only by the $j^{th}$ particle [it does not account for the states excluded by $j$ which were already excluded by any of the particles $1,...,(j-1)$ because of the spatial correlations, or so-called multiple state exclusion]. $\mathcal{G}_{cj}$ has to be rationalized as an average over all the configurations of particles $1,...,j$ on the $G$ states. For $j \to N$ and $N,G \to \infty$ with $N/G=n$, it is straightforward that $\mathcal{G}_{cj}$ will converge to a value depending only on the ratio $N/G=n$ (as observed in simulation). Now we establish the following ansatz to determine $d_{N}$ \cite{jjriccardo2018tesislic}
\begin{equation}{\label{eq.Nj}}
\mathcal{N}_{j}=1+\mathcal{G}_{cj}=1+g_{c}\dfrac{d_{j}}{G},
\end{equation}
where $\mathcal{G}_{cj}=g_{c}\dfrac{d_{j}}{G}$, i.e., a system-dependent exclusion constant $g_{c}$ times the fraction $\dfrac{d_{j}}{G}$ of states that can be excluded by particle $j$. It is worth mentioning that the second term in Eq. (\ref{eq.Nj}) resembles a sort of mean-field or effective-field approximation on the set of states which in the limit $N,G \to \infty$ will depend only on the mean occupation number $n=N/G$. Based on Eq. (\ref{eq.Nj}) we can rewrite the recursion relations as: $ d_{1}=G, d_{2}=d_{1}-\left[ 1+g_{c} \frac{d_{1}}{G} \right], d_{3}=d_{2}-\left[1+g_{c} \frac{d_{2}}{G} \right]=G\left[ 1-\frac{g_{c}}{G}\right]^{2}-\left[ 1-\frac{g_{c}}{G}\right]-1,...,d_{N}=d_{N-1}-\left[1+g_{c} \frac{d_{N-1}}{G} \right]=G \left[ 1-\frac{g_{c}}{G}\right]^{N-1}-\sum_{i=0}^{N-2} \left[ 1-\frac{g_{c}}{G}\right]^{i}$.
Taking the limit $\tilde{d}(n)=\lim_{N,G \to \infty}d_{N}/G$ yields $\tilde{d}(n)=e^{-n g_{c}}- n$. More generally, $\tilde{d}(n)$ is defined up to two constants, say $\tilde{d}(n)=C_{1} e^{-n g_{c}}-C_{2} n$, provided it satisfies the boundary conditions $\tilde{d}(0)=1$ and $\tilde{d}(n_{m})=\tilde{d}(1/g)=0$, where the usual Haldane exclusion constant $g$ is used here to denote the number of states excluded per particle at maximum occupation, $n_{m}=N_{m}/G=(G/g)/G=1/g$. Thus, $C_{1}=1$ and $C_{2}=g e^{-\frac{g_{c}}{g}}$ and finally
\begin{equation}{\label{eq.dn}}
\tilde{d}(n)=e^{-ng_{c}}-ge^{-\frac{g_{c}}{g}}n.
\end{equation}
We may even think of $g_{c}$ in Eq. \eqref{eq.Nj} as depending on $j$, i.e., $g_{cj}$. The recursion relations then lead to $d_{N}=d_{N-1}\left[1-g_{c(N-1)}/G \right]-1=G \prod_{j=1}^{N-1}\left[1-g_{cj}/G \right] - \sum_{i=2}^{N-1}\prod_{j=i}^{N-1} \left[1-g_{cj}/G \right]-1$. If $g_{cj}=g_{cN}+\Delta_{j,N}$, where $\Delta_{j,N}$ is finite, then $d_{N}=G\left[1-g_{cN}/G \right]^{N-1}-\sum_{j=0}^{N-1}\left[1-g_{cN}/G \right]^{j}+\mathcal{O}(1/G)$. In the limit $\lim_{N,G \to \infty}d_{N}/G$, this yields $\tilde{d}(n)=e^{-ng_{c}(n)}-n$ where $g_{c}(n)=\lim_{N,G \to \infty}g_{cN}$. From this, the ansatz \eqref{eq.Nj} is the simplest assumption on $g_{c}(n)$, $g_{c}(n)=g_{c}=$ constant, through which state exclusion is introduced in the state counting in the presence of spatial correlations. This results in a fairly accurate approximation, as shown by comparing predicted observables and simulations for lattice gases of linear particles.
The exclusion constant $g_{c}$ is fully determined by the zero-density limit of the mean number of states excluded per particle, $\mathcal{G}(n)$. Accordingly, from Eqs.~\eqref{eq.Po}, \eqref{eq.Gn} and \eqref{eq.dn}
\begin{equation}{\label{eq.Go}}
\begin{aligned}
\mathcal{G}_{o}=\lim_{n\to 0}\mathcal{G}(n)=\lim_{n\to 0} \left[1-P_{o}(n)\right]/n=2g e^{-g_{c}/g}+2g_{c}-1
\end{aligned}
\end{equation}
$\mathcal{G}_{o}$ being the state exclusion at zero density, i.e., the number of states excluded by an isolated particle in the system. Moreover, $\lim_{n\to n_{m}}\mathcal{G}(n)=\lim_{n\to n_{m}} \left[1-P_{o}(n)\right]/n=g$. The two exclusion constants, $g_{c}$ and $g$ in Eq. \eqref{eq.dn}, come from the infinite-dilution and saturation limits of $\mathcal{G}(n)$, respectively.
From here on, we analyze ideal lattice gases of linear $k$-mers under the proposed framework. By linear $k$-mers we mean linear rigid particles made of $k$ identical beads occupying $k$ consecutive sites (one bead per site) on a regular lattice; this is, for instance, a simple model for adlayers of small polyatomics/hydrocarbons. For $k$-mers on a one-dimensional (1D) lattice, $g=k$, $\mathcal{G}_{o}=2k-1=2g-1$, the solution of Eq. \eqref{eq.Go} is $g_{c}=0$ $\forall k (\forall g)$, and the case reduces to Haldane's FE and Wu's distribution with $g=k$, resulting in the exact density dependence of the chemical potential $\mu\equiv\mu(n)_{T,V}$ from Eq. \eqref{eqmu} (already derived in \cite{riccardo2004fractional} for non-interacting $k$-mers in 1D). In a 1D $k$-mer lattice gas, each state of $N$ $k$-mers on a lattice with $M=G$ sites and $n=N/M$ can be mapped onto one of $N$ monomers on an equivalent lattice with $M'= M-(k-1)N$ sites and $n'=N/M'=n/[1-(g-1)n]$. Thus, there is no effective spatial correlation between excluded states for $k$-mers in 1D. On the other hand, for $k$-mers on a square lattice of $M$ sites, $G=2M$, $n_{m}=N_{m}/G=(M/k)/2M=1/(2k)=1/g$, hence $g=2k$ and $\mathcal{G}_{o}=k^{2}+2k-1=\frac{g^{2}}{4}+g-1$. The solution of Eq. \eqref{eq.Go} is then $g_{c}=\frac{g^{2}}{8}+\frac{g}{2}+g\,\mathcal{W}\!\left(-e^{-(g/8+1/2)}\right)$ for $g \geq 4$, where $\mathcal{W}$ denotes the principal branch of the Lambert function, namely the inverse of $f(x)=x e^{x}$, $x=\mathcal{W}(x e^{x})$. Accordingly, $g_{c}=0$ for $k=2$ ($g=4$), $g_{c}=4.807$ for $k=3$ ($g=6$), $g_{c}=9.586$ for $k=4$ ($g=8$), $g_{c}=15.344$ for $k=5$ ($g=10$), $g_{c}=22.096$ for $k=6$ ($g=12$), $g_{c}=29.838$ for $k=7$ ($g=14$), $g_{c}=38.563$ for $k=8$ ($g=16$), $g_{c}=48.267$ for $k=9$ ($g=18$), and $g_{c}=58.950$ for $k=10$ ($g=20$). Furthermore, $\lim_{k\to \infty}g_{c}=\mathcal{G}_{o}/2$.
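These values follow directly from Eq. \eqref{eq.Go}; a minimal numerical sketch (Python with SciPy, assumed available) reproducing them and verifying the zero-density exclusion $\mathcal{G}_{o}=g^{2}/4+g-1$ reads:
\begin{verbatim}
# g_c for linear k-mers on the square lattice (g = 2k), from the
# closed form g_c = g^2/8 + g/2 + g*W0(-exp(-(g/8 + 1/2))).
import numpy as np
from scipy.special import lambertw

for k in range(2, 11):
    g = 2.0 * k
    gc = g**2 / 8 + g / 2 + g * lambertw(-np.exp(-(g / 8 + 0.5))).real
    Go = 2 * g * np.exp(-gc / g) + 2 * gc - 1   # zero-density exclusion
    print(f"k={k:2d}  g_c = {gc:7.3f}   G_o = {Go:6.2f}"
          f"  (expected {k**2 + 2*k - 1})")
\end{verbatim}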
From Eq. \eqref{eqmu}, the occupation number, $n$, in general satisfies the following relation, formally almost identical to the transcendental equation first derived by Wu \cite{wu1994statistical}
\begin{equation}{\label{distribution}}
\left[\tilde{d}(n)+n\right]^{\tilde{d}'+1} \left[\tilde{d}(n)\right]^{-\tilde{d}'}=n \ e^{\beta\left(U_{o}-\mu \right) }=n \ \xi,
\end{equation}
where $\xi=e^{\beta\left(U_{o}-\mu \right) }$. From the explicit form of $\tilde{d}(n)$ [Eq. \eqref{eq.dn}], the distribution function can be symbolically written as
\begin{equation}{\label{ME_distribution}}
n=\frac{e^{-g_{c} n}}{w(\xi) + g \ e^{-g_{c}/g}},
\end{equation}
similar to Wu's distribution, where $n\equiv n(\xi)$ is the solution of the transcendental Eq. \eqref{distribution} and $w(\xi)=\tilde{d}(n)/n$. For particles with exclusion parameter $g$ on spatially non-correlated states, $g_{c}=0$, $\tilde{d}(n)=1-gn$, the Haldane FE statistics is recovered and Eq. \eqref{ME_distribution} reduces to Wu's distribution \cite{wu1994statistical}. Furthermore, $\tilde{d}'(n)=-g$ for $g_{c}=0$, thus $w(\xi)=\xi-1$ for $g=0$ and $w(\xi)=\xi$ for $g=1$, and Eq. \eqref{ME_distribution} reduces to the BE and FD statistics, respectively. Given that $w(\xi)=\tilde{d}(n)/n\geq0$, from Eq. \eqref{ME_distribution} the range of the occupation number is $0\leq n\leq 1/g$. At temperature $T=0$ (absolute scale), the distribution takes the step-like form $n=1/g$ for $U_{o}<\mu$ and $n=0$ for $U_{o}>\mu$, as expected.
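In practice, Eq. \eqref{distribution} can be solved for $n(\xi)$ by standard root bracketing on $0<n<1/g$; a minimal sketch, which also verifies the FD limit ($g=1$, $g_{c}=0$), is:
\begin{verbatim}
# Occupation n(xi) from the transcendental relation
# [d(n)+n]^{d'+1} [d(n)]^{-d'} = n*xi, with xi = exp(beta(Uo - mu)).
import numpy as np
from scipy.optimize import brentq

def occupation(xi, g, gc):
    d  = lambda n: np.exp(-gc * n) - g * np.exp(-gc / g) * n
    dp = lambda n: -gc * np.exp(-gc * n) - g * np.exp(-gc / g)
    # logarithmic form avoids overflow near the endpoints
    f  = lambda n: ((dp(n) + 1) * np.log(d(n) + n)
                    - dp(n) * np.log(d(n)) - np.log(n * xi))
    return brentq(f, 1e-12, 1.0 / g - 1e-12)

# g = 1, g_c = 0 recovers Fermi-Dirac, n = 1/(xi + 1)
for xi in (0.5, 1.0, 2.0):
    assert abs(occupation(xi, 1.0, 0.0) - 1.0 / (xi + 1.0)) < 1e-9

print(occupation(1.0, 6.0, 4.807))  # trimers on the square lattice
\end{verbatim}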
Simulations of $k$-mer lattice gases were carried out in the grand canonical ensemble through the efficient algorithm introduced by Kundu et al. \cite{kundu2013nematic,kunduprodeedings} to overcome the sampling slowdown at high density due to jamming effects. The temperature, chemical potential $\beta \mu$ and system size are held fixed, and the number of particles on the lattice is allowed to fluctuate through non-local changes, i.e., insertion and deletion of whole $k$-mers at a time (in contrast to the standard Metropolis algorithm). Shortly, given a configuration of $k$-mers on the lattice, one MC step is fulfilled by removing all horizontal $k$-mers and keeping the vertical ones. The probabilities corresponding to horizontal segments of unoccupied sites are exactly calculated and stored for all the segment sizes. Then segments are occupied by $k$-mers with probabilities accordingly determined. An identical procedure is carried out in the vertical direction. A reproduction of these calculations is out of the scope of this work; the detailed discussion is found in the original works, Refs.~\cite{kundu2013nematic,kundu2014phase,kunduprodeedings}. The algorithm has proved to be ergodic, it satisfies the detailed balance principle, and equilibrium is reached after typically $10^{7}$ MC steps. $L \times L$ square lattices with periodic boundary conditions were used. The ratio $L/k$ was set to 120. With this value of $L/k$, we verified that finite-size effects are negligible. The observables $\mathcal{G}(n)$ [Eq. \eqref{eq.Gn}] and $n=\left\langle N \right\rangle /G= \left\langle N \right\rangle/(2L^{2})$ were calculated by averaging over $10^{7}$ configurations. The distribution function $n$ versus $\beta(\mu-U_{o})$ [Eq. \eqref{eqmu}] is represented in Fig. \ref{fig:nversusmu} and compared with simulation for linear particles of size $k=2$ to $k=10$.
\begin{figure}[h]
\centering
\includegraphics[trim={2.1cm 1.8cm 3.2cm 2.15cm},clip,scale=0.375]
{fig2.eps}
\caption{State occupation number $n$ versus $\beta(\mu-U_{o})$ for $k=2,4,5,6,7,8,10$ on a square lattice. Lines represent the analytical predictions from Eq. \eqref{eqmu}; symbols come from simulations. The inset shows the case $k=10$ for a smaller $g_{c}=39$, so as to visualize the state exclusion effect of the nematic ordering.}
\label{fig:nversusmu}
\end{figure}
The analytical predictions are accurate for all the particle sizes, becoming better as $k$ increases up to $k=7$. The ansatz in Eq. \eqref{eq.Nj} does not account explicitly for the system's dimensionality, the particle size and shape, or the lattice structure, but all the state correlations are embedded in the exclusion constant $g_{c}$. For instance, the solid line in Fig. \ref{fig:nversusmu} for $k=2$ represents approximately the simulation results for dimers on the square lattice, $k=2 \ (\mathcal{G}_{o}=7,g=4)$, and it does so exactly for tetramers on a 1D lattice, $k=4 \ (\mathcal{G}_{o}=7,g=4)$. For both cases the solution of Eq. \eqref{eq.Go} is $g_{c}=0$.
For $k\geq7$, it is known that a nematic transition develops at intermediate lattice coverage, with particles aligned along a lattice direction in compact clusters \cite{Ghosh}. Its effect is clearly seen in Fig.~\ref{fig:nversusmu} for the case $k=10$ at intermediate occupation, where simulation and the analytical function do not match. However, because the nematic ordering increases the number of multiply excluded states per particle, $n$ can be very accurately represented by the multiple exclusion statistics for a smaller value of the constant $g_{c}$ [according to the meaning of the corresponding term in Eq. \eqref{eq.Nj}], as shown in the inset of Fig. \ref{fig:nversusmu}.
In addition, results for the exclusion spectra $\mathcal{G}(n)$ from Eq. \eqref{eq.Gn}
are shown in Fig. \ref{fig:gmedioversustitak2tok10} as a function of the lattice coverage $\theta=k\langle N\rangle/M$, where $\langle N\rangle$ and $M$ represent the average number of particles on the lattice and the number of lattice sites, respectively. Given that $\theta=k\langle N\rangle/M=k\langle N\rangle/(G/2)=2k\langle N\rangle/G=g n$, all the quantities above can be expressed in terms of the lattice coverage by the variable change $n=\theta/g$ with $0\leq\theta\leq 1$. The adsorption isotherm ($\mu$ vs $\theta$) follows straightforwardly from Eqs. \eqref{eqmu} and \eqref{eq.dn},
\begin{equation}
\begin{aligned}
\beta\mu={}&\ln\left[\frac{\theta}{g}\right]+\left[g_{c} e^{-\theta g_{c}/g}+ge^{-g_{c}/g}-1\right] \ln\left[e^{-\theta g_{c}/g}-e^{-g_{c}/g}\,\theta+\theta/g\right]\\
&-\left[g_{c} e^{-\theta g_{c}/g}+ge^{-g_{c}/g}\right] \ln\left[e^{-\theta g_{c}/g}-e^{-g_{c}/g}\,\theta\right]+\beta U_{o}.
\end{aligned}
\end{equation}
\begin{figure}[h]
\centering
\includegraphics[trim={1.7cm 1.75cm 3.2cm 2.1cm},clip,scale=0.375]{fig3.eps}
\caption{Exclusion spectrum $\mathcal{G}(\theta)$ for $k=2$ to $k=10$ (from bottom to top). Solid lines are analytical results from Eq. \eqref{eq.Gn} with $n=\theta/g=\theta/(2k)$. Symbols represent simulations.}
\label{fig:gmedioversustitak2tok10}
\end{figure}
Concerning the new quantity we have introduced, $\mathcal{G}(\theta)$, the predictions from this work [Eq. \eqref{eq.Gn} along with \eqref{eq.Po} and \eqref{eq.dn}] reproduce the exclusion per particle remarkably well for all $k$ as density varies. This appears as a very useful function in the presence of correlations, since it can be obtained directly either from the distribution $n(\mu)$ or from experiments, providing a relevant average measure of the spatial configuration of particles in the system from thermodynamics. The limiting values are $\mathcal{G}(0)=\mathcal{G}_{o}$ and $\mathcal{G}(1)=g$. Additionally, state exclusion can be observed through $\mathcal{G}(\theta)$ in the presence of particle interactions and order-disorder transitions, as will be presented in future work.
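A short numerical sketch evaluating the exclusion spectrum from Eqs. \eqref{eq.Po}, \eqref{eq.Gn} and \eqref{eq.dn} with $n=\theta/g$ (here for trimers, $k=3$) reads:
\begin{verbatim}
# Exclusion spectrum G(theta) for k-mers on the square lattice.
import numpy as np

def exclusion_spectrum(theta, k, gc):
    g  = 2.0 * k
    n  = theta / g
    d  = np.exp(-gc * n) - g * np.exp(-gc / g) * n
    dp = -gc * np.exp(-gc * n) - g * np.exp(-gc / g)
    Po = (d + n)**(dp + 1) / d**dp      # available-state fraction
    return (1.0 - Po) / n               # states excluded per particle

for theta in (0.01, 0.25, 0.50, 0.75, 0.99):
    print(f"theta={theta:4.2f}  "
          f"G = {exclusion_spectrum(theta, 3, 4.807):6.2f}")
# limits: G -> G_o = 14 as theta -> 0, and G -> g = 6 as theta -> 1
\end{verbatim}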
Finally, an approach to the equilibrium statistics of many-particle systems with exclusion over spatially correlated single-particle states has been put forward, the statistical distribution has been obtained, a useful exclusion spectrum function has been defined, and the results have been applied to 2D lattices from small to large linear particles, resulting in significant agreement for such complex statistical systems. The formalism can be straightforwardly applied to other particle/lattice geometries and higher dimensions. In addition, the analysis could be extended to more complex off-lattice systems in the presence of mutual exclusion (such as hard disks and spheres in the continuum). This work is in progress.
This paper was supported in part by CONICET and Universidad Nacional de San Luis, Argentina.
\section{Introduction}
\IEEEPARstart{I}{t} is commonly accepted that sonographers are exposed to an increased risk in repetitive strain injury \cite{Seto2008, Janga2012, Harrison2015}. A representative study amongst diagnostic medical sonographers and vascular technologists indicates that a significant majority of sonographers experience pain while performing ultrasound scans \cite{Evans2009}. This suggests a high demand to improve ergonomics and offload sonographers during clinical scan procedures.
Recent investigations show that besides diagnostic sonography, there is an increased demand for intraoperative transthoracic \cite{Ben-Dor2006, Hori2015} and transoesophageal \cite{Shanewise1999} ultrasound imaging, particularly for cardiac and lung procedures. Sonographers performing intraoperative ultrasound in, for example, cardiac catheterization procedures therefore presumably have an increased risk of radiation exposure \cite{McIlwain2014}.
\begin{figure}[t!]
\centering
\includegraphics[width=\linewidth]{Linde1.PNG}
\caption{Soft robotic end-effector (SEE) performing ultrasound scan on an abdominal prenatal phantom.}
\end{figure}
Automating diagnostic and intraoperative ultrasound procedures through robot-guidance or -assistance can help address the aforementioned problems and lay the groundwork for more intelligent image acquisition. Robotic ultrasound guidance has found particular application in procedures involving steering orthopaedic \cite{Goncalves2014} or minimally-invasive surgical tools \cite{Antico2019UltrasoundProcedures} and biopsy needles \cite{Mahmoud2018EvolutionSystems}. Various robotic hardware solutions have been proposed. Researchers have adopted robotic platforms originally aimed at collaborative scenarios in industrial settings, such as Universal Robot’s UR-series \cite{Mathiassen2016, Sen2016} or the KUKA LWR \cite{Goncalves2014} and LBR iiwa \cite{Kojcev2016, Zettinig2016}. A commercial robotic manipulator has been released (LBR Med, KUKA AG, Augsburg, Germany) which is suitable for use in clinical environments due to its conformity with medical device safety (ISO 60601) and medical software regulations (ISO 62304). Current research suggests that such robots can be applied in diagnostics to autonomously perform aorta measurements \cite{Virga2016}, in combination with previously acquired MRI scans to autonomously find standard view-planes \cite{Hennersperger2017} and in intraoperative procedures to autonomously track surgical tools \cite{Salcudean2013}, amongst others.
Whilst such robotic platforms allow for great flexibility through a large workspace and high manipulability, the use of large-scale robotic manipulators can pose various disadvantages for clinical integration. Diagnostic ultrasound scans are divided into their respective body areas of interest. For an individual procedure such as a lower abdominal ultrasound scan, a robotic system is therefore only required to achieve a workspace covering a fraction of the human body. This means that common robotic manipulators can be oversized for such applications, which unnecessarily poses risks to patient safety. Despite high degrees of electrical safety, a mechanical system with a high mass can potentially be more dangerous \cite{Haddadin2008}.
To address this issue, researchers developed customized solutions which are tailored to the application-specific requirements of diagnostic and interventional sonography. Researchers \cite{Salcudean1999, Salcudean1999a, PurangAbolmaesumiSeptimiuESalcudeanWen-hongZhuMohammadRezaSirouspour2002} have proposed a mechanism which achieves a high degree of probe manipulability and safety. The robot actuation has been moved to the base of the system, thus minimizing its size and weight. Other systems have been developed which separate the probe positioning into two stages: approximate probe placement and finer view-plane adjustments. The first can be achieved by a passive positioning mechanism, which is operated by a clinician, while the latter is obtained with an active end-effector. A system based on cables which are driven by McKibben actuators has been proposed \cite{VilchisGonzales2001}. The antagonistic configuration of the cables is employed to position the ultrasound probe on a patient. The system is tele-operated by a sonographer. Researchers from Waseda University first proposed this concept and corresponding design in \cite{Nakadate2009}, in which the end-effector is driven through a parallel mechanism. Similarly, a consortium of researchers have developed a system with active end-effector with the aim of remote tele-diagnosis \cite{Gourdon1999, Arbeille2003, Vieyres2003}. The system has since been trialled for remote scans \cite{Arbeille2005} and translated to a commercial product (MELODY, AdEchoTech, Naveil, France). Despite the scanning being performed remotely, the design of the system suggests, however, that the assisting operator is still required to apply the necessary force to maintain a stable contact.
Maintaining stable mechanical coupling between ultrasound probe and patient tissue is of paramount importance for ensuring a high-quality image. Approaches to achieve this involve controlling the contact force directly or establishing an elastic contact between the position-controlled device and the patient. While the former has been researched extensively \cite{Siciliano1999RobotControl}, \cite{Fang2017} and can commonly be found in various forms of industrial applications, the latter has attracted more attention in recent years due to an increased demand for cost-effective force-control and force-limiting solutions for human-robot collaboration tasks \cite{McMahan2006, Eiberger2008}. Series-elastic actuators have been developed to provide passive compliance in actuated robotic joints \cite{PrattSeriesActuators}. While providing a degree of compliance, these have the disadvantage that a collision or undesired contact in a direction other than the joint axis cannot be compensated for. We have trialled safety clutches for use in ultrasound robots which exhibit compliant behaviour once disengaged through an excess force \cite{wang2019analysis, wang2019design}, \cite{Mathur2019}. This, however, renders the system uncontrollable and requires reengaging the clutch mechanism for further operation. In this work, we make use of an elastic soft robotic system, which is aimed at overcoming the aforementioned limitations.
Soft robotics technologies have opened up new design paradigms for robotic systems through the use of elastic and deformable materials and structures \cite{Laschi2016, Polygerinos2017}. Soft robotics systems are commonly designed to interact with or conform to environmental contacts. This allows soft robotic manipulators to exhibit highly dexterous manoeuvrability in for example surgical \cite{Cianchetti2014, Marchese2014, Kahrs2015} or search and rescue operations \cite{Hawkes2017}. In these scenarios, however, soft robots are not applied to tasks which require significant loadbearing capabilities, predominantly due to their low stiffness. To bridge the trade-off between manoeuvrability and stiffness, research has been driven towards systems with variable stiffness capabilities. A comprehensive overview of stiffening technologies is given in \cite{Manti2016}. For applications in which softness is desired, high loadings are demanded and stiffening mechanisms are not suitable, soft robotic systems tend to be combined with external constraints to ensure structural integrity. This is commonly found in exoskeleton research and rehabilitation robotics. Examples include full body, soft exosuits \cite{Wehner2013AAssistance}, lower limb exoskeletons \cite{Costa2006} and hand exoskeletons for post-stroke rehabilitation \cite{Chiri2012, Stilli2018AirExGlovePatients}.
In our previous work, we identified the advantages of soft robotics technology in ultrasound interaction tasks compared to rigid state-of-the-art robots and showed an initial proof-of-concept of a parallel soft robotic end-effector with the right characteristics for medical ultrasound tasks \cite{Lindenroth2017}. We now derive a novel soft robotic end-effector which is capable of safely acquiring standard views in extracorporeal diagnostic foetal ultrasound (US). We select foetal US as an initial application due to its high demands on robot safety. We evaluate the performance of our system with respect to the derived specifications and show that the proposed system is capable of acquiring a set of standard view-planes required for the assessment of the foetus. The robot utilizes linear soft fluidic actuators (SFAs) which are arranged in parallel around the ultrasound probe to provide high axial loadbearing capabilities and high lateral compliance, thus enabling adaptability and safety in the patient interaction.
The individual contributions of this study are:
\begin{figure*}[t!]
\centering
\includegraphics[width=\linewidth]{Linde2.pdf}
\caption{Proposed design of the soft robotic end-effector (a) and workflow (b) for obtaining a desired view through manual placement in the approximate region of interest (i) and active steering of the probe towards the desired view-plane (ii).}
\label{fig:SEE_design}
\end{figure*}
\begin{itemize}
\item Clinical investigation to determine workspace and force requirements for view-plane adjustments in foetal diagnostic ultrasound imaging.
\item Design and verification of a soft robotic end-effector which satisfies the derived clinical requirements in workspace and force. It employs robust linear soft fluidic actuators, for which a novel injection-based fabrication is derived, and undesired twist is prevented through a mesh constraint.
\item Definition and validation of a lumped stiffness model to describe the motion of the soft robotic end-effector in the absence and presence of external loading.
\end{itemize}
The controllability and imaging capabilities of the integrated system are validated in position control and US phantom experiments, respectively.
The paper is structured in the following way. In Section \ref{Design} the system requirements are determined, and the robot design is introduced. Based on the design of the system, Section \ref{Modelling} derives a kinetostatic model. Methodologies for the actuation and control of the system are presented in Section \ref{ActuationAndControl}. In Section \ref{Experiments} the mechanical properties of the system and its workspace are evaluated. Results are presented in section \ref{Results}. The proposed model is validated and the position controller performance, as well as the imaging capabilities of the system, are assessed.
\section{Methods}
Prenatal foetal ultrasound is a routine diagnostic procedure for pregnant women to determine birth defects and abnormalities in the foetus. Common checks include measuring the foetus’ biparietal diameter (BPD), its head and abdominal circumferences (HC and AC) as well as its femur length (FL) \cite{Salomon2011}.
In this work we focus on obtaining HC, AC and FL standard view-planes. We establish the clinical requirements for the contact force and movement range of the ultrasound probe in this application and derive a suitable design for a soft robotic end-effector (SEE).
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{Linde3.pdf}
\caption{Braided nylon mesh uncrimped (a) and crimped (b).}
\label{fig:Mesh}
\end{figure}
\subsection{Design}
\label{Design}
\subsubsection{Clinical data acquisition and processing}
Pregnant women between 18 to 24 weeks of gestation underwent research ultrasound scans at St Thomas' Hospital (Study title: \emph{Intelligent Fetal Imaging and Diagnosis (iFIND)-2: Further Ultrasound and MR Imaging}, Study reference: 14/LO/1806). Trained sonographers performed the foetal ultrasound scans using a standard ultrasound probe (X6-1, Philips, Amsterdam, Netherlands) connected to an ultrasound scanner (EPIQ7, Philips, Amsterdam, Netherlands). The probe was placed in a holder as detailed in \cite{Noh2015}. This holder incorporated an electromagnetic (EM) tracking sensor (Aurora, NDI, Ontario, Canada) and a six-axis force-torque sensor (Nano 17, ATI, Apex, USA), which allowed the position and orientation of the probe, and the force applied at the probe face, to be measured throughout the scan. The recorded tracking and force data of six patients were analysed by extracting time ranges during which standard fetal anomaly views were imaged. These included HC, AC and FL views. Each time range consisted of the few seconds during which the sonographer had placed the probe in the correct anatomical region and was adjusting the probe to find the ideal view. For each view the tracking data were analysed to find the range of positions and orientations in the three axes separately. The X and Y axes show movement in the horizontal plane of the scanning bed (left to right on the patient, and foot to head, respectively), and the Z axis shows vertical movement. Orientation ranges are given in probe coordinates, with yaw showing axial rotation, pitch showing elevational tilting out of the image plane, and roll showing in-plane lateral tilting. Forces were analysed by dividing the measured force vector into normal and tangential components applied to the surface. The local surface angle was determined at each measurement by fitting an ellipsoidal shape to the tracking data of the scan. The 95th percentile of the forces measured within a time range gives an indication of the maximum force that must be applied by the probe.
\subsubsection{Mechanism requirements and synthesis}
Following the results of the clinical data analysis, it is found that the soft robotic end-effector must at least satisfy the following requirements:
\begin{itemize}
\item Be able to withstand mean axial and transversal contact forces of 8.01N and 4.42N without significant deterioration of the imaging view.
\item Achieve an axial extension along Z of 5.22mm and transversal translations in X and Y of 7.75mm.
\item Achieve rotations of 5.08$\degree$ around X and Y.
\end{itemize}
\begin{figure}
\centering
\includegraphics[width=0.9\linewidth]{Linde4.pdf}
\caption{Free body diagram of SEE model definition}
\label{fig:Model}
\end{figure}
To maintain a high degree of safety when interacting with the device, the SEE should furthermore exhibit a low transversal stiffness. This allows both the operating clinician and the patient to manually displace the probe in case of discomfort.
As the investigated system is compliant, its deflection has to be considered when determining if a position is achievable. Taking into account normal and tangential forces applied during the scanning, the system must satisfy the following conditions
\[
\boldsymbol{\delta}_{SEE} \geq \boldsymbol{\delta}_{req} +\boldsymbol{\delta}_{f} \quad \text{with} \quad \boldsymbol{\delta}_{f} = \boldsymbol{K}^{-1}_{min}\boldsymbol{f}_{req}
\]
where $\boldsymbol{\delta}_{f}$ is the deformation induced by external forces, $\boldsymbol{f}_{req}$ is a vector of the required forces and $\boldsymbol{K}_{min}$ is the minimum system stiffness throughout the workspace. $\boldsymbol{\delta}_{req}$ and $\boldsymbol{\delta}_{SEE}$ are vectors of the required and achievable translations respectively. As only tip forces are considered in this work, tilting effects induced by external moments at the SEE tip are ignored and forces are assumed to only affect the tip position.
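As a purely illustrative example of this feasibility check, consider the following sketch, in which the diagonal stiffness matrix is a hypothetical placeholder rather than a property of the SEE (the actual stiffness follows from the model in Section \ref{Modelling}):
\begin{verbatim}
# Illustrative feasibility check; K_min below is NOT a measured
# stiffness of the SEE, only a hypothetical placeholder.
import numpy as np

K_min = np.diag([0.4, 0.4, 2.0])       # N/mm (hypothetical)
f_req = np.array([4.42, 4.42, 8.01])   # N, required forces (x, y, z)
d_req = np.array([7.75, 7.75, 5.22])   # mm, required translations

d_f = np.linalg.solve(K_min, f_req)    # force-induced deflection
print("required travel incl. deflection [mm]:", d_req + d_f)
\end{verbatim}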
A soft robotic design based on soft fluidic actuators (SFAs), which have previously been presented in \cite{Lindenroth2017}, is proposed. It comprises two rigid platforms which serve as base and transducer holder, respectively. The platforms are connected through a set of three soft fluidic actuators which are arranged in a parallel fashion at $120\degree$ intervals. To allow sufficient space for the ultrasound transducer cable, the actuators are tilted at an angle of 15$\degree$. An overview of the design is shown in Fig. \ref{fig:SEE_design}a). Whilst a rigid mechanism of such a configuration would be over-constrained and thus unable to move, the elasticity of the SFAs allows the SEE to perform bending (coupled translation and rotation) and axial extension motions.
As the SFAs are tilted, axial extension causes the SFAs to bend into an S-shaped configuration. This allows the SEE to be axially compliant whilst exhibiting a high degree of load-bearing capability, which is further investigated in Section \ref{Stiffness}. Furthermore, curving into an S-shaped configuration eliminates the possibility of unstable buckling occurring in the SFAs, as shown in Section \ref{Results_Stiffness}.
A common problem in such a proposed soft robotic system is the low stiffness along its twist axis. To improve the stability of the system against twist deformations, a nylon fibre mesh is attached to base and transducer platforms, which acts as a mechanical constraint between the two. To reduce unwanted buckling behaviour, crimps can be added to the mesh by deforming and heat-treating it. Examples of uncrimped and crimped meshes are shown in Fig. \ref{fig:Mesh}. Thus, axial rotation of the ultrasound transducer is not considered in this study, as it could be added by simply applying a rotating mechanism to the base of the SEE, which would function as a stiff rotational axis in conjunction with the mesh constraint.
The workflow of imaging using the SEE is shown in Fig. \ref{fig:SEE_design}b). Once the SEE is manually placed in the approximate area of the target view using a passive positioning arm, it is fixed on the patient. The ultrasound probe is then actively steered either in a tele-operated manner by a sonographer or in an autonomous fashion using pose or image feedback. As the loadbearing is achieved by the SEE, the contact forces the sonographer is required to apply are minimized, which presumably improves the sonographer's ergonomics.
\subsection{Kinetostatic modelling}
\label{Modelling}
To determine the ultrasound probe pose under internal fluid volume variation and external loading, a kinetostatic model is derived according to \cite{Klimchik2018}. A free body diagram of the model is shown in Fig. \ref{fig:Model}. In the following, a vector denoted as $\boldsymbol{w}_f$ represents a 6 degree of freedom wrench in an arbitrary frame $f$ such that $\boldsymbol{w}_f=[F_x^f,F_y^f,F_z^f,M_x^f,M_y^f,M_z^f ]^T$ with forces $\boldsymbol{F}$ and moments $\boldsymbol{M}$. Similarly, $\boldsymbol{\tau}_f$ denotes a reaction wrench in the local SFA frame, which is of the same form as $\boldsymbol{w}_f$. Vectors denoted as $\delta \boldsymbol{x}_f$ indicate infinitesimally small displacements in frame $f$ of the form $\delta \boldsymbol{x}_f=[u_x^f,u_y^f,u_z^f,v_x^f,v_y^f,v_z^f ]^T$ with translations $u$ and rotations $v$.
Let $\boldsymbol{w}_{ext}$ be a vector of forces and moments applied to the tip of the ultrasound transducer. Under static equilibrium conditions, the following holds for a single actuator
\begin{equation}\label{eq:StatEq}
\boldsymbol{w}_{ext}=\boldsymbol{w}_\theta + \boldsymbol{w}_V
\end{equation}
where $\boldsymbol{w}_\theta$ is the wrench caused by the elastic deformation of the SFA and $\boldsymbol{w}_V$ is the reaction wrench caused by the constrained hydraulic chamber. Both are expressed in the tip frame of the system.
The tip wrenches $\boldsymbol{w}_\theta$ and $\boldsymbol{w}_V$ can be expressed relative to their local frames by
\begin{equation}\label{eq:WrenchMap}
\begin{split}
\boldsymbol{w}_\theta & =\boldsymbol{J}_\theta(\boldsymbol{x}) \boldsymbol{\tau}_\theta \\ \boldsymbol{w}_V & =\boldsymbol{J}_V(\boldsymbol{x})\tau_V
\end{split}
\end{equation}
Where $\boldsymbol{\tau}_\theta$ is a vector of local reaction forces and moments caused by the SFA deformation and $\boldsymbol{\tau}_V$ is the uniaxial reaction force of the volumetric constraint in the actuator. The matrices $\boldsymbol{J}_\theta(\boldsymbol{x})$ and $\boldsymbol{J}_V(\boldsymbol{x})$ are defined by
\begin{equation}\label{eq:Ad}
\begin{aligned}
\boldsymbol{J}_\theta(\boldsymbol{x}) &=
\begin{bmatrix}
\boldsymbol{R}(\boldsymbol{x}) & \boldsymbol{0}\\
\boldsymbol{0} & \boldsymbol{R}(\boldsymbol{x})
\end{bmatrix}\boldsymbol{Ad} \\
\boldsymbol{J}_V(\boldsymbol{x}) &=
\begin{bmatrix}
\boldsymbol{R}(\boldsymbol{x}) & \boldsymbol{0}\\
\boldsymbol{0} & \boldsymbol{R}(\boldsymbol{x})
\end{bmatrix}\boldsymbol{Ad}_z
=
\begin{bmatrix}
\boldsymbol{R}(\boldsymbol{x}) & \boldsymbol{0}\\
\boldsymbol{0} & \boldsymbol{R}(\boldsymbol{x})
\end{bmatrix}\boldsymbol{\hat{H}}
\end{aligned}
\end{equation}
$\boldsymbol{R}(\boldsymbol{x})$ is the rotation matrix of the current tip deflection. Matrix $\boldsymbol{Ad}$ is the wrench transformation matrix relating the local SFA frame to the tip frame by
\begin{equation}
\boldsymbol{Ad} =
\begin{bmatrix}
\boldsymbol{R}_0 & \boldsymbol{0} \\
\boldsymbol{D}_0\boldsymbol{R}_0 & \boldsymbol{R}_0
\end{bmatrix}
\end{equation}
Where $\boldsymbol{R}_0$ is the spatial rotation of the respective frame and $\boldsymbol{D}_0$ is the cross-product matrix with the translation vector $\boldsymbol{d}_0 = [d_x, d_y, d_z]$. $\boldsymbol{\hat{H}}$ is for a single SFA a 6x1 vector containing the third column of $\boldsymbol{Ad}$.
Considering the elastic behaviour of the SFA, its reaction force $\boldsymbol{\tau}_\theta$ caused by an infinitesimally small, local displacement $\delta\boldsymbol{x}_\theta$ can be written as
\begin{equation}\label{eq:Hook}
\boldsymbol{\tau}_\theta=\boldsymbol{K}_\theta \delta \boldsymbol{x}_\theta
\end{equation}
where the SFA stiffness $\boldsymbol{K}_\theta$ is defined as a Timoshenko beam element with
\[
\boldsymbol{K}_\theta =
\begin{bmatrix}
\frac{12EI}{(1+\Phi)L^3} & 0 & 0 & 0 & \frac{6EI}{(1+\Phi)L^2} & 0\\
0 & \frac{12EI}{(1+\Phi)L^3} & 0 & \frac{-6EI}{(1+\Phi)L^2} & 0 & 0 \\
0 & 0 & \frac{EA}{L} & 0 & 0 & 0\\
0 & \frac{-6EI}{(1+\Phi)L^2} & 0 & \frac{(4+\Phi)EI}{(1+\Phi)L} & 0 & 0\\
\frac{6EI}{(1+\Phi)L^2} & 0 & 0 & 0 & \frac{(4+\Phi)EI}{(1+\Phi)L} & 0\\
0 & 0 & 0 & 0 & 0 & \frac{GJ}{L}
\end{bmatrix}
\]
$L$ describes the length of the SFA, $A$ it’s cross-sectional area, $E$ its Young’s modulus, $I$ the area moment of inertia, $G$ its shear modulus and $J$ the torsion constant. The Timoshenko coefficient $\Phi$ is defined as
\[
\Phi=\frac{12EI}{\frac{A}{\alpha}GL^3}
\]
with the Timoshenko coefficient $\alpha$. An overview of the SFA constants is given in Table \ref{tab:ModelParameters}.
\begin{table}[h!]
\centering
\caption{SFA model parameters}
\label{tab:ModelParameters}
\setlength\tabcolsep{2pt}
\begin{tabular}{lll}
\toprule
Constant & Value & Description\\ \midrule
$L$ & $45$mm & Initial length\\
$A$ & $\pi \cdot 10^2\text{mm}^2$ & Cross-sectional area\\
$a$ & $\pi \cdot 6.9^2 \text{mm}^2$* & Fluid channel area\\
$E$ & $301.51$kPa ** & Young's modulus\\
$I$ & $1200\text{cm}^4$ ** & Area moment of inertia\\
$G$ & $0.5E$ & Shear modulus\\
$J$ & $0.5\pi\cdot10^4\text{mm}^4$ & Torsion constant\\
$\alpha$ & $5/6 $ & Timoshenko coefficient\\ \bottomrule
\multicolumn{3}{l}{* obtained in Section \ref{SFA_characterization}; ** obtained in Section \ref{Model_validation}}
\end{tabular}
\end{table}
Whilst parameters $L$ and $A$ are obtained from the SFA geometry, the torsion constant of a beam with circular cross-section can be expressed as $J = 0.5\pi r^4$ and its Timoshenko coefficient is defined as $5/6$ \cite{Matrix}. The shear modulus $G$ is approximated as half the Young's modulus.
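For reference, $\boldsymbol{K}_\theta$ can be assembled numerically from the constants in Table \ref{tab:ModelParameters}; the following is a minimal NumPy sketch in units of N, mm and $\text{kPa}=10^{-3}\,\text{N}/\text{mm}^2$:
\begin{verbatim}
import numpy as np

L = 45.0                   # mm
A = np.pi * 10.0**2        # mm^2
E = 301.51e-3              # N/mm^2 (301.51 kPa)
I = 1200.0e4               # mm^4 (1200 cm^4)
G = 0.5 * E                # N/mm^2
J = 0.5 * np.pi * 10.0**4  # mm^4
alpha = 5.0 / 6.0

Phi = 12*E*I / ((A/alpha) * G * L**3)
k1 = 12*E*I / ((1+Phi) * L**3)
k2 = 6*E*I / ((1+Phi) * L**2)
k3 = (4+Phi)*E*I / ((1+Phi) * L)

K_theta = np.array([
    [ k1,   0, 0,     0, k2, 0],
    [  0,  k1, 0,   -k2,  0, 0],
    [  0,   0, E*A/L, 0,  0, 0],
    [  0, -k2, 0,    k3,  0, 0],
    [ k2,   0, 0,     0, k3, 0],
    [  0,   0, 0,     0,  0, G*J/L]])
\end{verbatim}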
For a given SFA volume, the kinematic relationship between an infinitesimally small volume change $\delta V$ of the SFA and the displacement of the ultrasound tip frame is given by
\begin{equation}\label{eq:Constraint}
\delta V/a=\boldsymbol{J}_V^T \delta\boldsymbol{x}_{tip}
\end{equation}
where $a$ is the cross-sectional area of the fluid actuation channel. The kinematic motion of the tip frame caused by the SFA deflection can be defined as
\begin{equation}\label{eq:Kin}
\delta\boldsymbol{x}_{\theta}=\boldsymbol{J}_\theta^T \delta \boldsymbol{x}_{tip}
\end{equation}
Substituting Equation \ref{eq:Kin} into \ref{eq:Hook} yields
\begin{equation}\label{eq:DeflForce}
\boldsymbol{\tau}_\theta=\boldsymbol{K}_\theta \boldsymbol{J}_\theta^T \delta \boldsymbol{x}_{tip}
\end{equation}
Applying Equations \ref{eq:Ad} and \ref{eq:DeflForce}, the static equilibrium condition in Equation \ref{eq:StatEq} can be written as
\begin{equation}\label{eq:StatEqFinal}
\boldsymbol{w}_{ext}=\boldsymbol{J}_\theta\boldsymbol{K}_\theta\boldsymbol{J}^T_\theta\delta\boldsymbol{x}_{tip} + \boldsymbol{J}_V\tau_V
\end{equation}
Equation \ref{eq:StatEqFinal} can be combined with the imposed kinematic constraint defined by Equation \ref{eq:Constraint} into a linear equation system of the form
\begin{equation}\label{eq:EqSys1}
\begin{bmatrix}
\boldsymbol{w}_{ext}\\
\delta V/a
\end{bmatrix} =
\begin{bmatrix}
\boldsymbol{J}_\theta\boldsymbol{K}_\theta\boldsymbol{J}_\theta^T & \boldsymbol{J}_V\\
\boldsymbol{J}_V^T & \boldsymbol{0}
\end{bmatrix}
\begin{bmatrix}
\delta \boldsymbol{x}_{tip}\\
\tau_V
\end{bmatrix}
\end{equation}
The deflection of the ultrasound transducer tip and the internal reaction of the system can consequently be found through matrix inversion:
\begin{equation}\label{eq:EqSys2}
\begin{bmatrix}
\delta \boldsymbol{x}_{tip}\\
\tau_V
\end{bmatrix} =
\begin{bmatrix}
\boldsymbol{J}_\theta\boldsymbol{K}_\theta\boldsymbol{J}_\theta^T & \boldsymbol{J}_V\\
\boldsymbol{J}_V^T & \boldsymbol{0}
\end{bmatrix}^{-1}
\begin{bmatrix}
\boldsymbol{w}_{ext}\\
\delta V/a
\end{bmatrix}
\end{equation}
The formulation can be expanded to $n$ SFAs by considering a lumped stiffness $\boldsymbol{K}$ in the probe tip frame. As the actuators are aligned in a parallel configuration, it can be defined by
\begin{equation}\label{eq:Lump}
\boldsymbol{K} = \sum_{i=1}^{n}\boldsymbol{J}_\theta^i \boldsymbol{K}_\theta^i {\boldsymbol{J}_\theta^i}^T
\end{equation}
The matrix $\boldsymbol{J}_V$ is adapted by appending the respective columns $\boldsymbol{Ad}_z^i$ of the wrench transformation matrix of actuator $i$ to $\boldsymbol{\hat{H}}$ such that
\begin{equation}\label{eq:J_V}
^n\boldsymbol{J}_V = \begin{bmatrix}
\boldsymbol{R}(\boldsymbol{x}) & \boldsymbol{0}\\
\boldsymbol{0} & \boldsymbol{R}(\boldsymbol{x})
\end{bmatrix}
[\boldsymbol{Ad}_z^1, \boldsymbol{Ad}_z^2, ..., \boldsymbol{Ad}_z^n]
\end{equation}
The kinematic constraint relationship then becomes
\begin{equation}\label{eq:KinVector}
\delta \boldsymbol{V}/a = {}^n\boldsymbol{J}_V^T \,\delta\boldsymbol{x}_{tip}
\end{equation}
where $\delta \boldsymbol{V}$ is an $n \times 1$ vector of SFA volume changes. Consequently, $\tau_V$ is expanded to an $n \times 1$ vector containing the $n$ local reactions, $\boldsymbol{\tau}_V=[\tau_{V,1},\tau_{V,2},\ldots,\tau_{V,n}]^T$.
To account for changes in matrices $\boldsymbol{J}_\theta$ and $\boldsymbol{J}_V$ for a given motion, the model is solved numerically by dividing the applied external wrench and induced volume vectors into small increments $[\Delta \boldsymbol{w}_{ext}, \Delta \boldsymbol{V}]^T$. After each iteration, $\boldsymbol{R}(\boldsymbol{x})$ is updated according to the previous tip pose.
\begin{figure}[!t]
\centering
\includegraphics[width=\linewidth]{Linde5.pdf}
\caption{Actuation unit (a) with syringe pumps (b) and controller system}
\label{fig:SP}
\end{figure}
For the given number of three SFAs, the update rule for the numerical solution is defined by
\begin{equation}\label{eq:Update}
\begin{bmatrix}
\boldsymbol{x}^{k+1}_{tip}\\
\boldsymbol{\tau}_V^{k+1}
\end{bmatrix} =
\begin{bmatrix}
\boldsymbol{x}^{k}_{tip}\\
\boldsymbol{\tau}_V^{k}
\end{bmatrix} +
\begin{bmatrix}
\boldsymbol{K} & ^3\boldsymbol{J}^k_V\\
^3{\boldsymbol{J}^k_V}^T & \boldsymbol{0}
\end{bmatrix}^{-1}
\begin{bmatrix}
\Delta \boldsymbol{w}_{ext}\\
\Delta \boldsymbol{V}/a
\end{bmatrix}
\end{equation}
for iteration step $k$.
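A minimal NumPy sketch of this incremental solution is given below; the callables \texttt{K\_lumped} and \texttt{J\_V}, which evaluate Equations \ref{eq:Lump} and \ref{eq:J_V} at the current pose (including the update of $\boldsymbol{R}(\boldsymbol{x})$), are assumed to be given:
\begin{verbatim}
import numpy as np

def solve_increments(w_ext, V, a, K_lumped,
                     J_V, n_steps=100):
    # w_ext: (6,) external tip wrench
    # V: (n,) induced SFA volumes
    # K_lumped(x): (6,6), J_V(x): (6,n)
    n = len(V)
    x = np.zeros(6)      # tip pose coordinates
    tau = np.zeros(n)    # hydraulic reactions
    dw, dV = w_ext / n_steps, V / n_steps
    for _ in range(n_steps):
        Jv = J_V(x)
        M = np.block([[K_lumped(x), Jv],
                      [Jv.T, np.zeros((n, n))]])
        rhs = np.concatenate([dw, dV / a])
        step = np.linalg.solve(M, rhs)
        x, tau = x + step[:6], tau + step[6:]
    return x, tau
\end{verbatim}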
\subsection{Actuation and control}
\label{ActuationAndControl}
The SEE is actuated by inflating the respective SFAs with a working fluid. As shown in our previous work \cite{Lindenroth2016}, we utilize custom hydraulic syringe pumps (Fig. \ref{fig:SP}b)) which are driven by stepper motors (Nema 17, Pololu Corporation, Las Vegas, USA) to induce volume changes in the SFAs. The pumps are controlled with a microcontroller (Teensy 3.5, PJRC, Sherwood, USA) which communicates via a serial interface with a PC running ROS (Intel Core I7-7700HQ, XPS15 9560, Dell, Texas, USA). The PC generates demand velocities or positions for the microcontroller and solves the previously-defined kinetostatic model to determine the system Jacobian for a given pose. Furthermore, the PC handles the interfaces with peripherals such as a joystick for teleoperation (Spacemouse Compact, 3dconnexion, Monaco) and an electromagnetic (EM) tracking system for closed-loop position control (Aurora, NDI, Ontario, Canada).
The linear soft fluidic actuators which are utilized to drive the system were first conceptualized in our previous work \cite{Lindenroth2017}. They consist of a silicone rubber body (Dragonskin 10-NV, SmoothOn Inc, Pennsylvania, USA) and stiffer silicone rubber endcaps (SmoothSil 945, SmoothOn Inc, Pennsylvania, USA). A helical constraint is inserted into the silicone to counteract radial expansion of the actuator upon inflation. This, in combination with the stiff endcaps, allows the actuators to maintain their form and only expand in the direction of actuation. The moulding process for creating SFAs has been significantly improved from our previous work. For the radial constraint an extension spring (Fig. \ref{fig:Mould}(v)) is used. The liquid silicone rubber is injected through an inlet (Fig. \ref{fig:Mould}(ii)) using a syringe instead of being poured into the mould. This has the significant advantage that the mould can be pre-assembled without manually winding the constraint helix, as is commonly done in soft fluidic actuators \cite{Suzumori2002}. In combination with the injection of the silicone, this could reduce variations in the fabrication process. A drawing of a finished actuator is shown in Fig. \ref{fig:Mould}(vii). The combination of radial constraint and stiff endcaps allows the actuators to be driven efficiently with a volumetric input, without exhibiting a nonlinear relationship between input volume and output length change due to bulging, which is investigated in Section \ref{SFA_characterization}.
\begin{figure}[t!]
\centering
\includegraphics[width=\linewidth]{Linde6.pdf}
\caption{Overview of mould components (i)-(vi) and drawing of final SFA (vii)}
\label{fig:Mould}
\end{figure}
In this work, two methods for controlling the ultrasound probe pose are investigated. A joystick-based teleoperated open-loop controller is implemented to allow a sonographer to steer the probe according to the acquired ultrasound image stream. For this purpose, the aforementioned joystick is used. The axial motion of the joystick is linked to a translation of the SEE in Z-direction while the two tilt axes of the joystick are mapped to the X- and Y-rotation axes of the SEE. The high-level controller generates syringe pump velocities according to
\begin{equation}
\boldsymbol{\dot{V}}_d = \boldsymbol{J}_V^T \boldsymbol{v}_{cart}
\end{equation}
where $\boldsymbol{\dot{V}}_d$ is the vector of desired SFA volume rates, $\boldsymbol{v}_{cart}$ the target velocity in Cartesian space and $\boldsymbol{J}_V^T$ the actuation matrix of the system, which has been derived in Section \ref{Modelling}.
\textcolor{black}{
A closed-loop controller is integrated to drive the ultrasound probe tip position according to EM tracker feedback. For this purpose, a high-level trajectory generator continuously updates the demand position for the position controller, which in turn generates demand volumes for the three syringe pumps according to the control law
\begin{equation}
\Delta\boldsymbol{V}_d = \boldsymbol{J}_V^T \boldsymbol{U}
\end{equation}
where $\Delta\boldsymbol{V}_d$ is the desired change in volume and $\boldsymbol{U}$ the control signal. A linear PI controller of the form
\begin{equation}
\boldsymbol{U} = \boldsymbol{K}_P\boldsymbol{X}_e + \boldsymbol{K}_I\int\boldsymbol{X}_e dt
\end{equation}
is employed, where $\boldsymbol{X}_e = \boldsymbol{X}_d - \boldsymbol{X}_c$, with $\boldsymbol{X}_d$ and $\boldsymbol{X}_c$ being the demanded and measured probe tip positions, respectively.}
\textcolor{black}{The gain matrices $\boldsymbol{K}_P=\text{diag}(k_P,k_P,k_P)$ and $\boldsymbol{K}_I=\text{diag}(k_I,k_I,k_I)$ contain the gain constants $k_P$ and $k_I$, which have been verified experimentally and are defined as $0.3 \frac{\text{ml}}{\text{mm}}$ and $0.03 \frac{\text{ml}}{\text{mm}\cdot\text{s}}$ respectively.}
The target points are generated at 2Hz while both the position controller and the kinetostatic model are updated at 30Hz. The low-level step generation for driving the syringe pumps is achieved with an update rate of 6kHz.
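The following is a minimal sketch of this control loop, using the gains stated above and the 30Hz controller rate; it assumes that only the translational block of $\boldsymbol{J}_V$ is used, since $\boldsymbol{X}_e$ is a position error:
\begin{verbatim}
import numpy as np

class PIPositionController:
    def __init__(self, k_p=0.3, k_i=0.03,
                 dt=1.0/30.0):
        self.k_p, self.k_i = k_p, k_i
        self.dt = dt
        self.integral = np.zeros(3)

    def step(self, x_d, x_c, J_V_trans):
        # x_d, x_c: demand/measured position (mm)
        # J_V_trans: (3, n) translational block
        x_e = x_d - x_c
        self.integral += x_e * self.dt
        U = self.k_p*x_e + self.k_i*self.integral
        return J_V_trans.T @ U  # volumes (ml)
\end{verbatim}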
\section{\textcolor{black}{Experimental validation}}
\label{Experiments}
\subsection{SFA characterization}
Using the \textcolor{black}{three SFAs to control the SEE pose} in an open-loop configuration requires the volume-extension relation to be predictable at any given point in time. \textcolor{black}{From the radial mechanical constraint incorporated in the SFA design it is assumed that the relationship between induced volume and SFA length change is linear. To verify this, the extension behaviour of a single SFA is investigated for different working fluid changes} using a linear rail setup. The tip of the actuator is equipped with a slider and its position is tracked using a linear potentiometer. Contact friction between the linear bearings and rails is minimized using lubrication, and friction forces are therefore neglected in the evaluation of the results. Volume and extension data are tracked and synchronized using ROS.
\subsection{Stiffness characterization}
\label{Stiffness}
\textcolor{black}{As the SEE is highly compliant, knowledge of its deformability under external loads is required to determine its efficacy for the given task. To verify the structural behaviour of the SEE under contact forces required for the clinical application, the stiffness of the system is characterized} with the setup shown in Fig. \ref{fig:UR3}. The SEE is mounted to a base plate and its tip is connected through a force-torque sensor (Gamma, ATI, Apex, USA) to a robot manipulator (UR3, Universal Robots, Odense, Denmark). To determine the stiffness of the SEE in a given direction, the manipulator moves the SEE in said direction and the resulting reaction force is measured\textcolor{black}{. The robot allows for an accurate, repeatable displacement of the SEE in a defined direction, thus isolating the desired DOFs. The payload of the system, 3kg, is sufficiently high to withstand the induced reaction forces caused by the elastic deformation of the SEE}. The motions are repeated 10 times for each configuration. The linearized relationship between reaction force and manipulator displacement corresponds to the stiffness of the SEE.
The mesh reinforcement’s effect on the axial twist stiffness is determined by twisting the SEE repeatedly by $10\degree$ and measuring the z-axis moment. This is done for a configuration without mesh reinforcement, mesh reinforcement without crimps and mesh reinforcement with crimps.
\begin{figure}[t!]
\centering
\includegraphics[width=\linewidth]{Linde7.pdf}
\caption{Experimental setup for stiffness characterization}
\label{fig:UR3}
\end{figure}
\begin{figure}[t!]
\centering
\includegraphics[width=\linewidth]{Linde8.pdf}
\caption{SEE moving in contact with soft rubber patch. \textcolor{black}{The tip pose change with respect to the SEEs origin is highlighted with an arrow.}}
\label{fig:ContactPatch}
\end{figure}
The directional lateral stiffness is obtained by displacing the SEE tip radially in a defined direction over a distance of 10mm. This is repeated for four inflation levels (25\%, 50\%, 75\% and 100\% of the maximum SFA volume) and for directions between 0$\degree$ and $345\degree$ in $15\degree$ increments around the z-axis. The axial stiffness which corresponds to each extension is determined by displacing the SEE tip in negative z-direction by 1.5mm for 25\% and 50\% inflation, and by 2.5mm for 75\% and 100\% extension.
\subsection{Workspace and repeatability}
\label{Experimental_WS}
\textcolor{black}{To verify whether the attainable motions of the SEE satisfy the imposed clinical requirements for the ultrasound probe motion, the workspace of the SEE is mapped for achievable volumetric inputs.} The \textcolor{black}{SEE pose} is measured using an electromagnetic tracker (6DOF Reference, Aurora, NDI, Ontario, Canada) which is attached to the side of the SEE tip. The pose of the ultrasound probe tip is calculated with the known homogeneous transformation between tracker and tip. The SFA volumes are varied between 0\% and 100\% in 10\% increments and the resulting static tip pose is determined with respect to its deflated state.
The repeatability in positioning the tip of the SEE is determined by repeatedly approaching defined SFA volume states and measuring the tip pose. A set of 6 states is defined and the resultant trajectory is executed 50 times.
\subsection{Model validation}
The derived model is validated by comparing the workspace and corresponding SFA volumes to the calculated tip pose of the SEE. \textcolor{black}{For this purpose tip poses are calculated for each configuration achieved in Section \ref{Experimental_WS} and the error between model and measurement is determined.}
\subsection{Indentation behaviour}
\textcolor{black}{Whilst the abdomen exhibits an increased stiffness with the duration of the pregnancy and thus counteracts indentation of the ultrasound probe, deep tissue indentation in the early weeks can affect the positioning behaviour of the SEE.} To verify the effect a soft tissue-like contact has on the SEE, a soft mechanical phantom is created. The cylindrical phantom is moulded from a layer of Ecoflex Gel and a structural layer of Ecoflex 00-30 (SmoothOn Inc, Pennsylvania, USA).
The tip of the SEE is controlled to perform a line trajectory from its negative to positive x-axis limits at 60\% inflation. The tip pose is monitored with a magnetic tracker and contact forces between SEE and phantom are measured using the aforementioned force sensor at the base of the phantom. The manipulator is used to test for different indentation depths from 0mm to 15mm in 5mm increments.
\begin{figure}[t!]
\centering
\includegraphics[width=\linewidth]{Linde9.pdf}
\caption{Sonographer performing SEE-assisted ultrasound scanning of a prenatal abdominal phantom (iii). The SEE (i) is attached to a passive arm (ii) and manually placed on the phantom. A joystick (iv) is used to manipulate the ultrasound probe under visual guidance of the acquired image (v).}
\label{fig:Imaging}
\end{figure}
\subsection{Controllability}
\textcolor{black}{To achieve a desired view-plane in the ultrasound image, the probe attached to the SEE needs to be steerable accurately across the patient's body.} The controllability of the SEE is verified with the closed-loop position control system described in Section \ref{ActuationAndControl}. Target trajectories are defined as isosceles triangles with a base of 12.33mm and height of 10mm. For the tilted trajectory, the triangle is tilted about one of its sides by 19$\degree$. The trajectory is tested in a planar and a tilted configuration and tracked 3 times each.
To determine the controllability under an external load, a stiff silicone rubber patch is created as shown in Fig. \textcolor{black}{\ref{fig:ContactPatch}}. The patch is lubricated and positioned with its center at the tip of the SEE. To ensure contact with the patch, an initial axial force of 5N is generated by displacing the patch and running the position controller. This is repeated for planar and tilted configurations, where each trajectory is tracked 3 times.
\subsection{Sonographer-guided teleoperation}
The imaging capabilities of an ultrasound transducer guided by the SEE are verified using a prenatal abdominal phantom (SPACE FAN-ST, Kyoto Kagaku, Japan). The SEE is equipped with an abdominal ultrasound probe (X6-1, Philips, Amsterdam, Netherlands) which is connected to an ultrasound scanner (EPIQ7, Philips, Amsterdam, Netherlands). A passive positioning arm (Field Generator Mounting Arm, NDI, Ontario, Canada) is used to manually position the SEE in the region of interest on the phantom. The sonographer uses the provided ultrasound image feedback to steer the SEE with the connected joystick towards a desired view-plane.
The target view-planes are manually acquired using a handheld ultrasound probe. An overview of the experimental setup is shown in Fig. \ref{fig:Imaging}.
\section{Results}
\label{Results}
\begin{figure}[t!]
\centering
\includegraphics[width=\linewidth]{Linde10.pdf}
\caption{\textcolor{black}{Time series of probe pose (a) and tip force (b) for subject 5. Data between motions corresponding to standard views HC, AC and FL have been omitted.}}
\label{fig:ClinicalData}
\end{figure}
\subsection{Clinical data}
\label{ClinicalData}
\textcolor{black}{The results of the clinical data acquisition are presented in Table \ref{tab:ClinicalData}. For each subject the maximum observed motion range in translation and rotation of the ultrasound probe is presented for the HC, AC and FL standard views. The presented forces correspond to the 95th percentile of the occurring normal and tangential force magnitudes. A time series of the probe pose and force data obtained for subject 5 is shown in Fig. \ref{fig:ClinicalData}.}
For subject 2 only HC and AC views were obtained. Translations and rotations are shown with respect to the patient bed. The normal force is assumed to be acting only in negative probe direction and the tangential force shows the vector magnitude of the tangential forces in X and Y.
\textcolor{black}{To obtain workspace requirements which are compatible with the obtained forces, the observed probe motion is divided into transversal and axial translations and transversal rotations. In this study, axial rotations of the probe are ignored. Workspace requirements for the SEE are consequently obtained by selecting the larger translation between X and Y for the transversal motion $\delta_{req}^{tr}$ and the translation in Z for the axial motion $\delta_{req}^{ax}$, thus resulting in a required cylindrical workspace of radius $\delta_{req}^{tr}$ and height $\delta_{req}^{ax}$. For the orientation, the required rotation is defined by $\theta_{req}$. The mean required workspace from the clinical data is therefore
\begin{equation}
\begin{split}
\boldsymbol{\delta}_{req} = &[\delta_{req}^{ax}, \delta_{req}^{tr}]^T = [5.22\text{mm}, 7.75\text{mm}]^T\\
\theta_{req} = &5.08\degree
\end{split}
\end{equation}}
Corresponding maximum tilts of pitch and roll are in the ranges of $\pm9.8\degree$ and $\pm12.9\degree$. The maximum occurring normal and tangential forces are 20.77N and 10.67N respectively.
\begin{table}[htbp]
\centering
\caption{Range of motion and contact force required to obtain a desired view in foetal ultrasound. Values used to generate the required SEE workspace are marked in blue.}
\setlength\tabcolsep{2pt}
\resizebox{\linewidth}{!}{
\begin{tabular}{rccccccccc}
\multicolumn{1}{c}{\multirow{2}[3]{*}{Subj.}} & \multirow{2}[3]{*}{View} & \multicolumn{3}{c}{Max. translation [mm]} & \multicolumn{3}{c}{Max. rotation [deg]} & \multicolumn{2}{c}{Force range [N]} \\
\cmidrule{3-10} & & X & Y & Z & Yaw & Pitch & Roll & Normal & Tangential \\
\midrule
\multicolumn{1}{c}{\multirow{3}[2]{*}{1}} & HC & \textbf{8.50} & 3.41 & \textbf{5.67} & \textbf{6.19} & \textbf{4.98} & \textbf{7.72} & \textbf{13.81} & 1.92 \\
& AC & 4.49 & 5.60 & 2.76 & 3.44 & 3.45 & 3.17 & 4.10 & 2.60 \\
& FL & 6.03 & \textbf{7.86} & 4.51 & 5.51 & 4.84 & 2.72 & 7.44 & \textbf{4.15} \\
\midrule
\multicolumn{1}{c}{\multirow{3}[2]{*}{2}} & HC & 4.95 & \textbf{6.45} & 2.96 & 5.74 & \textbf{5.91} & 10.09 & \textbf{14.09} & \textbf{6.15} \\
& AC & \textbf{13.53} & 6.33 & \textbf{7.53} & \textbf{10.80} & 5.88 & \textbf{12.90} & 8.73 & 1.78 \\
& FL & - & - & - & - & - & - & - & - \\
\midrule
\multicolumn{1}{c}{\multirow{3}[2]{*}{3}} & HC & \textbf{9.73} & \textbf{12.41} & 6.52 & 7.26 & \textbf{9.80} & 4.92 & \textbf{13.27} & \textbf{4.87} \\
& AC & 1.53 & 9.37 & 2.60 & 4.23 & 3.62 & 4.10 & 5.55 & 1.40 \\
& FL & 5.81 & 6.93 & \textbf{7.96} & \textbf{10.34} & 3.80 & \textbf{7.78} & 6.61 & 2.36 \\
\midrule
\multicolumn{1}{c}{\multirow{3}[2]{*}{4}} & HC & \textbf{11.87} & \textbf{8.36} & \textbf{7.02} & \textbf{14.84} & \textbf{4.70} & \textbf{9.39} & 3.47 & \textbf{3.62} \\
& AC & 2.76 & 2.31 & 3.19 & 0.67 & 1.04 & 1.09 & \textbf{4.36} & 3.59 \\
& FL & 4.64 & 4.51 & 4.64 & 1.82 & 3.55 & 2.07 & 3.61 & 2.27 \\
\midrule
\multicolumn{1}{c}{\multirow{3}[2]{*}{5}} & HC & \textbf{13.08} & 11.82 & 5.79 & \textbf{14.78} & \textbf{4.96} & \textbf{7.14} & \textbf{8.30} & \textbf{8.94} \\
& AC & 9.77 & \textbf{19.91} & \textbf{10.22} & 2.83 & 4.55 & 2.65 & 4.46 & 1.49 \\
& FL & 4.11 & 3.77 & 2.61 & 6.75 & 1.92 & 3.48 & 3.59 & 2.92 \\
\midrule
\multicolumn{1}{c}{\multirow{3}[2]{*}{6}} & HC & 2.49 & \textbf{10.18} & \textbf{7.55} & \textbf{7.01} & \textbf{7.83} & \textbf{4.07} & \textbf{20.77} & 9.13 \\
& AC & \textbf{5.69} & 8.57 & 4.22 & 2.27 & 3.73 & 1.20 & 6.06 & 7.24 \\
& FL & 3.79 & 3.97 & 3.02 & 3.60 & 2.90 & 1.90 & 7.97 & \textbf{10.67} \\
\midrule
& $\mu$ & 6.63 & \textcolor{blue}{7.75} & \textcolor{blue}{5.22} & 6.36 & 4.56 & \textcolor{blue}{5.08} & \textcolor{blue}{8.01} & \textcolor{blue}{4.42} \\
& \textcolor{black}{$\sigma$} & \textcolor{black}{3.64} & \textcolor{black}{4.16} & \textcolor{black}{2.23} & \textcolor{black}{4.09} & \textcolor{black}{2.01} & \textcolor{black}{3.37} & \textcolor{black}{4.70} & \textcolor{black}{2.87} \\
& max & 13.53 & 19.91 & 10.22 & 14.84 & 9.8 & 12.9 & 20.77 & 10.67 \\
\end{tabular}}
\label{tab:ClinicalData}%
\end{table}%
\begin{figure}[t!]
\centering
\includegraphics[width=\linewidth]{Linde11.pdf}
\caption{SFA pressure (a) and extension (b) under increasing working fluid volume \textcolor{black}{for different inflation levels}.}
\label{fig:1DOF}
\end{figure}
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{Linde12.pdf}
\caption{Change in transversal stiffness with the direction of the applied force for different extensions (a). Change in stiffness with extension for axial (i) and transversal stiffness (ii) (b). Change in stiffness with bending for axial (i) and transversal stiffness (ii) (c).}
\label{fig:Stiffness}
\end{figure*}
\begin{figure}
\centering
\includegraphics[width=\linewidth]{Linde13.pdf}
\caption{Measured compression force of the SEE $f_{meas}$ at 0\% (a), 50\% (b) and 100\% (c) axial extension with its corresponding linear interpolation $f_{lin}$. For each configuration the compressed SEE is depicted and the SFA centerlines are highlighted.}
\label{fig:Buckling}
\end{figure}
\subsection{SFA characterization}
\label{SFA_characterization}
The results of the SFA characterization are shown in Fig. \ref{fig:1DOF}. The hydraulic pressure under SFA inflation and the resulting extension are shown in Fig. \ref{fig:1DOF}a) and \ref{fig:1DOF}b) respectively.
\textcolor{black}{Hysteresis is mainly observable in the fluid pressure. The mean deviation from the centerline between loading and unloading is 3.82$\pm$1.63kPa for the pressure and 0.14$\pm$0.05mm for the extension across the different inflation cycles. The maximum deviation due to hysteresis is observed in the pressure when inflating to 100\% (9.28kPa) and in the extension when inflating to 50\% (0.44mm)}.
The volume-extension curve of the SFA can be separated into two regions, a nonlinear (0ml to $\approx$1.25ml) and a linear region ($\approx$1.25ml to 5ml). In the linear region, the relationship can be approximated with a first order polynomial as $\Delta L(\Delta V)=6.61\,\text{mm}/\text{ml}\cdot\Delta V - 5.52\,\text{mm}$. \textcolor{black}{The interpolation is used to determine the relationship between the SFA length change and the input volume change in the form $a = {\Delta V}/{\Delta L} = \pi \cdot 6.9^2 \text{mm}^2$}. As the proportion of the nonlinear region compared to the overall extension of the SFA is small, it is ignored in the following investigations. SFAs are therefore assumed to be pre-extended with a volume of 1.25ml.
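As a consistency check, the fitted slope implies an effective piston area of
\[
a = \frac{\Delta V}{\Delta L} = \frac{1000\,\text{mm}^3/\text{ml}}{6.61\,\text{mm}/\text{ml}} \approx 151.3\,\text{mm}^2 \approx \pi \cdot (6.94\,\text{mm})^2,
\]
which agrees with the reported value of $\pi \cdot 6.9^2\,\text{mm}^2$ up to rounding of the radius.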
\begin{figure*}[t!]
\centering
\includegraphics[width=\linewidth]{Linde14.pdf}
\caption{Workspace of the SEE in position (a-b) and orientation (c-d). \textcolor{black}{The required workspace $\boldsymbol{\delta}_{req}$ without and with consideration of the deflected tip pose $\boldsymbol{\delta}_f$ are indicated. A cross-section view along the dotted lines shows the coupling between position and orientation in the performed bending motions (e), in which the dashed lines indicate iso-volume lines for $V_2=V_3$.}}
\label{fig:WSPos}
\end{figure*}
\subsection{Stiffness}
\label{Results_Stiffness}
The results of the twist stiffness characterization for each mesh configuration are shown in Table \ref{tab:Twist}, where $\mu$ and $\sigma$ are the mean and standard deviation of the twist stiffness $K_{tw}$ respectively. The application of a nylon mesh significantly stiffens the torsional axis of the system, to 184\% of its original value. A crimped mesh can further improve the torsional stiffness to 299\% of its original value.
\begin{table}[h!]
\centering
\caption{Twist stiffness}
\label{tab:Twist}
\begin{tabular}{@{}cccc@{}}
\toprule
& None & Uncrimped & Crimped\\ \midrule
$\mu(K_{tw})$ [Nmm/$\degree$] & 45.94 & 84.68 & 137.37\\
$\sigma(K_{tw})$ [Nmm/$\degree$] & 0.70 & 4.30 & 2.73 \\
\bottomrule
\end{tabular}
\end{table}
The results of the lateral stiffness characterization under inflation of the SEE are shown in Fig. \ref{fig:Stiffness}a) in polar coordinates. The radius indicates the magnitude of the stiffness in the given direction.
The axial and averaged lateral stiffness of the SEE under axial extension are presented in Fig. \ref{fig:Stiffness}b). The data are presented alongside their corresponding spline interpolations. Both decrease monotonically, with the axial stiffness starting from a maximum of 34.83N/mm and reaching a minimum of 14.41N/mm at 100\% extension. The transversal stiffness decreases at a comparable rate, from 3.21N/mm at 25\% down to 1.51N/mm at 100\% extension.
The stiffness variation under bending of the SEE is shown in Fig. \ref{fig:Stiffness}c) with the visualized trends interpolated by splines. Whilst the transversal stiffness decreases monotonically from 3.15N/mm to 1.77N/mm, the axial stiffness decreases from 21.15N/mm at 0.3$\degree$ tilt to a minimum of 9.99N/mm at 10$\degree$, followed by an increase in stiffness to 18.75N/mm at 13.75$\degree$. The presented data is employed to determine the minimum stiffness of the system throughout the workspace to infer possibly occurring tip pose deviations from external forces. It can be seen that the system reaches a minimum axial stiffness of 14.41N/mm and transversal stiffness of 1.51N/mm, both in a straight and fully extended configuration.
Despite high loads along the axial direction of the SEE, no discontinuous buckling behaviour of the SFAs is observable. This is demonstrated in Fig. \ref{fig:Buckling}. The force-displacement relationships and their corresponding linear interpolations are shown for 0\%, 50\% and 100\% extension, and depictions of the SEE at the corresponding maximum loads are presented. Whilst a slight increase in the nonlinearity between force and displacement is observable for 100\% extension (the corresponding mean absolute errors between data and linear interpolation are 0.84N, 0.62N and 1.16N for 0\%, 50\% and 100\% extension), no discontinuities are identifiable. The depictions of the deformed SEE show how the forced S-shape bending of the SFAs helps to prevent buckling. An increase in axial force only causes the curvature of the S-bend to increase.
\subsection{Workspace}
The workspace of the SEE in position and orientation is shown in Fig. \ref{fig:WSPos}. The figures show the tip pose acquired by the EM tracker for any given SFA configuration. The required workspace in position and orientation, $\boldsymbol{\delta}_{req}$ and $\boldsymbol{\theta}_{req}$, obtained in Section \ref{ClinicalData} from clinical data is projected into the center of the SEE workspace.
The deflected workspace $\boldsymbol{\delta}_f$ is calculated from the results obtained in Section \ref{Stiffness}. It can be seen that the SEE exhibits a minimum transversal stiffness of $1.51\text{N}/\text{mm}$ and a minimum axial stiffness of $14.41\text{N}/\text{mm}$ at 100\% extension. Taking into account the mean external load applied to the tip, a possible additional deflection of
\[
\boldsymbol{\delta}_f =
\begin{bmatrix}
14.41& 0\\
0& 1.51
\end{bmatrix}^{-1}
\begin{bmatrix}
8.01\\
4.42
\end{bmatrix}
=
\begin{bmatrix}
0.56\\
2.93
\end{bmatrix}
\]
Thus, the required workspace of the SEE extends correspondingly to
\begin{equation}
\boldsymbol{\hat{\delta}}_{req} = \boldsymbol{\delta}_{req} + \boldsymbol{\delta}_f =
\begin{bmatrix}
5.78\\
10.68
\end{bmatrix}
\end{equation}
Whilst in some instances larger motions have to be achieved, the derived values represent a baseline motion range desirable from the SEE.
\textcolor{black}{To quantify whether the SEE is able to reach the desired workspace, the intersections between requirement and SEE workspace volumes are computed. It can be seen that for the unloaded requirements in translation and rotation, $\boldsymbol{\delta}_{req}$ and $\theta_{req}$, the SEE can accomplish 100\% of the workspace. For the workspace adapted to account for an external force $\boldsymbol{\hat{\delta}_{req}}$, the robot achieves 95.18\% of the required workspace.}
It is shown that a maximum combined lateral deflection of 19.01mm can be reached along the principal plane of $SFA_3$, which is about 4.5\% lower than the maximum transversal motion \textcolor{black}{observed in manual scanning}. The maximum extension of the SEE of 22.13mm is reached for a full inflation of all SFAs \textcolor{black}{and exceeds the demanded axial translation of 10.22mm as well as the transversal translation of 19.91mm determined from the clinical data}. The maximum tilt of the SEE is reached along the principal plane of $SFA_1$ with 14.02$\degree$\textcolor{black}{, which is $\approx9\%$ greater than the maximum demanded tilt of 12.9$\degree$}. A maximum axial torsion of 1.03$\degree$ occurs. Compared to the tilt ranges in X and Y the twist is significantly lower and will therefore be ignored in the following investigations.
\textcolor{black}{The coupling between translation and rotation, the bending, of the SEE upon actuator inflation is shown in Fig. \ref{fig:WSPos}e) for a cross-section of the workspace along the central x-z-plane in translation and the corresponding y-axis of the rotational workspace. It can be seen that with the amount of transversal translation, the rotation of the tip increases, whilst axial extension has no effect on the rotation.}
\begin{table}[h!]
\centering
\caption{Repeatability}
\label{tab:Repeatability}
\begin{tabular}{@{}cccc@{}}
\toprule
Pose & \textcolor{black}{$[V_1, V_2, V_3]$}& $||\delta_e|| [\text{mm}]$ & $||\theta_e|| [\degree]$ \\
\midrule
\textcolor{black}{$C_1$}&$[0\%, 0\%, 0\%]$& $0.07 \pm 0.05$& $0.03 \pm 0.02$\\
\textcolor{black}{$C_2$}&$[75\%, 50\%, 75\%]$& $0.10 \pm 0.05$ & $0.05 \pm 0.02$\\
\textcolor{black}{$C_3$}&$[25\%, 0\%, 100\%]$& $0.07 \pm 0.03$ & $0.06 \pm 0.03$\\
\textcolor{black}{$C_4$}&$[50\%, 25\%, 0\%]$& $0.08 \pm 0.06 $& $0.04 \pm 0.02$\\
\textcolor{black}{$C_5$}&$[70\%, 80\%, 25\%]$& $0.09 \pm 0.04$ & $0.06 \pm 0.03$\\
\textcolor{black}{$C_6$}&$[0\%, 20\%, 70\%]$&$ 0.11 \pm 0.05$ & $0.07 \pm 0.03$\\
\midrule
\multicolumn{2}{c}{$\mu$ } & 0.09 & 0.05\\
\bottomrule
\end{tabular}
\end{table}
The results of the positioning repeatability evaluation are presented in Table \ref{tab:Repeatability}. The table indicates the mean Euclidean errors in position and orientation with their respective standard deviations from the given pose for the 50 repetitions \textcolor{black}{with respect to the mean pose for the given configuration, $\mu(\boldsymbol{x}(C_j))$}. \textcolor{black}{For a configuration $C_j$, for instance, the Euclidean error $||\delta_e||$ is computed as
\begin{equation}
||\delta_e|| = \sum_{i=1}^{n=50}\frac{||\boldsymbol{x}_i-\mu(\boldsymbol{x}(C_j))||}{n}
\end{equation}
The pose $\boldsymbol{x}_i$ for a given configuration $C_j$ is obtained by averaging the measured static tip pose over a period of 4 seconds. The orientation error $||\theta_e||$ and both corresponding standard deviations are calculated in the same manner.
Whilst the measured repeatability of the SEE, $\approx 0.1$mm in position and $0.05\degree$ in orientation, is slightly below the rated accuracy of the EM tracking system (0.48mm and 0.30$\degree$ RMS \cite{NDI2013}), averaging the pose data over 4 seconds reduces noise-related variance in the data. The samples are normally distributed across the workspace and thus the time-averaged mean is assumed to represent the tip pose sufficiently.}
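For completeness, the following is a minimal NumPy sketch of this error metric over the $n=50$ recorded positions of one configuration (the same computation applies to the orientations):
\begin{verbatim}
import numpy as np

def repeatability_error(positions):
    # positions: (n, 3) time-averaged tip
    # positions of one configuration
    mean_pose = positions.mean(axis=0)
    dev = np.linalg.norm(positions - mean_pose,
                         axis=1)
    return dev.mean(), dev.std()
\end{verbatim}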
\begin{figure}[t]
\centering
\includegraphics[width = \linewidth]{Linde15.pdf}
\caption{Workspace generated with model in position (a) and orientation (b). The colour indicates the normalized Euclidean error in the given state with respect to the maximum deviation from the model\textcolor{black}{, which is 2.37mm in position and 2.46$\degree$ in orientation}.}
\label{fig:ModelValidation}
\end{figure}
\subsection{Model validation}
\label{Model_validation}
The results of the model validation are shown in Fig. \ref{fig:ModelValidation} and summarized in Table \ref{tab:ModelValidation}\textcolor{black}{, where $\mu$ refers to the mean error, $\sigma$ to the standard deviation and $max$ to the maximum error}. The estimated workspace of the SEE generated with the kinetostatic model is shown in Fig. \ref{fig:ModelValidation}. The colour of each marker indicates the Euclidean distance between the calculated point and the corresponding measured pose, normalized to the maximum error in position and orientation respectively, namely 2.37mm and 2.46$\degree$. \textcolor{black}{The Young's modulus of the SFA material $E$ and its area moment of inertia $I$ have been manually tuned to minimize the Euclidean error in position and orientation. The obtained values are shown in Table \ref{tab:ModelParameters}.}
Overall, the model validation shows good results in predicting the tip pose under SFA extension, with a mean Euclidean error of $1.18\pm0.29$mm in position and $0.92\pm0.47\degree$ in orientation.
\begin{table}[h!]
\centering
\caption{Model validation}
\label{tab:ModelValidation}
\begin{tabular}{@{}ccccccc@{}}
\toprule
& \multicolumn{3}{c}{Displacement [mm]} & \multicolumn{3}{c}{Tilt [$\degree$]} \\ \cline{2-7}
& $\mu$ & $\sigma $ & max & $\mu$ & $\sigma $ & max \\ \midrule
$e_x$ & -0.81 & 0.20 & 1.25 & 0.04 & 0.44 & 1.34 \\
$e_y$ & -0.55 & 0.47 & 1.60 & 0.05 & 0.86 & 2.26 \\
$e_z$ & -0.05 & 0.50 & 1.80 & -0.13 & 0.36 & 1.36 \\
$||e||$ & 1.18 & 0.29 & 2.37 & 0.92 & 0.47 & 2.47 \\
\bottomrule
\end{tabular}
\end{table}
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{Linde16.pdf}
\caption{Effect of axial loading on transversal motion}
\label{fig:Contact}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{Linde17.pdf}
\caption{Example of tracked trajectory under external loading. The normal force is represented with a scale of 0.2mm/N.}
\label{fig:PositionControl}
\end{figure}
\subsection{Contact experiment}
The motion constraint induced by an indentation contact is investigated. Fig. \ref{fig:Contact} shows the constraint of the mean X-displacement and Y-tilt for a given motion over 10 repetitions, normalized to \textcolor{black}{their respective} maximum value\textcolor{black}{s of 12.92mm in position and 8.67$\degree$ in orientation when no external force is present, as well as their corresponding linear interpolations}. The transversal force applied by the SEE is measured with the force torque sensor. For both the displacement and the tilt, the magnitude declines monotonically. Whereas the displacement reaches a minimum at 27.84\%, the tilt remains less affected by the lateral force with a minimum of 55.35\%. Linearizing the trends yields a decrease of 14.09$\%/\text{N}$ for the displacement and only 8.56$\%/\text{N}$ for the tilt.
\begin{table}[t!]
\centering
\caption{Position control results}
\label{tab:PositionControl}
\begin{tabular}{@{}ccccccc@{}}
\toprule
& \multicolumn{3}{c}{Flat - Unloaded}& \multicolumn{3}{c}{Tilted - Unloaded} \\ \cline{2-7}
& $\mu$ & $\sigma$ & max & $\mu$ & $\sigma$ & max\\ \midrule
$e_x$ [mm] & 0.20 & 0.19 & 0.98 & 0.19 & 0.19 & 0.82\\
$e_y$ [mm] & 0.25 & 0.20 & 1.03 & 0.27 & 0.20 & 0.88\\
$e_z$ [mm] & 0.11 & 0.10 & 0.52 & 0.13 & 0.07 & 0.35\\
$||e||$ [mm] & 0.34 & 0.29 & 1.51 & 0.36 & 0.28 & 1.25\\ \midrule
& \multicolumn{3}{c}{Flat - Loaded}& \multicolumn{3}{c}{Tilted - Loaded}\\ \cline{2-7}
& $\mu$ & $\sigma$ & max & $\mu$ & $\sigma$ & max\\ \midrule
$e_x$ [mm] & 0.24 & 0.23 & 1.10 & 0.27 & 0.28 & 1.50 \\
$e_y$ [mm] & 0.33 & 0.22 & 1.08 & 0.32 & 0.25 & 1.07 \\
$e_z$ [mm] & 0.13 & 0.10 & 0.56 & 0.17 & 0.10 & 0.65 \\
$||e||$ [mm] & 0.42 & 0.33 & 1.64 & 0.45 & 0.39 & 1.95 \\ \bottomrule
\end{tabular}
\end{table}
\subsection{Position control}
An example of a tracked trajectory with external loading is shown in Fig. \ref{fig:PositionControl}. The position controller tracks the desired position accurately with marginally larger tracking error around the corners of the triangular path. The quantitative results of the controller evaluation for the three executions are presented in Table \ref{tab:PositionControl} for both the unloaded and loaded trajectories, \textcolor{black}{where, as in Section \ref{Model_validation}, $\mu$ refers to the mean error, $\sigma$ to the standard deviation and $max$ to the maximum error in the respective direction}. The results indicate a higher mean error for the z-direction regardless of the configuration, which is also observable in the visualization above.
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{Linde18.pdf}
\caption{Ultrasound images acquired by sonographer (a-c) and SEE (d-f) for HC (a,d), AC (b,e) and FL (c,f) measurements }
\label{fig:USImages}
\end{figure}
\subsection{Teleoperation and image-acquisition}
\label{ImageAcquisition}
The images obtained through manual ultrasound probe placement and steering with the SEE are presented in Fig. \ref{fig:USImages}. Anatomical structures of the foetus phantom are clearly visible throughout all images with minor shadowing on the left side of the FL standard view-plane, outside of the region of interest. In both cases, the regions of interest are centered in the image. Moreover, the contrast in the robot-acquired images is similar to the one in the manually-obtained images.
\section{Discussion}
In this work we developed a soft robotic ultrasound imaging system \textcolor{black}{to offload sonographers in day-to-day scanning routines. The system addresses the issue of providing a stable contact between the ultrasound probe and the patient, which could help improve sonographers’ ergonomics, particularly with respect to work-related musculoskeletal disorders which arise from stresses induced by repeated manual probe handling.} The robot allows for tele-operated scanning and provides a platform for advanced imaging approaches. It is designed in the form of an end-effector which is manually positioned in the area of interest and actively steered towards the desired view-plane. Due to its inherent compliance, the SEE is able to maintain contact while exhibiting sufficient axial stiffness to ensure mechanical coupling for the ultrasound image acquisition\textcolor{black}{, which is verified by acquiring standard views on a foetal ultrasound phantom}.
With its high axial and low lateral stiffness, the system shows good applicability to foetal ultrasound scanning. Despite the quick decline of stiffness with axial extension, the SEE is, with $14.41\text{N}/\text{mm}$ axial stiffness at full extension, still capable of applying sufficiently high forces to the patient without significant deformation, \textcolor{black}{approximately 1.44mm at the maximum axial load of 20.77N}. The lower lateral stiffness allows the system to adapt to the contact surface and to be moved away in case of discomfort in the patient, \textcolor{black}{whilst being sufficiently high to counteract transversal loads occurring during the intervention. It can be seen that for the fully extended SEE the transversal displacement at the maximum occurring load of 10.67N reaches 7.1mm.}
\textcolor{black}{The compliance of the system allows for deformation upon external motion when clamped onto a patient. Thus, the resulting contact force is significantly lower compared to a rigid system. It furthermore exhibits a low mass which could be beneficial in the dynamic safety of the system \cite{Haddadin2008}.}
\textcolor{black}{If the stiffness in the axial direction of the probe needs to be adjusted or the tip force controlled, the system can be equipped either with a force sensor at the base to estimate tip forces or serve as a sensor itself \cite{Lindenroth2017IntrinsicActuation}. While in the first case the tip pose change during the operation needs to be accounted for to accurately determine the external force, either by an accurate model or pose feedback, the second case can make use of the deformable structure of the robot paired with the kinematic constraints induced by the actuation channels to infer the external force.}
We have shown that the integration of a braided nylon mesh, which has previously only been used to avoid ballooning in SFAs, can significantly improve the twist stiffness of the SEE, up to three times in comparison to the mesh-free system. \textcolor{black}{The use of braided meshes is a highly versatile design approach and shows the potential to become a de facto standard in reinforcing not only soft robotic systems but also continuum robots against unwanted external twists induced by contact wrenches, thus enabling such robots for a wider range of applications.}
\textcolor{black}{The workspace achieved by the SEE covers, without external loading, the average translation and rotation motion ranges required to achieve a desired view, as shown from clinical data. Loading the probe with the contact forces measured in clinical scans and assuming the lowest possible stiffness of the system reduces the achieved workspace to about 95.18\% of the mean required range. Whilst, for example, the maximum translation of the SEE is, at 19.01mm, significantly higher than the required deflected motion of 10.68mm, the non-homogeneous shape of the SEE workspace dictates the limitations in covering the required translation range.} This limitation could be addressed by adding a linear stage to the base of the SEE to allow for axial translation without sacrificing the softness of the system. \textcolor{black}{Moreover, an axial rotation stage could be added to allow for more complex probe motions.}
\textcolor{black}{A high variability in the monitored ultrasound probe motion ranges can be observed across the obtained views and subjects. Whilst, on average, relatively small maximum deflections are observed, in some instances significantly larger motions occur. This is indicated by the high standard deviations in the motion ranges of the respective axes. Further research needs to be conducted into the exact metrics of the ultrasound probe motions and whether the designed system can satisfy those metrics. Additional considerations such as the coupling between different motion axes then need to be accounted for. Another factor in the feasibility of a desired view is the accuracy of the manual placement of the passive positioning arm. If the accuracy is low and the view is out of reach of the end-effector, the passive arm could either be repositioned manually or additional DOFs could be added to the system. More accurate methods should be employed in evaluating the manual probe motions. The use of a percentile is difficult for the given data due to the high variability in the times required to obtain desired views, as seen in the presented time series for the motions of subject 5 in Fig. \ref{fig:ClinicalData}. Thus, a larger scale and more streamlined data acquisition needs to be conducted.}
\textcolor{black}{We showed that the combination of SFAs and hydraulic actuation exhibits good properties for the SEE to be driven in an open-loop configuration. The relationship between SFA length and input volume is highly linear and only shows 0.14$\pm$0.05mm deviation due to hysteresis, thus allowing for an accurate prediction of the kinematic constraints imposed on the SEE. This complements the derived kinetostatic model, which is able to accurately predict the SEE tip motion with an accuracy of 1.18mm in position and 0.92$\degree$ in orientation as a function of the induced working fluid volume. The model deviates more along the boundaries of the workspace, which could be caused by the larger deflection of the SFAs and resultant nonlinearities caused by the bending of the actuators. This could be addressed by extending the model to a nonlinear approach, as we have for example demonstrated in \cite{Lindenroth2017IntrinsicActuation} for a soft continuum robot.}
\textcolor{black}{The repeatability lies, with 0.1mm in position and 0.05$\degree$ in orientation, slightly below the rated accuracy of the measurement system. As the obtained measurements are expressed relative to a mean pose, averaged over time and normally distributed, it is assumed that these values still represent the true pose well. The high repeatability should allow for accurate positioning of the SEE in view-plane finding applications.}
\textcolor{black}{The system maintains stability and controllability well when in contact with a tissue-like soft silicone rubber patch. We showed that the implemented closed-loop position controller is able to track target trajectories accurately with a mean position error of 0.35mm, with only marginally increased tracking errors of 0.44mm when a contact force is applied. In scenarios where EM tracking is not available, the ultrasound image could be used to provide pose feedback. This could then be employed as a substitute for the position feedback in the closed-loop controller.}
The coupling between position and orientation is an obvious limitation in the usability of the design. It can be seen, however, that the mechanical properties of the surface contact greatly affect the coupling behaviour. We have shown that an indenting contact reduces the lateral motion of the ultrasound probe significantly more than the tilt. It can easily be seen that a very stiff coupling in combination with the minimal contact friction caused by the application of ultrasound gel greatly reduces the tilt capabilities of the system while allowing for lateral sliding. It can therefore be assumed that in practice the coupling can be reduced by varying the axial pressure applied to the patient. This is supported by the findings of the tele-operated image acquisition in Section \ref{ImageAcquisition} and will be investigated further in future research.
\section{Conclusion}
The SEE design proposed in this work \textcolor{black}{shows a novel approach to applying soft robotics technologies in medical ultrasound imaging.} We have shown that under certain conditions the SEE satisfies the requirements imposed by the clinical application. The derived kinetostatic model mimics adequately the behaviour of the physical robot and the integrated system is capable of tracking target trajectories accurately and obtaining high-quality ultrasound images of a prenatal ultrasound phantom. In our future work, we will make use of the hydraulic actuation to integrate a force-controlled system through intrinsic force sensing, as shown in our previous work \cite{Lindenroth2017IntrinsicActuation}.
\input{tex/intro.tex}
\section{Background}
\input{tex/background.tex}
\section{Algorithms}
\input{tex/algo.tex}
\section{Discussions}
\input{tex/discussion.tex}
\section{Extension to Trust Region}
\input{tex/penfac.tex}
\section{Experiments}
\input{tex/exp.tex}
\section{Conclusion}
In the context of learning deterministic policies, we studied the properties of two lesser-known but efficient updates, Continuous Actor Critic Learning Automaton (CACLA) and Continuous Actor Critic (CAC).
We first showed how closely they both are related to the stochastic policy gradient (SPG).
We explained why they are well designed to learn continuous deterministic policies when the value function is only approximated.
We also highlighted the limitations of those methods: potentially poor sample efficiency when the dimension of the action space increases, and no guarantee that the underlying deterministic policy will converge toward a local optimum of $J(\mu_\theta)$, even with a linear approximation.
In the second part, we extended Neural Fitted Actor Critic (NFAC), itself an extension of CACLA, with a trust region constraint designed for deterministic policies and proposed a new algorithm, Penalized NFAC (PeNFAC).
Finally, we evaluated our implementation on various high-dimensional continuous environments and showed that PeNFAC performs better than DDPG and PPO at learning continuous deterministic policies.
As future work, we plan to consider off-policy learning and the combination of the updates of CAC and DPG together to ensure the convergence toward a local optimum while benefiting from the good updates of CAC.
\section*{Acknowledgments}
This work has been supported in part by the program of National Natural Science Foundation of China (No. 61872238).
Experiments presented in this paper were carried out using the Grid’5000 testbed, supported by a scientific interest group hosted by Inria and including CNRS, RENATER and several Universities as well as other organizations (see https://www.grid5000.fr).
\bibliographystyle{named}
\subsection{Continuous Actor Critic Learning Automaton}
Continuous Actor Critic Learning Automaton (CACLA) \cite{VanHasselt2007} is an actor-critic method that learns a stochastic policy $\pi$ and its estimated value function $\hat V^\pi$.
We assume in this paper that CACLA uses isotropic Gaussian exploration, which implies that
$\pi$ can be written as follows:
\begin{equation}
\label{eq:hypo_polstoch}
\pi_{\theta,\sigma}(\cdot|s) = \mathcal{N}\big(\mu_\theta(s), \sigma^2 I)
\end{equation}
where $I$ is the identity matrix and $\sigma>0$ is the exploration standard deviation, possibly annealed during learning.
CACLA alternates between two phases:
\noindent 1) a hill climbing step in the action space using a random optimization (RO) algorithm \cite{matyas1965random},
\noindent 2) a gradient-like update in the policy parameter space.
RO consists in repeating the following two steps:
i) sample a new action $a'$, which is executed in the environment in current state $s$, by adding a normally distributed noise to the current action $a=\mu(s)$,
ii) if $R(s, a') + \gamma \hat V^\pi(s') > \hat V^\pi(s)$ then $a \leftarrow a'$ else $a$ does not change.
\noindent Phase 2) is based on following update:
\begin{equation} \label{eq:base_cacla}
\text{If } \delta(s,a) > 0: \tilde{\theta} \leftarrow \theta - \alpha \big(\mu_\theta(s) - a\big) \nabla_\theta \mu_\theta(s),
\end{equation}
where $\delta(s,a) = R(s, a) + \gamma \hat V^\pi(s') - \hat V^\pi(s)$ is the temporal difference (TD) error.
As the expectation of the TD error is equal to the advantage function, this update can be interpreted as follows: if an exploratory action $a$ has a positive advantage then policy $\mu$ should be updated towards $a$.
Note that although CACLA executes a stochastic policy $\pi$, it can be seen as learning a deterministic policy $\mu$.
\citeauthor{VanHasselt2007} \shortcite{VanHasselt2007} state that when learning in continuous action space, moving away from a bad action could be meaningless.
Indeed, while for stochastic policies, the probability of a bad action can be decreased,
for deterministic policies, moving in the action space in the opposite direction of an action with a negative advantage may not necessarily lead to better actions.
Thus, CACLA's update is particularly appropriate for learning continuous deterministic policies.
\subsection{Continuous Actor Critic}
In our discussion, we also refer to a slightly different version of CACLA, Continuous Actor Critic (CAC) \cite{VanHasselt2007}.
The only difference between CAC and CACLA is that
the update in CAC is scaled by the TD error:
\begin{equation}
\text{If } \delta(s,a) > 0: \tilde{\theta} \leftarrow \theta - \alpha \delta(s,a) \big(\mu_\theta(s) - a\big) \nabla_\theta \mu_\theta(s),
\end{equation}
Thus an action with a larger positive advantage (here, estimated by the TD error) will have a bigger impact on the global objective.
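Both updates can be illustrated for a linear deterministic policy $\mu_\theta(s)=\theta^T s$, for which the gradient of the squared distance to the explored action has a closed form; the following is a minimal NumPy sketch, assuming the TD error $\delta$ is provided by the critic:
\begin{verbatim}
import numpy as np

def actor_update(theta, s, a, delta,
                 alpha=1e-3, cac=False):
    # theta: (d_s, d_a); s: (d_s,); a: explored
    # action; delta: TD error from the critic
    if delta <= 0.0:
        return theta  # only successful actions
    mu = theta.T @ s
    # grad of 0.5*||mu(s) - a||^2 w.r.t. theta
    grad = np.outer(s, mu - a)
    scale = delta if cac else 1.0  # CAC scaling
    return theta - alpha * scale * grad
\end{verbatim}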
\subsection{Neural Fitted Actor Critic}
The Neural Fitted Actor Critic (NFAC) \cite{zimmer2016,zimmer2018developmental} algorithm is an efficient instantiation of the CACLA update, which integrates the following techniques: batch normalization, $\lambda$-returns for both the critic and the actor, and batch learning with Adam \cite{Kingma2015}.
In this algorithm, the parameters are no longer updated at each time step, but at the end of a given number of episodes.
\subsection{Trust Region for Deterministic Policies}
We now introduce a trust region method dedicated to continuous deterministic policies.
Given the current deterministic policy $\mu$ and an exploratory policy $\pi$ defined from $\mu$, the question is how to find a new deterministic policy $\tilde{\mu}$ that improves upon $\mu$.
Because a deterministic policy is usually never played in the environment outside of testing phases, a direct measure between two deterministic policies (i.e., a deterministic equivalent of Equation~\ref{eq:stochperfmeasure}) is not directly exploitable.
Instead we introduce the following measure:
\begin{lemma}
The performance $J(\tilde{\mu})$ of a deterministic policy $\tilde{\mu}$ can be expressed by the advantage function of another stochastic policy $\pi$ built upon a deterministic policy $\mu$ as:
\begin{flalign} \label{eq:j mu bar}
J(\tilde{\mu}) = J(\mu) + \int_{\mathcal{S}} d_\gamma^{\pi}(s) \int_\mathcal{A} \pi(a|s) A^\mu(s, a) da ds + \notag \\
\int_{\mathcal{S}} d_\gamma^{\tilde{\mu}}(s) A^\pi \big(s, \tilde{\mu}(s)\big) ds.
\end{flalign}
\end{lemma}
See Appendix~\ref{appendix:fosl} for the proof.
The first two quantities on the RHS of (\ref{eq:j mu bar}) are independent of ${\tilde{\mu}}$.
The second one represents the performance difference from moving from the deterministic policy $\mu$ to its stochastic version $\pi$.
Because $d^{\tilde{\mu}}_\gamma$ would be too costly to estimate, we approximate it with the simpler quantity $d_\gamma^\pi$, as done by \citeauthor{Schulman2015} \shortcite{Schulman2015} for TRPO, a predecessor to PPO.
\begin{theorem} \label{theo:trustdeter} Given two deterministic policies $\mu$ and $\tilde{\mu}$, a stochastic Gaussian policy $\pi$ with mean $\mu(s)$ in state $s$ and independent variance $\sigma$, if the transition function $T$ is L-Lipschitz continuous with respect to the action from any state then:
\begin{flalign*}
&\Big| \int_{\mathcal{S}} d^{\tilde{\mu}}_\gamma(s) A^\pi \big(s, \tilde{\mu}(s)\big) ds - \int_{\mathcal{S}} d^{\pi}_\gamma(s) A^\pi \big(s, \tilde{\mu}(s)\big) ds \Big| \leq \\ & \frac{\epsilon L}{1-\gamma} \underset{t>0}{\operatorname{max\ }} \Big( \big|\big| \tilde{\mu}(s) - \mu(s)\big|\big|_{2,\infty} + \frac{2m\sigma}{\sqrt{2 \pi}} \Big)^t,
\end{flalign*}
where $\epsilon = \text{max}_{s,a} |A^\pi(s,a)| $.
\end{theorem}
\noindent The proof is available in Appendix~\ref{appendix:prooftrust}.
Thus, to ensure a stable improvement at each update, we need to keep both $|| \mu - \tilde{\mu} ||_{2,\infty}$ and $\sigma$ small.
Note that the Lipschitz continuity condition is natural in continuous action spaces.
It simply states that for a given state, actions that are close will produce similar transitions.
\subsection{Practical Algorithm}
To obtain a concrete and efficient algorithm, the trust region method can be combined with the previous algorithms.
Its integration into NFAC with a CAC update for the actor is called Penalized Neural Fitted Actor Critic (PeNFAC).
\citeauthor{VanHasselt2007} \shortcite{VanHasselt2007} observed that the CAC update performs worse than the CACLA update in their algorithms.
In their setting where the policy and the critic are updated at each timestep, we believe this observation is explained by the use of the TD error (computed from a single sample) to estimate the advantage function.
However, when using variance reduction techniques such as $\lambda$-returns and learning from a batch of interactions, or when
mitigating the update with a trust region constraint, we observe that this estimation becomes better (see~Figure~\ref{fig:penfaccomp}).
This explains why we choose a CAC update in PeNFAC.
In order to ensure that $|| \mu - \tilde{\mu} ||_{2,\infty}$ stays small over the whole state space, we approximate it with a Euclidean norm over the states visited by $\pi$.
To implement this constraint, we add a regularization term to the update and automatically adapt its coefficient; for a trajectory $(s_0, s_1, \ldots, s_h)$:
\begin{equation*}
\sum_{t=0}^{h-1} \Delta_{\text{CAC}}(s_t, \mu_\theta) + \beta \nabla_\theta \big|\big| \mu_{\text{old}}(s_t) - \mu_\theta(s_t) \big|\big|^2_2,
\end{equation*}
where $\beta$ is a regularization coefficient.
Similarly to the adaptive version of Proximal Policy Optimization (PPO) \cite{PPO}, $\beta$ is updated in the following way (starting from $\beta \leftarrow 1$):
\begin{itemize}
\item if $\hat{d}(\mu,\mu_{\text{old}}) < d_{\text{target}} / 1.5$: $\beta \leftarrow \beta / 2 $,
\item if $\hat{d}(\mu,\mu_{\text{old}}) > d_{\text{target}} \times 1.5$: $\beta \leftarrow \beta \times 2 $,
\end{itemize}
where $\hat{d}(\mu,\mu_{\text{old}}) = \frac{1}{\sqrt{m L}} \sum_{s \sim \pi} || \mu_{\text{old}}(s) - \mu_\theta(s) ||_2$ with $L$ being the number of gathered states.
These hyperparameters usually do not need to be optimized, as learning is not very sensitive to them.
The essential value for the designer to adapt is $d_\text{target}$.
Note that the introduction of this hyperparameter mitigates the need to optimize the learning rate for the update of the policy, which is generally a much harder task.
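The following sketch implements the adaptive coefficient rule above; \texttt{mu\_old} and \texttt{mu\_new} are illustrative callables for the deterministic policies before and after the update, and the default $d_\text{target}$ follows the value reported in the hyperparameter tables.
\begin{verbatim}
# Sketch of the adaptive regularization coefficient of PeNFAC.
import numpy as np

def adapt_beta(beta, mu_old, mu_new, states, d_target=0.03):
    # d_hat = (1/sqrt(m L)) * sum_s ||mu_old(s) - mu_new(s)||_2
    L = len(states)
    m = np.asarray(mu_new(states[0])).size
    d_hat = sum(np.linalg.norm(np.asarray(mu_old(s)) -
                               np.asarray(mu_new(s)))
                for s in states) / np.sqrt(m * L)
    if d_hat < d_target / 1.5:
        beta /= 2.0
    elif d_hat > d_target * 1.5:
        beta *= 2.0
    return beta
\end{verbatim}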
\onecolumn
\section{Proofs}
\subsection{Relation between DPG and CAC update for a given state}
\label{appendix:prooflimCAC}
For simplicity, we provide the proof for a single dimension $k$ of the parameter space. To denote the $k$\textsuperscript{th} dimension of a vector $x$, we write $x_k$. If $x$ is a matrix, $x_{:,k}$ represents its $k$\textsuperscript{th} column vector.
We will use the following result from \citeauthor{Silver2014} \shortcite{Silver2014}:
\begin{flalign*}
\lim_{\sigma \rightarrow 0} \nabla_\theta J(\pi_{\theta,\sigma}) = \nabla_\theta J(\mu_\theta).
\end{flalign*}
For this result, the following standard regularity conditions are required: $T, T_0, R, \mu, \pi, \nabla_a T, \nabla_a R, \nabla_\theta \mu$ are continuous in all variables and bounded.
From this result, we derive the following equation for a fixed state $s$:
\begin{flalign*}
\lim_{\sigma \rightarrow 0} \int_{\mathcal{A}} A^\pi(s,a) \nabla_\theta \pi_{\theta,\sigma}(a|s) da = \nabla_a A^\mu(s,a) \big|_{a=\mu_\theta(s)} \nabla_\theta \mu_\theta(s).
\end{flalign*}
\noindent We first study the special case of $\Delta_{\text{DPG}}(s, \mu_\theta)_k = 0$ and want to show that ${\lim_{\sigma \rightarrow 0} \Delta_{\text{CAC}} (s, \mu_\theta)}_k$ is also zero:
\begin{flalign*}
{\Delta_{\text{DPG}}(s, \mu_\theta)}_k = 0 \implies & \nabla_a A^\mu(s,a) \big|_{a=\mu_\theta(s)} {\nabla_\theta \mu_\theta(s)}_{:,k} = 0,\\
\implies & \lim_{\sigma \rightarrow 0} \int_{\mathcal{A}} A^\pi(s,a) {\nabla_\theta \pi_{\theta,\sigma}(a|s)}_{:,k} da = 0, \\
\implies & \lim_{\sigma \rightarrow 0} \frac{1}{\sigma^2} \int_{\mathcal{A}} \pi_{\theta, \sigma}(a|s) A^\pi(s,a) \big(a - \mu_\theta(s) \big) {\nabla_\theta \mu_\theta(s)}_{:,k} da = 0,\\
\implies & \lim_{\sigma \rightarrow 0} \frac{1}{\sigma^2} \int_{\mathcal{A}} \pi_{\theta, \sigma}(a|s) H\big(A^\pi(s,a)\big) A^\pi(s,a) \big(a - \mu_\theta(s) \big) {\nabla_\theta \mu_\theta(s)}_{:,k} da = 0,\\
\implies & \lim_{\sigma \rightarrow 0} {\Delta_{\text{CAC}} (s, \mu_\theta)}_k = 0.
\end{flalign*}
\noindent Now, we study the more general case ${\Delta_{\text{DPG}}(s, \mu_\theta)}_k \neq 0$:
\begin{flalign*}
g_k^+(s, \mu_\theta) =& \frac{\lim_{\sigma \rightarrow 0} \Delta_{\text{CAC}} (s, \mu_\theta)_k}{\Delta_{\text{DPG}}(s, \mu_\theta)_k}, \\
=& \frac{\lim_{\sigma \rightarrow 0} \int_{\mathcal{A}} A^\pi(s,a) H(A^\pi(s,a)) \nabla_\theta {\pi_{\theta,\sigma}(a|s)}_{:,k} da}{ \lim_{\sigma \rightarrow 0} \int_{\mathcal{A}} A^\pi(s,a) \nabla_\theta {\pi_{\theta,\sigma}(a|s)}_{:,k} da }, \\
= & \lim_{\sigma \rightarrow 0} \frac{\int_{\mathcal{A}} A^\pi(s,a) H(A^\pi(s,a)) \nabla_\theta {\pi_{\theta,\sigma}(a|s)}_{:,k} da}{ \int_{\mathcal{A}} A^\pi(s,a) {\nabla_\theta \pi_{\theta,\sigma}(a|s)}_{:,k} da },\\
& \implies 0 \leq g_k^+(s, \mu_\theta) \leq 1.
\end{flalign*}
\subsection{Performance of a deterministic policy expressed from a Gaussian stochastic policy}
\label{appendix:fosl}
The proof is very similar to \cite{kakade2002approximately,Schulman2015} and easily extends to mixtures of stochastic and deterministic policies:
\begin{flalign*}
& \int_{\mathcal{S}} d_\gamma^{\pi}(s) \int_\mathcal{A} \pi(a|s) A^\mu(s, a) da ds + \int_{\mathcal{S}} d_\gamma^{\tilde{\mu}}(s) A^\pi(s, \tilde{\mu}(s)) ds = \\
& \int_{\mathcal{S}} d_\gamma^{\pi}(s) \int_\mathcal{A} \pi(a|s) \Big( R(s,a) + \gamma \mathbb{E}\big[V^\mu(s') | a\big] - V^\mu(s) \Big) da \, ds +
\int_{\mathcal{S}} d_\gamma^{\tilde{\mu}}(s) \Big( R(s,\tilde\mu(s)) + \gamma \mathbb{E}\big[V^\pi(s') | \tilde{\mu}(s) \big] - V^\pi(s) \Big) ds = \\
& J(\pi) + J(\tilde{\mu}) + \int_{\mathcal{S}} d_\gamma^{\pi}(s) \int_\mathcal{A} \pi(a|s) \Big( \gamma \mathbb{E}\big[V^\mu(s') | a\big] - V^\mu(s) \Big) da \, ds + \int_{\mathcal{S}} d_\gamma^{\tilde{\mu}}(s) \Big( \gamma \mathbb{E}\big[V^\pi(s') | \tilde{\mu}(s) \big] - V^\pi(s) \Big) ds = \\
& J(\pi) + J(\tilde{\mu}) + \int_{\mathcal{S}} d_\gamma^{\pi}(s) \Big( - V^\mu(s) + \gamma \int_\mathcal{A} \pi(a|s) \mathbb{E}\big[V^\mu(s') | a\big] da \Big) ds - J(\pi) = \\
& J(\tilde{\mu}) - J(\mu).
\end{flalign*}
\subsection{Trust region for continuous deterministic policies}
\label{appendix:prooftrust}
For this theorem we also use the following standard regularity conditions:
$I(\mathcal{S}) = \int_\mathcal{S} ds < \infty$ and $\big|\big| \tilde{\mu}(s) - \mu(s)\big|\big|_{2,\infty} < \infty$. $m$ denotes the number of dimensions of the action space.
We start from the two terms we want to bound:
\begin{flalign}
&\Big| \int_{\mathcal{S}} d^{\tilde{\mu}}_\gamma(s) A^\pi(s, \tilde{\mu}(s)) - \int_{\mathcal{S}} d^{\pi}_\gamma(s) A^\pi(s, \tilde{\mu}(s)) \Big| = \notag \\
& \Big| \int_{\mathcal{S}} \big( d^{\tilde{\mu}}_\gamma(s) - d^{\pi}_\gamma(s) \big) A^\pi(s, \tilde{\mu}(s)) \Big| \leq \notag \\
& \int_{\mathcal{S}} \Big| d^{\tilde{\mu}}_\gamma(s) - d^{\pi}_\gamma(s) \Big| . \Big| A^\pi(s, \tilde{\mu}(s)) \Big| \leq \notag \\
& \epsilon \int_{\mathcal{S}} \Big| d^{\tilde{\mu}}_\gamma(s) - d^{\pi}_\gamma(s) \Big|, \label{eq:proofstep3}
\end{flalign}
where $\epsilon = \text{max}_{s,a} |A^\pi(s,a)| $.
So, we need to bound the difference between $d^{\tilde{\mu}}$ and $d^{\pi}$ for a given state $s$:
\begin{flalign}
& \Big| d^{\tilde{\mu}}_\gamma(s) - d^{\pi}_\gamma(s) \Big| = \notag \\
& \Big| \int_{\mathcal{S}} T_0(s_0) \Big( \sum^\infty_{t=0} \gamma^{t} p(s|s_0,t,\tilde{\mu}) - \sum^\infty_{t=0} \gamma^{t} p(s|s_0,t,\pi) \Big) ds_0 \Big| = \notag \\
& \Big| \int_{\mathcal{S}} T_0(s_0) \sum^\infty_{t=0} \gamma^{t} \Big( p(s|s_0,t,\tilde{\mu}) - p(s|s_0,t,\pi) \Big) ds_0 \Big| \leq \notag \\
& \int_{\mathcal{S}} \Big| T_0(s_0) \Big| \sum^\infty_{t=0} \gamma^{t} \Big| p(s|s_0,t,\tilde{\mu}) - p(s|s_0,t,\pi) \Big| ds_0 \leq \notag \\
& \int_{\mathcal{S}} \sum^\infty_{t=0} \gamma^{t} \Big| p(s|s_0,t,\tilde{\mu}) - p(s|s_0,t,\pi) \Big| ds_0 \leq \notag \\
& \int_{\mathcal{S}} \sum^\infty_{t=0} \gamma^{t} \underset{t'>0}{\operatorname{max}} \Big| p(s|s_0,t',\tilde{\mu}) - p(s|s_0,t',\pi) \Big| ds_0 = \notag \\
& \frac{1}{1-\gamma} \int_{\mathcal{S}} \underset{t>0}{\operatorname{max}} \Big| p(s|s_0,t,\tilde{\mu}) - p(s|s_0,t,\pi) \Big| ds_0. \label{eq:proofstep4}
\end{flalign}
Finally, we have to bound the difference between $ p(s|s_0,t,\tilde{\mu})$ and $ p(s|s_0,t,\pi) $.
To do so, we define $\tau = \{s_1, \ldots, s_t=s\}$, and denote by $\mathcal{D}_\tau$ the set of all possible paths from state $s_1$ to state $s_t=s$.
\begin{flalign}
& \Big| p(s|s_0,t,\tilde{\mu}) - p(s|s_0,t,\pi) \Big| = \notag \\
& \Big| \int_{\mathcal{D}_\tau} \prod_{k=1}^t \Big( T(s_k | s_{k-1}, \tilde{\mu}(s_{k-1})) \notag - \int_\mathcal{A} \pi(a|s_{k-1}) T( s_k | s_{k-1}, a ) da \Big) d\tau \Big| \leq \notag \\
& \int_{\mathcal{D}_\tau} \prod_{k=1}^t \Big| T(s_k | s_{k-1}, \tilde{\mu}(s_{k-1})) \notag - \int_\mathcal{A} \pi(a|s_{k-1}) T( s_k | s_{k-1}, a ) da \Big| d\tau = \notag \\
& \int_{\mathcal{D}_\tau} \prod_{k=1}^t \Big| \int_\mathcal{A} \pi(a|s_{k-1}) \big( T(s_k | s_{k-1}, \tilde{\mu}(s_{k-1})) \notag - T( s_k | s_{k-1}, a ) \big) da \Big| d\tau \leq \notag \\
& \int_{\mathcal{D}_\tau} \prod_{k=1}^t \int_\mathcal{A} \pi(a|s_{k-1}) \Big| T(s_k | s_{k-1}, \tilde{\mu}(s_{k-1})) \notag - T( s_k | s_{k-1}, a ) \Big| da d\tau \leq \notag
\end{flalign}
\begin{flalign}
& L \int_{\mathcal{D}_\tau} \prod_{k=1}^t \int_\mathcal{A} \pi(a|s_{k-1}) \Big|\Big| \tilde{\mu}(s_{k-1}) - a\Big|\Big|_2 da d\tau = \label{eq:proofstep1} \\
& L \int_{\mathcal{D}_\tau} \prod_{k=1}^t \int \frac{1}{(\sigma \sqrt{2 \pi})^m} e^{-\frac{1}{2\sigma^2} ||b||_2^2} \Big|\Big| \tilde{\mu}(s_{k-1}) - \mu(s_{k-1}) + b\Big|\Big|_2 db d\tau \leq \label{eq:proofstep2} \\
& L \int_{\mathcal{D}_\tau} \prod_{k=1}^t \int \frac{1}{(\sigma \sqrt{2 \pi})^m} e^{-\frac{1}{2\sigma^2} ||b||_2^2} \Big( \Big|\Big| \tilde{\mu}(s_{k-1}) - \mu(s_{k-1})\Big|\Big|_2 + \Big|\Big| b\Big|\Big|_2 \Big) db d\tau \leq \notag \\
& L \int_{\mathcal{D}_\tau} \prod_{k=1}^t \Big( \Big|\Big| \tilde{\mu}(s_{k-1}) - \mu(s_{k-1})\Big|\Big|_2 + \int \frac{1}{(\sigma \sqrt{2 \pi})^m} e^{-\frac{1}{2\sigma^2} ||b||_2^2} \Big|\Big| b\Big|\Big|_1 db \Big) d\tau = \notag \\
& L \int_{\mathcal{D}_\tau} \prod_{k=1}^t \Big( \Big|\Big| \tilde{\mu}(s_{k-1}) - \mu(s_{k-1})\Big|\Big|_2 + \frac{2m\sigma}{\sqrt{2 \pi}} \Big) d\tau \leq \notag \\
& L \int_{\mathcal{D}_\tau} \Big( \underset{s_k \in \tau}{\operatorname{max\ }} \Big|\Big| \tilde{\mu}(s_{k}) - \mu(s_{k})\Big|\Big|_2 \notag + \frac{2m\sigma}{\sqrt{2 \pi}} \Big)^t d\tau \leq \\
& I(\mathcal{S})^t L \Big( \underset{s_k \in \mathcal{S}}{\operatorname{max\ }} \Big|\Big| \tilde{\mu}(s_{k}) - \mu(s_{k})\Big|\Big|_2 + \frac{2m\sigma}{\sqrt{2 \pi}} \Big)^t. \label{eq:proofstep5}
\end{flalign}
To obtain (\ref{eq:proofstep1}), we use the assumption that the transition function is L-Lipschitz continuous with respect to the action and the L2 norm.
To obtain (\ref{eq:proofstep2}), we use (\ref{eq:hypo_polstoch}).
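For completeness, the constant $\frac{2m\sigma}{\sqrt{2\pi}}$ appearing above is the expected $\ell_1$-norm of the exploration noise: for $b \sim \mathcal{N}(0, \sigma^2 I_m)$, each coordinate satisfies $\mathbb{E}|b_i| = \sigma\sqrt{2/\pi}$, so that
\begin{equation*}
\mathbb{E}\big[||b||_1\big] = m \sigma \sqrt{\frac{2}{\pi}} = \frac{2m\sigma}{\sqrt{2\pi}},
\end{equation*}
and the intermediate step uses $||b||_2 \leq ||b||_1$ to pass from the Euclidean norm to the $\ell_1$-norm.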
The bound in (\ref{eq:proofstep5}) no longer depends on $s$ and $s_0$; combined with (\ref{eq:proofstep4}) and (\ref{eq:proofstep3}), it gives:
\begin{flalign}
& \frac{\epsilon L}{1-\gamma} \underset{t>0}{\operatorname{max\ }} I(\mathcal{S})^{t+2} \Big( \Big|\Big| \tilde{\mu}(s) - \mu(s)\Big|\Big|_{2,\infty} + \frac{2m\sigma}{\sqrt{2 \pi}} \Big)^t \leq \notag \\
&\frac{\epsilon L}{1-\gamma} \underset{t>0}{\operatorname{max\ }} \Big( \Big|\Big| \tilde{\mu}(s) - \mu(s)\Big|\Big|_{2,\infty} + \frac{2m\sigma}{\sqrt{2 \pi}} \Big)^t. \label{eq:proofstep6}
\end{flalign}
To obtain (\ref{eq:proofstep6}), we suppose that $I(\mathcal{S})$ is smaller than 1. We can make this assumption without loss of generality: it only affects the magnitude of the Lipschitz constant.
Thus if $ \big|\big| \tilde{\mu}(s) - \mu(s)\big|\big|_{2,\infty} + \frac{2m\sigma}{\sqrt{2 \pi}} $ stays smaller than $1$, the optimal $t$ will be $1$, and (\ref{eq:proofstep6}) could be reduced to: $$ \frac{\epsilon L}{1-\gamma} \Big( \Big|\Big| \tilde{\mu}(s) - \mu(s)\Big|\Big|_{2,\infty} + \frac{2m\sigma}{\sqrt{2 \pi}} \Big). $$
\section{Additional experiments on CACLA's update}
In these two experiments, we want to highlight the good performance of CACLA compared to SPG and DPG without neural networks. The main argument for using DPG instead of SPG is its efficiency when the action dimensions become large. In the first experiment, we study whether CACLA suffers from the same variance problem as SPG. The second experiment supports our claim that CACLA is more robust than SPG and DPG when the approximation made by the critic is less accurate.
\subsection{Sensitivity to action space dimensionality}
\label{appendix:sensitivitydima}
We used a setup similar to that of \citeauthor{Silver2014} \shortcite{Silver2014}: these environments contain only one state and the horizon is fixed to one. They are designed such that the dimensionality of the action space can easily be controlled while introducing only little bias in the critic approximation. The policy parameters directly represent the action: $\mu_\theta(\cdot) = \theta$.
\noindent Compatible features are used to learn the Q value function for both SPG and DPG. For CACLA, the value function V is approximated through a single parameter.
The Gaussian exploration noise and the learning rate of both the critic and actor have been optimized for each algorithm on each environment.
In Figure~\ref{fig:1s}, similarly to \citeauthor{Silver2014} \shortcite{Silver2014}, we observe that SPG is indeed more sensitive to larger action dimensions.
CACLA is also sensitive to this increase in dimensionality but not as much as SPG.
Finally, we also note that even if the solutions of CACLA and DPG are not exactly the same theoretically, they are very similar in practice.
\begin{figure}[H]
\begin{center}
\includegraphics[width=0.7\textwidth]{plots/1S-QUA-crop.pdf}
\includegraphics[width=0.7\textwidth]{plots/1S-RAS-crop.pdf}
\includegraphics[width=0.7\textwidth]{plots/1S-ROS-crop.pdf}
\end{center}
\caption{Comparison of DPG, SPG and CACLA over three domains with 100 seeds for each algorithm. The action dimension is 5 on the left and 50 on the right.}
\label{fig:1s}
\end{figure}
\subsection{Robustness to the critic approximation errors}
Compared to the previous experiment, we introduce a bigger bias in the approximation of the critic by changing the application domains: the horizon is longer and there is an infinite number of states.
The policy is represented as $\mu_\theta(s)=\phi(s) \cdot \theta$ where $\phi(s)$ are tiles coding features.
\begin{figure}[H]
\begin{center}
\includegraphics[width=0.33\linewidth]{plots/COCLA-MCC-crop.pdf}
\includegraphics[width=0.33\linewidth]{plots/COCLA-P-crop.pdf}
\includegraphics[width=0.33\linewidth]{plots/COCLA-RR-crop.pdf}
\end{center}
\caption{Comparison of CACLA, DPG and SPG over two environments of OpenAI Gym and one environment of Roboschool (60 seeds are used for each algorithm). }
\label{fig:cocla}
\end{figure}
In Figure~\ref{fig:cocla}, we observe that as soon as value functions become harder to learn, CACLA performs better than both SPG and DPG.
\section{Broader comparison between PeNFAC and NFAC}
\label{appendix:penfacvsnfac}
To avoid overloading previous curves, we did not report the performance of NFAC (except in the ablation study on the HalfCheetah environment). In Figure~\ref{fig:nfac}, we extend this study to two other domains of Roboschool: Hopper and Humanoid.
\begin{figure}[H]
\begin{center}
\includegraphics[width=0.48\linewidth]{plots/CO6-crop.pdf}
\includegraphics[width=0.48\linewidth]{plots/CO7-crop.pdf}
\end{center}
\caption{Comparison of PeNFAC and NFAC over RoboschoolHopper and RoboschoolHumanoid with 60 seeds for each algorithm.}
\label{fig:nfac}
\end{figure}
We observe that PeNFAC is significantly better than NFAC, which demonstrates the efficiency of the trust region update combined with CAC.
\section{Impact of evaluating PPO with a deterministic policy}
\label{appendix:deterppo}
\begin{figure}[H]
\begin{center}
\includegraphics[width=0.48\linewidth]{plots/CO9-crop.pdf}
\includegraphics[width=0.48\linewidth]{plots/CO8-crop.pdf}
\end{center}
\caption{Comparison of evaluating PPO with a deterministic policy instead of the stochastic policy produced by PPO.}
\label{fig:deter_ppo}
\end{figure}
In Figure~\ref{fig:deter_ppo}, we observe that using a deterministic policy to evaluate the performance of PPO is not penalizing.
This is the only experiment of the paper where deterministic policies and stochastic policies are compared.
\section{Hyperparameters}
\label{appendix:hyperparam}
For the sake of reproducibility \cite{Henderson2017}, the hyperparameters used during the grid search are reported here. In Tables~\ref{tab:hyper1}-\ref{tab:hyper4}, ``ho'', ``ha'' and ``hu'' stand respectively for the Hopper, HalfCheetah, and Humanoid Roboschool environments.
\begin{table}[H]
\centering
\begin{tabular}{c|c}
$\gamma$ & $0.99$ \\
Actor network & $64 \times 64$ \\
Critic network & $64 \times 64$ \\
Actor output activation & TanH \\
\end{tabular}
\caption{Set of hyperparameters used during the training with every algorithm.}
\label{tab:hyperX}
\end{table}
\begin{table}[H]
\centering
\renewcommand{\arraystretch}{1.5}
\begin{tabular}{c|c}
Network hidden activation & $\underset{\text{ho,ha,hu}}{\text{Leaky ReLU} (0.01)}$, TanH \\
Actor learning rate & $\underset{\text{ho,ha,hu}}{10^{-4}}$, $10^{-3}$, $10^{-2}$ \\
Critic learning rate & $10^{-4}$, $\underset{\text{ho,ha,hu}}{10^{-3}}$, $10^{-2}$ \\
Batch norm & first layer of the actor \\
$d_{\text{target}}$ & $\underset{\text{ho,ha,hu}}{0.03}$, $0.01$, $0.005$ \\
ADAM & $\underset{\text{ho,ha,hu}}{(0, 0.999, 10^{-8})}$, $(0.9, 0.999, 10^{-8})$ \\
Number of ADAM iteration (actor) & 10, $\underset{\text{ho,ha,hu}}{30}$, $50$\\
Number of ADAM iteration (critic) & 1\\
$\lambda$ & $0$, $0.5$, $0.6$, $\underset{\text{ho,ha,hu}}{0.9}$, $0.95$, $0.97$ \\
$\sigma^2$ (Truncated Gaussian law) & $0.01$, $0.05$, $0.1$, $\underset{\text{ho,ha,hu}}{0.2}$, $0.5$ \\
Number fitted iteration & $1$, $\underset{\text{ho,ha,hu}}{10}$, $20$, $50$ \\
Update every $x$ episodes & $1$, $2$, $3$, $\underset{\text{ha}}{5}$, $10$, $\underset{\text{ho}}{15}$, $20$, $30$, $\underset{\text{hu}}{50}$, $100$
\end{tabular}
\caption{Set of hyperparameters used during the training with PeNFAC.}
\label{tab:hyper1}
\end{table}
\begin{table}[H]
\centering
\renewcommand{\arraystretch}{1.5}
\begin{tabular}{c|c}
Network hidden activation & $\underset{\text{ho,ha,hu}}{\text{TanH}}$, ReLU, Leaky ReLU (0.01) \\
Layer norm & no \\
ADAM & $(0.9, 0.999, 10^{-5})$ \\
Entropy coefficient & 0 \\
Clip range & 0.2 \\
$\lambda$ & $\underset{\text{ho,ha}}{0.97}, \underset{\text{hu}}{0.95}$ \\
Learning rate & $\underset{\text{hu}}{10^{-4}}, \underset{\text{ho,ha}}{3 \times 10^{-4}}$ \\
nminibatches & $\underset{\text{hu}}{4}$, $\underset{\text{ho,ha}}{32}$ \\
noptepochs & $4$, $\underset{\text{ho,ha}}{10}$, $15$, $\underset{\text{hu}}{50}$ \\
nsteps & $2^{11}$, $\underset{\text{ha}}{2^{12}}$, $\underset{\text{ho}}{2^{13}}$, $\underset{\text{hu}}{2^{14}}$, $2^{15}$, $2^{16}$ \\
Samples averaged to make the policy deterministic & 15 \\
\end{tabular}
\caption{Set of hyperparameters used during the training with PPO.}
\label{tab:hyper2}
\end{table}
\begin{table}[H]
\centering
\renewcommand{\arraystretch}{1.5}
\begin{tabular}{c|c}
Network hidden activation & $\text{Leaky ReLU} (0.01)$ \\
Actor learning rate & $10^{-4}$ \\
Critic learning rate & $10^{-3}$ \\
Batch norm & first layer of the actor \\
ADAM & $\underset{\text{ho,ha,hu}}{(0, 0.999, 10^{-8})}$, $(0.9, 0.999, 10^{-8})$ \\
L2 regularization of the critic & $\underset{\text{ha,ho}}{0.01}$, $\underset{\text{hu}}{\text{without}}$ \\
Exploration & Gaussian ($0.2$), $\underset{\text{ha,ho,hu}}{\text{Ornstein Uhlenbeck} (0.001, 0.15, 0.01)}$\\
Mini batch size & $32$, $64$, $\underset{\text{hu,ha,ho}}{128}$ \\
Reward scale & $0.1$, $1$, $\underset{\text{hu,ha,ho}}{10}$ \\
Soft update of target networks & $\underset{\text{hu}}{0.001}$, $\underset{\text{ha,ho}}{0.01}$ \\
Replay memory & $10^6$ \\
N-step returns & $\underset{\text{ha}}{1}$, $\underset{\text{hu,ho}}{5}$
\end{tabular}
\caption{Set of hyperparameters used during the training with DDPG (DDRL implementation).}
\label{tab:hyper3}
\end{table}
\begin{table}[H]
\centering
\renewcommand{\arraystretch}{1.5}
\begin{tabular}{c|c}
Network hidden activation & $\underset{\text{ho,ha,hu}}{\text{ReLU}}$, TanH \\
Actor learning rate & $10^{-4}$ \\
Critic learning rate & $10^{-3}$ \\
Layer norm & no \\
ADAM & $(0.9, 0.999, 10^{-5})$ \\
L2 regularization of the critic & $0.01$ \\
Exploration & Ornstein Uhlenbeck ($0.2$), $\underset{\text{ho,ha,hu}}{\text{Parameter Space (0.2)}}$\\
Mini batch size & $128$ \\
Reward scale & $1$, $\underset{\text{ho,ha,hu}}{10}$ \\
Soft update of target networks & $\underset{\text{ho,hu}}{0.001}$, $\underset{\text{ha}}{0.01}$ \\
Replay memory & $10^6$ \\
nb\_rollout\_steps & $10$,$\underset{\text{ho,ha,hu}}{100}$ \\
nb\_train\_steps & $1$,$10$,$\underset{\text{ho,ha,hu}}{50}$ \\
\end{tabular}
\caption{Set of hyperparameters used during the training with DDPG (OpenAI baselines implementation).}
\label{tab:hyper4}
\end{table}
\subsection{CACLA}
We first explain the relationship between an algorithm based on stochastic policy gradient (SPG) and CACLA.
For this discussion, we assume that SPG is applied to parametrized policies that are Gaussian policies $\pi_{\theta, \sigma}$ (i.e., Gaussian around $\mu_\theta$).
The first common feature between the two algorithms is that the distributions over states they induce during learning are the same (i.e., $d^{\pi}_\gamma(s)$), because they both use the same exploratory policy to interact with the environment.
Moreover, SPG can be written as follows:
\begin{align*}
&\nabla_\theta J(\pi_{\theta,\sigma}) \notag \\
&= \int_{\mathcal{S}} d^\pi_\gamma(s) \int_{\mathcal{A}} \pi_{\theta,\sigma}(a|s) A^\pi(s,a) \nabla_\theta \log \pi_{\theta,\sigma}(a|s) \, da \, ds, \notag \\
&= \frac{1}{\sigma^2} \int_{\mathcal{S}} d^\pi_\gamma(s) \int_{\mathcal{A}} \pi_{\theta,\sigma}(a|s) A^\pi(s,a) \big(a - \mu_\theta(s)\big) \notag \\
& \hspace{8em} \cdot \nabla_\theta \mu_\theta(s) \, da \, ds \notag.
\end{align*}
For CACLA, we interpret update~(\ref{eq:base_cacla}) as a stochastic update in the following direction:
\begin{flalign}
\label{eq:caclaeq}
& \int_{\mathcal{S}} d^{\pi}_\gamma(s) \Delta_{\text{CACLA}}(s, \mu_\theta) ds,\\ \text{with } & \Delta_{\text{CACLA}}(s,\mu_\theta) =
\int_{\mathcal{A}} \pi_{\theta, \sigma}(a|s)
H\big(A^\pi(s,a)\big) \times \notag \\
& \hspace{12em}
\big(\mu_\theta(s) - a\big) \nabla_\theta \mu_\theta(s) da \notag,
\end{flalign}
where $H$ is the Heaviside function.
Indeed, the inner integral is estimated using a single Monte Carlo sample during the run of CACLA.
Under this form, it is easy to see the similarity between SPG and CACLA.
The constant factor $\frac{1}{\sigma^2}$ can be neglected because it may be integrated into the learning rate.
The sign difference of the term $(a-\mu_\theta(s))$ is because SPG performs gradient ascent and CACLA gradient descent.
So the main difference between SPG and CACLA is the replacement of $A^\pi(s,a)$ by $H(A^\pi(s,a))$.
Therefore CACLA optimizes its exploratory stochastic policy through an approximation of SPG hoping to improve the underlying deterministic policy (for a fixed state, the direction of CACLA and SPG are the same up to a scalar).
Moreover, relating CACLA's update with (\ref{eq:caclaeq}) also brings to light two main limitations.
The first one concerns the inner integral over the action space, which has a high variance.
Therefore, we expect CACLA to be less and less data-efficient in high-dimensional action spaces (this is the main theoretical justification of DPG over SPG; see Appendix~\ref{appendix:sensitivitydima}).
The second limitation is that, over one update, CACLA does not share the same exact optimal solutions as DPG or SPG.
Indeed, if we define $\theta^*$ such that $\nabla_\theta J(\mu_{\theta})\big|_{\theta =\theta^*} = 0$, it is not possible to prove that (\ref{eq:caclaeq}) is also zero (because of the integral over the state space).
This means that CACLA could decrease the performance of such a locally optimal solution.
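The sign agreement between the two directions for a fixed state can be checked numerically; the following self-contained sketch compares Monte Carlo estimates of the SPG and CACLA directions under a synthetic quadratic advantage (all quantities are illustrative, not part of either algorithm):
\begin{verbatim}
# Synthetic check: for a fixed state, the CACLA direction agrees
# in sign with the SPG direction (here w.r.t. the policy mean).
import numpy as np

rng = np.random.default_rng(0)
mu, sigma = 0.0, 0.3
A = lambda a: -(a - 0.5) ** 2 + 0.1    # toy advantage function

a = mu + sigma * rng.normal(size=200_000)
spg = np.mean(A(a) * (a - mu) / sigma**2)   # SPG estimate
cacla = np.mean((A(a) > 0) * (a - mu))      # CACLA: towards good a

print(spg, cacla, np.sign(spg) == np.sign(cacla))  # both positive
\end{verbatim}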
\subsection{CAC}
Similarly, the update in CAC can be seen as a stochastic update in the following direction:
\begin{flalign}
\label{eq:cac}
& \int_{\mathcal{S}} d^{\pi}_\gamma(s) \Delta_{\text{CAC}}(s, \mu_\theta) ds, \notag \\
\text{with } & \Delta_{\text{CAC}}(s,\mu_\theta) =
\int_{\mathcal{A}} \pi_{\theta, \sigma}(a|s) A^\pi(s,a) H\big(A^\pi(s,a)\big) \times \notag \\ & \hspace{10em} \big(\mu_\theta(s) - a\big) \nabla_\theta \mu_\theta(s) da \notag.
\end{flalign}
This shows that CAC is even closer to SPG than CACLA and provides a good theoretical justification of this update at a local level (it avoids moving toward potentially worse actions).
However, there is also a justification at a more global level.
\begin{lemma} For a fixed state, when the exploration tends to zero,
CAC maintains the sign of the DPG update with a scaled magnitude:
\begin{equation}
\lim_{\sigma \rightarrow 0} \Delta_{\text{CAC}} (s, \mu_\theta) = g^+(s,\pi) \circ \Delta_{\text{DPG}} (s, \mu_\theta),
\end{equation}
where $g^+(s,\pi)$ is a function taking values in $[0, 1]^{n}$, with $n$ the number of parameters of the deterministic policy, and $\circ$ is the Hadamard (element-wise) product.
\end{lemma}
The proof is provided in Appendix~\ref{appendix:prooflimCAC}.
The consequence of this lemma is that, for a given state and low exploration, a locally optimal solution for DPG will also be one for CAC.
However, this is still not the case for the overall update, because of the integral over the different states.
The weights given to each direction over different states are not the same in CAC and DPG.
One might think that in such a case, it would be better to use DPG.
However, in practice, the CAC update may in fact be more accurate when using an approximate advantage function.
Indeed, there exist cases where DPG with an approximate critic might update towards a direction which could decrease the performance.
For instance, when the estimated advantage $\hat{A}\big(s,\mu(s) \big)$ is negative,
the advantage function around $\mu(s)$ is then known to be poorly estimated.
In such a case, thanks to the Heaviside function,
CAC will not perform any update for actions $a$ in the neighborhood of $\mu(s)$ such that $\hat A(s, a) \le 0$.
However, in such a case, DPG will still perform an update according to this poorly estimated gradient.
\subsection{Performance of PeNFAC}
We compared the performance of PeNFAC to learn continuous deterministic policies with two state-of-the-art algorithms: PPO and DDPG.
A comparison with NFAC is available in the ablation study (Section \ref{sec:ablationstudy}) and in Appendix \ref{appendix:penfacvsnfac}.
Because PPO learns a stochastic policy, for the testing phases, we built a deterministic policy as follows: $\mu(s) = \mathbb{E}[a \,|\, a \sim \pi_\theta(\cdot|s)]$.
We denote this algorithm as "deterministic PPO".
In Appendix \ref{appendix:deterppo}, we experimentally show that
this does not penalize the comparison with PPO, as deterministic PPO provides better results than standard PPO.
For PPO, we used the OpenAI Baselines implementation. To implement PeNFAC and compare it with NFAC, we use the DDRL library \cite{zimmer2018developmental}. Given that DDPG is implemented in both libraries, we report its performance with each implementation.
The OpenAI Baseline version uses an exploration in the parameter space and the DDRL version uses n-step returns.
\begin{figure}[tb]
\begin{center}
\includegraphics[width=0.92\linewidth]{plots/HP-crop.pdf}
\end{center}
\caption{Comparison of PeNFAC, DDPG and deterministic PPO over 60 different seeds for each algorithm in Hopper.}
\label{fig:penfacperf1}
\end{figure}
\begin{figure}[tb]
\begin{center}
\includegraphics[width=0.92\linewidth]{plots/HC-crop.pdf}
\end{center}
\caption{Comparison of PeNFAC, DDPG and deterministic PPO over 60 different seeds for each algorithm in HalfCheetah.}
\label{fig:penfacperf2}
\end{figure}
\begin{figure}[H]
\begin{center}
\includegraphics[width=.92\linewidth]{plots/HU-crop.pdf}
\end{center}
\caption{Comparison of PeNFAC, DDPG and deterministic PPO over 60 seeds for each algorithm in Humanoid.}
\label{fig:penfacperf3}
\end{figure}
We performed learning experiments over three high-dimensional domains:
Hopper, HalfCheetah and Humanoid.
Dimensions of $\mathcal{S} \times \mathcal{A}$ are $15 \times 3$ (Hopper), $26 \times 6$ (HalfCheetah) and $44 \times 17$ (Humanoid).
The neural network architecture is composed of two hidden layers of 64 units for either the policy or the value function.
The choice of the activation function in the hidden units was optimized for each algorithm: we found that ReLU was better for all of them except for PPO (where tanh was better).
The output activation of the critic is linear and the output activation of the actor is tanh.
In Figures \ref{fig:penfacperf1}-\ref{fig:penfaccomp}, the lighter shade depicts one standard deviation around the average, while the darker shade is the standard deviation divided by the square root of the number of seeds.
In Figures \ref{fig:penfacperf1}-\ref{fig:penfacperf3}, PeNFAC outperforms DDPG and deterministic PPO during the testing phase.
On Humanoid, even after optimizing the hyperparameters, we could not obtain the same results as those of \citeauthor{PPO} \shortcite{PPO}.
We conjecture that this may be explained as follows:
1) the RoboschoolHumanoid moved from version 0 to 1,
2) deterministic PPO
\begin{figure}[H]
\begin{center}
\includegraphics[width=.89\linewidth]{plots/CO5-crop.pdf}
\includegraphics[width=.89\linewidth]{plots/CO2-crop.pdf}
\includegraphics[width=.89\linewidth]{plots/CO3-crop.pdf}
\includegraphics[width=.89\linewidth]{plots/CO4-crop.pdf}
\includegraphics[width=.89\linewidth]{plots/CO1-crop.pdf}
\end{center}
\caption{Comparison of the different components ($\lambda$-returns, fitted value-iteration, CAC vs CACLA update, batch normalization) of the PeNFAC algorithm during the testing phase over the HalfCheetah environment and 60 seeds for each version. }
\label{fig:penfaccomp}
\end{figure}
\noindent might be less efficient than PPO,
3) neither LinearAnneal for the exploration, nor adaptive Adam step size is present in the OpenAI Baseline implementation.
However, we argue that the comparison should still be fair since PeNFAC also does not use those two components.
On Humanoid, we did not find a set of hyperparameters where DDPG could work correctly with both implementations.
\subsection{Components of PeNFAC}
\label{sec:ablationstudy}
In Figure~\ref{fig:penfaccomp}, we present an ablation analysis in the HalfCheetah domain to understand which components of the PeNFAC algorithm are the most essential to its good performance.
From top to bottom plots of Figure~\ref{fig:penfaccomp}, we ran PeNFAC with or without trust region, with or without $\lambda$-returns, with or without fitted value iteration, with the CACLA or the CAC update, and finally with or without batch normalization.
It appears that $\lambda$-returns and fitted value iteration are the most needed, while the effect of batch normalization is small and mostly helps at the beginning of learning.
We also tried updating the actor at every timestep without taking into account the sign of the advantage function (i.e., using SPG instead of CAC), but the algorithm was not able to learn at all.
This also demonstrates that the CAC update is an essential component of PeNFAC.
Thermophotovoltaic systems promise to provide high efficiency energy conversion from heat to electricity by spectrally matching the thermal emission from an emitter to the bandgap of a conventional single-junction photovoltaic cell \cite{bauer2011thermophotovoltaics,ilic2012overcoming,park2008performance,nefzaoui2012selective,molesky2015ideal}. This field of research has seen a resurgence in recent years due to parallel developments in the fields of nanoscale thermal engineering \cite{nagasaka2008zhuomin}, nanofabrication \cite{fleming2003three,yeng2012enabling} and high temperature material science \cite{guler2014refractory,molesky2013high,guo2014thermal,nagpal2008efficient}. An important aspect of the thermophotovoltaic system is the design of the emitter, which has to suppress the thermal emission of sub-bandgap photons as they cannot be converted to electrical output \cite{fleming2003three,yeng2012enabling} and simultaneously enhance thermal emission just above the cell bandgap. Note that sub-bandgap blackbody photons are the primary reason for decrease in the efficiency of energy conversion (see Fig.~\ref{fig:Fig1}). One necessary requirement for an emitter is robust optical response and thermal/structural stability of the emitter which has to withstand high temperatures \cite{bauer2011thermophotovoltaics}.
Significant recent advances have shown the potential of photonic crystals \cite{fleming2003three,yeng2012enabling,nagpal2008efficient,rephaeli2009absorber,narayanaswamy2005thermal, celanovic20041d} , thin films \cite{guo2014thermal,wang2015tunneling,song2015enhancement,basu2011maximum, edalatpour2013size, dimatteo2001enhanced, narayanaswamy2003surface,park2008performance, joulain2008near,nefedov2011giant,tong2015thin}, perfect absorbers \cite{tong2015thin}, thermal metamaterials \cite{dyachenko2016controlling,molesky2013high,mason2011strong,liu2011taming,wu2012metamaterial}, tungsten metasurfaces \cite{zhao2013thermophotovoltaic,neuner2013efficient,wu2012metamaterial}, graphene \cite{messina2012graphene, svetovoy2014graphene}, surface waves \cite{ben2013controlling,joulain2005surface} and other nanophotonic structures \cite{ilic2012overcoming,deng2015broadband} to engineer the thermal emission \cite{drevillon2011far,lee2007coherent} and achieve spectrally selective emitters \cite{drevillon2011far, wang2009spatial,lee2007coherent}. Simultaneously, work has progressed to advance the critical challenge involving the absorber/TPV cell design.
Here, we build on our previous work on the first suggestion of high temperature plasmonics \cite{guo2014thermal,molesky2013high} and experimental demonstration of high temperature thermal metamaterials \cite{dyachenko2016controlling}. In this paper, we provide an alternative in selective emitter design employing only a single layer of plasmonic thermal emitter coating. This design holds the significant advantage of not requiring 2D patterning to achieve the desired optical response. Along with ease of large area fabrication it also opens up the possibility of high temperature stable operation since 2D texturing often reduces the melting point of metals. The primary objective of thermal suppression of sub-bandgap photons can be achieved by tuning the epsilon-near-zero (ENZ) frequency (also known as plasma frequency) of a metal to the bandgap of the cell. Thus natural material properties provide the reflectivity change desired for the thermal emitter as opposed to structural bandgap effects as in photonic crystals. We also show that our design is useful for engineering narrowband near-field thermal emission paving the way for a unified platform for both far-field and near-field thermal emitters. We also provide detailed performance analysis for our thin film designs. It should be noted that polar dielectric thin films (eg: silicon carbide) only show phonon-polaritonic resonances and epsilon-near-zero regime only in the mid-infrared frequency ranges and not in the near-infrared range conducive for high temperature thermophotovoltaic applications.
\begin{figure*}
\includegraphics[width=1\linewidth]{TPVschematic.png}
\caption{\label{fig:Fig1} Efficient TPV energy conversion requires two critical features: (1) Suppression of thermal radiation with energy less than the band-gap of the PV cell. (2) Enhancement of thermal radiation within the high efficiency window of energies which lies directly above the PV cell band-gap. Both of these goals can be accomplished by tuning the plasma frequency (epsilon-near-zero frequency) of a high temperature plasmonic thermal emitter coating (p-TEC).}
\end{figure*}
One important application of selective thermal emitters is the ability to modulate thermal emission while retaining spectral selectivity \cite{brar2015electronic,van2011fast,fang2013active,freitag2010thermal}. Graphene presents an ideal platform to achieve this effect due to the strong electrical tunability of its optical absorption properties \cite{thongrattanasiri2012complete}. Graphene is expected to find wide applications in next generation electronic devices due to its good electrical \cite{bolotin2008ultrahigh} and thermal properties \cite{ghosh2008extremely} and strong light-matter interactions \cite{koppens2011graphene}. A substrate coated with a graphene layer has been reported to have highly directive far-field thermal emission, enhanced light absorption \cite{pu2013strong} and enhanced near-field radiation transfer \cite{ilic2012near,lim2013near}. An important emerging platform is multilayer graphene-based metamaterials \cite{iorsh2013hyperbolic,zhang2014tunable,sreekanth2013negative}, which have been reported to exhibit tunable absorption and hyperbolic dispersion \cite{othman2013graphene,linder2016graphene,chang2016realization}.
In this paper, we report a sharp suppression in the thermal emissivity of a graphene-multilayer structure, and show that it can be controlled by tuning the topological transition in the graphene metamaterial. We also report a sharp enhancement in the near-field thermal emission in graphene multilayers due to topological transitions. These results form a crucial step in the realization of thermal energy scavenging technologies in future graphene-based electronic devices. We note that the opportunity to electrically tune topological transitions can be an important design principle for spectrally selective thermal modulation.
\section{High temperature plasmonic thermal emitter coating (p-TEC)}
Selective thermal emitters for TPVs have to perform the important function of thermal suppression below the bandgap of the absorbing cell and simultaneously enhance emission just above the PV cell bandgap \cite{rephaeli2009absorber,lenert2014nanophotonic}. We design this feature by employing a thin film of Drude metal with an engineered plasma frequency (ENZ frequency) and no surface patterning. The plasma frequency is the characteristic frequency above which metals lose their reflectivity and become transparent. Thus metals absorb above the plasma frequency and reflect light below the plasma frequency. Fig.~\ref{fig:Fig2}(a) and (b) show the correspondence between a sharp reflectivity change at normal incidence and the ENZ wavelength for a Drude metal. The high reflectivity leads to decreased optical absorption and suppresses the thermal emission by Kirchhoff's law (absorptivity = emissivity). One can therefore achieve suppression of sub-bandgap thermal emission simply by tuning the plasma frequency in the near-infrared range close to the bandgap of the TPV cell. The design achieves enhancement in thermal emission in the transparency range of the metal, tuned to be above the bandgap of the cell. Note that all metals are transparent and lossy above the ENZ frequency (plasma frequency), giving rise to large absorption and emission.
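As a minimal numerical sketch of this mechanism, the code below evaluates the Drude permittivity and the resulting normal-incidence reflectivity of a half space, with the parameters of Fig.~\ref{fig:Fig2}; reading the quoted 0.66~eV as the screened (ENZ) plasma energy is our assumption.
\begin{verbatim}
# Drude half space: permittivity and normal-incidence reflectivity.
# eps_b = 5 and a 70 meV loss follow the Fig. 2 caption; treating
# 0.66 eV as the screened (ENZ) energy is our assumption.
import numpy as np

eps_b, E_enz, g = 5.0, 0.66, 0.070            # eV
wp = np.sqrt(eps_b) * E_enz                    # unscreened plasma energy

E = np.linspace(0.2, 1.5, 500)                 # photon energy (eV)
eps = eps_b - wp**2 / (E * (E + 1j * g))       # Drude permittivity
n = np.sqrt(eps)                               # complex index
R = np.abs((1 - n) / (1 + n))**2               # normal-incidence Fresnel
emissivity = 1 - R                             # Kirchhoff, opaque medium
\end{verbatim}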
\begin{figure}
\includegraphics[width=0.5\textwidth]{Fig2-eps-converted-to.pdf}
\caption{\label{fig:Fig2}(a) Depolarized (averaged $s$ and $p$ polarized) normal reflectivity from a plasmonic half space. High reflectivity beyond the epsilon-near-zero wavelength implies a low absorptivity/emissivity (thermal suppression of low energy sub-bandgap photons). Note that Kirchhoff's laws require the far-field emissivity of the plasmonic half space to be its absorptivity. (b) By shifting the plasma frequency (epsilon-near-zero frequency) of the metal, the spectral content of the emitted radiation can be engineered to match the bandgap of a photovoltaic cell. This arises through the sharp change of the metal's reflectivity at its plasma frequency. The sample Drude metal considered here is assumed to have a background dielectric constant of 5, a plasma frequency of 0.66 eV and a loss parameter of 70 meV.}
\end{figure}
\section{High Temperature Material Properties}
We now outline the choices of high temperature metals that can act as a plasmonic thermal emitter coating. Even though tungsten and tantalum have excellent thermal properties \cite{lenert2014nanophotonic}, their plasma frequency (ENZ frequency) cannot be tuned to the near-infrared range to match the low energy bandgap of gallium antimonide or germanium TPV cells. We have performed a detailed analysis of titanium nitride and zirconium nitride, which are refractory metals with a plasmonic response, for TPV applications \cite{guler2014refractory,li2014refractory}. Our analysis (to be published later) indicates that their interband transitions and losses, especially in the near-infrared region, are sub-optimal for both far-field and near-field plasmonic thermal emitter performance. They will need to be optimized to find use in TPV systems as a thin film plasmonic coating, but they can be used as nanoantennas \cite{guler2014refractory,li2014refractory}. In this work, we therefore focus on the use of aluminum doped zinc oxide (AZO) and gallium doped zinc oxide (GZO), with the plasma frequency (ENZ frequency) tuned in the near-infrared, for p-TECs \cite{pradhan2014extreme,kim2013optical}.
In general, the optical response of materials at high temperatures poses a significant challenge for thermal emitter design. We have utilized empirical models of tungsten and AZO/GZO available in the literature to arrive at general conclusions about the real and imaginary parts of the response at high temperatures \cite{pradhan2014extreme,kim2013optical,krishnamoorthy2012topological,naik2011oxides,roberts1959optical,costescu2003thermal,schmid1998optical,larruquert2012self,liu2015enhanced}. Increasing temperature causes a rise in electron-phonon interactions and a simultaneous reduction of the collision time. This is manifested in a reduction of the bulk polarization response and an increase in optical absorption. Consequently, the real part of the dielectric constant governing the polarization response is reduced and the imaginary part of the dielectric constant governing losses is increased. Thus the performance of plasmonic components is expected to be reduced at high temperatures. A detailed analysis of these empirical models will be published later.
\begin{figure}
\includegraphics[width=1\linewidth]{Fig3_v3-eps-converted-to.pdf}
\caption{\label{fig:Fig3} A single layer of AZO can suppress the long-wavelength emissivity, similar to a microfabricated tungsten photonic crystal. (a) Emissivity spectra of an AZO thin film on a tungsten substrate at a 20$^\circ$ emission angle and varying AZO thickness. (b) Emissivity spectra of a 1000~nm AZO film on a W substrate at different emission angles. Inset: Real and imaginary parts of the AZO dielectric permittivity. Beyond 1400~nm the emissivity is suppressed by a large impedance mismatch with vacuum. (c) Emissivity of tungsten photonic crystals at different angles. Inset: Schematic of the PhC structure. By controlling the bandgap of photonic crystals, we can tune the emissivity cut-off wavelength to match the PV cell bandgap.}
\end{figure}
\begin{figure}
\includegraphics[width=1\linewidth]{Fig4_v3-eps-converted-to.pdf}
\caption{\label{fig:Fig4} AZO thin film outperforms TiO$_2$ thin film in suppression of sub-bandgap thermal emission. (a) Spectral irradiance ($dI/d\lambda$) of an AZO single layer in comparison with a black body at 1700~K (40$^\circ$ emission angle). The selective spectral irradiance shows a large thermal suppression of the sub-bandgap photons and enhanced thermal emission below the ENZ wavelength (above the plasma energy). (b) Spectral irradiance of a single-layer TiO$_2$ anti-reflection coating design at 1700~K and a 40$^\circ$ emission angle. The enhanced absorption in TiO$_2$ near 1500~nm arises from the Fabry-Perot mode resonance, so the peak emittance can be tuned by changing the thickness of the anti-reflection coating. However, the suppression of the sub-bandgap photons is incomplete in the TiO$_2$ layer, resulting in reduced conversion efficiency.}
\end{figure}
\section{High temperature thermal emission suppression}
\subsection{\label{sec:level2}Far-field thermal emission: thin-film AZO and other approaches}
We now provide the results of calculations using an AZO plasmonic thermal emitter coating. Using Kirchhoff's laws \cite{greffet1998field}, we calculate the emissivity of AZO films of thickness 600~nm, 1000~nm and 1400~nm, placed on a tungsten substrate (Fig.~\ref{fig:Fig3}(a)). It can be observed that there is a sharp suppression in the emissivity above the ENZ wavelength of 1400~nm, irrespective of the thickness of the AZO film. This is because the spectral thermal emission response is dominated by the material properties of the AZO, and the thickness only affects the ripples in the high emissivity region. Fig.~\ref{fig:Fig3}(b) shows the emissivity of a 1000~nm thick AZO film at different emission angles. The inset shows the real and imaginary parts of the AZO dielectric response. The emissivity at different emission angles displays a similar cut-off behavior, which is necessary for TPV applications. The p-TEC (Fig.~\ref{fig:Fig3}(b)) achieves an emissivity profile very close to that of a tungsten photonic crystal (PhC) design (Fig.~\ref{fig:Fig3}(c)). However, note that there are fundamental differences in the approach to achieving thermal suppression. The PhC utilizes a structural resonance and interference effects, whereas the p-TEC uses an engineered material response. A simple thin-film coating with engineered material response thus matches the performance of a microfabricated photonic crystal.
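A corresponding single-film calculation can be sketched with the Airy formula for a coated opaque substrate; the AZO Drude parameters and the constant substrate index below are illustrative placeholders rather than fitted optical data.
\begin{verbatim}
# Emissivity (= 1 - R) of a single film on an opaque substrate at
# normal incidence, via the Airy formula. The AZO Drude parameters
# (ENZ near 1400 nm) and the constant substrate index are
# illustrative placeholders, not fitted optical data.
import numpy as np

lam = np.linspace(500e-9, 3000e-9, 600)        # wavelength (m)
E = 1240e-9 / lam                              # photon energy (eV)

eps_b, E_enz, g = 3.8, 0.886, 0.10             # illustrative AZO model
n1 = np.sqrt(eps_b - (np.sqrt(eps_b)*E_enz)**2 / (E*(E + 1j*g)))
n2 = 3.5 + 2.8j                                # placeholder substrate

t = 1000e-9                                    # film thickness (m)
r01 = (1 - n1) / (1 + n1)                      # vacuum -> film
r12 = (n1 - n2) / (n1 + n2)                    # film -> substrate
ph = np.exp(2j * 2*np.pi*n1*t/lam)             # round-trip phase
r = (r01 + r12*ph) / (1 + r01*r12*ph)
emissivity = 1 - np.abs(r)**2                  # Kirchhoff, opaque stack
\end{verbatim}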
Thin-film p-TEC with engineered material response also outperforms thin-film anti-reflection coatings. Fig.~\ref{fig:Fig4} compares the performance of an anti-reflection (AR) coating \cite{fraas2001antireflection} as a selective thermal emitter. Fig.~\ref{fig:Fig4}(a) shows the spectral irradiance from an AZO-coated tungsten substrate at 1700~K. On comparison with black body radiation, it can be clearly seen that the thermal radiation in the sub-bandgap region is suppressed. This sharp suppression at long wavelengths occurs because the emissivity of the AZO coating drops above the epsilon-near-zero wavelength. The thickness of the AZO film is 1000~nm. Fig.~\ref{fig:Fig4}(b) shows the thermal radiation from a tungsten substrate coated with a TiO$_2$ AR coating. The enhancement in thermal emission near 1500~nm in the AR coating is due to the enhanced absorptivity arising from a Fabry-Perot resonance mode in a TiO$_2$ film of thickness 355~nm. The peak emittance in TiO$_2$ can therefore be tuned by tuning the thickness of the AR coating. This is in contrast to the AZO thin film, where the suppression in the emissivity is governed by the material response, while the thickness has only a minor effect on the emissivity in the transparent low-wavelength region of the spectrum. It can be seen that even at sub-bandgap wavelengths, the thermal emission from TiO$_2$ is considerably larger compared to the AZO coating. The incomplete thermal suppression in the AR coating degrades its efficiency performance.
To provide a performance comparison of these various emitter designs in the limiting ideal case for TPV applications, we apply the Shockley-Queisser detailed balance analysis \cite{rephaeli2009absorber}. In performing this calculation we assume the photovoltaic cell in all cases to be a perfectly absorbing, ideal pn-junction, with a bandgap of 2250~nm for the photonic crystal emitter and 1700~nm for the AZO plasmonic coating and titanium dioxide anti-reflection coating designs. We also assume that the emitter and photovoltaic cell have the same flat area, and that no absorbing stage is present. Under these conditions, the conversion efficiency of radiated thermal energy into electrical energy for the AZO coating, the photonic crystal and the TiO$_2$ coating is shown in Table~\ref{Tab1}. The efficiency of a simple AZO thin film is less than, but comparable with, that of the photonic crystal spectrally selective emitter, while it outperforms the TiO$_2$ thin film. The poor efficiency of TiO$_2$ is due to incomplete thermal suppression at longer wavelengths.
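To illustrate why sub-bandgap suppression drives this efficiency, the sketch below computes a much simpler figure of merit, the in-band fraction of radiated power; it is not the full Shockley-Queisser detailed balance used for Table~\ref{Tab1}.
\begin{verbatim}
# Simplified figure of merit: fraction of radiated power above the
# PV bandgap for a spectral emissivity eps(lam). This is NOT the
# full Shockley-Queisser balance used for Table 1; it only shows
# why sub-bandgap suppression matters.
import numpy as np

h, c, kB = 6.626e-34, 2.998e8, 1.381e-23

def trapz(y, x):
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def planck(lam, T):
    return (2*h*c**2/lam**5) / np.expm1(h*c/(lam*kB*T))

def in_band_fraction(lam, eps, T, lam_gap):
    p = eps * planck(lam, T)
    return trapz(np.where(lam <= lam_gap, p, 0.0), lam) / trapz(p, lam)

lam = np.linspace(300e-9, 10e-6, 4000)
bb = np.ones_like(lam)                        # black body
cut = np.where(lam <= 1700e-9, 1.0, 0.0)      # ideal cut-off emitter
print(in_band_fraction(lam, bb, 1700.0, 1700e-9))   # ~0.25
print(in_band_fraction(lam, cut, 1700.0, 1700e-9))  # 1.0 by design
\end{verbatim}
A black body at 1700~K radiates only about a quarter of its power above a 1700~nm bandgap, which is why the remaining sub-bandgap power must be suppressed by the emitter.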
\begin{table}[H]
\caption{Comparison of conversion efficiency for the AZO coating, the photonic crystal and the TiO$_2$ AR coating.}
\label{Tab1}
\centering
\begin{tabular}{ |c| c| c| c|}
\hline
Temperature & AZO thin-film & Photonic crystal & TiO$_2$ \\ \hline
1700K & 31.9 \% & 36.7 \% & 19.6 \% \\ \hline
1300K & 19.4 \% & 29.1 \% & 9.2 \% \\ \hline
\end{tabular}
\end{table}
\subsection{ Narrow-band near-field thermal emission}
\begin{figure}[h]
\includegraphics[width=1\linewidth]{Fig5-eps-converted-to.pdf}
\caption{\label{fig:Fig5}Near-field energy density at a distance of 100~nm above (a) a 20~nm tungsten film on a sapphire substrate, and (b) a 55~nm gallium-doped zinc-oxide film [18] on a boron carbide substrate [22], relative to that of a black body. The plasmonic thermal emitter coating shows a near-field enhancement by a factor of three over the thin film tungsten. Insets in (a) and (b) show the permittivity of tungsten and GZO, respectively.}
\end{figure}
A fundamental advantage of the p-TEC design is that it can simultaneously be used as a near-field emitter exhibiting narrowband near-field thermal emission. This arises from thin-film surface plasmon polaritons excited at high temperatures, with potential applications for near-field TPV. While fundamentally challenging to implement, near-field TPVs promise to achieve high conversion efficiency with high current densities \cite{basu2009maximum}. This is because near-field thermal energy transfer mediated by evanescent waves can exceed the far-field black body limit.
In Fig.~\ref{fig:Fig5}(a), we plot the energy density in the near-field of a conventional emitter used in near-field TPV designs, consisting of a thin film of tungsten. The energy density is normalized to that of a black body in the far-field and calculated using Rytov's fluctuational electrodynamics \cite{nagasaka2008zhuomin,liu2015enhanced}. The energy density at a distance $z$ and frequency $\omega$ is \cite{joulain2007radiative}
\begin{equation*}
\begin{split}
u(z,\omega,T)&=\frac{U_{BB}(\omega,T)}{2} \!\! \left \{ \!\! \int_{0}^{k_0} \!\! \frac{k_\rho dk_\rho}{k_0|k_{1z}|}\frac{(1-|r^s|^2)+(1-|r^p|^2)}{2} \right. \\
& \left. +\int_{k_0}^{\infty}\frac{k^3_\rho dk_\rho}{k^3_0 |k_{1z}|}e^{-2Im(k_{1z})z}(Im(r^s)+Im(r^p))\right\}
\end{split}
\end{equation*}
where $T$ denotes the temperature, $k_\rho=\sqrt{k^2_x+k^2_y}$, $k_0=\omega/c$, $k_{1z}=\sqrt{k^2_0-k^2_\rho}$ (such that $Im\, k_{1z}>0$), and $r^s$ and $r^p$ are the Fresnel reflection coefficients from the thin films for $s$- and $p$-polarized light, respectively; ``$Im$'' denotes the imaginary part and $U_{BB}(\omega,T)$ is the black body emission spectrum. The near-field energy density enhancement is due to the excitation of weak and lossy surface waves in highly absorptive tungsten (above its plasma wavelength of $\approx 936$~nm). Due to the lossy nature of tungsten at near-infrared wavelengths \cite{roberts1959optical}, the enhancement is low, with a broad spectrum. This poor spectral performance is a significant limiting factor for TPVs.
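A rough numerical sketch of the evanescent $p$-polarized term of this expression is given below for a half space; the quadrature is deliberately simple and the Drude parameters are illustrative only.
\begin{verbatim}
# Rough sketch of the evanescent p-polarized term of the energy
# density formula, for a half space of complex permittivity eps.
# Returns its contribution to u/U_BB; parameters are illustrative.
import numpy as np

def trapz(y, x):
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def u_evan_p(omega, z, eps, c=2.998e8):
    k0 = omega / c
    kr = np.linspace(1.001 * k0, 200 * k0, 20000)
    k1z = 1j * np.sqrt(kr**2 - k0**2)          # Im(k1z) > 0
    kzm = np.sqrt(eps * k0**2 - kr**2 + 0j)    # medium
    rp = (eps * k1z - kzm) / (eps * k1z + kzm) # p-pol. Fresnel
    f = (kr**3 / (k0**3 * np.abs(k1z))
         * np.exp(-2 * np.imag(k1z) * z) * np.imag(rp))
    return 0.5 * trapz(f, kr)

# Example: Drude medium with ENZ near 0.66 eV, probed at 0.6 eV.
E, z = 0.60, 100e-9
omega = E * 1.602e-19 / 1.055e-34              # rad/s
eps = 5.0 - (np.sqrt(5.0) * 0.66)**2 / (E * (E + 1j * 0.07))
print(u_evan_p(omega, z, eps))
\end{verbatim}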
Fig.~\ref{fig:Fig5}(b) shows the near-field energy density near a thin film of gallium-doped zinc oxide (GZO). The plasma frequency (ENZ frequency) of doped zinc oxide can be tuned to be in the near-infrared spectral range, matched to the bandgap of a TPV cell (GaSb). The spectrally selective nature, along with the larger enhancement of near-field energy density evident in Fig.~\ref{fig:Fig5}(b), is due to the excitation of surface plasmons. By controlling the film thickness and ENZ permittivity of the GZO film, the spectral content of the near-field enhancement can be tuned to match the high efficiency window of energies directly above the band-gap of a PV cell. For comparison, the thicknesses of tungsten and gallium-doped zinc oxide are selected such that the peak in energy density enhancement is optimized at the same frequency. The plasmonic thermal emitter coating (p-TEC) outperforms the thin film of tungsten and can be used in both near-field and far-field TPV.
\begin{figure}[h]
\centering
\includegraphics[width=1\linewidth]{Graphene_far_field_emission_v2-eps-converted-to.pdf}
\caption{Far-field emissivity of the graphene multilayer. (a) shows the effective permittivity of the graphene-multilayer substrate when graphene layers are separated by a dielectric of permittivity 2.1 and thickness 10~nm. The Fermi energy of the graphene is $E_F = 0.4$~eV. The effective permittivity perpendicular to the graphene layers remains unaffected, while the parallel component drops as the excitation wavelength increases. The point where $Re(\epsilon_{||})$ crosses zero is the topological transition point. (b) shows the emissivity of the bulk graphene multilayer at three different angles. (c) shows the emissivity for varying finite thicknesses of the graphene multilayer deposited on a tungsten substrate.}
\label{fig:Graphene_far_field_emission}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=1\linewidth]{Graphene_far-field_vary_Temp_thickness_2micrometer-eps-converted-to.pdf}
\caption{Effect of temperature on the far-field emissivity response of the graphene-multilayer structure. The conductivity of the graphene at 300~K and 600~K is shown in the inset.}
\label{fig:Graphene_far-field_vary_Temp_thickness=2micrometer}
\end{figure}
\begin{figure}
\centering
\includegraphics[width =1\linewidth]{Graphene_near_field_emission_version3-eps-converted-to.pdf}
\caption{(a) Near-field energy density near the graphene-multilayer substrate at a distance of 100~nm above the substrate. The inset shows that the effect of thickness on the energy density is negligible compared to the enhancement due to bulk topological states. (b) shows the sign of $Re(\epsilon_{||})$ as a function of $E_F$ and excitation wavelength. The topological transition from an elliptic to a hyperbolic substrate occurs on the line $Re(\epsilon_{||}) = 0$. (c) shows the near-field energy density as a function of $E_F$ and excitation wavelength. It can be seen that the hyperbolic region of the graphene multilayer enhances the near-surface energy states. The thermal emission is computed at $600$~K.}
\label{fig:Graphene_near_field_emission}
\end{figure}
\section{Thermal topological transitions in graphene multilayers}
Thermal emission can also be engineered by controlling the topological transition in a graphene-multilayer structure. A schematic of the graphene multilayers stacked between dielectric slabs of $\epsilon_d=2.1$ and thickness $d = 10$~nm is shown in the inset of Fig.~\ref{fig:Graphene_far_field_emission}(a). The layers are stacked along the $z$ direction. A multilayer structure thus formed has an anisotropic dielectric tensor, whose perpendicular component (along the $z$-direction) of the permittivity ($\epsilon_\perp$) is given by the dielectric $\epsilon_d$, while the parallel component ($\epsilon_{||}$, parallel to the layers in the $x-y$~plane) is governed by a combination of three factors: (i) the complex conductivity of graphene, (ii) the thickness of the separating substrate ($d$) and (iii) the dielectric $\epsilon_d$. The equations for computing the dielectric tensor of the graphene metamaterial are presented in the Appendix. Fig.~\ref{fig:Graphene_far_field_emission}(a) shows the parallel component (in the $x-y$ plane) and the perpendicular component (along the $z$-direction) of the dielectric tensor, when the Fermi energy of each graphene layer is $E_F=0.4$~eV. It can be seen that the parallel component of the permittivity changes sign from positive to negative around 4.05~$\mu$m, triggering a topological transition from an elliptical to a hyperbolic iso-frequency surface, as shown in the inset. At the topological transition wavelength, the emissivity of the structure drops sharply due to the large mismatch between the hyperbolic topology of the graphene-multilayer substrate and the elliptical topology of free space. Fig.~\ref{fig:Graphene_far_field_emission}(b) shows the suppression of emissivity at different angles from the bulk graphene metamaterial. For thermal emission applications the graphene metamaterial will need to be deposited on a substrate, such as tungsten. Fig.~\ref{fig:Graphene_far_field_emission}(c) shows the emission characteristics of the graphene metamaterial deposited on a tungsten substrate for varying thicknesses of the graphene multilayer. It can be seen that as the thickness increases beyond 5~$\mu$m, its emission characteristics approach those of the bulk graphene. These computations are done for a Fermi energy of 0.4~eV.
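As a minimal sketch of how this tensor can be evaluated, the code below uses the standard thin-sheet homogenization with an intraband (Drude-like) graphene conductivity; since the appendix expression is not reproduced here, this functional form and the scattering time $\tau$ are our assumptions.
\begin{verbatim}
# Assumed effective-medium model for the graphene multilayer:
#   eps_par  = eps_d + i*sigma_s/(eps0*omega*d),  eps_perp = eps_d,
# with the intraband (Drude-like) graphene sheet conductivity.
# This standard homogenization is our assumption for the appendix
# formula; tau is an illustrative scattering time.
import numpy as np

e, hbar, eps0, c = 1.602e-19, 1.055e-34, 8.854e-12, 2.998e8

def sigma_intra(omega, EF_eV, tau=1e-13):      # sheet conductivity (S)
    return (e**2 * EF_eV*e / (np.pi*hbar**2)) * 1j / (omega + 1j/tau)

def eps_par(lam, EF_eV=0.4, eps_d=2.1, d=10e-9):
    omega = 2*np.pi*c / lam
    return eps_d + 1j*sigma_intra(omega, EF_eV) / (eps0*omega*d)

lam = np.linspace(2e-6, 8e-6, 400)
ep = eps_par(lam)
i = np.argmin(np.abs(np.real(ep)))             # Re(eps_par) = 0
print(lam[i])   # ~4 um here, near the transition of Fig. 6(a)
\end{verbatim}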
We would like to point out that the wavelength at which the topological transition in the graphene metamaterial occurs can be controlled by the dielectric thickness $d$ as well as by the Fermi energy of the individual graphene sheets. This gives us an additional degree of freedom in controlling the topological transition wavelength. The graphene metamaterial is also suited for high-temperature applications, as its doping concentration and complex conductivity do not vary substantially with temperature \cite{fang2015temperature}. Fig.~\ref{fig:Graphene_far-field_vary_Temp_thickness=2micrometer} shows the variation in emissivity at temperatures T=300~K and T=600~K. The change in the complex conductivity of the individual graphene sheets at the two temperatures is shown in the inset. It can be seen that even at the higher temperature, the emissivity of the graphene metamaterial remains suppressed at longer wavelengths.
The topological transition in graphene multilayers also results in enhanced near-field thermal emission. Graphene multilayers support unbounded bulk hyperbolic states which increase the local density of states in vacuum near the surface. This increased local density of states results in enhanced near-field energy density and near-field thermal emission. Fig.~\ref{fig:Graphene_near_field_emission}(a) shows the large enhancement of the near-field energy density (normalized to that of a black body) in the hyperbolic region of the spectrum. A sharp increase in energy density can be observed at the topological transition wavelength. The near-field energy is computed for increasing thickness of the graphene multilayer on a tungsten substrate, at a distance of 100~nm from the substrate. It can be seen in the zoomed-in inset that the finite thickness has a small effect on the near field compared to the large enhancement triggered by the topological transition. The topological transition, and hence the near-field emission, can be tuned by controlling the Fermi energy of the graphene layers. The contour on which the topological transition occurs as a function of wavelength and Fermi energy is shown in Fig.~\ref{fig:Graphene_near_field_emission}(b). The corresponding enhancement of the thermal emission at 600~K is shown in Fig.~\ref{fig:Graphene_near_field_emission}(c). It is evident that the enhancement in near-field energy density is triggered at the topological transition for all values of the Fermi energy.
\section{Summary}
In conclusion, we have introduced the concept of a plasmonic thermal emitter coating. It functions on the basis of the reflectivity change of metallic thin films near the epsilon-near-zero frequency. Our approach shows superior performance compared to anti-reflection coatings and is easier to fabricate than photonic crystals, which require 2D surface patterning of tungsten. We have shown that it achieves narrowband thermal emission in the near field compared to tungsten. Developments in high-temperature plasmonics can make our thin-film design a viable large-area technology for thermal emitters. We have also shown that thermal topological transitions in graphene multilayers can lead to tunable, spectrally selective thermal emission.
\section*{Acknowledgements}
We acknowledge funding from Alberta Innovates Technology Futures, NSERC, Helmholtz Alberta Initiative and NSF EFRI NEWLAW.
\section{Introduction}
\label{sec:1}
In Bilateria (an ample clade of animals that includes humans) the body plan displays an overall mirror-symmetric disposition. Mirror (or bilateral) symmetry captures mathematically our day-to-day experience that reflected objects look the same, but inverted side-wise (an object's left appears at the reflection's right). A mirror symmetry exists in bilaterian bodies with respect to our sagittal central plane (which separates our left and right sides). For example, both our hands look like the mirror reflection of each other with respect to that plane.
This symmetry is present in most parts of bilaterian central nervous systems -- including the human brain, where it also appears broken at a range of levels \cite{GalaburdaGeschwind1978, TogaThompson2003, Hugdahl2005, HerveTzourio2013, ENIGMA2018, ENIGMA2020, Seoane2020}. From a structural perspective some brain areas grow bigger than their symmetric counterpart -- for example, usually, several regions of the frontal and temporal left hemisphere are thicker than their contralateral counterpart \cite {ENIGMA2018, ENIGMA2020}. Some of these structural differences underlie an asymmetry of function as well -- human language is a preferred example as it depends on the development, in a dominant side only, of a series of areas (Broca's, Wernicke's, etc.)\ as well as exuberant connections between them around the Sylvian fissure \cite {Geschwind1972, CataniFfytche2005, FedorenkoKanwisher2009, FedorenkoKanwisher2012, FedorenkoThompson2014, BlankFedorenko2016, BerwickChomsky2016}. In some cases, functional asymmetry is observed without such salient morphological difference -- e.g.\ the right hemisphere usually dominates regarding high-level visual processing, taking care of discerning fine details; while the left hemisphere performs similar operations for rather broader scales \cite{BrownKosslyn1995, Hellige1995}. Whether linked to structure and function or not, behavior can break the mirror symmetry as well -- take hand dominance in humans, also present at different levels (and varying in choice of dominant side) in other vertebrates \cite{Galaburda1995, RogersVallortigara2004, HalpernRogers2005, Rogers2006}.
The discovery of Broca's area proved in one fell swoop that brain function is localized (which was unclear at the time) and that the bilateral symmetry of this organ is broken in humans. An assumption lingered among scientists that the symmetry breaking was due to the complex human cognitive abilities, and hence that increased complexity would generally have a role in favoring asymmetry. Indeed, brain asymmetry was deemed a human trait that other species should lack \cite{Harrington1995, Corballis2009, MacNeilageVallortigara2009}. As this was proved wrong (and brain asymmetry was found widespread among other animals \cite{Galaburda1995, PascualPreat2004, VallortigaraBisazza1999, RogersVallortigara2004, HalpernRogers2005, Rogers2006}), neuroscientists worried less about the role of sheer complexity in symmetry breaking. They focused instead on more tangible mechanisms, such as input of light during hatching in birds \cite{HalpernRogers2005} or faster speed in single-hemisphere processing \cite{RingoSimard1994}. The hypothesis of complexity as a driving force of symmetry breaking in the brain survived with some nuances \cite{TogaThompson2003, Corballis2017}, often linked to mechanistic explanations (e.g.\ that each hemisphere could have specific abilities, and more complex tasks would recruit subunits in each hemisphere differently \cite{VallortigaraBisazza1999}; or that asymmetric brains would allow a more optimal packing, thus supporting more functions \cite{Corballis2017}). But the question remains: Can cognitive complexity {\em per se} be a driving force behind the break of bilateral symmetry in neural systems? If so, is it possible to infer thresholds of complexity beyond which bilateral symmetry is doomed? Are there parsimonious ways through which the lost symmetry of a neural circuit could re-emerge? A rigorous mathematical formalism to answer these questions is lacking.
We tackle these issues from a computational framework, within which notions of complexity can be well defined and quantified. We assume that neural circuits are performing some computational job -- whether relaying signals, taking in inputs, or transforming them in some way. All these actions have metabolic costs (incurred by neurons as they are engaged) but also more abstract thermodynamic costs that relate to the complexity of the computation itself or to the reliability of the operations performed \cite{KolchinskyWolpert2020, WolpertKolchinsky2020, KolchinskyWolpert2021}.
Reliability is important as real neural systems work within a noisy environment -- a feature that can itself be exploited for computation \cite{Maass2014, Maass2015}. How to compute with unreliable units is a problem that has worried researchers since the inception of computer science \cite{vonNeumann1956, MooreShannon1956, WinogradCowan1963}. Redundancy (introducing units that perform the same operations in parallel, or that can substitute a main piece if it is damaged) is often a preferred strategy to cope with this issue. But too much redundancy would incur unnecessary computational costs (e.g.\ as responses from parallel circuits need to be integrated) or multiply the energetic metabolic expenses. When is redundancy preferred, depending on the reliability of the circuitry and the complexity of the task at hand?
Bilateral symmetry has been, throughout evolution, a source of redundancy that provided neural circuits in duplicated pairs. While our discussion is centered on lateralization versus bilaterality, our results are strictly general for any pair of redundant circuits -- originated in the bilateral body plan or not. As Darwinian evolution proceeds, we expect that optimality constraints regarding computation, complexity, and redundancy will guide, allow, or prevent certain evolutionary paths. These are the possibilities that we intend to illuminate in this paper. Matters of optimality, reliable computation, and selection are also relevant during development -- as learning engages in a Darwinian process of its own \cite{Edelman1987}. Finally, the optimality of lateralized or bilateral neural configurations might be challenged again as aging proceeds \cite{OlleSole2020} or as various insults damage the nervous system. A landscape of possible, optimal configurations of neural circuits might help us navigate these cases and even suggest treatments to improve cognitive performance in damaged or aged brains.
\begin{figure}[]
\begin{center}
\includegraphics[width=\columnwidth]{./fig_models.pdf}
\caption{{\bf Simple models for brain bilaterality versus lateralization.} {\bf a-c} Modeling simple phenotypes. {\bf a} A simple task can be carried out by a single unit (in this case, in the left hemisphere -- black square). This would be a lateralized configuration. A similar unit at the right hemisphere (white square) and some circuit or mechanism to integrate the activity of both symmetric counterparts (white triangle) are left unused. {\bf b} Instead, both mirror-symmetric units might be engaged to solve the simple task (black squares). In this case, the mechanism that integrates both outputs (black triangle) might be needed. This is a bilaterally-symmetric configuration. {\bf c} It might be possible to implement the task by engaging both units in some gradual manner (partial recruitment is marked by shades of gray). That would demand a graded use of the integrating mechanism as well (shaded triangle). {\bf d-f} Modeling an emerging phenotype that recruits $K$ (in this case, $3$) different modules, each implementing a simple task. Regarding lateralized ({\bf d}) versus full ({\bf e}) or partial ({\bf f}) bilateral solutions, we find the same possibilities as before. What configurations are optimal in each case? }
\label{fig:1}
\end{center}
\end{figure}
In this paper we lay out a minimal mathematical model to map optimal configurations (bilateral versus fully lateralized -- allowing any intermediate designs) of computing neural units as a function of their running costs, reported benefits, reliability (as measured by an error rate), and complexity of the task at hand. All mathematical details are developed in the appendices. In the Results section we discuss our most important insights. In Sec.\ \ref{sec:3.1} we study the least complicated case, in which a simple task (so simple that it cannot be decomposed further) is implemented either by a faulty, irreducible neural unit, or by that faulty unit and its mirror-symmetric counterpart (Fig.\ \ref{fig:1}{\bf a-c}). While any combination of engagement of either unit is allowed, we show that only fully lateralized (Fig.\ \ref{fig:1}{\bf a}) or fully bilateral (Fig.\ \ref{fig:1}{\bf b}) solutions matter. In Sec.\ \ref{sec:3.2} we study a complex computation that needs the cooperation of several such couples of units, with each bilaterally symmetric couple taking care of a strictly different subtask (Fig.\ \ref{fig:1}{\bf d-f}). We say that this is an emergent or complex computation, or an emergent or complex phenotype of the cooperating couples of units. We assess under what conditions such phenotypes might emerge, and whether they entail partial (Fig.\ \ref{fig:1}{\bf f}) or full lateralization or bilateral configurations (finding, again, that only all-or-nothing engagement is relevant; Fig.\ \ref{fig:1}{\bf d-e}). In Sec.\ \ref{sec:3.3} we use our model to explore evolutionary or developmental paths that should lead from a bilateral to a fully lateralized configuration. While we use the simplest possible model to guide our discussion, we prove in the appendices that some results are quite general. An ample range of biologically and computationally meaningful choices of costs and fitness functions should share the most salient features that we derive -- notably, that only fully lateralized or completely bilateral configurations matter. In the Discussion, we review some real neural structures using our framework and highlight the implications of our results for development and treatment in damaged, diseased, or aged brains.
\section{Results}
\label{sec:3}
\subsection{Charting bilaterality and lateralization for simple tasks}
\label{sec:3.1}
\begin{figure*}[]
\begin{center}
\includegraphics[width=\textwidth]{./fig_phases.pdf}
\caption{{\bf Optimal lateral vs bilateral configurations.} For $(\varepsilon, \hat{c})$ within white areas, the sought phenotype is so costly that it never pays off. Within light gray regions, it is optimal to lateralize. Within black areas, bilaterality is preferred. {\bf a} Model for simple cognitive tasks with $b=1$, $c = 0.05$. Such simple computations are the only ones in which we find graded solutions (green curve separating lateralized and bilateral configurations), but they occupy a negligible part of the phase space. Red dots correspond to the utility functions shown in Fig.\ \ref{fig:app1.1}. {\bf b} Model for strictly emergent phenotypes with $\tilde{b}=1$, $c=0.01$, $K=10$. Red dots correspond to the utility functions shown in Fig.\ \ref{fig:app2.1}. {\bf c} Model with emergent phenotypes upon a substrate that retains the simpler functionality with $b=1$, $\tilde{b}=1$, $c=0.05$, $K=10$. Arrows indicate trajectories across parameter space prompted by degradation of the neural substrate, which might increase coordination costs (vertical arrows) or fallibility (horizontal arrow). }
\label{fig:2}
\end{center}
\end{figure*}
Assume that some neural circuit has to solve a relatively simple task. Assume also that, due to our evolutionary history and bilaterian body plan, we possess two such circuits: one at the left, $L$, and one at the right, $R$ (squares in Fig.\ \ref{fig:1}{\bf a-c}). We will refer to them as left and right units or circuits, and we will say that they form a mirror-symmetric or bilateral couple of units or circuits. Further assume that they are faulty, such that with probability $\varepsilon$ they fail each time they attempt their computation.
Does it pay off to keep both mirror-symmetric circuits working? Note that two units together might perform the task more reliably. However, keeping each circuit running has a metabolic cost -- as neurons need to be fed with blood when they are active. Also, if both units function simultaneously, they might interfere with each other -- one side providing a wrong answer might spoil the other's correct outcome. Some additional structure or mechanism is needed to cross-check simultaneous activity (triangles in Fig.\ \ref{fig:1}{\bf a-c}). Let us attempt to capture, with the simplest equation possible, all the costs and benefits of keeping just one unit of the mirror-symmetric couple running (Fig.\ \ref{fig:1}{\bf a}), versus sustaining both circuits engaged (Fig.\ \ref{fig:1}{\bf b}), versus keeping some intermediate level of activity (Fig.\ \ref{fig:1}{\bf c}), or even shutting both of them off altogether.
Say that, whenever computing this task is required, unit $L$ is switched on with a probability $l \in [0,1]$ and unit $R$ is activated with a probability $r \in [0,1]$. If we deal with a computation that needs to be carried out uninterrupted throughout the day, we can think of $l$ and $r$ as the fraction of time that each unit remains active. We introduce a cost $c$ paid for each occasion in which either unit is switched on, such that
\begin{eqnarray}
C &=& c(l+r)
\label{eq:3.1.1}
\end{eqnarray}
are the total expenses of running both units independently. This stands for some metabolic cost of use. Note that the mere structural existence of each unit should have a great cost as well -- but that is paid independently of use, and only once as the brain develops. Adding such additional cost does not alter our results qualitatively. We could also think of costs that depend, e.g., on the desired accuracy, such that units that compute with a lower error are more costly. One such case was explored partially in \cite{Seoane2020}. In App.\ \ref{app:5} we show that the generality of our results largely includes this scenario as well.
The coordinating structure or mechanism has a cost $\hat{c}$ of its own, and we assume that it is paid whenever both bilateral units are functioning simultaneously. Hence:
\begin{eqnarray}
\hat{C} &=& \hat{c}lr
\label{eq:3.1.2}
\end{eqnarray}
are the total coordination expenses. This can capture the metabolic expenses of the coordinating structure or mechanism, but also any losses due to interference between both units that results in a faultier functioning. We can think of this last possibility as contributing an average loss due to insufficient coordination. We could split these costs, but each of them is contributed only whenever both units are active -- thus they can all be absorbed within $\hat{c}$.
As for the benefits, we assume an all-or-nothing scenario such that some fitness $b$ is gained if and only if the task is successfully implemented. We could think of alternatives -- e.g.\ even an imperfect implementation of the task contributes some fitness. Such possibilities are explored in App.\ \ref{app:5}. Focusing on the simplest Ansatz, we get:
\begin{eqnarray}
B &=& b\left[ (1-\varepsilon)l(1-r) + (1-\varepsilon)(1-l)r + (1-\varepsilon^2)lr \right]. \nonumber \\
\label{eq:3.1.3}
\end{eqnarray}
The first term is the probability that $L$ is active, $l$, and works properly, $1-\varepsilon$, and that $R$ is switched off, $1-r$. The second term is the probability that $R$ is active and works properly and that $L$ is switched off. The third term is the probability that both units are active and at least one of them produces the correct answer (the probability that neither does is $\varepsilon^2$). In this simplest Ansatz we assume that the correct answer cannot be reached if neither unit did on its own. We could, alternatively, assume that two faulty but nearly correct answers could improve each-other (especially thanks to the coordinating mechanism). To capture this possibility, we would use, in this third term, an alternative function of $\varepsilon$ instead of $1-\varepsilon^2$. This is explored in App.\ \ref{app:5}. Back to our simplest equation, what this third term assumes is that, thanks to the coordinating mechanism, if the correct outcome has been produced by at least one unit, it can always be picked up. This does not consider interference as discussed above; but then again, that could be captured within $\hat{C}$.
Subtracting costs from benefits renders a utility function:
\begin{eqnarray}
\rho(l,r) &\equiv& B - C - \hat{C} \nonumber \\
&=& b(1-\varepsilon)\left[ l(1-r) + (1-l)r + (1+\varepsilon)lr \right] \nonumber \\
&& -c(l+r) - \hat{c}lr.
\label{eq:3.1.4}
\end{eqnarray}
Given values for our model parameters ($b$, $c$, $\hat{c}>0$ and $\varepsilon \in [0,1)$), maximizing the utility as a function of $l$ and $r$ tells us the optimal configuration of our units -- i.e.\ which level of activity should be kept in either mirror-symmetric circuit.
We can draw our optimal configurations in a map (or morphospace) to chart how they change as we vary our parameters. Fig.\ \ref {fig:2}{\bf a} shows such a map for Eq.\ \ref{eq:3.1.4} for fixed $b=1$ and $c=0.05$ and varying $\varepsilon \in [0,1)$ and $\hat{c} \in [0,1]$. We observe a white, vertical stripe for $\varepsilon > 1 - c/b$ in which both units are so faulty that the fitness contributed by this task does not pay off enough to keep any circuit active. As such, the phenotype resulting from implementing the task at hand does not appear. In the large gray area, coordination is costly enough that it pays off to lateralize and keep only one unit (indistinctly $L$ or $R$) and shut off the other one. In the smaller black area, it is always convenient to keep both units engaged. At the boundary separating both regions (green curve), the optimal solution has one unit always active and the other one active any arbitrary fraction of the time. This is the only graded configuration (in which both units are not either completely off or completely on) -- otherwise, Eq.\ \ref{eq:3.1.4} has only all-or-nothing solutions.
This is the general shape of the morphospace for Eq.\ \ref{eq:3.1.4} for any parameters (see App.\ \ref{app:1} for its mathematical derivation). The boundary between bilateral and lateralized solutions is a parabola with a maximum of $\hat{c} = b/4 - c$ located at $\varepsilon=1/2$. Note that if $c > b/4$, the bilateral configuration disappears (Fig.\ \ref{fig:app1.2}{\bf c}), so that the lateralized solution is preferred for any $ (\varepsilon, \hat{c})$. If $c=0$, the maximum of the parabola is at $b/4$. This imposes a very stringent limit: coordination costs can never be larger than a fourth of the contributed fitness.
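Since only the all-or-nothing corners of Eq.\ \ref{eq:3.1.4} can be optimal, the whole morphospace of Fig.\ \ref{fig:2}{\bf a} can be reproduced by comparing three candidate configurations. The minimal sketch below does exactly that, using the parameter values of Fig.\ \ref{fig:2}{\bf a} (the choice $\hat{c}=0.1$ is illustrative); the parabolic boundary $\hat{c} = b\varepsilon(1-\varepsilon) - c$ follows from equating $\rho(1,1)$ and $\rho(1,0)$:
\begin{verbatim}
def rho_simple(l, r, b, c, chat, eps):
    # Utility of the simple-task model: benefit minus running
    # and coordination costs
    B = b * (1 - eps) * (l*(1 - r) + (1 - l)*r + (1 + eps)*l*r)
    return B - c * (l + r) - chat * l * r

def optimal_config(b, c, chat, eps):
    # Only all-or-nothing corners can be optimal (App. A)
    corners = {"off": (0, 0), "lateral": (1, 0), "bilateral": (1, 1)}
    return max(corners, key=lambda k: rho_simple(*corners[k], b, c, chat, eps))

b, c, chat = 1.0, 0.05, 0.1
for eps in (0.1, 0.5, 0.96):
    print(eps, optimal_config(b, c, chat, eps))
# -> lateral, bilateral, off: chat = 0.1 sits below the parabola's
#    peak (b/4 - c = 0.2) only around eps = 1/2; eps > 1 - c/b kills both.
\end{verbatim}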
\subsection{Charting bilaterality and lateralization for emergent phenotypes}
\label{sec:3.2}
Conceive now a complex task that, in order to be successfully implemented, needs to recruit a series of brain regions, each one carrying out a specific and different subtask. A good example is the human language ability, which requires the successful functioning of Broca's and Wernicke's areas, as well as retrieving information from a semantic map, etc.\ \cite{Geschwind1972, CataniFfytche2005, FedorenkoKanwisher2009, FedorenkoKanwisher2012, FedorenkoThompson2014, BlankFedorenko2016, BerwickChomsky2016}. Failure at any of these subtasks results in specific pathology related to the malfunctioning area. Full-fledged language only emerges if all regions perform correctly. Let us call such computations (that require different units to collaborate) {\em emergent} phenotypes. Assuming that each subtask can be implemented as before (i.e.\ by either unit within a mirror-symmetric couple), we ask again: When is it favorable to lateralize and keep just one side active (Fig.\ \ref{fig:1}{\bf d}), keep all bilateral couples functioning (Fig.\ \ref{fig:1}{\bf e}), have them running at some intermediate level (Fig.\ \ref{fig:1}{\bf f}), or switch the whole circuit off completely and, consequently, fail to implement the complex computation?
Let us assume an emergent phenotype that consists of $K$ subtasks. We will take $K$ as a proxy for the cognitive complexity of this phenotype. Let us assume that each of these subtasks can be implemented by a couple of mirror-symmetric units as the ones discussed above, and that each of these couples separately incurs the same running and coordination costs, such that:
\begin{eqnarray}
C &=& cK(l+r), \nonumber \\
\hat{C} &=& \hat{c}Klr.
\label{eq:3.2.1}
\end{eqnarray}
As before, we can interpret $l$ and $r$ as the average time that units at the left or right sides are active. Alternatively, we can say that a fraction $l$ of the $K$ left units is always switched on (and similarly for the right side). We have assumed, without loss of generality, that all subtasks are equally costly -- if not, we could make $c$ and $\hat{c}$ the corresponding averages.
Regarding the fitness benefit of the emergent phenotype, we insist that it is only fully cashed in if all independent subtasks are successfully implemented, thus:
\begin{eqnarray}
\tilde{B} &=& \tilde{b}K \left[ (1-\varepsilon)l(1-r) + (1-\varepsilon)(1-l)r \right. \nonumber \\
&& \left. + (1-\varepsilon^2)lr \right]^K.
\label{eq:3.2.2}
\end{eqnarray}
Here we see the same probability of implementing each subtask as before -- now raised to the $K$-th power, which gives us the likelihood that no subtask is lacking. We assume that the total benefit reported by the emergent phenotype is $\tilde{b}K$ (we could absorb the $K$ within $\tilde{b}$, but it is convenient not to). As before, in App.\ \ref{app:5} we show that the following results are more general if we would choose different dependencies on $\varepsilon$.
We can now define the following utility function:
\begin{eqnarray}
\rho(l,r) &\equiv& \tilde{B}/K - C/K - \hat{C}/K \nonumber \\
&=& \tilde{b}(1-\varepsilon)^K\left[ l(1-r) + (1-l)r + (1+\varepsilon)lr \right]^K \nonumber \\
&& - c(l+r) - \hat{c}lr.
\label{eq:3.2.3}
\end{eqnarray}
Fig.\ \ref{fig:2}{\bf b} charts its optimal solutions with $\tilde{b}=1$, $c=0.01$, $K=10$, and varying $\varepsilon \in[0,1)$ and $\hat{c} \in [0,1]$ (see App.\ \ref{app:2} for the mathematical derivation of this figure).
The area in which the phenotype fails to emerge (now found for $\varepsilon > 1 - \sqrt[K]{c/\tilde{b}}$, white region in Fig.\ \ref{fig:2}{\bf b}) is much wider than in the previous case. Meanwhile, the combination of parameters for which the lateralized configuration is optimal (gray) has shrunk. Note that we have taken a relatively low running cost for independent circuits ($c = 0.01$). If these costs were even lower ($c \rightarrow 0$), the region in which the phenotype fails to emerge would become negligible, as the threshold $1 - \sqrt[K]{c/\tilde{b}} \rightarrow 1$. This situation is noteworthy, even though we expect realistic scenarios to have non-negligible costs ($c > 0$). Even if we do not approach the $c \rightarrow 0$ limit, we can expect that complex phenotypes contribute a much larger fitness than the implementation of simpler tasks -- think, e.g., of the advantages brought about by human language. In that case, since we set $\tilde{b}=1$ to generate our maps, we should re-scale the running costs accordingly, resulting in much smaller $c$. We revisit this possibility in Sec.\ \ref{sec:3.3.1}.
The region in which the bilateral combination is optimal is shifted to lower values of $\varepsilon$ and deformed with respect to the parabola. Unlike before, along the boundary it is not optimal to keep one side switched on and the other side active to an arbitrary degree. Instead, at the boundary between the black and gray regions, both the lateralized and fully bilateral configurations (but no others) are indifferently optimal. Thus we now find even fewer graded solutions than for simple tasks: all optimal configurations are either both units switched off, both on, or just one completely active side.
It can be proved (see App.\ \ref{app:2}) that the curve separating lateralized and bilateral configurations has only one maximum, so all maps have a similar shape as we vary our parameters. As the complexity of the emergent task increases (i.e.\ as $K$ grows because more distinct functionality needs to be recruited), the area in which the phenotype fails to emerge grows (as per the condition $\varepsilon > 1 - \sqrt[K]{c/\tilde{b}}$). The region of bilaterality shifts further left while its peak reaches higher in the $\hat{c}$ axis. For simple tasks there was a harsh limit ($\hat{c} < b/4$) for the optimality of the bilateral configuration. For complex, emergent phenotypes, a much higher coordination cost can be tolerated while the mirror symmetric solution is sustained. \\
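The boundary in Fig.\ \ref{fig:2}{\bf b} can be traced explicitly by comparing the two relevant corners of Eq.\ \ref{eq:3.2.3}: $\rho(1,1) \geq \rho(1,0)$ whenever $\hat{c} \leq \tilde{b}[(1-\varepsilon^2)^K - (1-\varepsilon)^K] - c$, an expression that reduces to the earlier parabola when $K=1$. A minimal numerical sketch:
\begin{verbatim}
import numpy as np

bt, c, K = 1.0, 0.01, 10          # parameters of Fig. 2b
eps = np.linspace(0.0, 1.0, 501)

# Bilateral beats lateral below this coordination-cost boundary
chat_star = bt * ((1 - eps**2)**K - (1 - eps)**K) - c

# The lateral configuration stops paying off beyond this error rate
eps_max = 1 - (c / bt)**(1 / K)

print(f"peak tolerated chat: {chat_star.max():.2f} "
      f"at eps = {eps[chat_star.argmax()]:.2f}")
print(f"phenotype viability limit: eps < {eps_max:.2f}")
\end{verbatim}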
So far we have described the very unlikely scenario in which a complex phenotype emerges very swiftly, fully recruiting all the needed units for its own good. Evolutionarily, this resembles a hopeful monster, unlikely in a gradual Darwinian framework. We shall refer to such case as a {\em strictly emergent} phenotype. More realistically, complex cognition will engage the needed tasks in a progressive fashion. While not recruited, the corresponding units can keep implementing their former jobs. In such a case we should add up the sustained fitness contribution brought about by the earlier, independently implemented phenotypes.
Since we have $K$ tasks (each of which, for simplicity, we assume reports a fitness benefit as in Eq.\ \ref{eq:3.1.3}), we can write the following utility function:
\begin{eqnarray}
\rho(l,r) &\equiv& \tilde{B}/K + KB/K - C/K - \hat{C}/K \nonumber \\
&=& \tilde{b}(1-\varepsilon)^K\left[ l(1-r) + (1-l)r + (1+\varepsilon)lr \right]^K \nonumber \\
&& + b(1-\varepsilon)\left[ l(1-r) + (1-l)r + (1+\varepsilon)lr \right] \nonumber \\
&& - c(l+r) - \hat{c}lr.
\label{eq:3.2.4}
\end{eqnarray}
Fig.\ \ref{fig:2}{\bf c} charts its optimal configuration for $\tilde{b}=1$, $b=0.5$, $c=0.05$, and $K=10$ (see App.\ \ref{app:3} for the mathematical details).
Compared to the case with the strictly emergent phenotype, the region in which the complex phenotype combined with the ancient, independent ones fails to emerge is much reduced (white stripe at the right with $\varepsilon > 1 - c/b$; but the condition is actually more lenient -- see App.\ \ref{app:3}). The areas for both lateralized (gray) and bilateral (black) configurations have expanded. Thus the existence of an evolutionary substrate that remains operative largely enables the process through which a complex phenotype can come into existence.
As before, only all-or-nothing solutions are observed -- also along the boundary between lateralized and bilateral configurations. The curve tracing this boundary looks like a mixture of the parabola (from the simplest scenario) and the peaked separation between configurations for strictly emergent phenotypes. The separation curve now might present one or two peaks. The higher peak is always present, and located at smaller error rates. As before, it grows above $\hat{c} = \tilde{b}/4$, allowing bilateral solutions with much larger coordination costs.
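This mixed shape can be made explicit. Comparing the two all-or-nothing corners of Eq.\ \ref{eq:3.2.4} (within the region where both are viable), the bilateral configuration is optimal whenever
\begin{eqnarray}
\hat{c} &\leq& \tilde{b}\left[ (1-\varepsilon^2)^K - (1-\varepsilon)^K \right] + b\varepsilon(1-\varepsilon) - c. \nonumber
\end{eqnarray}
The benefit terms of the two previous boundaries simply add up (the running cost $c$ is subtracted only once). The emergent term produces the sharp peak at small $\varepsilon$, while the parabolic term takes over at larger $\varepsilon$, where $(1-\varepsilon^2)^K$ becomes negligible; this explains why one or two local maxima can appear.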
\subsection{Evolutionary paths}
\label{sec:3.3}
The morphospaces derived above give us static pictures: what is and what is not optimal or feasible given some fixed conditions? But evolution is a dynamic process, and we are interested in what pathways might allow (or force) us to transit from one configuration to another, and in whether a given design remains optimal as surrounding constraints change. By combining the maps above we can see how a configuration becomes optimal or suboptimal as a complex phenotype emerges out of simpler ones.
\subsubsection{Swift emergence of complex phenotypes}
\label{sec:3.3.1}
\begin{figure*}
\begin{center}
\includegraphics[width=\textwidth]{./fig_paths_simpleStrictEmerge.pdf}
\caption{{\bf Evolutionary paths to strictly emergent phenotypes.} Taking $b=1$: {\bf a-d} Rather large cost of running units independently ($c=0.05$). Different evolutionary paths (labeled as explained in the main text) become available or are lost depending on the complexity of the emergent phenotype: {\bf a} $K=2$, {\bf b} $K=3$, {\bf c} $K=10$, and {\bf d} $K=100$. {\bf e-h} With negligible running costs ($c=10^{-7}$), new pathways (green, labeled {\bf \O$\boldsymbol{\rightarrow}$L} and {\bf \O$\boldsymbol{\rightarrow}$B}) are present when the complexity of the emergent phenotype is not too large ({\bf e} $K=2$ and {\bf f} $K=3$), but not so for more complex emerging tasks ({\bf g} $K=10$ and {\bf h} $K=100$). }
\label{fig:3}
\end{center}
\end{figure*}
Let us look first at the swift emergence of complex phenotypes -- the scenario compared to hopeful monsters above. Fig.\ \ref{fig:3} shows the overlap between the morphospace for simple tasks and the morphospace for strictly emergent phenotypes. The former is represented by the dashed green curve (boundary between bilateral and lateralized solutions from Fig.\ \ref{fig:2}{\bf a}) and by a dotted vertical black line at $\varepsilon = 1-c/b$ (beyond which the simple task is too costly to sustain).
In all conditions reported in Fig.\ \ref{fig:3}, we observe (often broad) regions marked {\bf B$\boldsymbol{\rightarrow}$B} (black) and {\bf L$\boldsymbol{\rightarrow}$L} (light gray) in which, respectively, the bilateral and lateralized solutions are optima for the implementation of both the simple and complex phenotypes. We also find ample regions marked {\bf B$\boldsymbol{\rightarrow}$\O} and {\bf L$\boldsymbol{\rightarrow}$\O} (white, Fig.\ \ref{fig:3}{\bf a-d} and {\bf g-h}) in which there was an optimal configuration for the simple task, but in which a profitable circuit fails to emerge for the complex phenotype. This is due to the condition $\varepsilon < 1- \sqrt[K]{c/\tilde{b}}$, which is usually more stringent than $\varepsilon < 1-c/b$. But, as discussed above, this situation can be reversed: if the emergent phenotype is much more profitable than the simple one, and we re-scale costs accordingly, we can find $1- \sqrt[K]{c/\tilde {b}} \rightarrow 1$. Then, we might observe {\bf \O$\boldsymbol{\rightarrow}$L} (green, Fig.\ \ref{fig:3} {\bf e-f}) and {\bf \O$\boldsymbol{\rightarrow}$B} (tiny region at the bottom right part in Fig.\ \ref{fig:3}{\bf e}), in which it is never favorable to implement the simple task but the complex one is so valuable that an optimal, functioning circuit emerges. While mathematically favored, the evolution of such complex circuits de novo might be biologically unfeasible.
For all conditions shown in Fig.\ \ref{fig:3} we see salient regions labeled {\bf L$\boldsymbol{\rightarrow}$B} (dark gray). In this interesting scenario, a lateralized solution is optimal for carrying out the simple task, but a bilateral configuration would be preferred for the emergent phenotype. Depending on an organism's evolutionary history, the symmetric counterpart of a lateralized circuit might have been lost (or devoted to other tasks -- see below), hence recovering the bilateral configuration might not be possible. In that case, the organism might get stuck with the suboptimal, lateralized solution -- a {\em frozen accident}. Alternatively, the evolutionary pressure towards a redundant solution could foster the appearance of a duplicate. This duplicate does not need to be a bilateral counterpart -- as noted above, we label our units {\em left} and {\em right} to focus the discussion on mirror symmetry, but our results are valid for any set of duplicated circuits. The evolution of such duplicates has been observed in the mammalian brain \cite{ChakrabortyJarvis2015}. Our {\bf L$\boldsymbol{\rightarrow}$B} regions suggest ample pressures favoring this evolutionary pathway.
In Figs.\ \ref{fig:3}{\bf a-b} and {\bf e-g} we find regions marked {\bf B$\boldsymbol{\rightarrow}$L} (blue). In them, the bilateral symmetry that is optimal to solve the simple task is broken as a more complex phenotype (as measured by the number, $K$, of different subunits recruited) takes over. The sheer complexity of the emergent phenotype forces this symmetry breaking, proving that computational complexity can be a driving force behind the lateralization of advanced cognition.
This {\bf B$\boldsymbol{\rightarrow}$L} region vanishes for larger $K$ -- i.e.\ for more complex phenotypes, which recruit more mirror-symmetric modules. That happens because a larger $K$ moves the threshold $\varepsilon = 1 - \sqrt[K]{c/\tilde{b}}$ (beyond which the phenotype does not pay off) towards lower values of $\varepsilon$. This prevents the assembly of faulty circuits to implement the complex task. Note that the overlap between bilateral solutions for the simple and complex phenotypes ({\bf B$\boldsymbol{\rightarrow}$B}) also shrinks as $K$ grows. Thus, if running costs are kept high ($c \gg 0$), the original mirror-symmetric solution tends to disappear when a complex phenotype emerges swiftly (e.g.\ Fig.\ \ref{fig:3}{\bf d}). As before, this is alleviated if $c \rightarrow 0$ (when re-scaled by $\tilde{b}$, Figs.\ \ref{fig:3}{\bf e-h}). This scenario sustains broader {\bf B$\boldsymbol{\rightarrow}$L} and {\bf B$\boldsymbol{\rightarrow}$B} regions, but they also dwindle for very complex emergent phenotypes (Fig.\ \ref{fig:3}{\bf h}).
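The path labels used in Fig.\ \ref{fig:3} can be generated programmatically: classify a point $(\varepsilon, \hat{c})$ by its optimal corner under the simple-task utility (Eq.\ \ref{eq:3.1.4}) and again under the strictly emergent one (Eq.\ \ref{eq:3.2.3}), then concatenate the two labels. A minimal sketch follows; the printed point is an illustrative choice inside a {\bf B$\boldsymbol{\rightarrow}$L} region of a $K=2$ map:
\begin{verbatim}
def p_success(l, r, eps):
    # probability that at least one engaged unit gives the right answer
    return (1 - eps) * (l*(1 - r) + (1 - l)*r + (1 + eps)*l*r)

def best_corner(rho):
    corners = {"0": (0, 0), "L": (1, 0), "B": (1, 1)}
    return max(corners, key=lambda k: rho(*corners[k]))

def path(eps, chat, b=1.0, bt=1.0, c=0.05, K=2):
    simple = lambda l, r: b*p_success(l, r, eps) - c*(l + r) - chat*l*r
    emergent = lambda l, r: (bt * p_success(l, r, eps)**K
                             - c*(l + r) - chat*l*r)
    return best_corner(simple) + "->" + best_corner(emergent)

print(path(eps=0.7, chat=0.14))  # 'B->L': complexity breaks the symmetry
\end{verbatim}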
\subsubsection{Complex phenotypes emerging upon circuits for simpler tasks}
\label{sec:3.3.2}
\begin{figure*}[]
\begin{center}
\includegraphics[width=\textwidth]{./fig_paths_simpleFullEmerge.pdf}
\caption{{\bf Evolutionary paths to complex phenotypes that emerge upon substrates that retain their simpler functions.} With $b=1$, $\tilde{b}=2$, and $c=0.01$ in all cases. The pathway from bilateral to lateralized configurations {\bf B$\boldsymbol{\rightarrow}$L} is more robust now. {\bf a-d} We explore an emergent phenotype that occupies the neural substrate $90\%$ of the time ($\tau = 0.9$). As the complexity of the emergent phenotype increases ({\bf a} $K=2$, {\bf b} $K=3$, {\bf c} $K=10$, {\bf d} $K=100$), it becomes unavoidable that the mirror symmetry breaks apart. {\bf e-h} We explore a notably complex phenotype ($K = 100$) that gradually increases the fraction of time during which it makes use of the neural substrate ({\bf e} $\tau = 0.1$, {\bf f} $\tau = 0.3$, {\bf g} $\tau = 0.5$, {\bf h} $\tau = 0.7$; with {\bf d} $\tau = 0.9$ completing the progression). This resembles developmental situations in which higher brain functions are assembled gradually and displace simpler computations in the same neural substrate. A circuit sitting where the red dot is would become lateralized by such a process. }
\label{fig:4}
\end{center}
\end{figure*}
Complex phenotypes are likely to evolve upon a previously existing substrate. This would consist of couples of mirror-symmetric units -- each couple already implementing its own, individual, simpler task. Most likely, those original functions are not completely displaced while the emergent phenotype comes into being. Let us assume that, throughout the day, the emergent task engages its circuitry a fraction $\tau \in [0,1]$ of the time, and that the simpler tasks can make use of their units during the remaining time, $1 - \tau$. Assuming that the emergent and simple phenotypes contribute a fitness $\tilde{b}'$ and $b'$ respectively, we can substitute $\tilde{b} = \tau\tilde{b}'$ and $b = (1-\tau)b'$ in Eq.\ \ref{eq:3.2.4}. Note that if $\tau = 1$ we recover the case just discussed in which the emergent phenotype displaces the ancient ones completely.
When we superpose the resulting morphospaces, the picture changes notably from that of strictly emergent phenotypes. Fig.\ \ref{fig:4}{\bf a-d} shows evolutionary paths for an emergent task with a fitness contribution that is not outstanding ($\tilde{b}'=2$ versus $b'=1$) when $\tau=0.9$ (i.e.\ simpler tasks are only present $10\%$ of the time). This tangential yet sustained presence of simpler tasks shrinks notably the combinations of parameters for which emergent phenotypes are not viable -- compare the white areas in Fig.\ \ref{fig:3} with those in Fig.\ \ref{fig:4}. Regions in which the lateralized solution is optimal are much expanded. They now invade profusely the area of bilateral solutions for simpler phenotypes (marked in blue and labeled {\bf B$\boldsymbol{\rightarrow}$L}, in the figure).
This constitutes a more solid route to the lateralization of mirror-symmetric circuits due to the sheer complexity of the emergent task. Note that, in order to observe large {\bf B$\boldsymbol{\rightarrow}$L} regions in Fig.\ \ref{fig:3}{\bf e-h}, we needed to re-scale the costs by five orders of magnitude ($c=0.05$ in Fig.\ \ref{fig:3}{\bf a-d} versus $c=10^{-7}$ in Fig.\ \ref{fig:3}{\bf e-h}). This comes about because we assume that the fitness benefit from the emergent phenotype is $5$ orders of magnitude larger. Instead, in Fig.\ \ref{fig:4} the complex task contributes only twice as much. This means that the mere retention of the old phenotype (which, evolutionarily, is the more parsimonious pathway) strongly enables the emergence of complex phenotypes. These, in turn, would more likely cause the lateralization of brain activity.
Fig.\ \ref{fig:4}{\bf e-h} shows how the available evolutionary paths change as we increase the fraction of time devoted to the complex phenotype. This can be important to discuss function lateralization during development: as a person matures, higher brain functions are more likely to be engaged for longer time periods, potentially displacing simpler tasks. We observe broad regions in which bilateral circuits should lateralize as the more complex phenotype takes over.
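This developmental trajectory can be emulated with the utility of Eq.\ \ref{eq:3.2.4} after the substitutions $\tilde{b} = \tau\tilde{b}'$ and $b = (1-\tau)b'$. The sketch below (re-defining the corner-comparison helpers from the previous sketch for completeness) sweeps $\tau$ for one fixed circuit; the point $(\varepsilon, \hat{c})$ is an illustrative stand-in for the red dot (whose exact coordinates we do not pin down), and it flips from bilateral to lateralized as the complex phenotype takes over:
\begin{verbatim}
def p_success(l, r, eps):
    return (1 - eps) * (l*(1 - r) + (1 - l)*r + (1 + eps)*l*r)

def best_corner(rho):
    corners = {"0": (0, 0), "L": (1, 0), "B": (1, 1)}
    return max(corners, key=lambda k: rho(*corners[k]))

def config_vs_tau(eps, chat, bp=1.0, btp=2.0, c=0.01, K=100):
    # bt = tau*bt', b = (1-tau)*b': the emergent task claims a growing
    # share of the substrate at the expense of the simpler ones
    for tau in (0.1, 0.3, 0.5, 0.7, 0.9):
        rho = lambda l, r: (tau*btp*p_success(l, r, eps)**K
                            + (1 - tau)*bp*p_success(l, r, eps)
                            - c*(l + r) - chat*l*r)
        print(f"tau = {tau:.1f}: {best_corner(rho)}")

config_vs_tau(eps=0.3, chat=0.05)   # prints B, B, B, B, L
\end{verbatim}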
We still observe broad regions {\bf L$\boldsymbol{\rightarrow}$B}, which constitute pressures for the evolution of redundant circuitry prompted by the emergence of a complex phenotype.
\subsubsection{Segregating functions}
\label{sec:3.3.3}
\begin{figure*}[]
\begin{center}
\includegraphics[width=\textwidth]{./fig_paths_simpleSplitEmerge.pdf}
\caption{{\bf Evolutionary paths to emerging phenotypes when simpler and complex tasks can be segregated to different hemispheres.} With $b=1$, $\tilde{b}=2$, and $c=0.01$ in all cases. Gray dashed lines represent the boundary between bilateral and lateralized solutions from Fig.\ \ref{fig:4}. {\bf a-d} As before, we explore emerging phenotypes that occupy the neural substrate $90\%$ of the time ($\tau = 0.9$). Again, as the complexity of the emerging phenotype increases ({\bf a} $K=2$, {\bf b} $K=3$, {\bf c} $K=10$, {\bf d} $K=100$), it becomes unavoidable that bilateral circuits lose their mirror symmetry. This time, when bilaterality is lost, each mirror-symmetric counterpart becomes specialized in either the simple or complex tasks ({\bf B$\boldsymbol{\rightarrow\left<L|R\right>}$}). {\bf e-h} We explore a moderately complex phenotype ($K = 10$) that gradually increases the fraction of time during which it makes use of the neural substrate ({\bf e} $\tau = 0.1$, {\bf f} $\tau = 0.3$, {\bf g} $\tau = 0.5$, {\bf h} $\tau = 0.7$; with {\bf c} $\tau = 0.9$ completing the progression). Again, this serves to model developmental situations in which higher brain functions are assembled gradually and displace simpler computations in the same neural substrate. A circuit sitting where the red dot is would become lateralized by such a process. }
\label{fig:5}
\end{center}
\end{figure*}
In the configurations explored in the previous section, both the emergent and simple phenotypes are implemented by the same circuits -- whether mirror-symmetric or lateralized. Alternatively, both sets of tasks can be lateralized and segregated, confining each to a different hemisphere that would become specialized. When does this configuration pay off?
Fig.\ \ref{fig:5} shows evolutionary pathways towards the lateralized-segregated solution (noted $\left<L|R\right>$, see App.\ \ref{app:4} for mathematical details). This is amply preferred both as the complexity, $K$, of the emergent phenotype increases (Fig.\ \ref{fig:5}{\bf a-d}) and as the fraction, $\tau$, of time devoted to the complex task grows (Fig.\ \ref{fig:5}{\bf e-h}). In Fig.\ \ref{fig:5}, the boundaries between bilateral and lateralized solutions from Fig.\ \ref{fig:4}, as well as the features from the morphospace of simple tasks, have been left as references (respectively shown as dark gray dashed and green dashed curves).
The pathway {\bf L$\boldsymbol{\rightarrow\left<L|R\right>}$} (light gray) departs from a structure that is already lateralized for the implementation of simple tasks. This earlier lateralization might have left the mirror-symmetric structures unused, thus available for the emergent phenotype to recruit. On the other hand, those circuits might have been lost over the course of evolution -- hence this pathway would not be straightforwardly available. Again, since our results are valid for sets of duplicated circuits (not only mirror-symmetric units), the optimality of this segregated solution might be an evolutionary pressure towards the duplication of existing structures within one hemisphere (as discussed in \cite{Seoane2020, ChakrabortyJarvis2015}).
The {\bf B$\boldsymbol{\rightarrow\left<L|R\right>}$} pathway (blue) is more amply preferred the more complex the emergent phenotype is (i.e.\ the larger $K$ grows). This shows, again, how computational complexity can be a very strong evolutionary driver of lateralization -- and (since our results are more general) of symmetry breaking in the brain.
In the literature it is discussed how segregating function can be very convenient -- e.g., to have specialized hemispheres that complex cognition can recruit units from \cite{VallortigaraBisazza1999}, or to allow more efficient packing \cite{Corballis2017}. Notwithstanding, even when segregation is allowed, the bilateral configuration is optimal in ample regions of parameter space. Note that segregation would not happen in the bilateral solution, since it requires both mirror-symmetric circuits engaged for either the simple or complex phenotypes. The persistence of bilateral designs enables the pathways {\bf B$\boldsymbol{\rightarrow}$B} (black in Fig.\ \ref{fig:5}) and {\bf L$\boldsymbol{\rightarrow}$B} (dark gray, Fig.\ \ref{fig:5}{\bf c-h}). In these cases, the fitness gained from combining faulty circuits into robust ones can overcome the advantages of segregating tasks into equally faulty circuits. The {\bf L$\boldsymbol{\rightarrow}$B} pathway again constitutes a pressure towards duplication of existing circuits or (if still available) re-recruiting the formerly lateralized mirror-symmetric counterpart.
\section{Discussion}
\label{sec:4}
Since very early in the history of computer science, redundancy was acknowledged as an efficient strategy to enable computation with faulty parts \cite{vonNeumann1956, MooreShannon1956, WinogradCowan1963}. The bilaterian body plan is a source of redundancy for many organs, including the central nervous system. Specifically, the brain is equipped with mirror-symmetric duplicates of most cortical regions, ganglia, and other subsystems. This can help make neural computations more robust, but it can also have an excessive metabolic cost not worth paying.
In this paper we have developed a concise yet comprehensive mathematical framework to study the optimality of bilateral versus lateralized solutions. To that end, we have built a series of {\em morphospaces} in which we can look up optimal circuit configurations as a function of (i) costs of running lateralized neural units independently, (ii) costs of coordinating efforts from both brain sides, (iii) how error prone these units are, (iv) the fitness contributed by a successful neural computation, and (v) the complexity of the tasks at hand.
A first, very strong result is that only all-or-nothing configurations are optimal within our framework: either it is better to engage both mirror-symmetric systems completely, or just one lateralized half (with the other one permanently shut off), or none at all. It is never exclusively optimal to keep any circuits engaged to an intermediate degree. We further prove that this result is very general (see App.\ \ref{app:5}): it applies to a range of reasonable models that can be conjectured to weight differently the cost, benefit, fallibility, and complexity dimensions just mentioned.
Early findings of localized language function suggested that lateralized brain activity was an exclusive trait of complex human cognition \cite{Harrington1995, Corballis2009, MacNeilageVallortigara2009}. This was debunked after finding lateralized activity in other animals. However, the hypothesis that cognitive complexity might prompt a break of the brain's mirror symmetry has survived with nuances. A solid mathematical understanding of how such a mechanism might operate has been missing. Our model provides the missing framework. We show mathematically how different routes to lateralization are fostered by the increasing complexity of computational tasks. We prove how the complexity of strictly emergent phenotypes (which completely displace previously existing simpler ones) can lead to the lateralization of the neural circuitry (Fig.\ \ref{fig:3}{\bf a, e-g}). We show that this route to lateralization is further favored if both the complex and simple phenotypes are allowed to coexist within the same neural substrate (Fig.\ \ref{fig:4}) and that it is much more robust if both phenotypes can be allocated each to a different hemisphere (Fig.\ \ref{fig:5}). With these well-grounded mathematical results, we conclude that the evolution of more complex cognition can be a paramount driver of brain lateralization.
But we also provide strong evidence in the opposite direction: for large combinations of costs, benefits, fallibility, and complexity, there exists a pressure upon formerly lateralized circuits to evolve a duplicate again. The scenarios in which this happens are different from the configurations for which it is optimal to lose ancient bilaterality. Hence, as novel, more complex cognitive phenotypes emerge, they can act as sources of new symmetries or break older, existing ones. Which possibility happens will depend on properties of the neural substrate (e.g.\ its fallibility or metabolic needs) and other conditions (e.g.\ the specific complexity of the emerging phenotype). While we focus our discussion on mirror symmetry, our results are general to any sets of duplicated neural structures. Hence, when a pressure to develop redundancy is present, it might be more parsimonious that it arises within the same hemisphere (by literally duplicating an existing brain area). Evidence of such duplicates of rather large cortical regions in the mammalian brain has recently been described \cite{Seoane2020, ChakrabortyJarvis2015}. The alternative (re-recruiting the actual mirror-symmetric units) might be impossible, as they might have been lost or repurposed for other tasks.
We visualize optimal configurations as maps (of the model parameters) also called morphospaces. Morphospaces were first introduced to describe the shape of shells as a function of factors affecting their formation \cite{Raup1966, Tyszka2006}. They have been expanded to study other complex systems \cite{Niklas2004, CorominasRodriguez2013, Gonisporns2013, AvenaSporns2015, ArsiwallaVerschure2017, SeoaneSole2018}, including how evolutionary pressures guide the evolution and development of neural or computational substrates \cite{Seoane2019, OlleSole2020, DuongGoni2021}. These maps remind us of phase diagrams that dictate the phase (solid, liquid, etc.)\ of matter samples subjected to different physical conditions. Similarly, the evolutionary pathways that emerge as we move around our morphospaces, or as we superpose the charts of different conditions, remind us of phase transitions. It seems natural to extend these tools and concepts from statistical physics to include computational complexity and reliability (two features closely affected by thermodynamics) -- as we do here.
The ultimate goal in the examples of morphospaces just cited is to portray real-world systems and to infer actual phenomenology. Let us attempt such an exercise with our mathematical framework:
\begin{itemize}
\item {\bf Human language} is the most paradigmatic example of lateralization of higher brain function. Language usually involves mostly regions in the left hemisphere \cite{Geschwind1972, CataniFfytche2005, FedorenkoKanwisher2009, FedorenkoKanwisher2012, FedorenkoThompson2014, BlankFedorenko2016, BerwickChomsky2016}, with some symmetric counterparts taking care of prosody or processing non-syntactic patterns \cite{TogaThompson2003}. Recent fMRI evidence shows that the response to language starts out as more symmetric in babies, and that it becomes fully lateralized as children grow \cite{OluladeNewport2020}. A similar trajectory can be seen in our model: take neural circuits sitting at the red dots in Figs.\ \ref{fig:4}{\bf e-h} and \ref{fig:5}{\bf e-h}. As language gradually recruits those neural substrates (i.e.\ as $\tau$ is increased in our model), the initial bilateral configuration becomes suboptimal.
\item {\bf Hemisphere dominance} is the process through which, while both sides are engaged in some function, one of them takes a much more relevant role -- often acting as a controller of the other or as coordinator of both sides. {\bf Handedness} is a paramount example. There is a strong bias towards mirror symmetry in this case, as the sensory inputs and motor outputs must deal with a bilateral body plan. However, a trend towards laterality is observed in mammals -- with handedness increasing with behavioral complexity \cite{Galaburda1995, RogersVallortigara2004, HalpernRogers2005, Rogers2006}. Patients with unilateral hemiplegia further indicate that the dominant hemisphere is needed (while the dominated one is not) for complex movement of the unaffected hand \cite{Liepmann1905, Harrington1995}. Dominance is observed in many other neural systems, such as visual processing \cite{BrownKosslyn1995, Hellige1995} or the {\em theory of mind network} \cite{KliemannTranel2021}. These phenomena might suggest a graded (not all-or-nothing) engagement of the dominated side, which should be a rare configuration according to our mathematical framework. However, our results apply to circuits that are involved in {\em exactly} the same computations. They do not preclude one circuit commanding the other, or even delegating specific, unshared tasks to it. Indeed, the different routes to lateralization might promote such controller-controlled configurations.
\item {\bf Neural damage} or {\bf pathology} can alter several of the dimensions involved in our model. For example, a damaged circuit will present higher error rates (increased $\varepsilon$) and, potentially, become more costly to engage or coordinate (growing $c$ and $\hat{c}$). Any of these changes would push bilateral circuits outwards from the bilaterality zone (arrows in Fig.\ \ref{fig:2}{\bf c}). {\bf Aging} should also lead to increased fallibility, thus we should expect more asymmetry (as circuits tend to lateralize) with age -- which is the case \cite{ENIGMA2018, ZhouBeaulieu2013, PlessenPeterson2014}. Even if lateralization becomes more optimal due to changes in fallibility or cost, a developed brain might insist on computing with both mirror-symmetric sides -- as if stuck on a frozen accident. If this happens, it might become helpful to remove one side to achieve the optimal configuration. A similar conclusion is suggested by a recent model of brain reorganization after hemispherectomy \cite{SeoaneSole2020}. Less invasive treatment (e.g.\ using transcranial magnetic stimulation, TMS, to silence a neural region) might also push suboptimally bilateral circuits in the appropriate direction.
\item Larger brains should pay higher coordination costs due to limits on callosal transfer of information. In our model, increased $\hat{c}$ moves mirror-symmetric circuits towards lateralized configurations (vertical arrows in Fig.\ \ref{fig:2}{\bf c}). We hence expect increased asymmetry in larger brains -- as is the case \cite{ENIGMA2018, KangWoods2015}. This also agrees with evidence that more asymmetric brains present fewer or thinner fibers across the corpus callosum \cite{Witelson1985} -- thus that they might, perhaps, renounce some coordination efforts and embrace the lateralized solution. Tasks that demand short reaction times should show similar effects, since they would penalize delays due to inter-hemispheric communication (hence increasing $\hat{c}$). This route to lateralization was already explored in the literature \cite{RingoSimard1994}. Our model subsumes this scenario in a more comprehensive framework.
\item Our model can also parsimoniously incorporate other proposed mechanisms for brain lateralization, such as hemisphere specialization \cite{VallortigaraBisazza1999} or optimal packing \cite{Corballis2017} -- both tightly related to our segregated functions from Fig.\ \ref{fig:5}. These proposals were qualitative. We have now built a very general quantitative framework that allows us to understand mathematically how these pathways would work.
\item Our morphospaces predict that nearing perfect performance should be yet another pathway to the lateralization of brain function. When $\varepsilon \rightarrow 0$, keeping duplicated circuits is redundant and ineffective (none of our model configurations has bilateral solutions for $\varepsilon = 0$). Musicians with {\bf perfect pitch} provide a case in which this prediction comes true: they have an increased asymmetry in the planum temporale, notably owing to the reduction of the non-dominant side for this task \cite{TogaThompson2003, SchlaugSteinmetz1995, Steinmetz1996, KeenanSchlaug2001}.
\end{itemize}
This is a non-exhaustive list of qualitative correspondences between actual neural systems and phenomenology that our morphospaces can explain. Efforts should now be made to bring quantitative empirical results into this theoretical framework. We could try to measure costs, fitness benefits, phenotype complexity, and fallibility in real neural circuits. This seems difficult; but very realistic computer models \cite{Markram2006} might allow us to simulate exact real-world conditions in silico, carefully controlling all dimensions involved.
Alternatively, we can try to induce transitions from bilaterality to lateralized solutions in experimental setups, quantify the thresholds at which such changes happen, and, thus, constrain our model parameters. This might be feasible with neural preparations or organoids in vitro, or even in vivo with behaving animals and even humans. For example, we could manipulate task complexity while neural activity is monitored, or we could interfere with the circuitry (e.g.\ using TMS) to raise error rates. We could hope that the brain behaves optimally -- i.e.\ that it will adopt a lateralized mode if it becomes more efficient. But there is no guarantee that this is always the case, so we should also measure performance and energy consumption across individuals to check if those with optimal configurations outperform suboptimal ones.
We have tried to make our mathematical framework as general as possible; but, unavoidably, some costs and effects have been left out. Future models should examine how our morphospaces change as new aspects come into play. We think that some of the omissions actually make our results more robust. For example, as complex phenotypes emerge, we did not demand that all subtasks be implemented on the same side, and yet we observe clear pathways towards lateralization. Including such a penalty on communication across hemispheres should strengthen the trend to asymmetry. We have not discussed structural and lasting costs either: in our model, building a neural circuit is taken for granted and we only pay for keeping it running. Such additional costs should exacerbate some of our results -- e.g.\ by making lateralization more definitive, which is relevant for the {\bf L$\boldsymbol{\rightarrow}$B} pathways discussed above. This could introduce path dependency in our evolutionary processes (which reminds us of hysteresis in phase transitions). Other structural constraints might favor bilaterality. We mentioned sensory input and motor output, which are inherently bilateral. It should be feasible to add these conditions to our model and see how the morphospaces are updated.
A kind of cost that could alter our results would be non-linearities as a function of the time ($l$ or $r$) that each mirror-symmetric circuit is engaged. We assume that all costs are proportional to $l$ and $r$, mostly because we model an energetic consumption during the time that circuits are engaged. But neurons can be worn out by use, and this process can be highly non-linear -- e.g.\ circuits might break down after a threshold usage, and needed maintenance might saturate with time of use. Including such non-linearities might interfere with some of the mathematical steps in our proofs. This does not necessarily change our results, but such models (nonlinear in $l$ and $r$) need to be studied individually anew.
Finally, our results are not only agnostic regarding bilaterality versus other sources of redundancy; they are also independent of the computational substrate. This means that they should apply when designing efficient computing devices. In such cases (think of a chip that could choose to engage several microprocessors depending on the complexity of the task at hand), it should be helpful to extend our framework to arbitrary numbers of redundant circuits (not just two, as here). Similar modeling might be useful to study gene duplication \cite{HurleyPrince2005, OakleyRivera2008}, especially as more computational and cognitive approaches to the functioning of cells are explored.
\vspace{0.2 cm}
\section*{Acknowledgments}
The author wishes to thank Susanna Manrubia for her support in carrying out this research. This work grew through enlightening discussions on brain symmetry and asymmetry with Ricard Sol\'e at the Santa Fe Institute (and elsewhere) and with Susanna Manrubia and her extended group of researchers on Complex Systems at the Spanish National Center for Biotechnology (CNB), the Carlos III University, and other institutions in Madrid, Spain. This work has been funded by the Spanish National Research Council (CSIC) and the Spanish Department for Science and Innovation (MICINN) through a Juan de la Cierva Fellowship (IJC2018-036694-I).
Let $n\ge 1$, $s,p,q,\alpha, \mu, \beta, a$ be real numbers satisfying
\begin{equation}\label{eqNCB_1'}
s>0, \quad p, q\ge 1, \quad 0\le a\le 1,
\end{equation}
\begin{equation}\label{eqNCB_2}
\frac{1}{s}+\frac{\gamma_1}{n}>0, \quad \frac{1}{p}+\frac{\gamma_2}{n}>0, \quad \frac{1}{q}+\frac{\gamma_3}{n}>0,
\end{equation}
\begin{equation}\label{eqNCB_3}
\frac{1}{s}+\frac{\gamma_1}{n}=a\Big(\frac{1}{p}+\frac{\gamma_2-1}{n}\Big)+(1-a)\Big(\frac{1}{q}+\frac{\gamma_3}{n}\Big),
\end{equation}
\begin{equation}\label{eqNCB_4}
\gamma_1 \le a\gamma_2+(1-a)\gamma_3,
\end{equation}
\begin{equation}\label{eqNCB_5}
\frac{1}{s}\le \frac{a}{p}+\frac{1-a}{q} \quad \textrm{ if } a=0 \textrm{ or } a=1 \textrm{ or }\frac{1}{s}+\frac{\gamma_1}{n}=\frac{1}{p}+\frac{\gamma_2-1}{n}=\frac{1}{q}+\frac{\gamma_3}{n}.
\end{equation}
Caffarelli, Kohn and Nirenberg
established the following classical interpolation inequalities.
\medskip
\noindent\textbf{Theorem A. (\cite{CKN}, see also \cite{CKN2})} \emph{For $n\ge 1$, let
$s, p, q, \gamma_1, \gamma_2, \gamma_3$ and $a$ satisfy (\ref{eqNCB_1'}) and (\ref{eqNCB_2}).
Then there exists some positive constant $C$ such that
\begin{equation}\label{eqD_2_2_0}
\||x|^{\gamma_1}u\|_{L^s(\mathbb{R}^n)}\le C\||x|^{\gamma_2}\nabla u\|_{L^p(\mathbb{R}^n)}^a\||x|^{\gamma_3}u\|_{L^q(\mathbb{R}^n)}^{1-a}
\end{equation}
holds for all $u\in C^1_c(\mathbb{R}^n)$ if and only if (\ref{eqNCB_3})-(\ref{eqNCB_5}) hold. Furthermore, on any compact set in the parameter space in which (\ref{eqNCB_1'}) and (\ref{eqNCB_2}) hold,
the constant $C$ is bounded.}
\medskip
Given (\ref{eqNCB_1'}), condition (\ref{eqNCB_2}) holds if and only if $\||x|^{\gamma_1}u\|_{L^s(\mathbb{R}^n)}$, $\||x|^{\gamma_2}\nabla u\|_{L^p(\mathbb{R}^n)}$ and $\||x|^{\gamma_3}u\|_{L^q(\mathbb{R}^n)}$ are finite for all $u\in C_c^{\infty}(\mathbb{R}^n)$.
The above theorem is the same as the theorem in \cite{CKN}, though the formulation of the conditions is somewhat different.
Lin \cite{Lin} generalized (\ref{eqD_2_2_0})
to include derivatives of any order.
Badiale and Tarantello \cite{BT} derived a cylindrical Sobolev-Hardy type inequality.
Bahouri, Chemin and Gallagher \cite{BCG1, BCG2} obtained refined Hardy inequalities.
Nguyen and Squassina \cite{NS, NS2} generalized the Caffarelli-Kohn-Nirenberg inequalities to fractional Sobolev spaces.
Best constants and the existence (and nonexistence) of extremal functions of (\ref{eqD_2_2_0})
have been studied extensively, see Catrina and Wang \cite{CW}, Dolbeault, Esteban and Loss \cite{DEL}, and the references therein.
Partly motivated by works of
Bourgain, Brezis and Mironescu \cite{BBM1, BBM2} and Maz'ya and Shaposhnikova \cite{MS}, Frank and Seiringer \cite{FS} identified best constants for fractional Hardy type inequalities.
Sharp Sobolev and isoperimetric inequalities with monomial weights, and related problems, are studied by Cabr\'{e}, Ros-Oton and Serra, see \cite{CR, CRS}.
In this paper, we prove the following theorem on anisotropic Caffarelli-Kohn-Nirenberg type inequalities.
For $n\ge 2$, let $s, p, q, a, \gamma_1, \gamma_2, \gamma_3, \alpha, \mu$ and $\beta$ be real numbers satisfying
\begin{equation}\label{eqNCA_1}
s, q>0, \quad p \ge 1, \quad 0\le a\le 1,
\end{equation}
\begin{equation}\label{eqNCA_2}
\frac{1}{s}+\frac{\alpha}{n-1}>0, \quad \frac{1}{p}+\frac{\mu}{n-1}>0,
\quad \frac{1}{q}+\frac{\beta}{n-1}>0,
\end{equation}
\begin{equation}\label{eqNCA_3}
\frac{1}{s}+\frac{\alpha+\gamma_1}{n}>0, \quad \frac{1}{p}+\frac{\mu+\gamma_2}{n}>0, \quad \frac{1}{q}+\frac{\beta+\gamma_3}{n}>0,
\end{equation}
\begin{equation}\label{eqNCA_5}
\frac{1}{s}+\frac{\gamma_1+\alpha}{n}=a\Big(\frac{1}{p}+\frac{\gamma_2+\mu-1}{n}\Big)+(1-a)\Big(\frac{1}{q}+\frac{\gamma_3+\beta}{n}\Big),
\end{equation}
\begin{equation}\label{eqNCA_6_1}
\gamma_1\le a\gamma_2+(1-a)\gamma_3,
\end{equation}
\begin{equation}\label{eqNCA_6_2}
\gamma_1+\alpha\le a(\gamma_2+\mu)+(1-a)(\gamma_3+\beta),
\end{equation}
\begin{equation}\label{eqNCA_6_3}
\frac{1}{s}+\frac{\alpha}{n-1}\ge a\Big(\frac{1}{p}+\frac{\mu-1}{n-1}\Big)+(1-a)\Big(\frac{1}{q}+\frac{\beta}{n-1}\Big),
\end{equation}
\begin{equation}\label{eqNCA_7}
\begin{split}
\frac{1}{s}\le \frac{a}{p}+\frac{1-a}{q} & \textrm{ if } a=0 \textrm{ or } a=1 \textrm{ or }\frac{1}{p}+\frac{\gamma_2+\mu-1}{n}=\frac{1}{q}+\frac{\gamma_3+\beta}{n}=\frac{1}{s}+\frac{\gamma_1+\alpha}{n}, \\
& \textrm{ or }
\frac{1}{s}+\frac{\alpha}{n-1}=a\Big(\frac{1}{p}+\frac{\mu-1}{n-1}\Big)+(1-a)\Big(\frac{1}{q}+\frac{\beta}{n-1}\Big). \end{split}
\end{equation}
Throughout the paper, we denote $x=(x', x_n)$, where $x'=(x_1, ..., x_{n-1})$. We have the following theorem.
\begin{thm}\label{thm_main}
For $n\ge 2$, let $s, p, q, a, \gamma_1, \gamma_2, \gamma_3, \alpha, \mu$ and $\beta$ be real numbers satisfying (\ref{eqNCA_1})-(\ref{eqNCA_3}).
Then there exists some positive constant $C$ such that
\begin{equation}\label{eqNC}
\||x|^{\gamma_1}|x'|^{\alpha}u\|_{L^s(\mathbb{R}^n)}\le C\||x|^{\gamma_2}|x'|^{\mu}\nabla u\|_{L^p(\mathbb{R}^n)}^{a}\||x|^{\gamma_3}|x'|^{\beta}u\|_{L^q(\mathbb{R}^n)}^{1-a}
\end{equation}
holds for all $u\in C_c^{1}(\mathbb{R}^n)$ if and only if
(\ref{eqNCA_5})-(\ref{eqNCA_7}) hold.
Furthermore, on any compact set in the parameter space in which (\ref{eqNCA_1})-(\ref{eqNCA_3}) hold, the constant $C$ is bounded.
\end{thm}
Given (\ref{eqNCA_1}), conditions (\ref{eqNCA_2}) and (\ref{eqNCA_3}) hold if and only if $\||x|^{\gamma_1}|x'|^{\alpha}u\|_{L^s(\mathbb{R}^n)}$, \\
$\||x|^{\gamma_2}|x'|^{\mu}\nabla u\|_{L^p(\mathbb{R}^n)}$ and $\||x|^{\gamma_3}|x'|^{\beta}u\|_{L^q(\mathbb{R}^n)}$
are finite for all $u\in C_c^{\infty}(\mathbb{R}^{n})$.
Inequality (\ref{eqNC}) was proved in \cite{BT} in the special cases when $n\ge 3$, $a=1$, $\gamma_1=\gamma_2=\mu=0$, $1/s+\alpha/n=1/p-1/n$, $1<p<n$, $-1\le \alpha\le 0$ and $(1/p-1/n)(n-1)+\alpha/n>0$;
and proved in \cite{LY} in the special cases when $n\ge 2$, $a=1$, $1\le s=p<n$, $\alpha p>1-n$,
$\mu p>1-n$, $(\alpha+\gamma_1)p>-n$,
$\gamma_1+\alpha=\gamma_2+\mu-1$ and $\gamma_1\le \gamma_2$.
Taking $\alpha=\mu=\beta=0$, inequality (\ref{eqNC}) is an improvement
of the Caffarelli-Kohn-Nirenberg inequalities
(Theorem A)
from $q\ge 1$ to $q>0$.
When $\alpha<0$ and $\mu, \beta>0$, inequality (\ref{eqNC}) strengthens
\[
\||x|^{\gamma_1+\alpha}u\|_{L^s(\mathbb{R}^n)}\le C\||x|^{\gamma_2+\mu}\nabla u\|_{L^p(\mathbb{R}^n)}^a\||x|^{\gamma_3+\beta}u\|_{L^q(\mathbb{R}^n)}^{1-a}
\]
which is given by (\ref{eqD_2_2_0}).
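This strengthening reflects a pointwise comparison of the weights: since $|x'|\le |x|$, for $\alpha<0$ and $\mu, \beta>0$ we have
\[
|x|^{\gamma_1}|x'|^{\alpha}\ge |x|^{\gamma_1+\alpha},\qquad |x|^{\gamma_2}|x'|^{\mu}\le |x|^{\gamma_2+\mu},\qquad |x|^{\gamma_3}|x'|^{\beta}\le |x|^{\gamma_3+\beta},
\]
so the left hand side of (\ref{eqNC}) dominates, and its right hand side is dominated by, the corresponding sides of the above inequality.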
In particular, when $n\ge 3$, $a=1$, $s=p=2$, $\gamma_1=\gamma_2=\alpha=-1/2$, and $\mu=1/2$,
inequality (\ref{eqNC}) takes the form
\begin{equation}\label{eqNSE_2}
\int_{\mathbb{R}^n}\frac{|u|^2}{|x||x'|}dx\le C\int_{\mathbb{R}^n}|\nabla u|^2\frac{|x'|}{|x|}dx,
\end{equation}
which strengthens the Hardy inequality
\[
\int_{\mathbb{R}^n}\frac{|u|^2}{|x|^2}dx\le C\int_{\mathbb{R}^n}|\nabla u|^2dx.
\]
Inequality (\ref{eqNSE_2}) was among the special cases proved in \cite{LY} as mentioned above.
The necessity of (\ref{eqNCA_5}) is proved by dimensional analysis. The necessity of (\ref{eqNCA_6_1}) and (\ref{eqNCA_6_2}) is deduced from the fact that if (\ref{eqNC}) holds for $u$, then it also holds for the translates
$x\mapsto u(x_1+S, x_2, ..., x_{n-1}, x_n+T)$ for all $S, T>0$. The necessity of (\ref{eqNCA_6_3}) and (\ref{eqNCA_7}) is more delicate.
A main ingredient of the proof of Theorem \ref{thm_main}, even in the case when $q\ge 1$,
is the above mentioned improvement of Theorem A from $q\ge 1$ to $q>0$,
which is stated as the following theorem.
\begin{thm}\label{thmD_2}
For $n\ge 1$,
let $s, p, q, \gamma_1, \gamma_2, \gamma_3$, and $a$ satisfy (\ref{eqNCA_1}) and (\ref{eqNCB_2}).
Then there exists some positive constant $C$, such that
\begin{equation}\label{eqD_2_2}
\||x|^{\gamma_1}u\|_{L^s(\mathbb{R}^n)}\le C\||x|^{\gamma_2}\nabla u\|_{L^p(\mathbb{R}^n)}^a\||x|^{\gamma_3}u\|_{L^q(\mathbb{R}^n)}^{1-a}
\end{equation}
holds for all $u\in C^1_c(\mathbb{R}^n)$ if and only if (\ref{eqNCB_3})-(\ref{eqNCB_5}) hold. Furthermore, on any compact set in the parameter space in which (\ref{eqNCA_1}) and (\ref{eqNCB_2}) hold,
the constant $C$ is bounded.
\end{thm}
Our proof of the sufficiency part of Theorem \ref{thmD_2}, which yields the extension from $q\ge 1$ to $q>0$, is quite different from the proof of Theorem A in \cite{CKN}.
Another main ingredient of the proof of Theorem \ref{thm_main} is the following nonlinear Poincar\'{e} inequality.
\begin{thm}[A nonlinear Poincar\'{e} inequality]\label{thm1-new}
For $n\ge 1$ and
$0<\lambda<\infty$, assume $1\le p\le \infty$ if $1\le \lambda<\infty$, and $\max\{1, n/(1+n\lambda)\}\le p\le \infty$ if $0<\lambda<1$.
Let $(M, g)$ be an $n$-dimensional smooth compact Riemannian manifold without boundary, $\Omega=M$ or
$\Omega\subset M$ be an open connected set with Lipschitz boundary, and $S\subset \Omega$ be a set of positive measure $|S|$.
Then there exists some positive constant $C$, depending only on $p, \lambda$, $\Omega$ and $S$,
such that for every nonnegative $w\in W^{1,p}(\Omega)$,
\begin{equation}\label{est1-new}
\|w-\big(\mathop{\ooalign{$\int$\cr$-$}}_Sw^{1/\lambda}\big)^{\lambda}\|_{ L^p(\Omega) }\le C \|\nabla w \|_{ L^p(\Omega) }.
\end{equation}
On the other hand, if ($0<\lambda<1$ and $0<p<n/(1+n\lambda)$) or ($0<\lambda<\infty$ and $0<p<1$), there does not exist any $C$ for which
(\ref{est1-new}) holds.
\end{thm}
Theorem \ref{thm1-new} gives necessary and sufficient conditions on $(\lambda, p)\in (0, \infty)\times (0, \infty]$ for (\ref{est1-new}) to hold.
\begin{cor}\label{cor_new}
For $n\ge 1$, $1\le p\le \infty$, and $0<q<\infty$, let $(M, g)$ be an $n$-dimensional smooth compact Riemannian manifold without boundary, $\Omega\subset M$ be an open connected set with Lipschitz boundary, and $S\subset \Omega$ be a set of positive measure $|S|$.
Then there exists some positive constant $C$, depending only on $p, q, \Omega$ and $S$,
such that for every nonnegative $w\in W^{1,p}(\Omega)$,
\begin{equation}\label{est1-newcor}
\|w\|_{L^p(\Omega)}\le \big(\mathop{\ooalign{$\int$\cr$-$}}_Sw^{q}\big)^{1/q}|\Omega|^{1/p}+
C \|\nabla w \|_{ L^p(\Omega) }.
\end{equation}
\end{cor}
\bigskip
Theorem \ref{thm_main} grows out of
our study in \cite{LY} on the stability of
solutions to
the Navier-Stokes equations.
In joint work with L. Li \cite{LLY1, LLY}, we classified $(-1)$-homogeneous axisymmetric no-swirl solutions of the $3$D incompressible stationary Navier-Stokes equations which are smooth in $\mathbb{R}^3$ away from the symmetry axis. All such solutions $u$ are of
the following three mutually exclusive types:
\begin{enumerate}
\item[]Type 1. Landau solutions, which satisfy $\displaystyle \sup_{|x|=1}|\nabla u(x)|<\infty$;
\item[]Type 2. Solutions satisfying $\displaystyle 0<\limsup_{|x|=1,x'\to 0}|x'||\nabla u(x)|<\infty$;
\item[] Type 3. Solutions satisfying $\displaystyle \limsup_{|x|=1,x'\to 0}|x'|^2|\nabla u(x)|>0$.
%
\end{enumerate}
Karch and Pilarczyk \cite{Karch} proved the asymptotic stability of Landau solutions under $L^2$-perturbations.
In \cite{LY},
we proved the asymptotic stability of Type 2 solutions under $L^2$-perturbations.
An important ingredient in our proof is the following improved version of Hardy's inequality
\begin{equation}\label{eqNSE_1}
\int \frac{|u|^2}{|x||x'|}\le C\int |\nabla u|^2,
\end{equation}
a weaker form of (\ref{eqNSE_2}).
We expect that Theorem \ref{thm_main} will be useful in the study of the asymptotic stability of Type 3 solutions and the stability of Type 2 solutions in other function spaces.
For related results on the stability of singular solutions
to the Navier-Stokes equations, see \cite{CKPW}, \cite{KPS}, \cite{LZZ}, \cite{ZZ} and the reference therein.
In the special case when $a=1$ and $ 1\le s=p<n$, Theorem \ref{thm_main} says that
under the conditions
\begin{equation}\label{eqB_2_0}
p\ge 1, \quad \frac{1}{p}+\frac{\alpha}{n-1}>0, \quad \frac{1}{p}+\frac{\mu}{n-1}>0,\quad \frac{1}{p}+\frac{\gamma_1+\alpha}{n}>0,
\end{equation}
\begin{equation}\label{eqB_3_0}
\gamma_1+\alpha=\gamma_2+\mu-1, \quad \gamma_1\le \gamma_2,
\end{equation}
inequality
\begin{equation}\label{eqB_1_0}
\||x|^{\gamma_1}|x'|^{\alpha}u\|_{L^p(\mathbb{R}^n)}\le C\||x|^{\gamma_2}|x'|^{\mu}\nabla u\|_{L^p(\mathbb{R}^n)}
\end{equation}
holds for $u\in C_c^1(\mathbb{R}^n)$.
This was proved in \cite{LY} when $1\le p<n$ as mentioned earlier. The proof there applies to $p\ge n$ as well, and we present the proof concisely below.
The necessity of $\gamma_1+\alpha=\gamma_2+\mu-1$ follows from a dimensional analysis argument. The necessity of $\gamma_1\le \gamma_2$ can also be seen easily: fix a $v\in C_c^{\infty}(B_1(0))\setminus\{0\}$, take a unit ball $B_i$ centered at $x_i=(x_i', i)$ with $2<|x_i'|<3$, and let $u_i(\cdot)=v(\cdot+x_i)$. On the support of $u_i$ we have $1\le |x'|\le 4$ and $|x|\simeq i$, so plugging $u_i$ into (\ref{eqB_1_0}) gives $i^{\gamma_1}\le Ci^{\gamma_2}$ for all large $i$, and hence $\gamma_1\le \gamma_2$.
Let $x=(r, \theta)$ in spherical coordinates.
Since $(\gamma_1+\alpha)p+n-1>-1$, we have, for each fixed $\theta$, that
\begin{equation}
\int_{0}^{\infty}r^{(\gamma_1+\alpha)p+n-1}|u|^pdr\le C\int_{0}^{\infty}r^{(\gamma_1+\alpha+1)p+n-1}|\partial_ru|^pdr.
\end{equation}
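For the reader's convenience, we recall the standard verification of this one-dimensional weighted Hardy inequality. Set $\lambda:=(\gamma_1+\alpha)p+n-1>-1$. Since $u(\cdot, \theta)$ is compactly supported and bounded (so that the boundary terms vanish, using $\lambda+1>0$), integrating by parts and applying H\"{o}lder's inequality give
\[
\int_{0}^{\infty}r^{\lambda}|u|^pdr=\frac{1}{\lambda+1}\int_{0}^{\infty}(r^{\lambda+1})'|u|^pdr\le \frac{p}{\lambda+1}\Big(\int_{0}^{\infty}r^{\lambda}|u|^pdr\Big)^{1-\frac{1}{p}}\Big(\int_{0}^{\infty}r^{\lambda+p}|\partial_ru|^pdr\Big)^{\frac{1}{p}},
\]
and absorbing the first factor on the right into the left hand side yields the inequality with $C=(p/(\lambda+1))^p$, since $\lambda+p=(\gamma_1+\alpha+1)p+n-1$.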
For any $0<\epsilon<\pi/4$, let $K_{\epsilon}:=\{x\in\mathbb{R}^n\mid |x'|\le |x|\sin\epsilon \}$. Integrating the above in $\theta$
over $(\mathbb{R}^{n}\setminus K_{\epsilon})\cap\mathbb{S}^{n-1}$, we have, using $ |x|\sin\epsilon \le |x'|\le |x|$ in $\mathbb{R}^n\setminus K_{\epsilon}$ and $\gamma_1+\alpha+1=\gamma_2+\mu$, that
\begin{equation}\label{eqB_1}
\||x|^{\gamma_1}|x'|^{\alpha}u\|_{L^p(\mathbb{R}^n\setminus K_{\epsilon})}\le C\||x|^{\gamma_2}|x'|^{\mu}\partial_ru\|_{L^p(\mathbb{R}^n\setminus K_{\epsilon})}.
\end{equation}
Since $\int_{K_{2\epsilon}\setminus K_{\epsilon}}|x|^{\gamma_1 p}|x'|^{\alpha p}|u|^pdx=\int_{\epsilon}^{2\epsilon}\int_{\partial K_{\delta}}|x|^{\gamma_1 p}|x'|^{\alpha p}|u|^p\frac{|x|}{\cos\delta}d\sigma(x)d\delta$, there is some
$\epsilon<\delta<2\epsilon$, such that
\begin{equation}\label{eqB_2}
\||x|^{\gamma_1}|x'|^{\alpha}|x|^{1/p}u\|_{L^p(\partial K_{\delta})}
\le C\||x|^{\gamma_2}|x'|^{\mu} \partial_r u\|_{L^p(K_{2\epsilon}\setminus K_{\epsilon})}.
\end{equation}
Next, let $x=(r, \theta)=(r, \theta_1, \omega)$, where $r=|x|$, $\omega\in\mathbb{S}^{n-2}$ and $\theta_1$ is the polar angle, i.e. the angle between the $x_n$-axis and the ray from the origin to $x$. Then $|x'|=|x|\sin\theta_1$, and
\[
|u(r, \theta_1,\omega)|^p- |u(r, \delta, \omega)|^p=-\int_{\theta_1}^{\delta}\partial_t|u(r, t, \omega)|^pdt\le \int_{\theta_1}^{\delta }p|u|^{p-1}|\partial_t u|dt.
\]
We multiply the above by $\theta_1^{\alpha p+n-2}$ and integrate in $\theta_1$ over $[0, \delta]$.
We know from (\ref{eqB_2_0}) and (\ref{eqB_3_0}) that $\alpha p+n-1>0$. So we have
\[
\int_{0}^{\delta}\theta_1^{\alpha p+n-2}\int_{\theta_1}^{\delta }p|u|^{p-1}|\partial_t u|dt d\theta_1
\le C\int_{0}^{\delta}\theta_1^{\alpha p+n-1}|u|^{p-1}|\partial_{\theta_1} u|d\theta_1,
\]
and
\[
\int_{0}^{\delta} \theta_1^{\alpha p+n-2}|u(r, \theta_1,\omega)|^pd\theta_1\le C|u(r, \delta,\omega)|^p+C\int_{0}^{\delta}\theta_1^{\alpha p+n-1}|u|^{p-1}|\partial_{\theta_1} u|d\theta_1.
\]
Using Young's inequality and absorbing the term involving $|u|^p$ into the left hand side, we have
\[
\int_{0}^{\delta} \theta_1^{\alpha p+n-2}|u(r, \theta_1,\omega)|^pd\theta_1\le C|u(r, \delta,\omega)|^p+C\int_{0}^{\delta}\theta_1^{(\alpha+1)p+n-2}|\partial_{\theta_1}u(r, \theta_1,\omega)|^pd\theta_1.
\]
Multiplying the above by $r^{(\gamma_1+\alpha)p+n-1}$ and integrating in $r$ and $\omega$, we have, using the fact that $\gamma_1+\alpha+1=\gamma_2+\mu$, $\theta_1^{\alpha+1}\le \theta_1^{\mu}$ on $[0, \delta]$ in view of $\mu\le \alpha+1$, and $|\partial_{\theta_1}u|/r\le |\nabla u|$, that
\[
\||x|^{\gamma_1}|x'|^{\alpha}u\|_{L^p(K_{\delta})}\le C\||x|^{\gamma_1}|x'|^{\alpha}|x|^{1/p}u\|_{L^p(\partial K_{\delta})}+C\||x|^{\gamma_2}|x'|^{\mu}\nabla u\|_{L^p(K_{\delta})}.
\]
Inequality (\ref{eqB_1_0}) follows from (\ref{eqB_1}), (\ref{eqB_2}) and the above.
For the sufficiency part of Theorem \ref{thm_main} when $s\ne p$ or $0<a<1$, the proof is more involved.
Let us look at a simple case where $n=2$, $a=1$, $\gamma_1=\gamma_2=\gamma_3=0$, $s=2$, $p=1$, $\mu=\alpha>-1/2$. The following lemma is a slightly stronger version of Theorem \ref{thm_main} in this case.
\begin{prop}\label{lemS_1}
For $\alpha>-1/2$, there exists some constant $C$ depending only on $\alpha$ such that \begin{equation}\label{eqS_1_1}
\||x_1|^{\alpha}u\|^2_{L^2([0, 1]^2)}\le C\||x_1|^{\alpha}\partial_{x_1}u\|_{L^1([0, 1]^2)}\||x_1|^{\alpha}\partial_{x_2}u\|_{L^1([0, 1]^2)}
\end{equation}
holds for all $u\in C^1([0, 1]^2)$ satisfying $u(1, x_2)=u(x_1, 1)=0$, $0\le x_1, x_2\le 1$. \end{prop}
\begin{proof}
Make a change of variables $y_1=x_1^{2\alpha+1}$, $y_2=x_2$, and $\tilde{u}(y_1, y_2)=u(x_1, x_2)$. Then (\ref{eqS_1_1}) is equivalent to
\begin{equation}\label{eqS_1_2}
\|\tilde{u}\|^2_{L^2([0, 1]^2)}\le C\||y_1|^{\beta}\partial_{y_1}\tilde{u}\|_{L^1([0, 1]^2)}\||y_1|^{-\beta}\partial_{y_2}\tilde{u}\|_{L^1([0, 1]^2)},
\end{equation}
for $\beta<1/2$ (where $\beta:=\alpha/(2\alpha+1)$) and
$\tilde{u}\in C^1([0, 1]^2)$ satisfying $\tilde{u}(y_1, 1)=\tilde{u}(1, y_2)=0$, $0\le y_1, y_2\le 1$.
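The equivalence of (\ref{eqS_1_1}) and (\ref{eqS_1_2}) can be checked directly: since $dy_1=(2\alpha+1)x_1^{2\alpha}dx_1$, $\partial_{x_1}u=(2\alpha+1)x_1^{2\alpha}\partial_{y_1}\tilde{u}$ and $x_1^{\alpha}=y_1^{\beta}$, we have
\[
\int_{[0, 1]^2} x_1^{2\alpha}|u|^2dx=\frac{1}{2\alpha+1}\int_{[0, 1]^2}|\tilde{u}|^2dy,\qquad
\int_{[0, 1]^2} x_1^{\alpha}|\partial_{x_1}u|dx=\int_{[0, 1]^2} y_1^{\beta}|\partial_{y_1}\tilde{u}|dy,
\]
\[
\int_{[0, 1]^2} x_1^{\alpha}|\partial_{x_2}u|dx=\frac{1}{2\alpha+1}\int_{[0, 1]^2} y_1^{-\beta}|\partial_{y_2}\tilde{u}|dy.
\]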
For $k\in \mathbb{N}$, let $R_k=[2^{-k-1}, 2^{-k}]$,
\[
A_k:=\|\tilde{u}\|^2_{L^2(R_k\times [0, 1])}, \quad P_k:=\|y_1^{\beta}\partial_{y_1}\tilde{u}\|_{L^1(R_k\times [0, 1])}, \quad Q_k:=\|y_1^{-\beta}\partial_{y_2}\tilde{u}\|_{L^1(R_k\times [0, 1])}, \]
and
\[
P:=\||y_1|^{\beta}\partial_{y_1}\tilde{u}\|_{L^1([0, 1]^2)}, \quad Q:=\||y_1|^{-\beta}\partial_{y_2}\tilde{u}\|_{L^1([0, 1]^2)}.
\]
For any $y_1\in R_k$, $\xi\in R_{k-1}$ and $y_2\in [0, 1]$, we have
\[
\begin{split}
|\tilde{u}(y_1, y_2)|^2 & =|\tilde{u}(y_1, y_2)||\tilde{u}(\xi, y_2)-\int_{y_1}^{\xi}\partial_{\eta}\tilde{u}(\eta, y_2)d\eta|\\
& \le |\tilde{u}(y_1, y_2)||\tilde{u}(\xi, y_2)|+Cy_1^{-\beta}|\tilde{u}(y_1, y_2)|\int_{y_1}^{\xi}\eta^{\beta}|\partial_{\eta}\tilde{u}(\eta, y_2)|d\eta\\
& \le |\tilde{u}(y_1, y_2)||\tilde{u}(\xi, y_2)|+C\int_{0}^{1}y_1^{-\beta}|\partial_{y_2}\tilde{u}(y_1, y_2)|dy_2 \int_{R_k\cup R_{k-1}}\eta^{\beta}|\partial_{\eta}\tilde{u}(\eta, y_2)|d\eta.
\end{split}
\]
Taking $\int_{0}^{1}\mathop{\ooalign{$\int$\cr$-$}}_{R_{k-1}}\int_{R_k}\cdot dy_1d\xi dy_2$ of the above and using H\"{o}lder's inequality, we have
\[
\begin{split}
A_k & \le \frac{1}{|R_{k-1}|}\int_{0}^{1} \Big(\int_{R_k}|\tilde{u}(y_1, y_2)|dy_1\Big)\Big( \int_{R_{k-1}}|\tilde{u}(\xi, y_2)|d\xi\Big)dy_2+CQ_kP\\
& \le
\sqrt{\frac{|R_k|}{|R_{k-1}|}}\sqrt{A_kA_{k-1}}+CQ_kP
\le \frac{1}{2\sqrt{2}}(A_k+A_{k-1})+CQ_kP.
\end{split}
\]
Thus
\[
A_k\le \theta A_{k-1}+CQ_kP,
\]
where $\theta=1/(2\sqrt{2}-1)<1$.
Summing over $k\ge 1$ gives
\begin{equation}\label{eqS_1_3}
\sum_{k=1}^{\infty}A_k\le
\frac{1}{1-\theta} A_0+CPQ\le CPQ,
\end{equation}
where we have used, for $y_1\in R_0$, that $|\tilde{u}(y_1, y_2)|\le \int_{1/2}^{1}|\partial_{y_1}\tilde{u}(t, y_2)|dt\le C\int_{0}^{1}t^{\beta}|\partial_{y_1}\tilde{u}(t, y_2)|dt$ and $|\tilde{u}(y_1, y_2)|\le C\int_{0}^{1}y_1^{-\beta}|\partial_{y_2}\tilde{u}(y_1, t)|dt$ to bound $A_0\le CPQ$. Since $\|\tilde{u}\|^2_{L^2([0, 1]^2)}=\sum_{k=0}^{\infty}A_k\le A_0+\sum_{k=1}^{\infty}A_k$, inequality (\ref{eqS_1_2}) follows.
\end{proof}
\medskip
For the sufficiency part of Theorem \ref{thm_main} in general, our first consideration was for $q\ge 1$. We were able to prove the sufficiency part of Theorem \ref{thm_main} for $q\ge 1$ in dimension $n$, provided that Theorem A holds for $q>0$ in dimension $n-1$, with the help of the nonlinear Poincar\'{e} inequality (Theorem \ref{thm1-new}). We also proved Theorem A for $q>0$ in dimension $n=1$, and therefore proved Theorem \ref{thm_main} for $q\ge 1$ in dimension $n=2$, as well as Theorem \ref{thm_main}
for axisymmetric $u$ and $q\ge 1$ in dimensions $n\ge 3$.
Next we established the sufficiency part of Theorem \ref{thm_main} for $q\ge 1$ in dimensions $n\ge 3$. A key step is to prove (\ref{eqNC}) on a cylinder $D:=\{x\in \mathbb{R}^n\mid |x'|\le 1, 0\le x_n\le 1\}$ when $\gamma_1=\gamma_2=\gamma_3=0$.
For simplicity, one may consider $u\in C^1_c(D)$ and the estimate is
\begin{equation}\label{eqS_2_0}
\||x'|^{\alpha}u\|_{L^s(D)}\le C\||x'|^{\mu}\nabla u\|_{L^p(D)}^a\||x'|^{\beta}u\|_{L^{q}(D)}^{1-a}.
\end{equation}
The left hand side of the above can be written as
\[
\begin{split}
\||x'|^{\alpha}u\|_{L^s(D)}
& =\||x'|^{\frac{\alpha s}{\bar{s}}}|u|^{\frac{s}{\bar{s}}}\|_{L^{\bar{s}}(D)}^{\frac{\bar{s}}{s}}
\le C\||x'|^{\frac{\alpha s}{\bar{s}}}(|u|^{\frac{s}{\bar{s}}}-|u^*|^{\frac{s}{\bar{s}}})\|_{L^{\bar{s}}(D)}^{\frac{\bar{s}}{s}}+ C\||x'|^{\frac{\alpha s}{\bar{s}}}|u^*|^{\frac{s}{\bar{s}}}\|_{L^{\bar{s}}(D)}^{\frac{\bar{s}}{s}}\\
& =:C(I_1+I_2),
\end{split}
\]
where $1/\bar{s}=1/s+1-1/p$ and $u^*(x',x_n)=\mathop{\ooalign{$\int$\cr$-$}}_{|y'|=|x'|}u(y',x_n) d\sigma (y')$.
Since
Theorem \ref{thm_main} for $q\ge 1$
holds for axisymmetric $u$, so does (\ref{eqS_2_0}). Thus $I_2$ is bounded by the right hand side of (\ref{eqS_2_0}).
The estimate that $I_1$ is bounded by the right hand side of (\ref{eqS_2_0}) follows from a variant of the Caffarelli-Kohn-Nirenberg inequalities (see Theorem \ref{thm6_1}), using the fact that $0<\bar{s}\le s$.
Later we proved Theorem A for $q>0$ in all dimensions and in turn proved Theorem \ref{thm_main}. This is the proof presented in this paper.
\medskip
In Section \ref{sec_2}, we prove the necessity parts of Theorems \ref{thm_main} and \ref{thmD_2}. In Section \ref{sec_3}, we prove Theorem \ref{thm1-new} and Corollary \ref{cor_new}. In Section \ref{sec_4}, we prove the sufficiency part of Theorem \ref{thmD_2} by establishing Theorem \ref{thmQ_2}, a more general result which includes inequalities on cones.
In Section \ref{sec_5}, we prove the sufficiency part of Theorem \ref{thm_main}. In Section \ref{sec_6}, we give two variants of Theorem A and Theorem \ref{thm_main}. Some properties of the parameters used in the proofs are given in the appendix.
\section{Proof of the necessity parts of Theorems \ref{thm_main} and \ref{thmD_2}}
\label{sec_2}
In this section, we prove the necessity parts of Theorems \ref{thm_main} and \ref{thmD_2}.
We first prove the necessity part of Theorem \ref{thm_main} via the following lemma.
\begin{lem}\label{lemNC_1}
For $n\ge 2$,
let $s, p, q, a, \gamma_1, \gamma_2, \gamma_3, \alpha, \mu$ and $\beta$ satisfy (\ref{eqNCA_1})-(\ref{eqNCA_3}).
If there exists a constant $C$ such that (\ref{eqNC}) holds for all $u$ in $C^{\infty}_c(\mathbb{R}^n)$, then (\ref{eqNCA_5})-(\ref{eqNCA_7}) hold.
\end{lem}
\begin{proof}
Let $C$ denote a positive constant depending only on $s, p, q, a, \gamma_1, \gamma_2, \gamma_3, \alpha, \mu$ and $\beta$ which may vary from line to line.
We prove (\ref{eqNCA_5})-(\ref{eqNCA_7}) one by one.
\medskip
\noindent\emph{Proof of (\ref{eqNCA_5})}:
Fixing a $v\in C^{\infty}_c(\mathbb{R}^n)\setminus\{0\}$ and plugging $u(x):=v(\lambda x)$, $\lambda>0$, into (\ref{eqNC}), we have
\[
\lambda^{-nA_1} \||x|^{\gamma_1}|x'|^{\alpha}v\|_{L^s(\mathbb{R}^n)}\le C\lambda^{-nA_2}\||x|^{\gamma_2}|x'|^{\mu}\nabla v\|_{L^p(\mathbb{R}^n)}^{a}\||x|^{\gamma_3}|x'|^{\beta}v\|_{L^q(\mathbb{R}^n)}^{1-a},
\]
where $A_1$ and $A_2$ are the left and right hand side of (\ref{eqNCA_5}) respectively.
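Here we used, with $y=\lambda x$, that $dx=\lambda^{-n}dy$, $|x|=\lambda^{-1}|y|$, $|x'|=\lambda^{-1}|y'|$ and $\nabla u(x)=\lambda(\nabla v)(\lambda x)$; for instance,
\[
\||x|^{\gamma_2}|x'|^{\mu}\nabla u\|_{L^p(\mathbb{R}^n)}=\lambda^{1-\gamma_2-\mu-\frac{n}{p}}\||y|^{\gamma_2}|y'|^{\mu}\nabla v\|_{L^p(\mathbb{R}^n)}=\lambda^{-n\big(\frac{1}{p}+\frac{\gamma_2+\mu-1}{n}\big)}\||y|^{\gamma_2}|y'|^{\mu}\nabla v\|_{L^p(\mathbb{R}^n)},
\]
so the product of the last two norms carries the total power $\lambda^{-nA_2}$.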
Sending $\lambda$ to $0$ and $\infty$ in the above, we obtain (\ref{eqNCA_5}).
\medskip
\noindent\emph{Proof of (\ref{eqNCA_6_1}) and (\ref{eqNCA_6_2})}:
Fixing a $v\in C_c^{\infty}(B_1(0))\setminus\{0\}$,
we consider $u(x):=v(x-x_0)$ where $x_0=(S, 0,...,0, R)$ and $S,R>0$. Then $u\in C^{\infty}_c(B_1(x_0))$, and $u$ satisfies (\ref{eqNC}).
Choose $S=2$ and $R$ large.
For $x\in B_1(x_0)$, we have $1\le |x'|\le 3$ and $R/2\le |x|\le 2R$.
Plugging $u$ into (\ref{eqNC}), we have,
\[
R^{\gamma_1}\le CR^{a\gamma_2+(1-a)\gamma_3}
\]
for some constant $C$ independent of $R$. Inequality (\ref{eqNCA_6_1}) follows, since $R$ can be arbitrarily large.
Now we choose large $S$ and $R=0$.
For $x\in B_1(x_0)$, both $|x'|$ and $|x|$ are in $[S/2, 2S]$.
Plugging $u$ into (\ref{eqNC}), we have
\[
S^{\gamma_1+\alpha}\le CS^{a(\gamma_2+\mu)+(1-a)(\gamma_3+\beta)}.
\]
Inequality (\ref{eqNCA_6_2}) follows from the above, since $S$ can be arbitrarily large.
\bigskip
Next, to prove (\ref{eqNCA_6_3}) and (\ref{eqNCA_7}), we fix a $g\in C_c^{\infty}((1, 4))$ satisfying
\begin{equation*}
g(t)=\left\{
\begin{split}
& 0, \quad t\le 1 \textrm{ or }t\ge 4, \\
& 1, \quad 2\le t\le 3.
\end{split}
\right.
\end{equation*}
\noindent\emph{Proof of (\ref{eqNCA_6_3})}:
For $0<\epsilon<1$, let
\[
f_1(\rho)=\left\{
\begin{array}{ll}
0, & \rho \ge 2\epsilon, \\
2\epsilon-\rho, & \epsilon\le \rho\le 2\epsilon, \\
\epsilon, & \rho\le \epsilon.
\end{array}
\right.
\]
Then $u(x): =f_1(|x'|)g(x_n)$ satisfies (\ref{eqNC}).
We have $\mathrm{supp}$ $u\subset \{|x'|\le 2\epsilon, 1\le x_n\le 4\}$. For any $x$ in $\mathrm{supp}$ $u$, $1\le |x|\le 5$. Then (\ref{eqNC}) for this $u$ is equivalent to
\begin{equation*}
\||x'|^{\alpha}u\|_{L^s(\mathbb{R}^n)}\le C\||x'|^{\mu}\nabla u\|_{L^p(\mathbb{R}^n)}^{a}\||x'|^{\beta}u\|_{L^q(\mathbb{R}^n)}^{1-a}
\end{equation*}
for some constant $C$ independent of $\epsilon$.
By calculation,
\begin{equation*}
\int_{\mathbb{R}^n}||x'|^{\alpha}u|^s dx \ge \frac{1}{C}\int_{5\epsilon/4\le |x'|\le 7\epsilon/4}||x'|^{\alpha}f_1(|x'|)|^sdx'\ge \frac{1}{C}\epsilon^{(\alpha+1)s+n-1},
\end{equation*}
\begin{equation*}
\int_{\mathbb{R}^{n}}||x'|^{\mu}\nabla u|^pdx \le C\int_{|x'|\le 2\epsilon}|x'|^{p\mu}(| f'_1(|x'|)|^p+|f_1(|x'|)|^p)dx'
\le C\epsilon^{\mu p+n-1},
\end{equation*}
\begin{equation*}
\int_{\mathbb{R}^{n}}||x'|^{\beta} u|^qdx \le C\int_{|x'|\le 2\epsilon}||x'|^{\beta}f_1(|x'|)|^qdx'
\le C\epsilon^{(\beta+1) q+n-1}. \end{equation*}
Thus we have
\[
\epsilon^{\alpha+1+(n-1)/s}\le C\epsilon^{a(\mu +(n-1)/p)+(1-a)(\beta +1+(n-1)/q)}.
\]
Inequality (\ref{eqNCA_6_3}) follows, since $\epsilon$ can be arbitrarily small.
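Indeed, sending $\epsilon\to 0^+$ in the previous display forces
\[
\alpha+1+\frac{n-1}{s}\ \ge\ a\Big(\mu+\frac{n-1}{p}\Big)+(1-a)\Big(\beta+1+\frac{n-1}{q}\Big);
\]
dividing by $n-1$ and then subtracting $\frac{1}{n-1}=\frac{a}{n-1}+\frac{1-a}{n-1}$ from both sides gives exactly (\ref{eqNCA_6_3}).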
\bigskip
\noindent\emph{Proof of (\ref{eqNCA_7})}:
We divide the proof into two cases.
\medskip
\noindent \textbf{Case 1.} $a=0$ or $a=1$ or $1/p+(\gamma_2+\mu-1)/n=1/q+(\gamma_3+\beta)/n=1/s+(\gamma_1+\alpha)/n$.
\medskip
We first prove the inequality in (\ref{eqNCA_7}) when $1/p+(\gamma_2+\mu-1)/n=1/q+(\gamma_3+\beta)/n=1/s+(\gamma_1+\alpha)/n$.
For $0<\epsilon<1$, let
\begin{equation}\label{eqNC_1_f2}
\displaystyle f_2(r)=\left\{
\begin{array}{ll}
\displaystyle r^{-\alpha-\gamma_1-n/s+\epsilon}, & 0< r\le 1, \\
\displaystyle 1, & 1\le r\le 2, \\
\displaystyle \frac{4-r}{2}, & 2\le r\le 4,\\
\displaystyle 0, & r\ge 4.
\end{array}
\right.
\end{equation}
Then $u(x):=f_2(|x|)g(|x_n|/|x'|)$ satisfies (\ref{eqNC}), as can be seen by approximating $u$ with
$u_{\delta}(x):=f_2(\sqrt{|x|^2+\delta^2})g(|x_n|/|x'|)$ and sending $\delta\to 0^+$.
By computation,
\begin{equation}\label{eqNC_1_5}
\int_{\mathbb{R}^n}||x|^{\gamma_1}|x'|^{\alpha}u|^s dx
\ge \frac{1}{C}\int_{0<|x|\le 1, 2\le |x_n|/|x'|\le 3}||x|^{\gamma_1+\alpha}f_2|^sdx
\ge \frac{1}{C}\int_{0}^{1}r^{\epsilon s-1}dr
\ge \frac{1}{C}\epsilon^{-1}.
\end{equation}
Notice that for $1\le |x_n|/|x'|\le 4$, $|\nabla g(|x_n|/|x'|)|\le C|g'(|x_n|/|x'|)|/|x'|\le C|x|^{-1}$, we have, using the fact that $1/s+(\gamma_1+\alpha)/n=1/p+(\gamma_2+\mu-1)/n>0$, that
\begin{equation}\label{eqNC_1_6}
\begin{split}
\int_{\mathbb{R}^{n}}||x|^{\gamma_2}|x'|^{\mu}\nabla u|^pdx & \le C\int_{|x|\le 1, 1\le |x_n|/|x'|\le 4}|x|^{p(\gamma_2+\mu)}\big(|\nabla f_2|^p+|x|^{-p}|f_2|^p\big)dx\\
& \le C \int_{0}^{4} r^{(\gamma_2+\mu)p+n-1+(-\alpha-\gamma_1-n/s-1+\epsilon)p}dr\\ &= C\int_{0}^{4}r^{\epsilon p-1}dr
\le C\epsilon^{-1}.
\end{split}
\end{equation}
Similarly, using the fact $1/s+(\gamma_1+\alpha)/n=1/q+(\gamma_3+\beta)/n>0$, we have
\begin{equation}\label{eqNC_1_7}
\int_{\mathbb{R}^{n}}||x|^{\gamma_3}|x'|^{\beta} u|^qdx \le C\int_{|x|\le 1, 1\le |x_n|/|x'|\le 4}||x|^{\gamma_3+\beta}f_2(x)|^qdx \le C\epsilon^{-1}.
\end{equation}
By (\ref{eqNC}), (\ref{eqNC_1_5}), (\ref{eqNC_1_6}) and (\ref{eqNC_1_7}), we have
\[
\epsilon^{-1/s}\le C\epsilon^{-a/p-(1-a)/q}
\]
for arbitrarily small $\epsilon$. So the inequality in (\ref{eqNCA_7}) holds.
Now we turn to $a=0$ or $a=1$. In view of (\ref{eqNCA_5}), when $a=0$, we have $1/s+(\gamma_1+\alpha)/n=1/q+(\gamma_3+\beta)/n$, and when $a=1$, we have $1/s+(\gamma_1+\alpha)/n=1/p+(\gamma_2+\mu-1)/n$. The inequality in (\ref{eqNCA_7}) follows from the same proof as above.
\medskip
\noindent\textbf{Case 2.} $0<a<1$, $1/p+(\gamma_2+\mu-1)/n\ne 1/q+(\gamma_3+\beta)/n$, and
\begin{equation}\label{eqNC_1_11}
\frac{1}{s}+\frac{\alpha}{n-1}=a\Big(\frac{1}{p}+\frac{\mu-1}{n-1}\Big)+(1-a)\Big(\frac{1}{q}+\frac{\beta}{n-1}\Big).
\end{equation}
If (\ref{eqNC_1_11}) holds, then either Case 1 or Case 2 holds.
\medskip
We divide the proof of Case 2 into two subcases.
\medskip
\noindent \emph{Subcase 2.1.} $1/p+(\mu-1)/(n-1)=1/q+\beta/(n-1)$.
\medskip
In this subcase, we have, in view of (\ref{eqNC_1_11}), that $1/s+\alpha/(n-1)=1/p+(\mu-1)/(n-1)=1/q+\beta/(n-1)$.
For $0<\epsilon<1$, let
\[
f_3(\rho)=\left\{
\begin{array}{ll}
\displaystyle \rho^{-\alpha-(n-1)/s+\epsilon}, & 0< \rho\le 1, \\
\displaystyle 1, & 1\le \rho\le 2, \\
\displaystyle \frac{4-\rho}{2}, & 2\le \rho\le 4,\\
\displaystyle 0, & \rho\ge 4.
\end{array}
\right.
\]
Let $u(x):=f_3(|x'|)g(x_n)$. Then $u$ satisfies (\ref{eqNC}), as can be seen by approximating $u$ with
$u_{\delta}(x):=f_3(\sqrt{|x'|^2+\delta^2})g(x_n)$ and sending $\delta\to 0^+$.
By computation, we have
\begin{equation}\label{eqNC_1_8}
\int_{\mathbb{R}^n}||x|^{\gamma_1}|x'|^{\alpha}u|^s dx \ge \frac{1}{C}\int_{0\le |x'|\le 1}||x'|^{\alpha}f_3(|x'|)|^sdx'
\ge \frac{1}{C}\int_{0}^{1}\rho^{-1+\epsilon s}d\rho
\ge \frac{1}{C}\epsilon^{-1}. \end{equation}
Since $1/s+\alpha/(n-1)=1/p+(\mu-1)/(n-1)>0$, we have
\begin{equation}\label{eqNC_1_9}
\begin{split}
\int_{\mathbb{R}^{n}}||x|^{\gamma_2}|x'|^{\mu}\nabla u|^pdx & \le C\int_{|x'|\le 4}|x'|^{\mu p}(|\nabla f_3|^p+|f_3|^p)dx'\\
& \le C\int_{0}^{4}\rho^{\mu p+(-\alpha-(n-1)/s-1+\epsilon)p+n-2}d\rho\\
& = C\int_{0}^{4}\rho^{\epsilon p-1}d\rho
\le C\epsilon^{-1}.
\end{split}
\end{equation}
Similarly, since $1/s+\alpha/(n-1)=1/q+\beta/(n-1)>0$, we have
\begin{equation}\label{eqNC_1_10}
\int_{\mathbb{R}^{n}}||x|^{\gamma_3}|x'|^{\beta} u|^qdx \le C\int_{|x'|\le 4}||x'|^{\beta}f_3(|x'|)|^qdx' \le C\epsilon^{-1}.
\end{equation}
So by (\ref{eqNC}), (\ref{eqNC_1_8}), (\ref{eqNC_1_9}) and (\ref{eqNC_1_10}), we have
\[
\epsilon^{-1/s}\le C\epsilon^{-a/p-(1-a)/q}
\]
for arbitrarily small $\epsilon$. So the inequality in (\ref{eqNCA_7}) follows in this subcase.
\medskip
\noindent \emph{Subcase 2.2.} $1/p+(\mu-1)/(n-1)\ne 1/q+\beta/(n-1)$.
\medskip
Introduce the spherical coordinates in $\mathbb{R}^n_+$: $r=|x|$, $\theta=x/|x|$.
Letting $\theta'=x'/|x|$, we have $\theta=(\theta', \sqrt{1-|\theta'|^2})$. For simplicity, we denote $x=(r, \theta')$.
Fix $\delta>0$ small, and let $R_0:=\{x\in \mathbb{R}^n_+ \mid 1<r<2, \delta<|\theta'|<2\delta\}$.
Fix a function $u\in C_c^{\infty}(R_0)\setminus\{0\}$, and let
\[
u_j(r, \theta'):=
2^{(b_1\kappa+d_1)j}u(2^{\kappa j}r, 2^{j}\theta'), \quad j\ge 1,
\]
where $b_1=n/s+\gamma_1+\alpha$, $d_1=(n-1)/s+\alpha$, and $\kappa$ is some $j$-independent constant to be determined later.
Then $u_j\in C_c^{\infty}(R_j)$, where
\[
R_j:=\{x\in \mathbb{R}^n \mid 2^{-\kappa j}<r<2^{-\kappa j+1}, \ \ 2^{-j}\delta<|\theta'|<2^{-j+1}\delta\}.
\]
Denote
\[
I_0:=\||x|^{\gamma_1}|x'|^{\alpha}u\|_{L^s(R_0)}, \quad A_0:=\||x|^{\gamma_2}|x'|^{\mu}\nabla u\|_{L^p(R_0)}, \quad B_0:=\||x|^{\gamma_3}|x'|^{\beta}u\|_{L^q(R_0)}.
\]
In the following, the notation $A\simeq B$ means $B/C\le A\le CB$, and $A\lesssim B$ means $A\le CB$, for some $C>1$ depending only on $s, p, q, a, \gamma_1, \gamma_2, \gamma_3, \alpha, \mu$ and $\beta$.
For any $\kappa\in\mathbb{R}$, we have
\begin{equation}\label{eqNC_1_13}
\||x|^{\gamma_1}|x'|^{\alpha}u_j\|_{L^s(R_j)}
\simeq I_0.
\end{equation}
Another computation gives
\begin{equation}\label{eqNC_1_14'}
\||x|^{\gamma_2}|x'|^{\mu}\nabla u_j\|_{L^p(R_j)}
\lesssim
2^{(b_1\kappa+d_1-b_2\kappa-d_2)j}A_0
\end{equation}
where $b_2=n/p+\gamma_2+\mu-1$ and $d_2=(n-1)/p+\mu-1$. Since we are in the case when $1/p+(\mu-1)/(n-1)\ne 1/q+\beta/(n-1)$ and $1/p+(\gamma_2+\mu-1)/n\ne 1/q+(\gamma_3+\beta)/n$, we have, using (\ref{eqNCA_5}) and (\ref{eqNC_1_11}), that $b_1\ne b_2$ and $d_1\ne d_2$. Now we fix
\[
\kappa:=\frac{d_2-d_1}{b_1-b_2}\in \mathbb{R}\setminus\{0\},
\]
so that
\begin{equation}\label{eqNC_1_14}
\||x|^{\gamma_2}|x'|^{\mu}\nabla u_j\|_{L^p(R_j)}\lesssim A_0.
\end{equation}
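Indeed, with this choice of $\kappa$ the exponent in (\ref{eqNC_1_14'}) vanishes:
\[
b_1\kappa+d_1-b_2\kappa-d_2=(b_1-b_2)\cdot\frac{d_2-d_1}{b_1-b_2}+d_1-d_2=0.
\]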
Using (\ref{eqNCA_5}), (\ref{eqNC_1_11}), and the definition of $\kappa$, we have
\begin{equation}\label{eqNC_1_15}
\||x|^{\gamma_3}|x'|^{\beta}u_j\|_{L^q(R_j)}\simeq B_0.
\end{equation}
For any positive integer $m$, $w:=\sum_{j=1}^{m}u_j \in C_c^{\infty}(\mathbb{R}^n)$. Since $(\text{\textup{supp}}\, u_j)\cap (\text{\textup{supp}}\, u_i)=\emptyset$ for $i\ne j$, we have, by (\ref{eqNC_1_13}), (\ref{eqNC_1_14}) and (\ref{eqNC_1_15}), that
\[
\||x|^{\gamma_1}|x'|^{\alpha}w\|^s_{L^s(\mathbb{R}^n)}\simeq mI_0^s, \quad \||x|^{\gamma_2}|x'|^{\mu}\nabla w\|^p_{L^p(\mathbb{R}^n)}\lesssim mA^p_0,
\quad \||x|^{\gamma_3}|x'|^{\beta}w\|^q_{L^q(\mathbb{R}^n)}\simeq mB_0^q.
\]
If $w$ satisfies (\ref{eqNC}), then we have, by the above, that
\[
m^{1/s}\le C\frac{A_0^{a}B_0^{1-a}}{I_0}m^{a/p+(1-a)/q}.
\]
Since $I_0, A_0, B_0>0$ and $m$ can be arbitrarily large, we have $1/s\le a/p+(1-a)/q$. The inequality in (\ref{eqNCA_7}) is proved.
\end{proof}
\noindent\emph{Proof of the necessity part of Theorem \ref{thmD_2}}: Let $n\ge 1$, $s, p, q, a, \gamma_1, \gamma_2$ and $\gamma_3$ satisfy (\ref{eqNCA_1}) and (\ref{eqNCB_2}). We show that if (\ref{eqD_2_2}) holds for all $u\in C_c^\infty(\mathbb{R}^n)$, then (\ref{eqNCB_3})-(\ref{eqNCB_5}) hold. This is the same as
in \cite{CKN} when $q\ge 1$, while the proof there applies to $q>0$ as well.
Since the formulation of our conditions is somewhat different from that in \cite{CKN}, we present
a proof of the necessity of (\ref{eqNCB_3})-(\ref{eqNCB_5}) using similar arguments as in the proof of Lemma \ref{lemNC_1}.
Condition (\ref{eqNCB_3}) follows from a dimensional analysis argument as in the proof of (\ref{eqNCA_5})
with $\alpha=\mu=\beta=0$.
Setting $\alpha=\mu=\beta=0$ and $x_0=(0, ..., 0, R)$ in the proof of (\ref{eqNCA_6_1}), the same arguments give (\ref{eqNCB_4}).
To prove (\ref{eqNCB_5}), let $u=f_2(|x|)$ where $f_2$ is given by (\ref{eqNC_1_f2}) with $\alpha=0$, and insert $u$ into (\ref{eqD_2_2}).
When $0<a<1$, it suffices to consider the case $1/s+\gamma_1/n=1/p+(\gamma_2-1)/n=1/q+\gamma_3/n$. Similar to (\ref{eqNC_1_5})-(\ref{eqNC_1_7}), we have
\[
\||x|^{\gamma_1}u\|_{L^s(\mathbb{R}^n)}\ge \frac{1}{C}\epsilon^{-1/s}, \quad \||x|^{\gamma_2}\nabla u\|_{L^p(\mathbb{R}^n)}\le C\epsilon^{-1/p}, \quad \||x|^{\gamma_3}u\|_{L^q(\mathbb{R}^n)}\le C\epsilon^{-1/q}.
\]
Using (\ref{eqD_2_2}) and the above, we have $\epsilon^{-1/s}\le C\epsilon^{-a/p-(1-a)/q}$ for arbitrarily small $\epsilon$,
thus the inequality in (\ref{eqNCB_5}) follows.
In view of (\ref{eqNCB_3}), we have $1/s+\gamma_1/n=1/q+\gamma_3/n$ when $a=0$, and $1/s+\gamma_1/n=1/p+(\gamma_2-1)/n$ when $a=1$. The inequality in (\ref{eqNCB_5}) when $a=0$ or $a=1$ follows from the same proof for $0<a<1$.
\qed
\section{A nonlinear Poincar\'{e} inequality}\label{sec_3}
In this section, we give the proof of Theorem \ref{thm1-new}.
\bigskip
\noindent\emph{Proof of Theorem \ref{thm1-new}}: We divide the proof into three steps.
\bigskip
\noindent\textbf{Step 1.} We prove (\ref{est1-new}) under the hypotheses of the theorem.
\medskip
For $\lambda=1$, Theorem \ref{thm1-new} is a generalized Poincar\'{e} inequality (see e.g. Lemma 1.1.11 of \cite{Mazja}). In the rest of Step 1 we assume $\lambda\ne 1$.
Since $C^1(\bar{\Omega})$ is dense in $W^{1, p}(\Omega)$, we may assume without loss of generality that $w\in C^1(\bar{\Omega})$ and $w>0$ in $\bar{\Omega}$. Let $u:=w^{1/\lambda}$,
then inequality (\ref{est1-new}) takes an equivalent formulation:
for all $u\in C^1(\bar{\Omega})$ and $u>0$ in $\bar{\Omega}$,
\begin{equation*}
\|v\|_{ L^p(\Omega) }\le C \|\nabla v\|_{ L^p(\Omega) }, \ \ \mbox{where}\ v: = u^\lambda- (\bar u)^\lambda \ \ \mbox{and}\ \bar u:= \mathop{\ooalign{$\int$\cr$-$}}_{S}u.
\end{equation*}
We prove (\ref{est1-new}) by a contradiction argument.
Suppose the contrary; then
there exists a sequence of
positive functions $\{u_j\}\subset C^1(\overline \Omega)$ such that
\begin{equation}\label{est2-new}
v_j:= (u_j)^\lambda- (\bar u_j)^\lambda
\end{equation}
satisfies
\begin{equation}\label{est2-new-1}
1= \|v_j\|_{ L^p(\Omega) } > j\|\nabla v_j\|_{ L^p(\Omega) },
\end{equation}
where
$
\bar u_j := \mathop{\ooalign{$\int$\cr$-$}}_{S}u_j>0.
$
By (\ref{est2-new-1}) and the compact embedding of $W^{1,p}(\Omega)$ into $L^p(\Omega)$, there exists some $v\in W^{1, p}(\Omega)$ such that, after passing to a subsequence (still denoted by $\{v_j\}$), $v_j\rightharpoonup v$ in $W^{1, p}(\Omega)$, $v_j\to v$ in $L^p(\Omega)$ and a.e. in
$\Omega$, $\|\nabla v\|_{L^p(\Omega)}=0$, and $\|v\|_{L^p(\Omega)}=1$. Since $\Omega$ is connected and $\|\nabla v\|_{L^p(\Omega)}=0$, $v$ is a nonzero constant.
Now we have, using (\ref{est2-new-1}), that
\begin{equation}\label{eq6_2_1}
\|v_j-v\|_{W^{1, p}(\Omega)}\to 0.
\end{equation}
We divide into two cases, $\lambda>1$ and $0<\lambda<1$.
\bigskip
\noindent\emph{Case 1.} $\lambda>1$.
\medskip
In this case the function $s\to s^\lambda$ is convex, and therefore
\begin{equation*}
\bar v_j\ge \big(\mathop{\ooalign{$\int$\cr$-$}}_Su_j\big)^{\lambda}
-(\bar u_j)^\lambda=0.
\label{est5a}
\end{equation*}
Thus, by (\ref{eq6_2_1}), we have
$\bar{v}_j=\mathop{\ooalign{$\int$\cr$-$}}_{S}v_j\to v$, and hence the constant $v$ is positive.
Passing to another subsequence if necessary, we either have $\bar u_j\to \alpha \in [0, \infty)$ or $\bar u_j\to \infty$.
If $\bar u_j\to \alpha \in [0, \infty)$, we have
\[
u_j\to (v+\alpha^\lambda)^{1/\lambda}\ \ \mbox{a.e. in}\ \Omega.
\]
By Fatou's lemma,
\[
|S| (v+\alpha^\lambda)^{1/\lambda}=
\int_{ S }\liminf_{j\to \infty} u_j
\le \liminf_{j\to \infty}\int_{S } u_j= \alpha |S|.
\]
A contradiction, since $v>0$ and $\alpha \ge 0$. So inequality (\ref{est1-new}) holds when $\bar u_j\to \alpha\in[0, \infty)$.
In the rest of Case 1, we assume $\bar u_j\to \infty$.
Denote $a_j: = \bar u_j \to \infty$, and write
\begin{equation}\label{est3-new}
0\le u_j = a_j+\eta_j.
\end{equation}
Then
\begin{equation}\label{est4-new}
\int_{ S }\eta_j=0,\quad \forall\ j,
\end{equation}
and, by (\ref{est2-new}),
\begin{equation*}
v_j= (a_j+\eta_j)^\lambda - (a_j)^\lambda.
\end{equation*}
We will show that this leads to a contradiction.
\medskip
Write
$
v_j^+(\theta) = \max\{ v_j(\theta), 0\}$,
$v_j^-(\theta)=\max\{ -v_j(\theta), 0\}$,
$\theta\in \overline \Omega$.
Then $v_j=v_j^+-v_j^-$.
By (\ref{eq6_2_1}) and the positivity of $v$, we have
\begin{equation}\label{lem1-new}
\|v_j^{+}-v\|_{L^1(\Omega)}\to 0, \quad \|v_j^{-}\|_{L^1(\Omega)}\to 0.
\end{equation}
\begin{lem}
\begin{equation*}
(a_j)^{\lambda-1} \int_{ \Omega } \eta_j^{-}\to 0, \quad \textrm{and}\quad
(a_j)^{\lambda-1} \int_{ S } |\eta_j|\to 0.
\label{est6}
\end{equation*}
\label{lem1-3-new}
\end{lem}
\begin{proof}
Write
\[
v_j^-= (a_j)^\lambda- (a_j-\eta_j^-)^\lambda
= (a_j)^\lambda \big( 1-(1- \frac {\eta_j^-}{a_j})^\lambda \big),
\]
and recall from (\ref{est3-new}) that $0\le \eta_j^-\le a_j$. Since $\lambda> 1$, we have the following elementary inequality:
\[
g(t):= 1- (1-t)^\lambda - t\ge 0,\quad\forall\ 0\le t\le 1.
\]
Indeed, the above inequality holds due to the concavity of $g$ in
$[0, 1]$ ($g''(t)=-\lambda(\lambda-1) (1-t)^{ \lambda-2}<0$
for all $0<t<1$) and the fact that $g(0)=g(1)=0$.
Now we have, using (\ref{lem1-new}) and the above, that
\[
o(1)= \int_{ \Omega } v_j^-= (a_j)^\lambda \int_{ \Omega }\Big( 1-\big(1- \frac {\eta_j^-}{a_j}\big)^\lambda \Big)
\ge (a_j)^{\lambda} \int_{ \Omega }
\frac { \eta_j^-} {a_j}=
(a_j)^{\lambda-1}
\int_{ \Omega } \eta_j^-.
\]
Lemma \ref{lem1-3-new} follows from the above and (\ref{est4-new}).
\end{proof}
\begin{lem} There exists some positive constant $C$ independent of $j$
such that
$$
\int_{ S } (\eta_j^+)^\lambda \ge \frac 1C, \quad \forall\ j.
$$
\label{lem1-4-new}
\end{lem}
\begin{proof}
We will use the following elementary inequality:
for $\lambda\ge 1$, there exists some positive constant $C$, depending
only on $\lambda$, such that
\[
(1+t)^\lambda-1 \le C(t^\lambda+t),\quad\forall \ t\ge 0.
\]
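This elementary inequality can be checked by treating $0\le t\le 1$ and $t\ge 1$ separately:
\[
(1+t)^\lambda-1\le \lambda 2^{\lambda-1}t \quad (0\le t\le 1),\qquad (1+t)^\lambda\le (2t)^\lambda=2^{\lambda}t^{\lambda}\quad (t\ge 1),
\]
so one may take $C=\max\{\lambda 2^{\lambda-1}, 2^{\lambda}\}$.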
With this constant $C$, we have, using (\ref{lem1-new}), that
\[
\begin{split}
v|S|+o(1) & = \int_{S } v_j^+
= (a_j)^\lambda \int_{ S }
\Big( \big(1+ \frac {\eta_j^+}{a_j}\big)^\lambda -1\Big)
\\
& \le C (a_j)^\lambda \int_{ S }
\Big( \big( \frac {\eta_j^+}{a_j}\big)^\lambda +
\frac {\eta_j^+}{a_j}\Big)
= C \int_{ S }
\Big( (\eta_j^+)^\lambda +(a_j)^{\lambda-1} \eta_j^+\Big).
\end{split}
\]
Lemma \ref{lem1-4-new} follows from the above in view of
Lemma \ref{lem1-3-new}.
\end{proof}
\begin{lem}
For every $\epsilon>0$,
$(a_j)^{\lambda-1}|\{\eta_j>\epsilon\}|\to 0$.
\label{lem1-5-new}
\end{lem}
\begin{proof}
Since
\[
v_j\ge (a_j+\epsilon)^\lambda- (a_j)^\lambda>0
\quad\mbox{on}\ \ \{\eta_j>\epsilon\},
\]
we have, using (\ref{lem1-new}), that
\[
v|\Omega|+o(1)= \int_{ \Omega } v_j^+\ge
\int_{ \{\eta_j>\epsilon\} }
\left( (a_j+\epsilon)^\lambda- (a_j)^\lambda \right)
=\left( (a_j+\epsilon)^\lambda- (a_j)^\lambda \right) | \{\eta_j>\epsilon\}|
\ge \lambda\epsilon\, (a_j)^{\lambda-1}| \{\eta_j>\epsilon\}|.
\]
In particular, $| \{\eta_j>\epsilon\}|\le C(a_j)^{1-\lambda}\to 0$, since $\lambda>1$ and $a_j\to \infty$. Using (\ref{lem1-new}) again, we then have
\[
\lambda\epsilon\, (a_j)^{\lambda-1}| \{\eta_j>\epsilon\}|\le \int_{ \{\eta_j>\epsilon\} } v_j^+\le \int_{ \Omega }|v_j^+-v|+v| \{\eta_j>\epsilon\}|\to 0.
\]
Lemma \ref{lem1-5-new} follows.
\end{proof}
\begin{lem}
For every $\epsilon>0$,
$
\int_{\Omega }
\left[ (\eta_j-\epsilon)^+\right]^\lambda\to 0$ as $j\to \infty$.
\label{lem1-6-new}
\end{lem}
\begin{proof}
For $\epsilon>0$,
denote $\xi_j:= \left[ (\eta_j-\epsilon)^+\right]^\lambda$.
By Lemma \ref{lem1-5-new},
$|\{ \xi_j=0\}|\to |\Omega|>0$ as $j\to \infty$.
Applying a generalized Poincar\'{e} inequality (see e.g.
Lemma 7.16 and Lemma 7.12 in \cite{GT}; write $\Omega$ as the union of finitely many convex open sets and apply these lemmas on each of them) and using
(\ref{est2-new-1}) and the fact that $\lambda> 1$, we have
\[
\begin{split}
\int_{ \Omega } \xi_j & \le C \int_{ \Omega } |\nabla \xi_j|
\le
C \int_{ \{\eta_j>\epsilon\} } \left[(\eta_j-\epsilon)^+\right]^{\lambda-1} |\nabla \eta_j^+|
\\
& \le C \int_{ \{\eta_j>\epsilon\}} (\eta_j^+)^{\lambda-1} |\nabla \eta_j^+|
\le C \int_{ \{\eta_j>\epsilon\} } |\nabla v_j|\to 0.
\end{split}
\]
Lemma \ref{lem1-6-new} is established.
\end{proof}
\bigskip
For every $\epsilon>0$,
write $\eta_j=(\eta_j-\epsilon)+\epsilon$, so that $\eta_j^+\le (\eta_j-\epsilon)^+ + \epsilon$.
Thus
\[
(\eta_j^+)^\lambda \le 2^\lambda \left[ (\eta_j-\epsilon)^+\right]^\lambda
+2^\lambda \epsilon^\lambda.
\]
It follows, using Lemma \ref{lem1-4-new} and Lemma \ref{lem1-6-new}, that
\[
0<\frac 1C\le \int_{ S }
(\eta_j^+)^\lambda \le
C \int_{ S } \left[ (\eta_j-\epsilon)^+\right]^\lambda
+2^\lambda \epsilon^\lambda | S|
\le o(1)+ C\epsilon^{\lambda}.
\]
Sending $j$ to $\infty$, we have from the above that $0< 1/C \le C \epsilon^{\lambda}$. Sending $\epsilon$ to $0$, we have $0< 1/C\le 0$, a contradiction.
Estimate (\ref{est1-new}) is established in Case 1.
\bigskip
\noindent\emph{Case 2.} $0<\lambda<1$.
\medskip
Recall that $p\ge n/ (1+n\lambda)$.
Since $0<\lambda<1$, the function $s\to s^{\lambda}$ is concave, and we have
\begin{equation*}
\bar v_j\le \big(\mathop{\ooalign{$\int$\cr$-$}}_{S}u_j\big)^{\lambda}
-(\bar u_j)^\lambda=0.
\label{E}
\end{equation*}
Thus, by (\ref{eq6_2_1}), we have
\begin{equation}\label{eq6_2_2}
\bar v_j=\mathop{\ooalign{$\int$\cr$-$}}_{S}v_j\to v<0.
\end{equation}
Fix a $\delta>0$ satisfying $1+\delta\le \min\{2, 1/\lambda\}$.
We will make use of the following elementary fact: For $0<\lambda<1$,
there exists some positive constant $C$, depending only on $\lambda$ and $\delta$, such that
\begin{equation*}
\left| (1+t)^{ 1/\lambda } -1-\frac 1\lambda t\right|\le
C(|t|^{1+\delta} +|t|^{ 1/\lambda }),
\quad \forall\ -1\le t<\infty.
\label{F}
\end{equation*}
By (\ref{est2-new}),
\begin{equation}\label{G}
u_j =\left( v_j+ (\bar u_j)^\lambda \right)^{ 1/\lambda}.
\end{equation}
Integrating the above over $S$ gives, with $C$ given by the one in
(\ref{F}), that
\begin{equation}\label{eq6_2_3}
\begin{split}
0 &= \frac 1{ |S| }\int_{S} \Big( \left[ v_j+ (\bar u_j)^\lambda \right] ^{ 1/\lambda}-\bar u_j\Big) = (\bar u_j) \frac 1{ |S| }\int_{S} \Big( \left[ 1+ (\bar u_j) ^{-\lambda} v_j \right] ^{ 1/\lambda} -1\Big)\\
& \le \frac 1\lambda (\bar u_j) ^{1-\lambda}\bar v_j + \frac {C \bar u_j }
{\lambda|S| } \int_{S} \Big( \left| (\bar u_j) ^{-\lambda} v_j\right|^{1+\delta}+ \left| (\bar u_j) ^{-\lambda} v_j\right|^{1/\lambda} \Big).
\end{split}
\end{equation}
Since $1+\delta\le 1/\lambda$, $W^{1,p}(\Omega)$ embeds into $L^{1/\lambda}(\Omega)$ and $L^{1+\delta}(\Omega)$ by the assumption on $p$. By this and (\ref{eq6_2_1}), we have
\begin{equation}\label{eq6_2_4}
\|v_j-v\|_{ L^{1+\delta}(\Omega) }\le C\|v_j-v\|_{ L^{1/\lambda}(\Omega) }
\le C\|v_j-v\|_{W^{1, p}(\Omega)}\to 0.
\end{equation}
We deduce from (\ref{eq6_2_3}), using
(\ref{eq6_2_2}) and (\ref{eq6_2_4}), that
\[
|v|+o(1)=-\bar v_j \le C \Big( (\bar u_j) ^{-\delta\lambda} \int_\Omega |v_j|^{1+\delta}+ (\bar u_j) ^{\lambda-1} \int_\Omega |v_j|^{1/\lambda}\Big)
\le C \left( (\bar u_j) ^{-\delta\lambda} +(\bar u_j) ^{\lambda-1} \right).
\]
Since $v\ne 0$, we have the boundedness of $\{\bar u_j\}$.
Passing to a subsequence,
$\bar u_j\to \alpha$ for some $\alpha\in [0, \infty)$.
Integrating
(\ref{G}) over $S$ and using (\ref{eq6_2_4})
and $\bar u_j\to\alpha$, we have
\[
\alpha +o(1)
= \mathop{\ooalign{$\int$\cr$-$}}_{S}
\Big( v_j+ (\bar u_j)^\lambda \Big)^{ 1/\lambda}
= \Big( v+ \alpha^\lambda \Big)^{ 1/\lambda} +o(1).
\]
It follows that $ \alpha = \left( v+ \alpha^\lambda \right)^{ 1/\lambda}$ which implies
that $v= 0$.
A contradiction. Estimate (\ref{est1-new}) is established in Case 2. Step 1 is completed.
\bigskip
\noindent\textbf{Step 2.} Inequality (\ref{est1-new}) does not hold if $0<\lambda<1$ and $0<p< n/(1+n\lambda)$.
\medskip
For simplicity, we let $\Omega\subset \mathbb{R}^n$ be a bounded open set, and $S\subset \Omega$ have positive Lebesgue measure. Take a Lebesgue point $\bar{x}$ of $S$, i.e. $\lim_{r\to 0^+}|B_r(\bar{x})\cap S|/|B_r(\bar{x})|=1$. For convenience, we assume $\bar{x}=0$.
For small $\epsilon>0$ and large $\alpha>1$, let
\[
v(x)=
\left\{
\begin{array}{ll}
-1, & \mbox{if}\ |x|\ge \epsilon,\\
\displaystyle -1+\alpha \Big( 1- \frac {|x|}\epsilon\Big), & \mbox{if}\
|x|\le \epsilon.
\end{array}
\right.
\]
In the following, $C$ denotes some positive constant independent of $\alpha$ and $\epsilon$.
A calculation gives
\begin{equation*}
\int_\Omega |v|^p=|\Omega\setminus B_{\epsilon}|+\int_{B_{\epsilon}}|v|^p\ge |\Omega|+\frac{1}{C}\alpha^p\epsilon^n,
\end{equation*}
\[
\int_\Omega |\nabla v|^p =\alpha^p \epsilon^{n-p} |B_1|,
\]
\[
\int_{S} (v+1)^{1/\lambda}
= \alpha^{ 1/\lambda} \epsilon^n \int_{ \{|y|\le 1, \ \epsilon y\in S\}} (1-|y|)^{1/\lambda}dy.
\]
Since $0$ is a Lebesgue point of $S$, we have
\[
\lim_{\epsilon\to 0^+}\frac{|\{|y|\le 1, \ \epsilon y\in S\}|}{|\{|y|\le 1\}|}=\lim_{\epsilon\to 0^+}\frac{|B_{\epsilon}(0)\cap S|}{|B_{\epsilon}(0)|}=1.
\]
It follows that
\[
\lim_{\epsilon\to 0^+}\int_{\{|y|\le 1, \ \epsilon y\in S\}}(1-|y|)^{1/\lambda}dy=\int_{|y|\le 1}(1-|y|)^{1/\lambda}dy>0.
\]
Now we fix the value of $\alpha$ so that
$\int_{S} (v+1)^{1/\lambda}= |S|$.
So $\alpha\le C \epsilon^{-n\lambda}$.
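Indeed, the normalization reads $\alpha^{1/\lambda}\epsilon^{n}\int_{\{|y|\le 1,\ \epsilon y\in S\}}(1-|y|)^{1/\lambda}dy=|S|$, and since this integral converges to $\int_{|y|\le 1}(1-|y|)^{1/\lambda}dy>0$ as $\epsilon\to 0^+$, we get
\[
\alpha=\bigg(\frac{|S|}{\epsilon^{n}\int_{\{|y|\le 1,\ \epsilon y\in S\}}(1-|y|)^{1/\lambda}dy}\bigg)^{\lambda}\le C\epsilon^{-n\lambda}
\]
for all small $\epsilon$.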
Consider
\[
u:= (v+1)^{1/\lambda}.
\]
Then $\bar u=\mathop{\ooalign{$\int$\cr$-$}}_{S}u=1$, $u\ge 0$, $v= u^\lambda- \bar u^{\lambda}$.
Using $p<n/(1+n\lambda)$,
\[
\int_\Omega |\nabla v|^p\le C\alpha ^p \epsilon^{n-p}\le C\epsilon^{ n-(1+n\lambda)p} \to 0.
\]
This, together with the lower bound $\int_\Omega|v|^pdx\ge |\Omega|>0$ obtained above, violates (\ref{est1-new}) for any choice of $C$. Step 2 is completed.
\bigskip
\noindent\textbf{Step 3.} Inequality (\ref{est1-new}) does not hold if $0<\lambda<\infty$ and $0<p<1$.
\medskip
For simplicity, we take $\Omega=S=[-1, 1]^n$. For $\alpha>0$ small, let
\[
f(x_1):=\left\{
\begin{array}{ll}
|x_1|^{\alpha}, & x_1<0,\\
-|x_1|^{\alpha}, & x_1\ge 0,
\end{array}
\right.
\]
and
\[
w(x):=(2+f(x_1))^{\lambda}.
\]
Then $w\in W^{1, p}([-1, 1]^n)$ and $w\ge 1$.
By the definition of $w$, we have $\mathop{\ooalign{$\int$\cr$-$}}_{[-1, 1]^n}w^{1/\lambda}=2$. Let $v=w-(\mathop{\ooalign{$\int$\cr$-$}}_{[-1, 1]^n}w^{1/\lambda})^{\lambda}$. We have,
for some constant $C>0$ depending only on $\lambda$ and $p$, that
\begin{equation}\label{eq6_2_6}
\int_{[-1, 1]^n}|v(x)|^pdx\ge \int_{[1/2, 1]^n}|v(x)|^pdx=\int_{[1/2, 1]^n}|(2-|x_1|^{\alpha})^{\lambda}-2^{\lambda}|^pdx\ge \frac{1}{C}.
\end{equation}
On the other hand, by the assumption that $0<p<1$, we have
\[
\begin{split}
\int_{[-1, 1]^n} |\nabla v|^pdx & = \int_{[-1, 1]^n}|\lambda(2+f(x_1))^{\lambda-1}f'(x_1)|^pdx \\
& \le C\int_{-1}^{1}\alpha^p|x_1|^{(\alpha-1)p}dx_1\le C\alpha^{p} \to 0
\end{split}
\]
as $\alpha\to 0$.
This and (\ref{eq6_2_6}) violate (\ref{est1-new}). Step 3 is completed. Theorem \ref{thm1-new} is proved.
\qed
\bigskip
\noindent\emph{Proof of Corollary \ref{cor_new}}:
If $q\le 1$, then, by Theorem \ref{thm1-new} with $\lambda=1/q$, we have
\[
\|w\|_{L^p(\Omega)}\le \big(\mathop{\ooalign{$\int$\cr$-$}}_{S}w^q\big)^{1/q}\cdot |\Omega|^{1/p}+ \|w-\big(\mathop{\ooalign{$\int$\cr$-$}}_{S}w^q\big)^{1/q}\|_{L^p(\Omega)}\le \big(\mathop{\ooalign{$\int$\cr$-$}}_Sw^{q}\big)^{1/q}|\Omega|^{1/p}+C \|\nabla w \|_{ L^p(\Omega) }.
\]
If $q>1$, then (\ref{est1-newcor}) follows from the result for $q=1$ and H\"{o}lder's inequality.
The corollary is proved.
\qed
\section{Extension of the Caffarelli-Kohn-Nirenberg inequalities from $q\ge 1$ to $q>0$}\label{sec_4}
In this section, we prove Theorem \ref{thmD_2}. The necessity part has been established in Section \ref{sec_2}. The sufficiency part follows from the following theorem which includes the inequalities on cones.
Let
$
\mathbb{S}^{n-1}:= \{x\in \mathbb{R}^{n}\ |\ |x|=1\}
$
be the unit sphere in $\mathbb{R}^{n}$.
For any $\Omega\subset\mathbb{S}^{n-1}$ with nonempty Lipschitz boundary, denote the cone
\begin{equation}\label{eq_cone}
K:=\{rx\mid r\ge 0, \ x\in \Omega\}.
\end{equation}
\begin{thm}\label{thmQ_2}
Let $n\ge 1$, $K=\mathbb{R}^n$ or $K$ be as above, and $s, p, q, \gamma_1, \gamma_2, \gamma_3, a$ satisfy (\ref{eqNCA_1}) and (\ref{eqNCB_2})-(\ref{eqNCB_5}).
Then there exists some positive constant $C$
such that for all $u\in C^{0, 1}_c(\overline{K})$
\begin{equation}\label{eqQ2_1}
\||x|^{\gamma_1}u\|_{L^s(K)}\le
C\||x|^{\gamma_2}\nabla u\|_{L^p(K)}^a\||x|^{\gamma_3}u\|_{L^q(K)}^{1-a}.
\end{equation}
Furthermore, on any compact set in the parameter space in which (\ref{eqNCA_1}) and (\ref{eqNCB_2}) hold,
the constant $C$ is bounded.
\end{thm}
\begin{lem}\label{lemQ_1}
Let $n\ge 1$, $0<r_1<r_2<\infty$, $K=\mathbb{R}^n$ or $K$ be given by (\ref{eq_cone}), $s, p, q, a, \gamma_1, \gamma_2$ and $\gamma_3$ satisfy (\ref{eqNCA_1}), (\ref{eqNCB_3}), (\ref{eqNCB_4}),
$1/s+\gamma_1/n>0$, and $1/s\le a/p+(1-a)/q$.
Then there exists some positive constant $C$, depending only on $s, p, q, a, \gamma_1, \gamma_2, \gamma_3$, $r_1, r_2$ and $\Omega$,
such that for all $u\in C^{0, 1}(K\cap B_{r_2})$,
\begin{equation}\label{eqQ_1}
\||x|^{\gamma_1}u\|_{L^s(K\cap B_{r_1})}\le C \||x|^{\gamma_1}u\|_{L^s(K\cap B_{r_2}\setminus B_{r_1})}+
C\||x|^{\gamma_2}\nabla u\|_{L^p(K\cap B_{r_2})}^a\||x|^{\gamma_3}u\|_{L^q(K\cap B_{r_1})}^{1-a}.
\end{equation}
\end{lem}
\begin{proof}
For simplicity, we only prove (\ref{eqQ_1}) for $r_1=1$ and $r_2=2$. The general case can be proved similarly.
For $a=0$, we deduce from (\ref{eqNCB_3}), (\ref{eqNCB_4}) and $1/s\le a/p+(1-a)/q$ that $\gamma_1=\gamma_3$ and $s=q$, thus (\ref{eqQ_1}) is obvious.
In the rest of the proof we assume $0<a\le 1$. Without loss of generality, assume $u\ge 0$.
\bigskip
\noindent\textbf{Step 1.} We prove (\ref{eqQ_1}) for $p=1$ and $\gamma_1=0$.
\medskip
Let
\[
R_k:=\{x\in K \mid 2^{k-1}\le |x|\le 2^{k}\}, \quad k\in \mathbb{Z}.
\]
Denote
\begin{equation*}
A_k:=\int_{R_k}|u|^sdx,\quad
M_k:=\int_{R_k}||x|^{\gamma_2}\nabla u(x)|dx, \quad N_k:=\int_{R_k}||x|^{\gamma_3}u|^qdx.
\end{equation*}
We first establish for any $0<\epsilon<2^{an}-1$ that
\begin{equation}\label{eqQ_2}
A_k \le \theta A_{k+1}+C(M_{k}+M_{k+1})^{as}N_k^{(1-a)s/q}, \quad k\in \mathbb{Z},
\end{equation}
where
\begin{equation*}
\theta:=\frac{a(1+\epsilon)}{2^{an}-(1+\epsilon)(1-a)},
\end{equation*}
and $C$ depends only on $s, q, a, \gamma_1, \gamma_2, \gamma_3$, $r_1, r_2, K$ and $\epsilon$.
Since $K$ is a cone, by (\ref{eqNCB_3}) and scaling, we only need to prove (\ref{eqQ_2}) for $k=0$.
Let $\bar{u}=\mathop{\ooalign{$\int$\cr$-$}}_{R_1}u(y)dy$. For any $0<\epsilon<2^{an}-1$, $x\in R_0$ and $\xi\in R_1$,
we have
\begin{equation}
\begin{split}
|u(x)|^s & =|u(x)|^{(1-a)s}|u(x)|^{as}\\
& \le |u(x)|^{(1-a)s}(|u(x)-\bar{u}|+|\bar{u}-u(\xi)|+|u(\xi)|)^{as}\\
& \le (1+\epsilon) |u(x)|^{(1-a)s}|u(\xi)|^{as}+C|u(x)|^{(1-a)s}(|u(x)-\bar{u}|+|\bar{u}-u(\xi)|)^{as}.
\end{split}
\end{equation}
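Here the last step uses the elementary inequality
\[
(A+B)^{\sigma}\le (1+\epsilon)A^{\sigma}+C(\sigma, \epsilon)B^{\sigma}, \qquad A, B\ge 0,
\]
valid for every $\sigma>0$ and $\epsilon>0$ (a short verification: if $B\le \delta A$ then $A+B\le (1+\delta)A$, otherwise $A+B\le (1+1/\delta)B$; then choose $\delta>0$ with $(1+\delta)^{\sigma}=1+\epsilon$), applied with $\sigma=as$, $A=|u(\xi)|$ and $B=|u(x)-\bar{u}|+|\bar{u}-u(\xi)|$.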
Taking $\mathop{\ooalign{$\int$\cr$-$}}_{R_1}\int_{R_0}\cdot dxd\xi$ of the above and using H\"{o}lder's inequality, we have
\begin{equation}\label{eqQ_2_1}
\begin{split}
A_0 & \le (1+\epsilon)\mathop{\ooalign{$\int$\cr$-$}}_{R_1}|u(\xi)|^{as}d\xi\int_{R_0}|u(x)|^{(1-a)s}dx+C\int_{R_0}|u(x)|^{(1-a)s}|u(x)-\bar{u}|^{as}dx\\
&+C\int_{R_0}|u(x)|^{(1-a)s}dx\mathop{\ooalign{$\int$\cr$-$}}_{R_1}|\bar{u}-u(\xi)|^{as}d\xi\\
&\le (1+\epsilon)\frac{|R_0|^a}{|R_1|^a}\left(\int_{R_1}|u(\xi)|^sd\xi\right)^a\left(\int_{R_0}|u(x)|^sdx\right)^{1-a}+C\int_{R_0}|u(x)|^{(1-a)s}|u(x)-\bar{u}|^{as}dx\\
&+C\int_{R_0}|u(x)|^{(1-a)s}dx\mathop{\ooalign{$\int$\cr$-$}}_{R_1}|\bar{u}-u(\xi)|^{as}d\xi\\
& =:(1+\epsilon)\frac{|R_0|^a}{|R_1|^a}A_0^{1-a}A_1^a+C(I_1+I_2).
\end{split}
\end{equation}
Since $p=1$, by (\ref{eqNCB_3}) and (\ref{eqNCB_4}), we have $1/s\ge a(1-1/n)+(1-a)/q$. Since we are in the case $1/s\le a/p+(1-a)/q$, we have $a(1-1/n)+(1-a)/q\le 1/s\le a+(1-a)/q$. Thus there exists some $t$ with $1\le t\le n/(n-1)$ ($1\le t\le \infty$ when $n=1$) such that $1/s=a/t+(1-a)/q$. Then by H\"{o}lder's inequality, the Sobolev inequality and Poincar\'{e}'s inequality, we have
\begin{equation}\label{eqQ_3}
\begin{split}
I_1 & \le C\|u-\bar{u}\|_{L^t(R_0\cup R_1)}^{as}\|u\|_{L^q(R_0)}^{(1-a)s}\\
& \le C\big( \|u-\bar{u}\|_{L^1(R_0\cup R_1)}+\|\nabla(u-\bar{u})\|_{L^1(R_0\cup R_1)}\big)^{as}N_0^{(1-a)s/q}\\
& \le C(M_0+M_1)^{as}N_0^{(1-a)s/q}.
\end{split}
\end{equation}
Similarly, we have
\begin{equation}\label{eqQ_4}
\begin{split}
I_2 & \le C\|u-\bar{u}\|_{L^t(R_1)}^{as}\|u\|_{L^q(R_0)}^{(1-a)s}\\
& \le C\big( \|u-\bar{u}\|_{L^1(R_1)}+\|\nabla(u-\bar{u})\|_{L^1(R_1)}\big)^{as}N_0^{(1-a)s/q}\\
& \le CM_1^{as}N_0^{(1-a)s/q}.
\end{split}
\end{equation}
By (\ref{eqQ_2_1}), (\ref{eqQ_3}), (\ref{eqQ_4}) and the fact that $|R_0|/|R_1|=2^{-n}$, we have, for any $0<\epsilon<2^{an}-1$, that
\[
\begin{split}
A_0
& \le (1+\epsilon)\frac{|R_0|^a}{|R_1|^a}A_0^{1-a}A_1^{a}+C(M_0+M_1)^{as}N_0^{(1-a)s/q}\\
& \le (1+\epsilon)2^{-an}((1-a)A_0+aA_1)+C(M_0+M_1)^{as}N_0^{(1-a)s/q}.
\end{split}
\]
Thus
\[
A_0\le \frac{a(1+\epsilon)}{2^{an}-(1+\epsilon)(1-a)}A_1+C(M_0+M_1)^{as}N_0^{(1-a)s/q}.
\]
So (\ref{eqQ_2}) holds for $k=0$, and therefore holds for all $k\in \mathbb{Z}$.
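As a record of the scaling step (a short verification): set $u_k(y):=u(2^ky)$; substituting $x=2^ky$ gives
\[
A_k=2^{kn}\int_{R_0}|u_k|^s\,dy, \qquad M_k=2^{k(n+\gamma_2-1)}\int_{R_0}\big||y|^{\gamma_2}\nabla u_k\big|\,dy, \qquad N_k=2^{k(n+\gamma_3 q)}\int_{R_0}\big||y|^{\gamma_3}u_k\big|^q\,dy,
\]
and the same identities hold with $R_0$ replaced by $R_1$ on the right and $k$ by $k+1$ on the left. Since (\ref{eqNCB_3}) with $p=1$ and $\gamma_1=0$ reads $n/s=a(n+\gamma_2-1)+(1-a)(n/q+\gamma_3)$, we have $n=(n+\gamma_2-1)as+(n+\gamma_3 q)(1-a)s/q$, so multiplying (\ref{eqQ_2}) for $u_k$ with $k=0$ by $2^{kn}$ yields (\ref{eqQ_2}) for $u$ at level $k$.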
Since $a>0$, $2^{an}>1$, and $0<\epsilon<2^{an}-1$, we have
$0<\theta<1$.
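Indeed, the denominator in the definition of $\theta$ satisfies
\[
2^{an}-(1+\epsilon)(1-a)>(1+\epsilon)-(1+\epsilon)(1-a)=a(1+\epsilon)>0,
\]
since $2^{an}>1+\epsilon$, which gives $0<\theta<1$.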
For $c, d\ge 0$, $c+d\ge 1$, and sequences $x_n, y_n\ge 0$, $n\ge 1$, we have
\begin{equation}\label{eqD_C}
\sum_{n=1}^{\infty} x_n^cy_n^d\le \Big(\sum_{n=1}^{\infty} x_n\Big)^c\Big(\sum_{n=1}^{\infty} y_n\Big)^d.
\end{equation}
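For completeness, here is a short proof of (\ref{eqD_C}): we may assume $X:=\sum_n x_n$ and $Y:=\sum_n y_n$ are finite and positive, for otherwise the inequality is trivial. By the weighted AM--GM inequality,
\[
\sum_{n=1}^{\infty} x_n^cy_n^d=X^cY^d\sum_{n=1}^{\infty}\Big(\frac{x_n}{X}\Big)^{c}\Big(\frac{y_n}{Y}\Big)^{d}\le X^cY^d\sum_{n=1}^{\infty}\Big(\frac{c}{c+d}\cdot\frac{x_n}{X}+\frac{d}{c+d}\cdot\frac{y_n}{Y}\Big)^{c+d}\le X^cY^d,
\]
where the last inequality uses that the summands $t_n:=\frac{c}{c+d}\frac{x_n}{X}+\frac{d}{c+d}\frac{y_n}{Y}$ satisfy $\sum_n t_n=1$, hence $t_n\le 1$ and $t_n^{c+d}\le t_n$ by $c+d\ge 1$.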
Taking the sum of (\ref{eqQ_2}) over $k\le 0$ and using (\ref{eqD_C}) with $c=(1-a)s/q$ and $d=as$ (note that $c+d\ge 1$ by the fact that $1/s\le a+(1-a)/q$),
we have that
\[
\begin{split}
\sum_{k\le 0}A_k & \le \theta A_1+\theta \sum_{k\le 0}A_k+C \sum_{k\le 0}(M_{k}+M_{k+1})^{as}N_k^{(1-a)s/q}\\
& \le \theta A_1+\theta \sum_{k\le 0}A_k+C\Big( \sum_{k\le 0}(M_{k}+M_{k+1})\Big)^{as}\Big( \sum_{k\le 0}N_k\Big)^{(1-a)s/q}. \\
\end{split}
\]
So
\begin{equation}\label{eqQ_8}
\begin{split}
\int_{K\cap B_1}|u|^sdx & =
\sum_{k\le 0}A_k \le \frac{\theta}{1-\theta} A_1+C\Big(\sum_{k\le 0}(M_{k}+M_{k+1})\Big)^{as}\Big(\sum_{k\le 0}N_k\Big)^{(1-a)s/q}\\
& \le \frac{\theta}{1-\theta} \int_{R_1}|u|^sdx+C \left(\int_{K\cap B_2}||x|^{\gamma_2}\nabla u|dx\right)^{as}\left(\int_{K\cap B_1}||x|^{\gamma_3}u|^qdx\right)^{(1-a)s/q}.
\end{split}
\end{equation}
Thus when $p=1$ and $\gamma_1=0$, (\ref{eqQ_1}) follows from (\ref{eqQ_8}).
\bigskip
\noindent \textbf{Step 2.} We prove (\ref{eqQ_1}) for $p=1$ and $\gamma_1\ne 0$.
\medskip
We will reduce it to Step 1. We make the change of variables $y=|x|^{\gamma_1 s/n}x$, and define $\tilde{u}(y):=u(x)$,
$\tilde{\gamma}_1=0$, $\tilde{\gamma}_2=(\gamma_2n+\gamma_1s(1-n))/(\gamma_1 s+n)$ and $\tilde{\gamma}_3=(\gamma_3 q-\gamma_1 s)n/(\gamma_1 s+n)q$.
We have $s, q>0$ from (\ref{eqNCA_1}) and $1/s+\tilde{\gamma}_1/n=1/s>0$.
By computation and using (\ref{eqNCB_3}),
\begin{equation}\label{eqQ_11}
a\Big(1+\frac{\tilde{\gamma}_2-1}{n}\Big)+(1-a)\Big(\frac{1}{q}+\frac{\tilde{\gamma}_3}{n}\Big)=\frac{n}{\gamma_1 s+n}\Big(a\Big(1+\frac{\gamma_2-1}{n}\Big)+(1-a)\Big(\frac{1}{q}+\frac{\gamma_3}{n}\Big)\Big)=\frac{1}{s}.
\end{equation}
Next, by (\ref{eqNCB_3}) and (\ref{eqNCB_4}) with $p=1$, we have $1/s\ge a(1-1/n)+(1-a)/q$. Using this and (\ref{eqQ_11}),
we have
\begin{equation*}
a\tilde{\gamma}_2+(1-a)\tilde{\gamma}_3=n\Big(\frac{1}{s}-a\Big(1-\frac{1}{n}\Big)-\frac{1-a}{q}\Big)\ge 0.
\end{equation*}
So we have verified (\ref{eqNCA_1}), (\ref{eqNCB_3}), (\ref{eqNCB_4}), and $1/s+\tilde{\gamma}_1/n>0$ with $\tilde{\gamma}_1=0$.
By this and the fact that $1/s\le a/p+(1-a)/q$, we may apply Step 1 to $\tilde{u}(y)$ and $\tilde{\gamma}_1, \tilde{\gamma}_2, \tilde{\gamma}_3$, with $R:=2^{\gamma_1 s/n+1}>1$ (note that $\gamma_1 s/n+1>0$ since $1/s+\gamma_1/n>0$), to obtain
\begin{equation}\label{eqQ_10}
\|\tilde{u}\|_{L^s(K\cap B_1)}\le C \|\tilde{u}\|_{L^s(K\cap B_{R}\setminus B_1)}+
C\||y|^{\tilde{\gamma}_2}\nabla\tilde{u}\|_{L^1(K\cap B_{R})}^a\||y|^{\tilde{\gamma}_3}\tilde{u}\|_{L^q(K\cap B_{1})}^{1-a}.
\end{equation}
Since $|y|=|x|^{\gamma_1 s/n+1}$, the change of variables maps $K\cap B_1$ onto $K\cap B_1$ and $K\cap B_2$ onto $K\cap B_{R}$, and we have
\[
\begin{split}
& \int_{K\cap B_1}||x|^{\gamma_1} u(x)|^sdx= \frac{n}{\gamma_1 s+n} \int_{K\cap B_1}|\tilde{u}(y)|^sdy, \\
& \int_{K\cap B_2}||x|^{\gamma_2}\nabla u(x)|dx= \int_{K\cap B_{R}}||y|^{\tilde{\gamma}_2}\nabla\tilde{u}(y)|dy,\\
& \int_{K\cap B_{1}}||x|^{\gamma_3} u(x)|^qdx
=\frac{n}{\gamma_1 s+n}\int_{K\cap B_{1}}||y|^{\tilde{\gamma}_3}\tilde{u}(y)|^qdy.
\end{split}
\]
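As a sketch of the computation behind these identities: in polar coordinates $x=r\theta$, the change of variables acts radially by $r\mapsto \rho=r^{\gamma_1 s/n+1}$, so the volume elements are related by
\[
dy=\rho^{n-1}\,d\rho\,d\theta=\Big(\frac{\gamma_1 s}{n}+1\Big)r^{\gamma_1 s+n-1}\,dr\,d\theta=\frac{\gamma_1 s+n}{n}\,|x|^{\gamma_1 s}\,dx,
\]
which gives the first and the third identities after inserting the definitions of $\tilde{u}$ and $\tilde{\gamma}_3$; the second one is verified similarly from the definition of $\tilde{\gamma}_2$.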
By (\ref{eqQ_10}) and the above, we have (\ref{eqQ_1}).
\bigskip
\noindent\textbf{Step 3.} We prove (\ref{eqQ_1}) for $p> 1$.
\medskip
Let $\bar{s}, \bar{p}, \bar{q}, \bar{a}, \bar{\gamma}_1, \bar{\gamma}_2$ and $\bar{\gamma}_3$ be defined by
\begin{equation*}
\begin{split}
& \frac{1}{\bar{s}}=\frac{1}{s}+\frac{1}{p'}, \quad \bar{p}=1, \quad \frac{1}{\bar{q}}=\frac{s}{\bar{s}q},
\quad \bar{a}=\frac{as}{(1-a)\bar{s}+as}, \\
& \bar{\gamma}_1=\frac{\gamma_1s}{\bar{s}}, \quad \bar{\gamma}_2=\frac{\gamma_1s}{p'}+\gamma_2, \quad \bar{\gamma}_3=\frac{\gamma_3s}{\bar{s}}, \\
\end{split}
\end{equation*}
where $1/p+1/p'=1$.
It can be verified that $0<\bar{s}< s$, and $\bar{s}, \bar{p}, \bar{q}, \bar{a}, \bar{\gamma}_1, \bar{\gamma}_2, \bar{\gamma}_3$ satisfy (\ref{eqNCA_1}), (\ref{eqNCB_3}), (\ref{eqNCB_4}), $1/\bar{s}+\bar{\gamma}_1/n>0$, and $1/\bar{s}\le \bar{a}/\bar{p}+(1-\bar{a})/\bar{q}$ (for details of the verification, see Lemma \ref{lemPre2_1}).
So we can apply Step 2
to $|u|^{s/\bar{s}}$ to obtain, using H\"{o}lder's inequality and Young's inequality, that
\begin{equation*}
\begin{split}
\displaystyle
&\quad \||x|^{\gamma_1}u\|_{L^s(K\cap B_{1})}^{s/\bar{s}}
= \||x|^{\bar{\gamma}_1}|u|^{s/\bar{s}}\|_{L^{\bar{s}}(K\cap B_{1})}\\
& \le C \||x|^{\bar{\gamma}_1}|u|^{s/\bar{s}}\|_{L^{\bar{s}}(K\cap B_2\setminus B_1)}+C\||x|^{\bar{\gamma}_2}\nabla |u|^{s/\bar{s}}\|_{L^1(K\cap B_2)}^{\bar{a}}\||x|^{\bar{\gamma}_3}|u|^{s/\bar{s}}\|_{L^{\bar{q}}(K\cap B_1)}^{1-\bar{a}}\\
& \le C \||x|^{\gamma_1}u\|^{s/\bar{s}}_{L^{s}(K\cap B_2\setminus B_1)}+C\||x|^{\bar{\gamma}_2} |u|^{s/\bar{s}-1}|\nabla u|\|_{L^1(K\cap B_2)}^{\bar{a}}\||x|^{\gamma_3}u \|_{L^{q}(K\cap B_1)}^{(1-\bar{a})q/\bar{q}}\\
& \le C\||x|^{\gamma_1}u\|^{s/\bar{s}}_{L^{s}(K\cap B_2\setminus B_1)}+C\||x|^{\bar{\gamma}_2-\gamma_2} |u|^{s/\bar{s}-1}\|^{\bar{a}}_{L^{p'}(K\cap B_2)}\||x|^{\gamma_2}\nabla u\|_{L^p(K\cap B_2)}^{\bar{a}}
\||x|^{\gamma_3}u \|_{L^{q}(K\cap B_1)}^{(1-\bar{a})s/\bar{s}}\\
&\le C \||x|^{\gamma_1}u\|^{s/\bar{s}}_{L^{s}(K\cap B_2\setminus B_1)}+C\| |x|^{\gamma_1}u\|^{\bar{a}s/p'}_{L^{s}(K\cap B_2)}\||x|^{\gamma_2}\nabla u\|_{L^p(K\cap B_2)}^{\bar{a}}\||x|^{\gamma_3}u \|_{L^{q}(K\cap B_1)}^{(1-\bar{a})s/\bar{s}}\\
& \le C \||x|^{\gamma_1}u\|^{s/\bar{s}}_{L^{s}(K\cap B_2\setminus B_1)}+ \displaystyle \frac{1}{2}\||x|^{\gamma_1}u\|_{L^s(K\cap B_{1})}^{s/\bar{s}}
+C\Big(\||x|^{\gamma_2}\nabla u\|_{L^p(K\cap B_2)}^{\bar{a}}\\
&\quad\cdot \||x|^{\gamma_3}u \|_{L^{q}(K\cap B_1)}^{(1-\bar{a})s/\bar{s}}\Big)^{1/(1-\bar{a}\bar{s}/p')}.
\end{split}
\end{equation*}
Inequality (\ref{eqQ_1}) follows from the above and the definitions of $\bar{a}$ and $\bar{s}$.
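As a record of the exponent bookkeeping in the last step (a direct computation using $s/\bar{s}=1+s/p'$):
\[
1-\frac{\bar{a}\bar{s}}{p'}=\frac{\bar{s}}{(1-a)\bar{s}+as}, \qquad \frac{\bar{a}}{1-\bar{a}\bar{s}/p'}\cdot\frac{\bar{s}}{s}=a, \qquad \frac{(1-\bar{a})s/\bar{s}}{1-\bar{a}\bar{s}/p'}\cdot\frac{\bar{s}}{s}=1-a,
\]
so raising the final estimate to the power $\bar{s}/s$ produces exactly the exponents $a$ and $1-a$ in (\ref{eqQ_1}).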
Lemma \ref{lemQ_1} is proved.
\end{proof}
\bigskip
\noindent\emph{Proof of Theorem \ref{thmQ_2}.}
Without loss of generality, we assume $u\ge 0$. By (\ref{eqNCB_3}) and scaling, we may assume $\text{\textup{supp}}\, u\subset B_1$.
For $a=0$, we deduce from (\ref{eqNCB_3}), (\ref{eqNCB_4}) and (\ref{eqNCB_5}) that $\gamma_1=\gamma_3$ and $s=q$, thus (\ref{eqQ2_1}) is obvious.
In the rest of the proof we assume $0<a\le 1$.
\bigskip
\noindent\emph{Case 1.} $\displaystyle 1/s\le a/p+(1-a)/q$.
\medskip
In this case, inequality (\ref{eqQ2_1}) follows from Lemma \ref{lemQ_1} with $r_1=1$ and $r_2=2$.
\bigskip
\noindent\emph{Case 2.} $\displaystyle 1/s> a/p+(1-a)/q$.
\medskip
Case 2 can be reduced to Case 1 by the arguments in Section (V) of \cite{CKN}; the reduction works equally well for $q>0$, even though $q\ge 1$ was assumed there. For the reader's convenience, we include the argument here.
By (\ref{eqNCB_3}) and (\ref{eqNCB_5}), $1/p+(\gamma_2-1)/n\ne 1/q+\gamma_3/n$. Thus there exist some positive constants $\lambda_1$ and $\lambda_2$, such that $\hat{u}(x)=\lambda_1u(\lambda_2 x)$ satisfies $\||x|^{\gamma_2}\nabla \hat{u}\|_{L^p(K)}=1$ and $\||x|^{\gamma_3}\hat{u}\|_{L^{q}(K)}=1$. We claim that there exist some $0\le a', a''\le 1$, such that
\begin{equation}\label{eqthmQ_2_1}
\begin{split}
\||x|^{\gamma_1}\hat{u}\|_{L^s(K)} & \le C\left(\||x|^{\gamma_2}\nabla \hat{u}\|_{L^p(K)}^{a'}\||x|^{\gamma_3}\hat{u}\|_{L^q(K)}^{1-a'}+\||x|^{\gamma_2}\nabla \hat{u}\|_{L^p(K)}^{a''}\||x|^{\gamma_3}\hat{u}\|_{L^q(K)}^{1-a''}\right)\\
& =2C\||x|^{\gamma_2}\nabla \hat{u}\|_{L^p(K)}^a\||x|^{\gamma_3}\hat{u}\|_{L^{q}(K)}^{1-a}.
\end{split}
\end{equation}
Then by scaling, we have that (\ref{eqQ2_1}) holds for $u$.
To see (\ref{eqthmQ_2_1}) when $n\ge 2$,
notice that by (\ref{eqNCA_1}) and (\ref{eqNCB_2})-(\ref{eqNCB_5}), it can be directly verified that $s, p, q, a, \gamma_1, \gamma_2$ and $\gamma_3$ satisfy (\ref{eqNCA_1})-(\ref{eqNCA_7}) with $\alpha=\mu=\beta=0$. Then (\ref{eqthmQ_2_1}) follows from Lemma \ref{lemPre_3} with $\alpha=\mu=\beta=0$.
If $n=1$, we can obtain (\ref{eqthmQ_2_1}) by the same proof as that of Lemma \ref{lemPre_3}, where we set $\alpha=\mu=\beta=0$ and choose $\alpha'=\alpha''=0$ there.
Theorem \ref{thmQ_2} is proved.
\qed
\bigskip
\noindent \emph{Proof of Theorem \ref{thmD_2}.}
The necessity part has been proved in Section \ref{sec_2}.
The sufficiency part follows from Theorem \ref{thmQ_2} with $K=\mathbb{R}^n$.
\qed
\section{Proof of the sufficiency part of Theorem \ref{thm_main}}\label{sec_5}
In this section, we prove the sufficiency part of Theorem \ref{thm_main}.
We first prove the sufficiency part of Theorem \ref{thm_main} when $1/s\le a/p+(1-a)/q$. We make use of Theorem \ref{thmD_2} (rather, its variants Theorem \ref{thmQ_2} and Lemma \ref{lemQ_1}) and Theorem \ref{thm1-new}.
\medskip
For $\delta, h>0$, denote $B'_{\delta}=\{x'\in\mathbb{R}^{n-1}\mid |x'|\le \delta\}$, $D_{\delta}^h=B'_{\delta}\times[0, h]$ and $D_{\delta}=D_{\delta}^1$.
\begin{lem}\label{lemQ_3}
Let $n\ge 2$, $0<\delta_1<\delta_2<\infty$, $h>0$, $s, p, q, a, \alpha, \mu$ and $\beta$ satisfy (\ref{eqNCA_1})-(\ref{eqNCA_7}) with $\gamma_1=\gamma_2=\gamma_3=0$.
Then there exists some positive constant $C$, depending only on $s, p, q, a, \alpha, \mu, \beta$, $\delta_1, \delta_2$ and $h$,
such that for all $u\in C^{0, 1}(D_{\delta_2}^h)$
\begin{equation}\label{eqQ_3_1}
\||x'|^{\alpha}u\|_{L^s(D_{\delta_1}^h)}\le C\||x'|^{\alpha}u\|_{L^s(D_{\delta_2}^h\setminus D_{\delta_1}^h)}+C\||x'|^{\mu}\nabla u\|_{L^p(D_{\delta_2}^h)}^a\||x'|^{\beta}u\|_{L^{q}(D_{\delta_2}^h)}^{1-a}.
\end{equation}
\end{lem}
\begin{proof}
Since $\gamma_1=\gamma_2=\gamma_3=0$, we deduce from (\ref{eqNCA_5}) and (\ref{eqNCA_6_3}) that
$1/s-a/p-(1-a)/q\ge (a(\mu-1)+(1-a)\beta-\alpha)/(n-1)=\frac{n}{n-1}(1/s-a/p-(1-a)/q)$, i.e.
\begin{equation}\label{eqQ_3_1_0}
\frac{1}{s}\le \frac{a}{p}+\frac{1-a}{q}.
\end{equation}
Let $x=(r', \theta', x_n)$ be the cylindrical coordinates where $r'=|x'|$ and $\theta'=x'/|x'|$.
For simplicity, we only prove the lemma when $h=1$, $\delta_1=1$ and $\delta_2=2$.
The general case can be proved similarly.
If $a=0$, by (\ref{eqNCA_5}), (\ref{eqNCA_6_2}) and (\ref{eqNCA_7}), we have $s=q$ and $\alpha=\beta$, and therefore (\ref{eqQ_3_1}) is obvious. In the rest of the proof we assume $0<a\le 1$.
\bigskip
\noindent\textbf{Step 1.} We prove inequality (\ref{eqQ_3_1}) when $p=1$.
\medskip
By (\ref{eqQ_3_1_0}), we have, in view of $p=1$ and $a\le 1$, that
\begin{equation}\label{eqPre2_8}
\frac{1}{s}-\frac{1-a}{q}\le a\le 1.
\end{equation}
\bigskip
\noindent \emph{Case 1.} $1/s-(1-a)/q=1$.
\medskip
By (\ref{eqPre2_8}), we have $a=1$ and $s=1$. Because of this, (\ref{eqNCA_5}), and the fact that $s=p=1$ and $\gamma_1=\gamma_2=\gamma_3=0$, we have $\alpha=\mu-1$.
Let
\begin{equation*}
\hat{s}=1,\quad \hat{p}=1, \quad \hat{q}=q, \quad \hat{a}=1, \quad \hat{\gamma}_1=\alpha, \quad \hat{\gamma}_2=\alpha+1, \quad \hat{\gamma}_3=0.
\end{equation*}
It is easy to verify that $\hat{s}, \hat{p}, \hat{q}, \hat{a}, \hat{\gamma}_1, \hat{\gamma}_2, \hat{\gamma}_3$ satisfy (\ref{eqNCA_1}), (\ref{eqNCB_2})-(\ref{eqNCB_5}) and $1/\hat{s}\le \hat{a}/\hat{p}+(1-\hat{a})/\hat{q}$. Applying Lemma \ref{lemQ_1} to $u(\cdot, x_n)$ for each fixed $0\le x_n\le 1$, with $K=\mathbb{R}^{n-1}$ and with $s, p, q, a, \gamma_1, \gamma_2, \gamma_3$ replaced by $\hat{s}, \hat{p}, \hat{q}, \hat{a}, \hat{\gamma}_1, \hat{\gamma}_2, \hat{\gamma}_3$, we have, with the notation $\nabla'=\nabla_{x'}$, that
\[
\int_{B'_1}|x'|^{\alpha}|u(x', x_n)|dx'\le C\int_{B_{2}'\setminus B'_1}|x'|^{\alpha}|u(x', x_n)|dx'+C\int_{B_{2}'}|x'|^{\alpha+1}|\nabla' u(x', x_n)|dx'.
\]
Integrating the above in $x_n$ over $[0, 1]$, we obtain (\ref{eqQ_3_1}) in this case, i.e.
\begin{equation*}
\||x'|^{\alpha}u\|_{L^1(D_1)}\le C \||x'|^{\alpha}u\|_{L^1(D_{2}\setminus D_1)}+C\||x'|^{\alpha+1}\nabla'u\|_{L^1(D_{2})}.
\end{equation*}
\bigskip
\noindent\emph{Case 2.} $1/s-(1-a)/q < 1$.
\medskip
Let
\begin{equation}\label{eqPre2_2_b}
b=\frac{1}{a}\Big(\frac{1}{s}-\frac{1-a}{q}\Big), \quad \lambda=\frac{a(1-b)}{1-ab}.
\end{equation}
Since $a>0$, $b$ is well defined. In the definition of $\lambda$ above, we have used the assumption that $ab=1/s-(1-a)/q<1$.
By (\ref{eqQ_3_1_0}) with $p=1$, we have $b\le 1$. By (\ref{eqNCA_5}) and (\ref{eqNCA_6_2}), we have $1/s-(a(1/p-1/n)+(1-a)/q)=(a(\gamma_2+\mu)+(1-a)(\gamma_3+\beta)-(\gamma_1+\alpha))/n\ge 0$.
Thus when $p=1$, we have $b=(1/s-(1-a)/q)/a\ge (n-1)/n$. So $(n-1)/n\le b\le 1$. Consequently, we have $0\le \lambda\le 1$ in view of $0<a\le 1$.
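Explicitly, since $0<a\le 1$, $b\le 1$ and $ab<1$,
\[
\lambda=\frac{a(1-b)}{1-ab}\ge 0, \qquad 1-\lambda=\frac{1-ab-a(1-b)}{1-ab}=\frac{1-a}{1-ab}\ge 0.
\]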
Let
\begin{equation}\label{eqPre2_2_0}
\begin{split}
&
\hat{a}=ab, \quad \hat{s}=s, \quad \hat{p}=1, \quad
\frac{1}{\hat{q}}=\lambda+\frac{1-\lambda}{q}, \\
& \hat{\gamma}_1=\alpha, \quad \hat{\gamma}_2=\mu, \quad \hat{\gamma}_3=\lambda\mu+(1-\lambda)\beta.
\end{split}
\end{equation}
We have shown that $0<\hat{a}<1$.
Using (\ref{eqNCA_1})-(\ref{eqNCA_7}) and the assumption that $1/s-(1-a)/q < 1$,
it can be verified that $\hat{s}, \hat{p}, \hat{q}, \hat{a}, \hat{\gamma}_1, \hat{\gamma}_2, \hat{\gamma}_3$ satisfy (\ref{eqNCA_1}), (\ref{eqNCB_2})-(\ref{eqNCB_5}) with $\hat{p}=1$, $1/\hat{s}\le \hat{a}/\hat{p}+(1-\hat{a})/\hat{q}$ and with $n$ replaced by $n-1$. For the details of the verification, see Lemma \ref{lemPre2_2}
and its proof.
Let $m=\min\{1, q, s\}$ and set
\[
v:=u-\big(\mathop{\ooalign{$\int$\cr$-$}}_{D_{2}\setminus D_1}u^m\big)^{1/m}.
\]
Applying Lemma \ref{lemQ_1} with $p=1$ to $v(\cdot, x_n)$ for each fixed $0\le x_n\le 1$, with $r_1=1$, $r_2=2$, and with $s, p, q, a, \gamma_1, \gamma_2, \gamma_3$ replaced by $\hat{s}, \hat{p}, \hat{q}, \hat{a}, \hat{\gamma}_1, \hat{\gamma}_2, \hat{\gamma}_3$,
we have
\begin{equation}\label{eqQ_3_2}
\begin{split}
& \quad \||x'|^{\alpha}v(\cdot, x_n)\|_{L^s(B_1')}\\
& \le C\||x'|^{\alpha}v(\cdot, x_n)\|_{L^s(B_2'\setminus B_1')}+C\||x'|^{\mu}\nabla'v(\cdot, x_n)\|_{L^1(B_2')}^{ab}\||x'|^{\hat{\gamma}_3}v(\cdot, x_n)\|_{L^{\hat{q}}(B_1')}^{1-ab}.
\end{split}
\end{equation}
Using the definition of $\hat{\gamma}_3$, $\hat{q}$ and $\lambda$ in (\ref{eqPre2_2_b}) and (\ref{eqPre2_2_0}),
and the fact that $0\le \lambda\le 1$,
we apply H\"older's inequality to estimate the last term in (\ref{eqQ_3_2}) as follows.
\begin{equation}\label{eqQ_3_3}
\begin{split}
\||x'|^{\hat{\gamma}_3}v(\cdot, x_n)\|_{L^{\hat{q}}(B_1')}^{1-ab} & =\|||x'|^{\mu}v(\cdot, x_n)|^{\lambda}\cdot ||x'|^{\beta}v(\cdot, x_n)|^{1-\lambda}\|_{L^{\hat{q}}(B_1')}^{1-ab}\\
& \le \||x'|^{\mu}v(\cdot, x_n)\|_{L^1(B_1')}^{\lambda(1-ab)}\||x'|^{\beta}v(\cdot, x_n)\|_{L^q(B_1')}^{(1-\lambda)(1-ab)} \\
& \le \||x'|^{\mu}v(\cdot, x_n)\|_{L^1(B_1')}^{a(1-b)}\||x'|^{\beta}v(\cdot, x_n)\|_{L^q(B_1')}^{(1-a)}.
\end{split}
\end{equation}
Next, we estimate the term $\int_{B_{1}'}|x'|^{\mu}|v|dx'$ in the above.
Notice that
\[
|v(x',x_n)|\le C\int_{0}^{1}|v_{x_n}(x', t)|dt+C\int_{0}^{1}|v(x', t)|dt, \quad \forall\ (x', x_n)\in D_2.
\]
So, for each $x_n\in [0, 1]$,
we have
\begin{equation}\label{eqQ_3_4}
\int_{B_{1}'}|x'|^{\mu}|v(x',x_n)|dx' \le C\int_{B_{1}'}\int_{0}^{1}|x'|^{\mu}|v_{x_n}(x', t)|dtdx'+C\int_{B_{1}'}\int_{0}^{1}|x'|^{\mu}|v(x', t)|dtdx'.
\end{equation}
Applying Lemma \ref{lemQ_1} in dimension $n-1$, we have, for every $x_n$ in $[0, 1]$, that
\[
\int_{B_1'}|x'|^{\mu}|v(x', x_n)|dx'\le C\int_{B_2'\setminus B_1'}|x'|^{\mu}|v(x', x_n)|dx'+C\int_{B_2'}|x'|^{\mu+1}|\nabla' v(x', x_n)|dx'.
\]
Integrating the above in $x_n$ over $[0, 1]$, and then inserting it into (\ref{eqQ_3_4}), we have
\begin{equation}\label{eqQ_3_3_1}
\int_{B_{1}'}|x'|^{\mu}|v(x',x_n)|dx' \le C\big( \||x'|^{\mu}v\|_{L^1(D_{2}\setminus D_{1})}+\||x'|^{\mu}\nabla v\|_{L^1(D_{2})}\big).
\end{equation}
Putting (\ref{eqQ_3_2}), (\ref{eqQ_3_3}) and (\ref{eqQ_3_3_1}) together, we have
\[
\begin{split}
\||x'|^{\alpha}v(\cdot,x_n)\|^s_{L^s(B'_{1})} & \le C\||x'|^{\alpha}v(\cdot,x_n)\|^s_{L^s(B'_2\setminus B'_{1})}
+ C\||x'|^{\mu} \nabla ' v(\cdot,x_n)\|_{L^1(B_{2}')}^{abs} \\
& \cdot\||x'|^{\beta}v(\cdot,x_n)\|_{L^q(B_{1}')}^{(1-a)s} \left(\||x'|^{\mu}\nabla v\|_{L^1(D_{2})}+\| |x'|^{\mu}v\|_{L^1(D_{2}\setminus D_{1})}\right)^{a(1-b)s}.
\end{split}
\]
Integrating the above in $x_n$ over $[0, 1]$, applying H\"older's inequality and then Young's inequality, we have, using $abs+(1-a)s/q=1$, that
\[
\begin{split}
\||x'|^{\alpha}v\|^s_{L^s(D_1)}
& \le C\||x'|^{\alpha}v\|^s_{L^s(D_2\setminus D_{1})}+C\||x'|^{\mu} \nabla ' v\|_{L^1(D_2)}^{abs} \||x'|^{\beta}v\|_{L^q(D_{1})}^{(1-a)s}
\big(\||x'|^{\mu}\nabla v\|_{L^1(D_{2})}\\
& \quad +\| |x'|^{\mu}v\|_{L^1(D_{2}\setminus D_{1})}\big)^{a(1-b)s}\\
& \le C\||x'|^{\alpha}v\|^s_{L^s(D_2\setminus D_{1})}+C \big(\||x'|^{\mu}\nabla v\|_{L^1(D_{2})}+\||x'|^{\mu} \nabla ' v\|_{L^1(D_2)}^b\\
&\quad \cdot\| |x'|^{\mu}v\|^{1-b}_{L^1(D_{2}\setminus D_{1})}\big)^{as}
\||x'|^{\beta}v\|_{L^q(D_{1})}^{(1-a)s} \\
& \le C\||x'|^{\alpha}v\|^s_{L^s(D_2\setminus D_{1})}+C \big(\||x'|^{\mu}\nabla v\|_{L^1(D_{2})} + \||x'|^{\mu} v\|_{L^1(D_{2}\setminus D_{1})}\big)^{as} \\
& \quad \cdot\||x'|^{\beta} v\|^{(1-a)s}_{L^{q}(D_{1})}.
\end{split}
\]
By the definition of $v$ and the above, using $m\le s, q$, we have
\begin{equation}\label{eqQ_3_5}
\begin{split}
& \quad \||x'|^{\alpha}u\|_{L^s(D_1)}\\
& \le C\||x'|^{\alpha}u\|_{L^s(D_{2}\setminus D_{1})}+ C \big(\||x'|^{\mu}\nabla u\|^a_{L^1(D_{2})}+ \| |x'|^{\mu}v\|^a_{L^1(D_{2}\setminus D_{1})}\big) \||x'|^{\beta} u\|^{1-a}_{L^{q}(D_{2})}.
\end{split}
\end{equation}
Since $m\le 1$ and $1\le |x'|\le 2$ in $D_2\setminus D_1$, we apply
Theorem \ref{thm1-new} to obtain
\begin{equation}\label{eqQ_3_7}
\| |x'|^{\mu}v\|_{L^1(D_{2}\setminus D_{1})}\le C\|v\|_{L^1(D_{2}\setminus D_{1})}\le C \|\nabla u\|_{L^1(D_{2}\setminus D_{1})}\le C \| |x'|^{\mu}\nabla u\|_{L^1(D_{2}\setminus D_{1})}.
\end{equation}
By (\ref{eqQ_3_5}) and (\ref{eqQ_3_7}), we have
\[
\||x'|^{\alpha}u\|_{L^s(D_1)}\le C\||x'|^{\alpha} u\|_{L^s(D_{2}\setminus D_{1})}+C\||x'|^{\mu}\nabla u\|_{L^1(D_{2})}^{a}\||x'|^{\beta} u\|_{L^{q}(D_{2})}^{1-a}.
\]
The lemma is proved for $p=1$.
\bigskip
\noindent\textbf{Step 2.} We prove inequality (\ref{eqQ_3_1}) when $p>1$.
\medskip
Let $\bar{s}, \bar{p}, \bar{q}, \bar{a}, \bar{\alpha}, \bar{\mu}$ and $\bar{\beta}$ be defined by
\begin{equation*}
\begin{split}
& \frac{1}{\bar{s}}=\frac{1}{s}+\frac{1}{p'}, \quad \bar{p}=1, \quad \frac{1}{\bar{q}}=\frac{s}{\bar{s}q},
\quad \bar{a}=\frac{as}{(1-a)\bar{s}+as}, \\
& \bar{\alpha}=\frac{\alpha s}{\bar{s}}, \quad
\bar{\mu}=\frac{\alpha s}{p'}+\mu, \quad \bar{\beta}=\frac{\beta s}{\bar{s}},
\end{split}
\end{equation*}
where $1/p+1/p'=1$.
It can be verified that $0<\bar{s}< s$, and $\bar{s}, \bar{p}, \bar{q}, \bar{a}, \bar{\alpha}, \bar{\mu}, \bar{\beta}$ satisfy (\ref{eqNCA_1})-(\ref{eqNCA_7}) with $\gamma_1=\gamma_2=\gamma_3=0$ and $s, p, q, a, \alpha, \mu, \beta$ replaced by $\bar{s}, \bar{p}, \bar{q}, \bar{a}, \bar{\alpha}, \bar{\mu}, \bar{\beta}$ respectively. For the details of the verification, see Lemma \ref{lemPre2_1} and its proof.
For $u\in C^{0, 1}(D_2)$,
we have $|u|^{s/\bar{s}}\in C^{0, 1}(D_{2})$.
Applying (\ref{eqQ_3_1}) with $p=1$ to $|u|^{s/\bar{s}}$, we have, using H\"{o}lder's inequality and Young's inequality, that
\begin{equation*}
\begin{split}
&\quad \||x'|^{\alpha}u\|_{L^s(D_1)}^{s/\bar{s}}
= \||x'|^{\bar{\alpha}}|u|^{s/\bar{s}}\|_{L^{\bar{s}}(D_1)}\\
& \le C\||x'|^{\bar{\alpha}}|u|^{s/\bar{s}}\|_{L^{\bar{s}}(D_{2}\setminus D_1)}+C\||x'|^{\bar{\mu}}\nabla |u|^{s/\bar{s}}\|_{L^1(D_{2})}^{\bar{a}}\||x'|^{\bar{\beta}} |u|^{s/\bar{s}}\|_{L^{\bar{q}}(D_{2})}^{1-\bar{a}}\\
& = C\||x'|^{\alpha} u\|_{L^{s}(D_{2}\setminus D_1)}^{s/\bar{s}}+C\||x'|^{\bar{\mu}-\mu} |u|^{s/\bar{s}-1}\cdot |x'|^{\mu}|\nabla u|\|_{L^1(D_{2})}^{\bar{a}}\||x'|^{\beta}u \|_{L^{q}(D_{2})}^{(1-\bar{a})q/\bar{q}}\\
&\le C\||x'|^{\alpha} u\|_{L^{s}(D_{2}\setminus D_1)}^{s/\bar{s}}+C\| |x'|^{\alpha}u\|^{\bar{a}s/p'}_{L^{s}(D_{2})}\||x'|^{\mu}\nabla u\|_{L^p(D_{2})}^{\bar{a}}\||x'|^{\beta}u \|_{L^{q}(D_{2})}^{(1-\bar{a})s/\bar{s}}\\
& \le C\||x'|^{\alpha} u\|_{L^{s}(D_{2}\setminus D_1)}^{s/\bar{s}}+\frac{1}{2}\| |x'|^{\alpha}u\|^{s/\bar{s}}_{L^s(D_{2})}+C\Big(\||x'|^{\mu}\nabla u\|_{L^p(D_{2})}^{\bar{a}}\||x'|^{\beta}u \|_{L^{q}(D_{2})}^{(1-\bar{a})s/\bar{s}}\Big)^{1/(1-\bar{a}\bar{s}/p')}.
\end{split}
\end{equation*}
Inequality (\ref{eqQ_3_1}) follows from the above and the definitions of $\bar{a}$ and $\bar{s}$. Lemma \ref{lemQ_3} is proved.
\end{proof}
\begin{rmk}
In the proof of Lemma \ref{lemQ_3}, when $a=1$ or when $0<a<1$ and $1/s+1-1/p\le q/s$, we can use Theorem A and the classical Poincar\'{e} inequality instead of Theorems \ref{thmD_2} and \ref{thm1-new}.
\end{rmk}
For $0\le r_1< r_2\le \infty$ and $\epsilon>0$, let
\begin{equation}\label{eqR_e}
K_{r_1, r_2, \epsilon}:=\{x\in\mathbb{R}^n\mid r_1\le |x|< r_2,\ \ |x'|\le \epsilon |x|\}.
\end{equation}
\begin{lem}\label{lemQ_4}
Let $n\ge 2$, $0\le r_1< r_2\le \infty$, $0<\epsilon_1<\epsilon_2\le 1$, $K_{\epsilon_i}:=K_{r_1, r_2, \epsilon_i}$, $i=1, 2$, and let $s, p, q, a, \gamma_1, \gamma_2, \gamma_3, \alpha, \mu$ and $\beta$ be real numbers satisfying (\ref{eqNCA_1})-(\ref{eqNCA_7}) with
$1/s\le a/p+(1-a)/q$. Then there exists some positive constant $C$, depending only on $s, p, q, a, \gamma_1, \gamma_2, \gamma_3, \alpha, \mu, \beta, \epsilon_1, \epsilon_2, r_1$ and $r_2$,
such that for all $u\in C^{1}(\bar{K}_{\epsilon_2})$,
\begin{equation}\label{eqQ_4_1}
\||x|^{\gamma_1}|x'|^{\alpha}u\|_{L^s(K_{\epsilon_1})} \le C\||x|^{\gamma_1}|x'|^{\alpha}u\|_{L^s(K_{\epsilon_2}\setminus K_{\epsilon_1})}+C\||x|^{\gamma_2}|x'|^{\mu}\nabla u\|_{L^p(K_{\epsilon_2})}^{a}\||x|^{\gamma_3}|x'|^{\beta}u\|_{L^q(K_{\epsilon_2})}^{1-a}.
\end{equation}
Furthermore, on any compact set in the parameter space in which (\ref{eqNCA_1})-(\ref{eqNCA_3}) hold, the constant $C$ is bounded.
\end{lem}
\begin{rmk}
Consider more general cones $K_{r_1, r_2, \Omega}=\{rx\mid r_1\le r\le r_2,\ x\in \Omega\}$ for some open set $\Omega\subset\mathbb{S}^{n-1}$ with Lipschitz boundary. For open sets $\Omega_1\subset\bar{\Omega}_1\subset \Omega_2\subset\mathbb{S}^{n-1}$ with Lipschitz boundaries, Lemma \ref{lemQ_4} still holds with $K_{r_1, r_2, \epsilon_i}$ replaced by $K_{r_1, r_2, \Omega_i}$, $i=1, 2$.
\end{rmk}
\bigskip
\noindent\emph{Proof of Lemma \ref{lemQ_4}.}
For $\epsilon>0$, denote $K_{\epsilon}:=K_{r_1, r_2, \epsilon}$, and let $K_{\epsilon}^{+}:=K_{\epsilon}\cap\{x_n\ge 0\}$, $K_{\epsilon}^{-}:=K_{\epsilon}\cap\{x_n< 0\}$. We will only prove (\ref{eqQ_4_1}) with $K_{\epsilon_i}$, $i=1, 2$, replaced by $K_{\epsilon_i}^+$. The estimate on $K_{\epsilon_i}^{-}$ is similar.
\bigskip
\noindent\emph{Case 1.} $0<r_1<r_2<\infty$.
\medskip
In this case, there exists a diffeomorphism $y=\Phi(x)$ from $K_{\epsilon_i}^+$ to $D_{\epsilon_i}=B_{\epsilon_i}'\times [0, 1]$, satisfying $|y'|/C\le |x'|\le C|y'|$. Let $\tilde{\mu}=\mu+\gamma_2-\gamma_1/a+(1-a)\gamma_3/a$. By (\ref{eqNCA_6_1}) we have $\tilde{\mu}\ge \mu$.
Notice that we are in the case $1/s\le a/p+(1-a)/q$; it can be verified that $s, p, q, a, \alpha, \tilde{\mu}, \beta$ satisfy (\ref{eqNCA_1})-(\ref{eqNCA_7}) with $\mu$ replaced by $\tilde{\mu}$ and $\gamma_1=\gamma_2=\gamma_3=0$. Applying Lemma \ref{lemQ_3} to $\hat{u}=u\circ \Phi^{-1}$, we have
\begin{equation*}
\begin{split}
\||y'|^{\alpha}\hat{u}\|_{L^s(D_{\epsilon_1})} & \le C\||y'|^{\alpha}\hat{u}\|_{L^s(D_{\epsilon_2}\setminus D_{\epsilon_1})}+C\||y'|^{\tilde{\mu}}\nabla \hat{u}\|_{L^p(D_{\epsilon_2})}^a\||y'|^{\beta}\hat{u}\|_{L^{q}(D_{\epsilon_2})}^{1-a}\\
& \le C\||y'|^{\alpha}\hat{u}\|_{L^s(D_{\epsilon_2}\setminus D_{\epsilon_1})}+C\||y'|^{\mu}\nabla \hat{u}\|_{L^p(D_{\epsilon_2})}^a\||y'|^{\beta}\hat{u}\|_{L^{q}(D_{\epsilon_2})}^{1-a}.
\end{split}
\end{equation*}
Inequality (\ref{eqQ_4_1}) follows immediately.
\bigskip
\noindent\emph{Case 2.} $r_1=0$ or $r_2=\infty$.
\medskip
Working with $u(\lambda x)$ instead of $u(x)$, we only need to treat the cases when $r_1 =0$ and $r_2=1$, or $r_1=1$ and $r_2=\infty$, or $r_1=0$ and $r_2=\infty$.
Let $R_k:=\{x\in\mathbb{R}^n\mid 2^{k-1}\le |x| < 2^{k}\}$, $k\in \mathbb{Z}$. By Case 1, (\ref{eqNCA_5}) and scaling, we have, for every $k\in \mathbb{Z}$, that
\begin{equation}\label{eqlem_in}
\begin{split}
\||x|^{\gamma_1}|x'|^{\alpha}u\|^s_{L^s(R_k\cap K_{\epsilon_1})} & \le C\||x|^{\gamma_1}|x'|^{\alpha}u\|^s_{L^s(R_k\cap K_{\epsilon_2}\setminus K_{\epsilon_1})}+C\||x|^{\gamma_2}|x'|^{\mu}\nabla u\|_{L^p(R_k\cap K_{\epsilon_2})}^{as}\\
& \quad\cdot\||x|^{\gamma_3}|x'|^{\beta}u\|_{L^q(R_k\cap K_{\epsilon_2})}^{(1-a)s}.
\end{split}
\end{equation}
When $r_1=0$ and $r_2=\infty$, taking the sum of (\ref{eqlem_in}) over all $k\in \mathbb{Z}$, we have, using $as/p+(1-a)s/q\ge 1$ and
(\ref{eqD_C}), that
\[
\begin{split}
& \quad \||x|^{\gamma_1}|x'|^{\alpha}u\|^s_{L^s(K_{\epsilon_1})} \\
& \le C\||x|^{\gamma_1}|x'|^{\alpha}u\|^s_{L^s(K_{\epsilon_2}\setminus K_{\epsilon_1})}+C\sum_{k=-\infty}^{\infty}\||x|^{\gamma_2}|x'|^{\mu}\nabla u\|_{L^p(R_k\cap K_{\epsilon_2})}^{as}\||x|^{\gamma_3}|x'|^{\beta}u\|_{L^q(R_k\cap K_{\epsilon_2})}^{(1-a)s}\\
& \le C\||x|^{\gamma_1}|x'|^{\alpha}u\|^s_{L^s(K_{\epsilon_2}\setminus K_{\epsilon_1})}+C\||x|^{\gamma_2}|x'|^{\mu}\nabla u\|_{L^p(K_{\epsilon_2})}^{as}\||x|^{\gamma_3}|x'|^{\beta}u\|_{L^q(K_{\epsilon_2})}^{(1-a)s}.
\end{split}
\]
So (\ref{eqQ_4_1}) is proved when $r_1=0$ and $r_2=\infty$.
Inequality (\ref{eqQ_4_1}) for $r_1=0$ and $r_2=1$ follows by summing (\ref{eqlem_in}) over $k\le 0$. For $r_1=1$ and $r_2=\infty$, we sum (\ref{eqlem_in}) over $k\ge 0$.
Lemma \ref{lemQ_4} is proved.
\qed
\bigskip
\noindent\emph{Proof of the sufficiency part of Theorem \ref{thm_main} when $1/s\le a/p+(1-a)/q$.}
\medskip
Fix $\epsilon>0$ small, let $K_{\epsilon}$ be the cone defined by (\ref{eqR_e}) with $r_1=0$ and $r_2=\infty$.
By (\ref{eqNCA_1}), (\ref{eqNCA_3}), (\ref{eqNCA_5}), (\ref{eqNCA_6_2}) and (\ref{eqNCA_7}), we have that $s, p, q, a, \gamma_1+\alpha, \gamma_2+\mu, \gamma_3+\beta$ satisfy (\ref{eqNCA_1}) and (\ref{eqNCB_2})-(\ref{eqNCB_5}) with $\gamma_1, \gamma_2, \gamma_3$ replaced by $ \gamma_1+\alpha, \gamma_2+\mu, \gamma_3+\beta$ respectively. Then by Theorem \ref{thmQ_2}, we have
\begin{equation*}
\||x|^{\gamma_1+\alpha}u\|_{L^s(\mathbb{R}^n\setminus K_{\epsilon})}\le C\||x|^{\gamma_2+\mu}\nabla u\|_{L^p(\mathbb{R}^n\setminus K_{\epsilon})}^{a}\||x|^{\gamma_3+\beta}u\|_{L^q(\mathbb{R}^n\setminus K_{\epsilon})}^{1-a}.
\end{equation*}
Since $\epsilon |x|\le |x'|\le |x|$ for $x$ in $\mathbb{R}^n\setminus K_{\epsilon}$, we have
\begin{equation}\label{eqNC_out}
\||x|^{\gamma_1}|x'|^{\alpha}u\|_{L^s(\mathbb{R}^n\setminus K_{\epsilon})}\le C\||x|^{\gamma_2}|x'|^{\mu}\nabla u\|_{L^p(\mathbb{R}^n\setminus K_{\epsilon})}^{a}\||x|^{\gamma_3}|x'|^{\beta}u\|_{L^q(\mathbb{R}^n\setminus K_{\epsilon})}^{1-a}.
\end{equation}
By Lemma \ref{lemQ_4},
\begin{equation}\label{eqin_1}
\begin{split}
& \quad \||x|^{\gamma_1}|x'|^{\alpha}u\|_{L^s(K_{\epsilon})} \\
& \le C\||x|^{\gamma_1}|x'|^{\alpha}u\|_{L^s(K_{2\epsilon}\setminus K_{\epsilon})}+C\||x|^{\gamma_2}|x'|^{\mu}\nabla u\|_{L^p(K_{2\epsilon})}^{a}\||x|^{\gamma_3}|x'|^{\beta}u\|_{L^q(K_{2\epsilon})}^{1-a}.
\end{split}
\end{equation}
Since $K_{2\epsilon}\setminus K_{\epsilon}\subset \mathbb{R}^n\setminus K_{\epsilon}$, the first term on the right-hand side of (\ref{eqin_1}) is controlled by (\ref{eqNC_out}), and it follows from (\ref{eqNC_out}) and (\ref{eqin_1}) that
\begin{equation}\label{eqin_2}
\||x|^{\gamma_1}|x'|^{\alpha}u\|_{L^s(\mathbb{R}^n)}\le C\||x|^{\gamma_2}|x'|^{\mu}\nabla u\|_{L^p(\mathbb{R}^n)}^{a}\||x|^{\gamma_3}|x'|^{\beta}u\|_{L^q(\mathbb{R}^n)}^{1-a}.
\end{equation}
The sufficiency part of Theorem \ref{thm_main} is proved when $1/s\le a/p+(1-a)/q$.
\qed
\bigskip
Next, we prove the sufficiency part of Theorem \ref{thm_main} when $1/s> a/p+(1-a)/q$. We reduce it to the case $1/s= a/p+(1-a)/q$ by the following lemma. This reduction procedure is analogous to the arguments in Section (V) in \cite{CKN}.
\begin{lem}\label{lemPre_3}
Let $n\ge 2$, $\Omega$ be a bounded open set in $\mathbb{R}^n$ and
$u\in C^{0, 1}(\Omega)$. Assume that for any $s, p, q, a, \gamma_1, \gamma_2, \gamma_3, \alpha, \mu, \beta$ satisfying (\ref{eqNCA_1})-(\ref{eqNCA_7}) and $1/s=a/p+(1-a)/q$, there exists some constant $C'$, depending only on $s, p, q, a, \gamma_1, \gamma_2, \gamma_3, \alpha, \mu$ and $\beta$, such that
\begin{equation}\label{eqPre3_0}
\||x|^{\gamma_1}|x'|^{\alpha}u\|_{L^s(\Omega)}\le C'\||x|^{\gamma_2}|x'|^{\mu}\nabla u\|_{L^p(\Omega)}^{a}\||x|^{\gamma_3}|x'|^{\beta}u\|_{L^q(\Omega)}^{1-a}.
\end{equation}
Then for any $s, p, q, a, \gamma_1, \gamma_2, \gamma_3, \alpha, \mu$ and $\beta$ satisfying (\ref{eqNCA_1})-(\ref{eqNCA_7}) with $1/s>a/p+(1-a)/q$,
there exist some constant $C$ and some $0\le a', a''\le 1$, depending only on $s, p, q, a, \gamma_1, \gamma_2, \gamma_3$, $\alpha, \mu, \beta, \Omega$ and $C'$, such that
\begin{equation}\label{eqPre3_0'}
\begin{split}
\||x|^{\gamma_1}|x'|^{\alpha}u\|_{L^s(\Omega)} & \le C\Big(\||x|^{\gamma_2}|x'|^{\mu}\nabla u\|_{L^p(\Omega)}^{a'}\||x|^{\gamma_3}|x'|^{\beta}u\|_{L^q(\Omega)}^{1-a'}\\
& \quad +\||x|^{\gamma_2}|x'|^{\mu}\nabla u\|_{L^p(\Omega)}^{a''}\||x|^{\gamma_3}|x'|^{\beta}u\|_{L^q(\Omega)}^{1-a''}\Big).
\end{split}
\end{equation}
\end{lem}
\begin{proof}
For $u\in C^{0, 1}(\Omega)$, we assume (\ref{eqPre3_0}) holds, and we will prove (\ref{eqPre3_0'}). Let $C$ denote a positive constant depending only on
$s, p, q, a, \gamma_1, \gamma_2, \gamma_3, \alpha, \mu, \beta, \Omega$ and $C'$ which may vary from line to line. Condition (\ref{eqNCA_7}) and $1/s>a/p+(1-a)/q$ imply $0<a<1$.
Denote
$A:=\||x|^{\gamma_2}|x'|^{\mu}\nabla u\|_{L^p(\mathbb{R}^n)}$ and $B:=\||x|^{\gamma_3}|x'|^{\beta}u\|_{L^{q}(\mathbb{R}^n)}$.
For constants $0\le a', a''\le 1$, $\alpha', \alpha''$, we define $s', s'', \gamma_1',\gamma_1''$ by
\begin{equation}\label{eqPre3_2}
\begin{split}
& \frac{1}{s'}=\frac{a'}{p}+\frac{1-a'}{q}, \quad \gamma'_1+\alpha'=a'(\gamma_2+\mu-1)+(1-a')(\gamma_3+\beta),\\
& \frac{1}{s''}=\frac{a''}{p}+\frac{1-a''}{q}, \quad \gamma''_1+\alpha''=a''(\gamma_2+\mu-1)+(1-a'')(\gamma_3+\beta).
\end{split}
\end{equation}
Let $\zeta(x)$ be a smooth function satisfying $\zeta(x)=1$ for $|x|\le 1$, $\zeta(x)=0$ for $|x|\ge 2$ and $|\nabla \zeta(x)|\le 3$. We have
\begin{equation}\label{eqPre3_7}
\||x|^{\gamma_1}|x'|^{\alpha}u\|_{L^s(\mathbb{R}^n)}\le \||x|^{\gamma_1}|x'|^{\alpha}\zeta u\|_{L^s(\mathbb{R}^n)}+\||x|^{\gamma_1}|x'|^{\alpha}(1-\zeta)u\|_{L^s(\mathbb{R}^n)} =: I_1+I_2.
\end{equation}
We estimate
\begin{equation}\label{eqPre3_8}
I_1\le \||x|^{\gamma'_1}|x'|^{\alpha'}u\|_{L^{s'}(\mathbb{R}^n)}\left(\int_{|x|\le 2}\left||x|^{\gamma_1-\gamma'_1}|x'|^{\alpha-\alpha'}\right|^{ss'/(s'-s)}\right)^{1/s-1/s'},
\end{equation}
and
\begin{equation}\label{eqPre3_9}
I_2\le \||x|^{\gamma''_1}|x'|^{\alpha''}u\|_{L^{s''}(\mathbb{R}^n)}\left(\int_{|x|\ge 1}\left||x|^{\gamma_1-\gamma''_1}|x'|^{\alpha-\alpha''}\right|^{ss''/(s''-s)}\right)^{1/s-1/s''}
\end{equation}
by H\"{o}lder's inequality, provided
\begin{equation}\label{eqPre3_3}
\frac{1}{s'}< \frac{1}{s} \ \ \mbox{and}\ \ \frac{1}{s''}< \frac{1}{s}.
\end{equation}
The second integrals on the right hand sides in (\ref{eqPre3_8}) and (\ref{eqPre3_9}) are finite if
\begin{equation}\label{eqPre3_4}
\frac{1}{s'}+\frac{\gamma'_1+\alpha'}{n}<\frac{1}{s}+\frac{\gamma_1+\alpha}{n} < \frac{1}{s''}+\frac{\gamma''_1+\alpha''}{n},
\end{equation}
\begin{equation}\label{eqPre3_5}
\frac{1}{s'}+\frac{\alpha'}{n-1}<\frac{1}{s}+\frac{\alpha}{n-1} \ \ \mbox{and}\ \ \frac{1}{s''}+\frac{\alpha''}{n-1}<\frac{1}{s}+\frac{\alpha}{n-1}.
\end{equation}
By the assumption of the lemma, we will have
\begin{equation}\label{eqPre3_18}
\||x|^{\gamma'_1}|x'|^{\alpha'}u\|_{L^{s'}(\mathbb{R}^n)}\le CA^{a'} B^{1-a'}, \quad \||x|^{\gamma''_1}|x'|^{\alpha''}u\|_{L^{s''}(\mathbb{R}^n)}\le CA^{a''}B^{1-a''},
\end{equation}
provided
(\ref{eqNCA_1})-(\ref{eqNCA_7}) hold with $s, a, \gamma_1, \alpha$ there replaced by $s', a', \gamma'_1, \alpha'$ or $s'', a'', \gamma_1'', \alpha''$ respectively.
So by (\ref{eqPre3_7})-(\ref{eqPre3_9}) and (\ref{eqPre3_18}), to prove (\ref{eqPre3_0'}), we only need to choose appropriate $a', a'', \alpha'$ and $\alpha''$ such that (\ref{eqPre3_3})-(\ref{eqPre3_5}) are satisfied, and (\ref{eqNCA_1})-(\ref{eqNCA_7}) hold with $s, a, \gamma_1, \alpha$ there replaced by $s', a', \gamma'_1, \alpha'$ or $s'', a'', \gamma_1'', \alpha''$ respectively.
The choice of $a'$ and $\alpha'$ and the choice of $a''$ and $\alpha''$ can be made independently and analogously. We always require $a'$ and $a''$ to be close to $a$ and in particular $0< a', a''< 1$.
By (\ref{eqPre3_2}), conditions (\ref{eqNCA_1}), (\ref{eqNCA_5}) and (\ref{eqNCA_7}) always hold with $s, a, \gamma_1, \alpha$ there replaced by $s', a', \gamma'_1, \alpha'$ or $s'', a'', \gamma_1'', \alpha''$ respectively.
By (\ref{eqPre3_2}), we have
\[
a'(\gamma_2+\mu)+(1-a')(\gamma_3+\beta)-(\gamma_1'+\alpha')=a'.
\]
By the above requirement on $a'$ and $a''$, we have (\ref{eqNCA_6_2}) with $s, a, \gamma_1, \alpha$ there replaced by $s', a', \gamma'_1, \alpha'$ respectively. Similarly, we have (\ref{eqNCA_6_2}) with $s, a, \gamma_1, \alpha$ there replaced by $s'', a'', \gamma_1'', \alpha''$ respectively.
By (\ref{eqPre3_2}), we have
\[
\frac{1}{s'}+\frac{\gamma_1'+\alpha'}{n}=a'(\frac{1}{p}+\frac{\gamma_2+\mu-1}{n})+(1-a')(\frac{1}{q}+\frac{\gamma_3+\beta}{n}).
\]
By (\ref{eqNCA_3}) and (\ref{eqNCA_5}), the right hand side of the above is strictly positive when $a'=a$. Thus as long as we choose $a'$ close enough to $a$, we have $1/s'+(\gamma_1'+\alpha')/n>0$, and therefore (\ref{eqNCA_3}) holds with $s, a, \gamma_1, \alpha$ there replaced by $s', a', \gamma'_1, \alpha'$ respectively. Similarly, we have (\ref{eqNCA_3}) with $s, a, \gamma_1, \alpha$ there replaced by $s'', a'', \gamma_1'', \alpha''$ respectively, as long as we choose $a''$ close enough to $a$.
Moreover, by the assumption $1/s>a/p+(1-a)/q$ and the definition of $s'$ and $s''$ in (\ref{eqPre3_2}), we have that (\ref{eqPre3_3}) hold as long as $a'$ and $a''$ are close enough to $a$.
By (\ref{eqNCA_7}), (\ref{eqNCA_5}) and the assumption that $1/s>a/p+(1-a)/q$, we have $1/p+(\gamma_2+\mu-1)/n\ne 1/q+(\gamma_3+\beta)/n$. For (\ref{eqPre3_4}) to hold, we only need to require
\begin{equation*}
\begin{split}
& 0< a'<a<a''< 1, \quad \textrm{ if } \frac{1}{p}+\frac{\gamma_2+\mu-1}{n}>\frac{1}{q}+\frac{\gamma_3+\beta}{n},\\
&1> a'>a>a''> 0, \quad \textrm{ if } \frac{1}{p}+\frac{\gamma_2+\mu-1}{n}<\frac{1}{q}+\frac{\gamma_3+\beta}{n}.
\end{split}
\end{equation*}
It remains to show that we can further require $a', a'', \alpha', \alpha''$ to satisfy additional properties, such that (\ref{eqPre3_5}) is satisfied, and (\ref{eqNCA_2}), (\ref{eqNCA_6_1}) and (\ref{eqNCA_6_3}) hold with $s, a, \gamma_1, \alpha$ there replaced by $s', a', \gamma'_1, \alpha'$ or $s'', a'', \gamma_1'', \alpha''$ respectively.
By the definition of $1/s'$ and $1/s''$ in (\ref{eqPre3_2}), equation (\ref{eqPre3_5}) holds provided
\begin{equation}\label{eqPre3_20}
\alpha'<G(a'), \quad \alpha''<G(a''),
\end{equation}
where $G(\theta)=(n-1)(1/s-\theta/p-(1-\theta)/q)+\alpha$.
By the definition of $1/s'$ and $1/s''$ in (\ref{eqPre3_2}), equation (\ref{eqNCA_2}) holds with $s, a, \gamma_1, \alpha$ there replaced by $s', a', \gamma'_1, \alpha'$ or $s'', a'', \gamma_1'', \alpha''$ respectively, provided
\begin{equation}\label{eqPre3_21}
\alpha'>F_1(a'), \quad \alpha''>F_1(a''),
\end{equation}
where $F_1(\theta)=-(n-1)(\theta/p+(1-\theta)/q)$.
By the definition of $\gamma_1'+\alpha'$ and $\gamma_1''+\alpha''$ in (\ref{eqPre3_2}),
equation (\ref{eqNCA_6_1}) holds with $s, a, \gamma_1, \alpha$ there replaced by $s', a', \gamma'_1, \alpha'$ or $s'', a'', \gamma_1'', \alpha''$ respectively, provided
\begin{equation}\label{eqPre3_22}
\alpha'>F_2(a'), \quad \alpha''>F_2(a''),
\end{equation}
where $F_2(\theta)=\theta(\mu-1)+(1-\theta)\beta$.
By the definition of $1/s'$ and $1/s''$ in (\ref{eqPre3_2}), equation (\ref{eqNCA_6_3}) holds with $s, a, \gamma_1, \alpha$ there replaced by $s', a', \gamma'_1, \alpha'$ or $s'', a'', \gamma_1'', \alpha''$ respectively, provided (\ref{eqPre3_22}). So we only need to further require $a', a'', \alpha', \alpha''$ to satisfy (\ref{eqPre3_20})-(\ref{eqPre3_22}).
By (\ref{eqNCA_2}), $1/s+\alpha/(n-1)>0$, so we have $F_1(a)<G(a)$. By (\ref{eqNCA_6_3}), (\ref{eqNCA_7}) and the assumption that $1/s>a/p+(1-a)/q$, the inequality in (\ref{eqNCA_6_3}) is strict, and therefore $F_2(a)<G(a)$. So as long as $a'$ and $a''$ are close enough to $a$, we can find $\alpha'$ and $\alpha''$ to satisfy
(\ref{eqPre3_20})-(\ref{eqPre3_22}).
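Explicitly, a direct computation from the definitions of $G$, $F_1$ and $F_2$ gives
\[
G(a)-F_1(a)=(n-1)\Big(\frac{1}{s}+\frac{\alpha}{n-1}\Big), \qquad
G(a)-F_2(a)=(n-1)\left(\frac{1}{s}+\frac{\alpha}{n-1}-a\Big(\frac{1}{p}+\frac{\mu-1}{n-1}\Big)-(1-a)\Big(\frac{1}{q}+\frac{\beta}{n-1}\Big)\right),
\]
both of which are positive, by (\ref{eqNCA_2}) and by the strict form of (\ref{eqNCA_6_3}), respectively.
Lemma \ref{lemPre_3} is proved.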
\end{proof}
\bigskip
\noindent\emph{Proof of the sufficiency part of Theorem \ref{thm_main} when $1/s>a/p+(1-a)/q$.}
\medskip
In this case, by (\ref{eqNCA_5}) and (\ref{eqNCA_7}), we must have $1/p+(\gamma_2+\mu-1)/n\ne 1/q+(\gamma_3+\beta)/n$. So there exist some constants $C$ and $\lambda$, such that $\hat{u}=Cu(\lambda x)$ satisfies $\||x|^{\gamma_2}|x'|^{\mu}\nabla \hat{u}\|_{L^p(\mathbb{R}^n)}=1$ and $\||x|^{\gamma_3}|x'|^{\beta}\hat{u}\|_{L^{q}(\mathbb{R}^n)}=1$.
By Theorem \ref{thm_main} for $1/s\le a/p+(1-a)/q$, (\ref{eqNC}) holds for $\hat{u}$ and all $s, p, q, a, \gamma_1, \gamma_2, \gamma_3, \alpha, \mu, \beta$ satisfying (\ref{eqNCA_1})-(\ref{eqNCA_7}) and $1/s\le a/p+(1-a)/q$.
Then by Lemma \ref{lemPre_3}, when $1/s> a/p+(1-a)/q$, we have
\[
\||x|^{\gamma_1}|x'|^{\alpha}\hat{u}\|_{L^s(\mathbb{R}^n)}\le C\||x|^{\gamma_2}|x'|^{\mu}\nabla \hat{u}\|_{L^p(\mathbb{R}^n)}^a\||x|^{\gamma_3}|x'|^{\beta}\hat{u}\|_{L^{q}(\mathbb{R}^n)}^{1-a}.
\]
Then (\ref{eqNC}) holds for $u$ by scaling.
\qed
\bigskip
The sufficiency part of Theorem \ref{thm_main} is proved.
\section{Two variants of Theorem \ref{thm_main} and Theorem A}\label{sec_6}
We have the following variant of Theorem \ref{thm_main}.
\begin{thm}\label{cor_in}
Let $n\ge 2$, $0\le r_1< r_2\le \infty$, $\epsilon>0$, $K:=K_{r_1, r_2, \epsilon}$ be defined as (\ref{eqR_e}), and $s, p, q, a, \gamma_1, \gamma_2, \gamma_3, \alpha, \mu$ and $\beta$ be real numbers satisfying (\ref{eqNCA_1})-(\ref{eqNCA_7}).
Then
there exists some positive constant $C$, depending only on $\epsilon, s, p, q, a, \gamma_1, \gamma_2, \gamma_3, \alpha, \mu$ and $\beta$,
such that for all $u\in C^{1}(\bar{K})$ with $u=0$ on $\partial K$,
\begin{equation}\label{eqNC_in}
\||x|^{\gamma_1}|x'|^{\alpha}u\|_{L^s(K)} \le C\||x|^{\gamma_2}|x'|^{\mu}\nabla u\|^a_{L^p(K)}\||x|^{\gamma_3}|x'|^{\beta}u\|_{L^q(K)}^{1-a}.
\end{equation}
Furthermore, on any compact set in the parameter space in which (\ref{eqNCA_1})-(\ref{eqNCA_3}) hold, the constant $C$ is bounded.
\end{thm}
\begin{proof}
Extend $u$ to be zero outside $K$. When $1/s\le a/p+(1-a)/q$, applying Lemma \ref{lemQ_4} to $u$ with $K_{\epsilon_1}=K$ and with $K_{\epsilon_2}$ a larger cone containing $K$, we obtain (\ref{eqNC_in}).
Now we consider the case when $1/s> a/p+(1-a)/q$.
By (\ref{eqNCA_5}) and (\ref{eqNCA_7}), we have $1/p+(\gamma_2+\mu-1)/n\ne 1/q+(\gamma_3+\beta)/n$. So there exist some constants $C$ and $\lambda$, such that $\hat{u}=Cu(\lambda x)$ satisfies $\||x|^{\gamma_2}|x'|^{\mu}\nabla \hat{u}\|_{L^p(K)}=1$ and $\||x|^{\gamma_3}|x'|^{\beta}\hat{u}\|_{L^{q}(K)}=1$.
Since we have proved (\ref{eqNC_in}) when $1/s= a/p+(1-a)/q$, we can apply Lemma \ref{lemPre_3} to $\hat{u}$ to obtain, for some $0\le a', a''\le 1$, that
\[
\begin{split}
& \quad \||x|^{\gamma_1}|x'|^{\alpha}\hat{u}\|_{L^s(K)}\\
& \le C\Big(\||x|^{\gamma_2}|x'|^{\mu}\nabla \hat{u}\|_{L^p(K)}^{a'}\||x|^{\gamma_3}|x'|^{\beta}\hat{u}\|_{L^q(K)}^{1-a'}+\||x|^{\gamma_2}|x'|^{\mu}\nabla \hat{u}\|_{L^p(K)}^{a''}\||x|^{\gamma_3}|x'|^{\beta}\hat{u}\|_{L^q(K)}^{1-a''}\Big)\\
& =2C\||x|^{\gamma_2}|x'|^{\mu}\nabla \hat{u}\|_{L^p(K)}^a\||x|^{\gamma_3}|x'|^{\beta}\hat{u}\|_{L^{q}(K)}^{1-a}.
\end{split}
\]
Inequality (\ref{eqNC_in}) follows.
\end{proof}
The following is a variant of Theorem A.
\begin{thm}\label{thm6_1}
Let $n\ge 1$, $R>0$, $B_R=\{x\in\mathbb{R}^n\mid |x|\le R\}$,
$0<\lambda<\infty$. Assume
$s, p, q, a, \gamma_1, \gamma_2, \gamma_3$ satisfy (\ref{eqNCA_1}), (\ref{eqNCB_2})-(\ref{eqNCB_5}). Moreover, assume $1\le p\le \infty$ if $1\le \lambda<\infty$, and $\max\{1, (n-1)/(1+(n-1)\lambda)\}\le p\le \infty$ if $0<\lambda<1$.
Then there exists some positive constant $C$, depending only on $s, p, q, a, \gamma_1, \gamma_2, \gamma_3$ and $\lambda$,
such that for every nonnegative $w\in W^{1, 1}(B_R)$, $v:=w-(\mathop{\ooalign{$\int$\cr$-$}}_{\partial B_{|x|}}w^{1/\lambda})^{\lambda}$ satisfies
\begin{equation}\label{eqPre3_1_0'}
\||x|^{\gamma_1}v\|_{L^s(B_R)}\le C\||x|^{\gamma_2}\nabla v\|_{L^p(B_R)}^a\||x|^{\gamma_3}v \|_{L^q(B_R)}^{1-a}.
\end{equation}
Furthermore, on any compact set in the parameter space in which (\ref{eqNCA_1}) and (\ref{eqNCB_2}) hold,
the constant $C$ is bounded.
\end{thm}
\begin{proof}
Let $C$ denote a positive constant depending only on $s, p, q, a, \gamma_1, \gamma_2, \gamma_3$ and $\lambda$, which may vary from line to line.
For $a=0$, we deduce from (\ref{eqNCB_3}), (\ref{eqNCB_4}) and (\ref{eqNCB_5}) that $\gamma_1=\gamma_3$ and $s=q$, thus (\ref{eqPre3_1_0'}) is obvious. In the rest of the proof we assume $0<a\le 1$.
\bigskip
\noindent\textbf{Case 1.}
$1/s\le a/p+(1-a)/q$.
\medskip
By scaling, using (\ref{eqNCB_3}), we may assume $R=1$. Let
\[
R_k:=\Big\{x\in B_1\ \Big|\ \frac{1}{2^k}\le |x|\le \frac{1}{2^{k-1}}\Big\}, \quad k\in \mathbb{Z}.
\]
We first prove that
\begin{equation}\label{eqPre3_1_2_0}
\||x|^{\gamma_1} v\|_{L^{s}(R_k)}\le C\||x|^{\gamma_2} \nabla v\|^{a}_{L^{p}(R_k)}\||x|^{\gamma_3} v\|^{1-a}_{L^{q}(R_k)}, \quad k\in \mathbb{Z}.
\end{equation}
By scaling, using (\ref{eqNCB_3}), we only need to prove (\ref{eqPre3_1_2_0}) for $k=1$.
Let $\bar{s}, \bar{q}$ and $\bar{a}$ be defined as in (\ref{eqPre0_0}). Since $a>0$, we have $\bar{a}>0$. Define $t\in (0, \infty]$ by
\[
\frac{1}{t}=\frac{1}{\bar{a}}(\frac{1}{\bar{s}}-\frac{1-\bar{a}}{\bar{q}}), \quad \textrm{ if }\frac{1}{\bar{s}}-\frac{1-\bar{a}}{\bar{q}}>0,
\]
and $t=\infty$ if $1/\bar{s}-(1-\bar{a})/\bar{q}=0$.
In the current case we have $1/s\le a/p+(1-a)/q$. By (\ref{eqNCB_3}) and (\ref{eqNCB_4}), we have $1/s\ge a(1/p-1/n)+(1-a)/q$. By the same arguments as in part (g) in the proof of Lemma \ref{lemPre2_1},
we have $1/\bar{s}\le \bar{a}+(1-\bar{a})/\bar{q}$ and $1/\bar{s}-(1-\bar{a})/\bar{q}\ge \bar{a}(n-1)/n\ge 0$, and therefore $(n-1)/n\le 1/t\le 1$.
We have proved that
\[
1\le t\le \frac{n}{n-1} \textrm{ for }n\ge 2\quad \textrm{ and }1\le t\le \infty\textrm{ for }n=1.
\]
By H\"older's inequality, provided $1/\bar{s}=\bar{a}/t+(1-\bar{a})/\bar{q}$, $0< \bar{a}\le 1$, $1\le t\le \infty$, and $\bar{q}>0$, we have,
\begin{equation}\label{eqPre3_1_2_1}
\|v\|_{L^s(R_1)}^{s/\bar{s}}=\||v|^{s/\bar{s}}\|_{L^{\bar{s}}(R_1)}\le \||v|^{s/\bar{s}}\|^{\bar{a}}_{L^t(R_1)}\||v|^{s/\bar{s}}\|^{1-\bar{a}}_{L^{\bar{q}}(R_1)}= \||v|^{s/\bar{s}}\|^{\bar{a}}_{L^t(R_1)}\|v\|^{(1-\bar{a})s/\bar{s}}_{L^{q}(R_1)},
\end{equation}
where we have used the definition of $\bar{q}$ in the last step.
Since $1\le t\le n/(n-1)$, we apply H\"older's inequality and Sobolev's inequality to obtain
\begin{equation}\label{eqPre3_1_2_3}
\begin{split}
\||v|^{s/\bar{s}}\|_{L^t(R_1)} & \le C\||v|^{s/\bar{s}}\|_{L^{\frac{n}{n-1}}(R_1)}\le C(\||v|^{s/\bar{s}}\|_{L^1(R_1)}+\|\nabla |v|^{s/\bar{s}}\|_{L^1(R_1)})\\
& \le C\left(\| |v|^{s/\bar{s}-1}\|_{L^{p'}(R_1)}\| v\|_{L^p(R_1)}+\| |v|^{s/\bar{s}-1}\|_{L^{p'}(R_1)}\| \nabla v\|_{L^p(R_1)}\right)\\
& \le C \|v\|_{L^s(R_1)}^{s/p'}(\| v\|_{L^p(R_1)}+\| \nabla v\|_{L^p(R_1)}),
\end{split}
\end{equation}
where in the last step we have used the fact that $(s/\bar{s}-1)p'=s$ from the definition of $\bar{s}$.
Since $p\ge 1$ when $1\le \lambda<\infty$ and $\max\{1, (n-1)/(1+(n-1)\lambda)\}\le p\le \infty$ when $0<\lambda<1$, we have, by Theorem \ref{thm1-new} applied on the spheres $\partial B_{\rho}$, that
\[
\|v\|_{L^p(R_1)}^p\le \int_{1/2}^{1} \|v\|_{L^p(\partial B_{\rho})}^pd\rho\le C\int_{1/2}^{1} \|\nabla_{tan}v\|_{L^p(\partial B_{\rho})}^pd\rho \le C\|\nabla v\|^p_{L^p(R_1)}.
\]
By (\ref{eqPre3_1_2_1}), (\ref{eqPre3_1_2_3}) and the above, we have
\begin{equation*}
\|v\|^{s/\bar{s}}_{L^{s}(R_1)}\le \||v|^{s/\bar{s}}\|^{\bar{a}}_{L^t(R_1)}\|v\|^{(1-\bar{a})s/\bar{s}}_{L^{q}(R_1)}
\le C\|v\|_{L^s(R_1)}^{\bar{a}s/p'}\|\nabla v\|^{\bar{a}}_{L^p(R_1)}\|v\|^{(1-\bar{a})s/\bar{s}}_{L^q(R_1)}.
\end{equation*}
Using the definition of $\bar{a}$ and $\bar{s}$ in (\ref{eqPre0_0}), we deduce from the above that
\[
\|v\|_{L^{s}(R_1)}
\le C\|\nabla v\|^{a}_{L^p(R_1)}\|v\|^{1-a}_{L^q(R_1)}.
\]
We have proved (\ref{eqPre3_1_2_0}) for $k=1$.
Since we are in the case $as/p+(1-a)s/q\ge 1$, we can use (\ref{eqD_C}) to deduce from (\ref{eqPre3_1_2_0}) that
\[
\begin{split}
\sum_{k=-\infty}^{\infty}\int_{R_k}||x|^{\gamma_1}v|^{s}dx
& \le C \sum_{k=-\infty}^{\infty}\left(\int_{R_k}||x|^{\gamma_2}\nabla v|^pdx\right)^{as/p}\left(\int_{R_k}||x|^{\gamma_3}v|^qdx\right)^{(1-a)s/q}\\
& \le C \left(\sum_{k=-\infty}^{\infty}\int_{R_k}||x|^{\gamma_2}\nabla v|^pdx\right)^{as/p}\left(\sum_{k=-\infty}^{\infty} \int_{R_k}||x|^{\gamma_3}v|^qdx\right)^{(1-a)s/q}\\
& \le C\||x|^{\gamma_2} \nabla v\|^{as}_{L^{p}(B_1)}\||x|^{\gamma_3} v\|^{(1-a)s}_{L^{q}(B_1)}.
\end{split}
\]
We have proved (\ref{eqPre3_1_0'}) in Case 1.
\bigskip
\noindent\textbf{Case 2.} $1/s> a/p+(1-a)/q$.
\medskip
By (\ref{eqNCB_3}) and (\ref{eqNCB_5}), we have $1/p+(\gamma_2-1)/n\ne 1/q+\gamma_3/n$. Thus there exist some positive constants $\lambda_1$ and $\lambda_2$, such that $\hat{v}(x)=\lambda_1v(\lambda_2 x)$, which is of the same form on the ball $\hat{B}:=B_{R/\lambda_2}$, satisfies $\||x|^{\gamma_2}\nabla \hat{v}\|_{L^p(\hat{B})}=1$ and $\||x|^{\gamma_3}\hat{v}\|_{L^{q}(\hat{B})}=1$.
By Case 1, arguing as in the paragraph below (\ref{eqthmQ_2_1}), we can find $0\le a', a''\le 1$ such that
\[
\begin{split}
\||x|^{\gamma_1}\hat{v}\|_{L^s(\hat{B})} & \le C\left(\||x|^{\gamma_2}\nabla \hat{v}\|_{L^p(\hat{B})}^{a'}\||x|^{\gamma_3}\hat{v}\|_{L^q(\hat{B})}^{1-a'}+\||x|^{\gamma_2}\nabla \hat{v}\|_{L^p(\hat{B})}^{a''}\||x|^{\gamma_3}\hat{v}\|_{L^q(\hat{B})}^{1-a''}\right)\\
& =2C\||x|^{\gamma_2}\nabla \hat{v}\|_{L^p(\hat{B})}^a\||x|^{\gamma_3}\hat{v}\|_{L^{q}(\hat{B})}^{1-a}.
\end{split}
\]
Inequality (\ref{eqPre3_1_0'}) follows by scaling back to $v$.
\end{proof}
\section{Appendix: some facts about the parameters}\label{sec_A}
In this section, we prove some properties of the parameters $s, p, q, a, \gamma_1, \gamma_2, \gamma_3, \alpha, \mu$ and $\beta$ which we use in earlier sections.
Let $s, p, q, a, \gamma_1, \gamma_2, \gamma_3, \alpha, \mu$ and $\beta$ be real numbers satisfying (\ref{eqNCA_1}), and define $\bar{s}, \bar{p}, \bar{q}, \bar{a}$, $\bar{\gamma}_1, \bar{\gamma}_2, \bar{\gamma}_3, \bar{\alpha}, \bar{\mu}$ and $ \bar{\beta}$ by
\begin{equation}\label{eqPre0_0}
\begin{split}
& \frac{1}{\bar{s}}=\frac{1}{s}+\frac{1}{p'}, \quad \bar{p}=1, \quad \frac{1}{\bar{q}}=\frac{s}{q\bar{s}},
\quad \bar{a}=\frac{as}{(1-a)\bar{s}+as}, \\
& \bar{\gamma}_1=\frac{\gamma_1s}{\bar{s}}, \quad \bar{\gamma}_2=\frac{\gamma_1s}{p'}+\gamma_2, \quad \bar{\gamma}_3=\frac{\gamma_3s}{\bar{s}}, \\
& \bar{\alpha}=\frac{\alpha s}{\bar{s}}, \quad
\bar{\mu}=\frac{\alpha s}{p'}+\mu, \quad \bar{\beta}=\frac{\beta s}{\bar{s}},
\end{split}
\end{equation}
where $1/p+1/p'=1$. Clearly, $0<\bar{s}< s$.
\begin{lem}\label{lemPre2_1}
(i) If $n\ge 1$, $s, p, q, a, \gamma_1, \gamma_2$ and $\gamma_3$ satisfy (\ref{eqNCA_1}), (\ref{eqNCB_2})-(\ref{eqNCB_5}), then $\bar{s}, \bar{p}, \bar{q}, \bar{a}, \bar{\gamma}_1, \bar{\gamma}_2$ and $\bar{\gamma}_3$ also satisfy (\ref{eqNCA_1}) and (\ref{eqNCB_2})-(\ref{eqNCB_5}).
(ii) If $n\ge 2$, $s, p, q, a, \gamma_1, \gamma_2, \gamma_3, \alpha, \mu$ and $\beta$ satisfy (\ref{eqNCA_1})-(\ref{eqNCA_7}), then
$\bar{s}, \bar{p}, \bar{q}, \bar{a}, \bar{\gamma}_1$, $\bar{\gamma}_2, \bar{\gamma}_3, \bar{\alpha}, \bar{\mu}$ and $\bar{\beta}$ also satisfy (\ref{eqNCA_1})-(\ref{eqNCA_7}).
(iii) Assume (\ref{eqNCA_1}) holds. Then $1/s\le a/p+(1-a)/q$ if and only if $1/\bar{s}\le \bar{a}/\bar{p}+(1-\bar{a})/\bar{q}$, and $1/s\ge a(1/p-1/n)+(1-a)/q$ if and only if $1/\bar{s}\ge \bar{a}(1-1/n)+(1-\bar{a})/\bar{q}$.
\end{lem}
\begin{proof}
For convenience, denote $\Lambda=(s, p, q, a, \gamma_1, \gamma_2, \gamma_3, \alpha, \mu, \beta)$ and $\bar{\Lambda}=(\bar{s}, \bar{p}, \bar{q}, \bar{a}, \bar{\gamma}_1$, $\bar{\gamma}_2, \bar{\gamma}_3, \bar{\alpha}, \bar{\mu}, \bar{\beta})$.
By (\ref{eqPre0_0}), it is clear that $\bar{\Lambda}$ satisfies (\ref{eqNCA_1}).
Now we prove the following statements (a)-(h), which imply (i)-(iii).
(a) If $n\ge 2$ and (\ref{eqNCA_2}) holds for $\Lambda$, then (\ref{eqNCA_2}) also holds for $\bar{\Lambda}$.
This follows from
\begin{equation}\label{eqPre1_5}
\begin{split}
& \frac{1}{\bar{s}}+\frac{\bar{\alpha}}{n-1}=\frac{s}{\bar{s}}\Big(\frac{1}{s}+\frac{\alpha}{n-1}\Big), \\
& \frac{1}{\bar{p}}+\frac{\bar{\mu}}{n-1}=\frac{1}{p}+\frac{\mu}{n-1}+\frac{s}{p'}\Big(\frac{1}{s}+\frac{\alpha}{n-1}\Big), \\
& \frac{1}{\bar{q}}+\frac{\bar{\beta}}{n-1}=\frac{s}{\bar{s}}\Big(\frac{1}{q}+\frac{\beta}{n-1}\Big).
\end{split}
\end{equation}
(b) If $n\ge 1$ and (\ref{eqNCA_3}) holds for $\Lambda$, then (\ref{eqNCA_3}) also holds for $\bar{\Lambda}$.
This follows from
\begin{equation}\label{eqPre1_6}
\begin{split}
& \frac{1}{\bar{s}}+\frac{\bar{\gamma}_1+\bar{\alpha}}{n}=\frac{s}{\bar{s}}\Big(\frac{1}{s}+\frac{\gamma_1+\alpha}{n}\Big), \\
& \frac{1}{\bar{p}}+\frac{\bar{\gamma}_2+\bar{\mu}}{n}=\frac{1}{p}+\frac{\gamma_2+\mu}{n}+\frac{s}{p'}\Big(\frac{1}{s}+\frac{\gamma_1+\alpha}{n}\Big), \\
& \frac{1}{\bar{q}}+\frac{\bar{\gamma}_3+\bar{\beta}}{n}=\frac{s}{\bar{s}}\Big(\frac{1}{q}+\frac{\gamma_3+\beta}{n}\Big).
\end{split}
\end{equation}
(c) Let $n\ge 1$, then $\Lambda$ satisfies (\ref{eqNCA_5}) if and only if $\bar{\Lambda}$ satisfies (\ref{eqNCA_5}).
Using the definition of $\bar{s}$, $\bar{\gamma}_1$ and $\bar{\alpha}$, we have
\begin{equation}\label{eqPre1_V_1}
\begin{split}
\frac{1}{\bar{s}}+\frac{\bar{\gamma}_1+\bar{\alpha}}{n} & =\frac{s}{\bar{s}}\Big(1+\frac{as}{p'}\Big)^{-1}\Big(1+\frac{as}{p'}\Big)\Big(\frac{1}{s}+\frac{\gamma_1+\alpha}{n}\Big)\\
& =\frac{s}{\bar{s}}\Big(1-a+\frac{s}{\bar{s}}a\Big)^{-1}\left(\frac{1}{s}+\frac{\gamma_1+\alpha}{n}+a\frac{s}{p'}\Big(\frac{1}{s}+\frac{\gamma_1+\alpha}{n}\Big)\right).
\end{split}
\end{equation}
By the definition of $\bar{\gamma}_2$, $\bar{\gamma}_3$, $\bar{\mu}$ and $\bar{\beta}$, we have
\begin{equation}\label{eqPre1_V_2}
\begin{split}
& \bar{a}\Big(1+\frac{\bar{\gamma}_2+\bar{\mu}-1}{n}\Big)+(1-\bar{a})\Big(\frac{1}{\bar{q}}+\frac{\bar{\gamma}_3+\bar{\beta}}{n}\Big)\\
&=
\frac{s}{\bar{s}}\Big(1-a+\frac{s}{\bar{s}}a\Big)^{-1}\left(a\Big(\frac{1}{p}+\frac{\gamma_2+\mu-1}{n}\Big)+(1-a)\Big(\frac{1}{q}+\frac{\gamma_3+\beta}{n}\Big)+a\frac{s}{p'}\Big(\frac{1}{s}+\frac{\gamma_1+ \alpha}{n}\Big)\right).
\end{split}
\end{equation}
By (\ref{eqPre1_V_1}) and (\ref{eqPre1_V_2}), we have (c).
(d) Let $n\ge 1$, then $\Lambda$ satisfies (\ref{eqNCA_6_1}) if and only if $\bar{\Lambda}$ satisfies (\ref{eqNCA_6_1}).
This follows from the fact
\[
\begin{split}
\bar{a}\bar{\gamma}_2+(1-\bar{a})\bar{\gamma}_3-\bar{\gamma}_1 & =\frac{s}{(1-a)\bar{s}+as}\left(a\gamma_2+(1-a)\gamma_3+a\gamma_1\Big(\frac{s}{\bar{s}}-1\Big)\right)- \frac{s}{\bar{s}}\gamma_1\\
& =\frac{s}{(1-a)\bar{s}+as}(a\gamma_2+(1-a)\gamma_3-\gamma_1).
\end{split}
\]
(e) $\Lambda$ satisfies (\ref{eqNCA_6_2}) if and only if $\bar{\Lambda}$ satisfies (\ref{eqNCA_6_2}).
This is because of
\[
\begin{split}
&\bar{a}(\bar{\gamma}_2+\bar{\mu})+(1-\bar{a})(\bar{\gamma}_3+\bar{\beta})-(\bar{\gamma}_1+\bar{\alpha})\\
& =\frac{s}{(1-a)\bar{s}+as}\left(a(\gamma_2+\mu)+(1-a)(\gamma_3+\beta)+a(\gamma_1+\alpha)\Big(\frac{s}{\bar{s}}-1\Big)\right)- \frac{s}{\bar{s}}(\gamma_1+\alpha)\\
& =\frac{s}{(1-a)\bar{s}+as}(a(\gamma_2+\mu)+(1-a)(\gamma_3+\beta)-(\gamma_1+\alpha)).
\end{split}
\]
(f) Let $n\ge 2$, then $\Lambda$ satisfies (\ref{eqNCA_6_3}) if and only if $\bar{\Lambda}$ satisfies (\ref{eqNCA_6_3}).
Using the definition of $\bar{s}$ and $\bar{\alpha}$, we have
\begin{equation}\label{eqPre1_V_3}
\begin{split}
\frac{1}{\bar{s}}+\frac{\bar{\alpha}}{n-1}
&=\frac{s}{\bar{s}}\Big(1+\frac{as}{p'}\Big)^{-1}\Big(1+\frac{as}{p'}\Big)\Big(\frac{1}{s}+\frac{\alpha}{n-1}\Big)\\
& =\frac{s}{\bar{s}}\Big(1-a+\frac{s}{\bar{s}}a\Big)^{-1}\left(\frac{1}{s}+\frac{\alpha}{n-1}+a\frac{s}{p'}\Big(\frac{1}{s}+\frac{\alpha}{n-1}\Big)\right).
\end{split}
\end{equation}
By the definition of $\bar{\mu}$ and $\bar{\beta}$, and the second and third equations in (\ref{eqPre1_5}), we have
\begin{equation}\label{eqPre1_V_4}
\begin{split}
& \bar{a}\left(\frac{1}{\bar{p}}+\frac{\bar{\mu}-1}{n-1}\right)+(1-\bar{a})\Big(\frac{1}{\bar{q}}+\frac{\bar{\beta}}{n-1}\Big)\\
& = \frac{s}{\bar{s}}\Big(1-a+\frac{s}{\bar{s}}a\Big)^{-1}\left(a\Big(\frac{1}{p}+\frac{\mu-1}{n-1}\Big)+(1-a)\Big(\frac{1}{q}+\frac{\beta}{n-1}\Big)+a\frac{s}{p'}\Big(\frac{1}{s}+\frac{\alpha}{n-1}\Big)\right).
\end{split}
\end{equation}
So (f) follows from (\ref{eqPre1_V_3}) and (\ref{eqPre1_V_4}).
(g) $1/s\le a/p+(1-a)/q$ if and only if $1/\bar{s}\le \bar{a}+(1-\bar{a})/\bar{q}$, and $1/s\ge a(1/p-1/n)+(1-a)/q$ if and only if $1/\bar{s}\ge \bar{a}(1-1/n)+(1-\bar{a})/\bar{q}$.
The first part follows from
\begin{equation}\label{eqPre2_1}
\begin{split}
\bar{a}+ \frac{1-\bar{a}}{\bar{q}}-\frac{1}{\bar{s}}
& =\frac{s}{\bar{s}(1-a+as/\bar{s})}\Big(a+\frac{1-a}{q}-\frac{1-a+as/\bar{s}}{s}\Big)\\
& =\frac{s}{\bar{s}(1-a+as/\bar{s})}\Big(a+\frac{1-a}{q}-\frac{1+as/p'}{s}\Big)\\
& =\frac{s}{\bar{s}(1-a+as/\bar{s})}\Big(\frac{a}{p}+\frac{1-a}{q}-\frac{1}{s}\Big).
\end{split}
\end{equation}
The second part follows from (\ref{eqPre2_1}) and the definition of $\bar{a}$, through the following computation
\[
\bar{a}\Big(1-\frac{1}{n}\Big)+ \frac{1-\bar{a}}{\bar{q}}-\frac{1}{\bar{s}} = \bar{a}+ \frac{1-\bar{a}}{\bar{q}}-\frac{1}{\bar{s}}-\frac{\bar{a}}{n}
=\frac{s}{\bar{s}(1-a+as/\bar{s})}\left(a\Big(\frac{1}{p}-\frac{1}{n}\Big)+\frac{1-a}{q}-\frac{1}{s}\right).
\]
(h) $\Lambda$ satisfies (\ref{eqNCA_7}) if and only if $\bar{\Lambda}$ satisfies (\ref{eqNCA_7}), and $\Lambda$ satisfies (\ref{eqNCB_5}) if and only if $\bar{\Lambda}$ satisfies (\ref{eqNCB_5}).
By the definition of $\bar{\Lambda}$, we have $a=0$ if and only if $\bar{a}=0$, and $a=1$ if and only if $\bar{a}=1$. By (\ref{eqPre1_6}) and using $s/\bar{s}=1+s/p'$, we have
\[
\frac{1}{p}+\frac{\gamma_2+\mu-1}{n}=\frac{1}{q}+\frac{\gamma_3+\beta}{n}=\frac{1}{s}+\frac{\gamma_1+\alpha}{n}
\]
if and only if
\[
\frac{1}{\bar{p}}+\frac{\bar{\gamma}_2+\bar{\mu}-1}{n}=\frac{1}{\bar{q}}+\frac{\bar{\gamma}_3+\bar{\beta}}{n}=\frac{1}{\bar{s}}+\frac{\bar{\gamma}_1+\bar{\alpha}}{n}.
\]
By (\ref{eqPre1_V_3}) and (\ref{eqPre1_V_4}),
\[
\frac{1}{s}+\frac{\alpha}{n-1}=a\Big(\frac{1}{p}+\frac{\mu-1}{n-1}\Big)+(1-a)\Big(\frac{1}{q}+\frac{\beta}{n-1}\Big)
\]
if and only if
\[
\frac{1}{\bar{s}}+\frac{\bar{\alpha}}{n-1}=\bar{a}\left(1+\frac{\bar{\mu}-1}{n-1}\right)+(1-\bar{a})\Big(\frac{1}{\bar{q}}+\frac{\bar{\beta}}{n-1}\Big).
\]
(h) then follows from the above in view of the first part of (g).
Now (i) follows from the fact that $\bar{\Lambda}$ satisfies (\ref{eqNCA_1}), (b)-(d) and (h). (ii) follows from the fact that $\bar{\Lambda}$ satisfies (\ref{eqNCA_1}), (a)-(e) and (h). (iii) follows from (g).
\end{proof}
\begin{lem}\label{lemPre2_2}
Let $n\ge 2$, $s, p, q, a, \gamma_1, \gamma_2, \gamma_3,
\alpha, \beta$ and $\mu$ satisfy (\ref{eqNCA_1})-(\ref{eqNCA_7}) with $\gamma_1=\gamma_2=\gamma_3=0$, $a>0$, $p=1$, and $1/s-(1-a)/q<1$. Then the parameters $\hat{s}, \hat{p}, \hat{q}, \hat{a}, \hat{\gamma}_1, \hat{\gamma}_2, \hat{\gamma}_3$, defined by (\ref{eqPre2_2_0}), satisfy (\ref{eqNCA_1}), (\ref{eqNCB_2})-(\ref{eqNCB_4}) with $n$ replaced by $n-1$, and $1/\hat{s}\le \hat{a}/\hat{p}+(1-\hat{a})/\hat{q}$.
\end{lem}
\begin{proof}
Assume $s, p, q, a, \gamma_1, \gamma_2, \gamma_3,
\alpha, \beta, \mu$ satisfy (\ref{eqNCA_1})-(\ref{eqNCA_7}) with $\gamma_1=\gamma_2=\gamma_3=0$. For convenience, denote $\Lambda=(s, p, q, a, \gamma_1, \gamma_2, \gamma_3,
\alpha, \beta, \mu)$ and $\widehat{\Lambda}=(\hat{s}, \hat{p}, \hat{q}, \hat{a}, \hat{\gamma}_1, \hat{\gamma}_2, \hat{\gamma}_3)$.
Let $b$ and $\lambda$ be defined by (\ref{eqPre2_2_b}). By the arguments below (\ref{eqPre2_2_b}), we have $0<\hat{a}<1$ and $0\le \lambda\le 1$. By this and the definition of $\hat{s}, \hat{p}, \hat{q}, \hat{a}$, (\ref{eqNCA_1}) holds for $\widehat{\Lambda}$.
Also, by the definition (\ref{eqPre2_2_0}) of $\widehat{\Lambda}$ and (\ref{eqNCA_2}) for $\Lambda$, (\ref{eqNCB_2}) holds for $\widehat{\Lambda}$ with $n$ replaced by $n-1$.
Next, by the definition of $\widehat{\Lambda}$, $\lambda$ and $b$, we have
\begin{equation}\label{eqPre2_2_3}
\begin{split}
& \hat{a}\Big(1+\frac{\hat{\gamma}_2-1}{n-1}\Big)+(1-\hat{a})\left(\frac{1}{\hat{q}}+\frac{\hat{\gamma}_3}{n-1} \right)\\
&= \hat{a}\Big(1+\frac{\mu-1}{n-1}\Big)+(1-\hat{a})\left(\lambda \Big(1+\frac{\mu}{n-1}\Big)+(1-\lambda)\Big(\frac{1}{q}+\frac{\beta}{n-1}\Big)\right)\\
&= ab\Big(1+\frac{\mu-1}{n-1}\Big)+(1-ab)\left(\frac{a(1-b)}{1-ab}\Big(1+\frac{\mu}{n-1}\Big)+\frac{1-a}{1-ab}\Big(\frac{1}{q}+\frac{\beta}{n-1}\Big)\right)\\
& =a\left(b\Big(1+\frac{\mu-1}{n-1}\Big)+(1-b)\Big(1+\frac{\mu}{n-1}\Big)\right)+(1-a)\Big(\frac{1}{q}+\frac{\beta}{n-1}\Big)\\
&=a\Big(1+\frac{\mu}{n-1}-\frac{b}{n-1}\Big)+(1-a)\Big(\frac{1}{q}+\frac{\beta}{n-1}\Big)\\
&=\frac{n}{n-1}\left(a\Big(1+\frac{\mu-1}{n}\Big)+(1-a)\Big(\frac{1}{q}+\frac{\beta}{n}\Big)-\frac{1}{ns}\right),
\end{split}
\end{equation}
where the definition of $b$ is used in the last step.
On the other hand, we have, by using the definition of $\widehat{\Lambda}$ and $\lambda$, that
\[
\frac{1}{\hat{s}}+\frac{\hat{\gamma}_1}{n-1}= \frac{1}{s}+\frac{\alpha}{n-1}.
\]
Since $\Lambda$ satisfies (\ref{eqNCA_5}), by the above and (\ref{eqPre2_2_3}), we see that (\ref{eqNCB_3}) holds for $\widehat{\Lambda}$.
Using the definitions of $\widehat{\Lambda}$ and $\lambda$, together with the fact that $\gamma_1=\gamma_2=\gamma_3=0$, we have
\[
\hat{a}\hat{\gamma}_2 +(1-\hat{a})\hat{\gamma}_3-\hat{\gamma}_1=a\mu +(1-a)\beta-\alpha=a(\mu+\gamma_2) +(1-a)(\beta+\gamma_3)-(\alpha+\gamma_1).
\]
In view of (\ref{eqNCA_6_2}) for $\Lambda$, (\ref{eqNCB_4}) holds for $\widehat{\Lambda}$.
Finally, by the definition of $\lambda$ and $\hat{a}$, we have
\[
\hat{a}+\frac{1-\hat{a}}{\hat{q}}=a+\frac{1-a}{q}.
\]
In view of (i) and the assumption $p=1$, we have $1/\hat{s}\le \hat{a}/\hat{p}+(1-\hat{a})/\hat{q}$.
\end{proof}
\section{Introduction}
Imaging techniques are used in many diverse
areas such as geophysics, astronomy, medical diagnostics, and police
work. The goal of imaging varies widely from determining the density
of the Earth's interior to reading license plates from
blurred photographs in order to issue speeding fines. My own interest
in the problem stems from seeing an image of Betelgeuse, a~red
supergiant $\sim 600$~ly away that has irregular features changing
with time. The~image was obtained using intensity
interferometry such as used in nuclear physics~\cite{boa90}.
After seeing this, the natural question was whether images could be
obtained for nuclear reactions. Needless to say, answers to such
questions tend to be negative.
In a~typical imaging problem, the~measurements yield
a~function (in our case, the correlation function $C$) which is related
in a~linear fashion to the function of interest (in our case, the
source function $S$):
\begin{equation}
\label{CKS}
C(q) = \int dr \, K(q,r) \, S(r) \, .
\end{equation}
In other words, given the data for $C$ with errors, the task of
imaging is the determination of the source function~$S$.
Generally, this requires an~inversion of the kernel~$K$. The~more
singular the kernel~$K$, the~better the chances for a~successful
restoration of~$S$.
In reactions with many particles in the final state, there is a~linear
relation of the type (\ref{CKS})
between the two-particle cross section $d^6 \sigma / d^3 \vec{ p}_1 \, d^3 \vec{ p}_2$
and the unnormalized relative distribution of emission points~$S'$ for
two particles. Interference and interaction terms between the two
particles of interest may be separated out from the general amplitude for the
reaction and described in terms of the two-particle
wavefunction~$\Phi^{(-)}$ (see Fig.~\ref{source}).
\begin{figure}
\begin{center}
\includegraphics[angle=0.0,
width=0.72\textwidth]{source.eps}
\end{center}
\caption{Separation of the interference and final-state
interactions, in terms of the two-particle wavefunction, from
the amplitude for the reaction.}
\label{source}
\end{figure}
The~rest of the amplitude squared, integrated in the cross
section over unobserved particles, yields the unnormalized Wigner
function~$S'$ for the distribution of emission points written
here in the two-particle frame:
\begin{equation}
{d^6 \sigma \over d^3\vec{ p}_{1} \, d^3\vec{ p}_{2}} =
\int
d^3\vec{ r} \,
S'_{\vec{P}}(\vec{ r}) \,
|\Phi_{\vec{ p}_1 - \vec{ p}_2}^{(-)} (\vec{ r})|^2 \, .
\label{2PS}
\end{equation}
The vector $\vec{ r}$ is the relative separation between
emission
points and the equation refers to the case of particles with equal masses.
The size of the source~$S'$ is of the order of the spatial extent of
the reaction. The~possibility of probing structures of this size arises
when the wave-function modulus squared, $|\Phi^{(-)}|^2$, possesses
pronounced structures, either due to interaction or symmetrization,
that vary rapidly with the relative momentum, typically at low
momenta.
The two-particle cross section can be normalized to the
single-particle cross sections to yield the correlation
function~$C$:
\begin{equation}
C(\vec{ p}_1 - \vec{ p}_2) =
{ {d^6 \sigma \over d^3\vec{ p}_{1} \, d^3\vec{ p}_{2}} \over
{d^3 \sigma \over d^3\vec{ p}_{1}} \, {d^3 \sigma \over
d^3\vec{ p}_{2}}} = \int
d^3\vec{ r} \,
S_{\vec{ P}} (\vec{ r}) \,
|\Phi_{\vec{ p}_1 - \vec{ p}_2}^{(-)} (\vec{ r})|^2 \, .
\label{CPS}
\end{equation}
The source~$S$ is normalized to~1 as, for large
relative momenta, $C$ is close to~1 and $|\Phi|^2$ in
(\ref{CPS}) averages to~1:
\begin{equation}
\int d^3\vec{ r}
\, S_{\vec{P}} (\vec{ r}) = 1 \, .
\end{equation}
Depending on how the particles are emitted from a~reaction, the
source may have different features. For a~prompt emission, we expect
the~source to be compact and generally isotropic. In the
case of prolonged emission, we expect the~source to be
elongated along the pair momentum, as the emitting system moves
in the two-particle cm. Finally, in the case of secondary
decays, we expect the~source may have an~extended tail.
In the following, I shall discuss restoring the
sources in heavy-ion reactions and extracting information
from the images \cite{bro97}.
\section{Imaging in the Reactions}
The interesting part of the correlation function is its deviation from~1
so we rewrite~(\ref{CPS})
\begin{eqnarray}
\nonumber
\Rftn{P}{q} =
\Corrftn{P}{q}-1
&=& \int \dn{3}{r} \left(\wfsquare{{q}}{{r}}-1\right)
\Source{P}{r} \\
&=& \int \dn{3}{r} K(\vec{q},\vec{r})
\, \Source{P}{r} \, .
\label{RKS}
\end{eqnarray}
From~(\ref{RKS}), it is apparent that to make
the imaging possible $\wfsquare{q}{r}$ must deviate from~1
either on account of symmetrization or interaction within
the pair. The angle-averaged version of (\ref{RKS}) is
\begin{equation}
{\cal R}_{P}({q}) = 4 \pi
\int dr \, r^2 \,
K_0 ({q},{r}) \,
S^0_P(r)
\label{RKS0}
\end{equation}
where $K_0$ is the angle-averaged kernel.
Let us first take the case of identical bosons with negligible
interaction, such as neutral pions or gammas. The two-particle
wavefunction is then
\begin{equation}
\wftn{q}{r}=\frac{1}{\sqrt{2}}\left( e^{i \vec{q}\cdot\vec{r}}
+e^{-i \vec{q}\cdot\vec{r}}\right) \, .
\end{equation}
The interference term causes $\wfsquare{q}{r}$ to deviate from 1 and
\begin{equation}
K(\vec{q},\vec{r}) = \cos{(2 \vec{q} \cdot \vec{r})}.
\end{equation}
In this case, the source is an inverse Fourier cosine-transform
of ${\cal R}_{P}$.
Also, the angle averaged source can be determined from a~Fourier
transformation (FT) of the angle-averaged~$C$ as the averaged
kernel is
\begin{equation}
K^0 (q,r) =
\frac{\sin{(2 q r)}}{2 q r} \, .
\end{equation}
While neutral pion and gamma correlation functions are difficult to measure,
charged pion correlation functions are not. The charged pion correlations
are often corrected approximately for the pion Coulomb interactions
allowing for the use of FT in the pion source determination.
In Figure~\ref{corpi}, I show one such corrected correlation function for
negative pions from the Au + Au reaction at 10.8~GeV/nucleon
from the measurements by the E877 collaboration at
AGS~\cite{bar97}.
\begin{figure}
\begin{center}
\includegraphics[angle=-90.0,
width=0.68\textwidth]{corpi1.eps}
\end{center}
\caption{Gamow-corrected
$\pi^-\pi^-$ correlation function for Au + Au reaction at
10.8~GeV/nucleon obtained by the E877
collaboration~\protect\cite{bar97}.}
\label{corpi}
\end{figure}
In Figure~\ref{pisor}, I show the relative distribution of emission
points for negative pions obtained through the FT of the
correlation function in Fig.~\ref{corpi}.
\begin{figure}
\begin{center}
\includegraphics[angle=0.0,
width=0.66\textwidth]{piminus1.eps}
\end{center}
\caption{Relative source function for negative pions from FT
of the correlation function in Fig.~\protect\ref{corpi}.}
\label{pisor}
\end{figure}
The~FT has been cut off at $q_{max} = 200$~MeV/c, giving
a~resolution in the source of $\Delta r \gapproxeq 1/(2 \,
q_{max}) = 2.0$~fm. The~data spacing limits the largest
distances that can be studied with the FT to $r_{max} \lapproxeq 1/(2
\, \Delta q) = 20$~fm. As you see, the~relative source has
a~roughly Gaussian shape.
\section{Perils of Inversion}
For many particle pairs, such as proton pairs, interactions
cannot be ignored and the straightforward FT cannot be used.
Indeed, even in the charged-pion case, one might want to avoid the
approximate Coulomb correction. In lieu of this, we can simply
discretize the source and find the source that minimizes the $\chi^2$.
This procedure could work for any particle pair.
With measurements of $C$ at relative momenta $\lbrace q_i
\rbrace$ and assuming the source is constant over intervals
$\lbrace \Delta r_j \rbrace$, we can write Eq.~(\ref{RKS0}) as
\begin{eqnarray}
{\cal R}_i =
\Clm{0}{0}{q_i}-1 & = & \sum_j 4\pi \, \Delta r_j
\,
r_j^2 \, K_0 (q_i, r_j) \, S(r_j)\\ &
\equiv & \sum_j K_{ij} \, S_j \, .
\end{eqnarray}
The values $\lbrace S_j \rbrace$ can be varied to minimize the $\chi^2$:
\begin{equation}
\chi^2 = \sum_j \frac{({\cal R}^{th}(q_j)-
{\cal R}^{exp}(q_j))^2}{\sigma_j^2 } \, .
\end{equation}
Derivatives of the $\chi^2$ with respect to~$S$ give linear algebraic
equations for~$S$:
\begin{equation}
\sum_{i} {1 \over \sigma_i^2} \Big(\sum_j K_{ij} \, S_j - {\cal
R}_i^{exp}\Big) \, K_{ik} = 0 \, ,
\end{equation}
with the solution in a schematic matrix form:
\begin{equation}
S = (K^\top K)^{-1} \, K^\top \, {\cal R}^{exp} \, .
\label{SKR}
\end{equation}
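In numerical terms, Eq.~(\ref{SKR}) is an ordinary weighted linear least-squares problem. The following is a minimal sketch, with the $1/\sigma_i$ weights absorbed into $K$ and ${\cal R}^{exp}$; the arrays \verb|q|, \verb|r|, \verb|dr|, \verb|R|, \verb|sigma| and the kernel function \verb|K0| are placeholders, not the actual analysis inputs.
\begin{verbatim}
# Sketch of the direct chi^2 inversion of R_i = sum_j K_ij S_j.
import numpy as np

def invert_source(q, r, dr, K0, R, sigma):
    # K_ij = 4 pi dr_j r_j^2 K0(q_i, r_j)
    K = 4.0*np.pi*dr*r**2*K0(q[:, None], r[None, :])
    Kw, Rw = K/sigma[:, None], R/sigma       # absorb the 1/sigma weights
    KtK = Kw.T @ Kw
    S = np.linalg.solve(KtK, Kw.T @ Rw)      # S = (K^T K)^(-1) K^T R
    dS = np.sqrt(np.diag(np.linalg.inv(KtK)))  # Delta^2 S_j = (K^T K)^(-1)_jj
    return S, dS
\end{verbatim}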
There is an issue in the above: how do we discretize the source?
The~FT used before suggests fixed-size bins, e.g.~$\Delta r = 2$~fm.
However fixed size bins may not be ideal for all situations as I
will illustrate using Fig.~\ref{gong}. This figure shows the $pp$
correlation function from the measurements~\cite{gon91} of the
$^{14}$N + $^{27}$Al reaction at~75~MeV/nucleon, in different
intervals of total pair momentum.
\begin{figure}
\begin{center}
\includegraphics[angle=90.,width=3.0in]{fig3g.eps}
\end{center}
\caption{
Two-proton correlation function for the $^{14}$N + $^{27}$Al
reaction at~75~MeV/nucleon from the measurements of
Ref.~\protect\cite{gon91} for
three gates of total momentum imposed on protons emitted in the
vicinity of $\theta_{\rm lab} = 25^\circ$. }
\label{gong}
\end{figure}
The different regions in relative momentum are associated with
different physics of the correlation function. For example, the peak
around $q \sim 20$~MeV/c is associated with the $^{1}S_0$
resonance of the wavefunction with a characteristic scale of the order of a~fermi --
this gives access to a~short range structure of the source.
On the other hand, the decline in
the correlation function at low momenta is associated with the
Coulomb repulsion that dominates at large proton separation and
gives access to the source up to (20--30)~fm or more,
depending on how low momenta are available for~$C$. Should we
continue at the resolution of~$\Delta r \gapproxeq 2$~fm up to
such distances? No! At some point there would not be enough
data points to determine the required number of source values!
Somehow, we should let the resolution vary, depending on the scale
at which we look.
A further issue is that the errors on the source may explode in certain
cases. The errors are given by the diagonal of the inverse of the squared kernel:
\begin{equation}
\Delta^2 S_j = (K^\top \, K)^{-1}_{jj} \, .
\end{equation}
The square of the kernel may be diagonalized:
\begin{equation}
(K^\top \, K)_{ij} \equiv \sum_k {1 \over \sigma_k^2} K_{ki}
\, K_{kj} = \sum_\alpha \lambda_\alpha \, u_i^\alpha \,
u_j^\alpha \, ,
\end{equation}
where $\lbrace u^\alpha \rbrace$ are orthonormal and
$\lambda_\alpha \ge 0$; the number of vectors equals the
number of $r$ points. The errors can be expressed as
\begin{equation}
\Delta^2 S_j = \sum_\alpha {1 \over \lambda_\alpha} \,
u_j^\alpha \, u_j^\alpha \, .
\end{equation}
You can see from the last equation that the errors blow up,
and the inversion problem becomes unstable,
if one or more $\lambda$'s approach zero. This
must happen when $K$ maps a~region to zero (remember $K =
|\Phi|^2 - 1$), or when $K$ is too smooth and/or too high a
resolution is demanded. A~$\lambda$ close to 0 may also be
hit by accident.
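The onset of this instability is easy to monitor numerically: diagonalizing $K^\top K$ and inspecting its smallest eigenvalues shows directly how the error estimates diverge. A minimal sketch, for a kernel matrix \verb|Kw| that already contains the $1/\sigma$ weights, is:
\begin{verbatim}
# Sketch: error blow-up from near-zero eigenvalues of K^T K.
import numpy as np

def source_errors(Kw):
    lam, u = np.linalg.eigh(Kw.T @ Kw)       # lam_alpha >= 0, u orthonormal
    dS2 = (u**2/lam[None, :]).sum(axis=1)    # sum_a (u_j^a)^2 / lam_a
    return np.sqrt(dS2), lam.min()           # errors diverge as min(lam) -> 0
\end{verbatim}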
The~stability issue is illustrated with Figs.~\ref{simcor}
and~\ref{nocon2}. Figure~\ref{simcor} shows correlation
functions from model sources with small errors added on.
\begin{figure}
\begin{center}
\includegraphics[angle=0.,width=0.72\textwidth]{cormod.eps}
\end{center}
\caption{
The solid line represents the correlation function from a Gaussian
model source while the dashed lines represent the
correlation functions from a source with an extended tail.
The~points represent values of~$C$ with errors that are
typical for the measurements in Ref.~\protect\cite{gon91}
(Fig.~\protect\ref{gong}). }
\label{simcor}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[angle=0.,
width=0.72\textwidth]{sorc.eps}
\end{center}
\caption{
The solid histogram is the relative $pp$ source function $S$
restored from the simulated correlation function in
Fig.~\protect\ref{simcor} from the Gaussian model source
(open symbols there).
The~dashed line is the original source function that we used to
generate the correlation function. We employed
fixed-size intervals of $\Delta r = 2$~fm and we imposed
no constraints on~$S$.
}
\label{nocon2}
\end{figure}
Figure~\ref{nocon2} shows the source in 7~fixed-size intervals
of $\Delta r = 2$~fm. This source was restored following
Eq.~(\ref{SKR}), from the correlation function indicated in
Fig.~\ref{simcor}. The~errors in this case far exceed
the original source function. Every second value of
the restored source is negative.
A vast literature, extending back nearly 75 years, exists on
stability in inversion. One of the first researchers to recognize
the difficulty, Hadamard, in 1923~\cite{had23}, argued that
potentially unstable problems should not be tackled. A~major
step forward was made by Tikhonov~\cite{tik63}, who showed
that placing constraints on the solution can have a~dramatic
stabilizing effect. In determining the source from data, we
developed a~method of optimized discretization for the source
which yields stable results even without any constraints~\cite{bro97}.
In our method, we first concentrate on the errors. We
use the $q$-values for which the correlation function is
determined and the errors of $\lbrace \sigma_i \rbrace$,
but we disregard the values~$\lbrace C_i \rbrace$. We
optimize the binning for the source function to minimize
expected errors relative to a~rough guess on the source
$S^{mod}$:
\begin{equation}
\sum_j {\Delta S_j \over S_j^{mod}} = \sum_{j} {1 \over
S_j^{mod}} \left( \sum_\alpha
{1 \over \lambda_\alpha} \,
u_j^\alpha \, u_j^\alpha \right)^{1/2} \, .
\end{equation}
Only afterwards do we use $\lbrace C_i \rbrace$ to determine the
source values $S_j$ with the optimized binning. This
consistently yields small errors, and the introduction of
constraints may reduce those errors further.
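A schematic version of this optimization, with a hypothetical model source \verb|S_mod| and a brute-force search over candidate bin edges, could look as follows; it uses only the measured $q$ points and the errors $\sigma$, never the values $\lbrace C_i \rbrace$ themselves.
\begin{verbatim}
# Sketch of the optimized discretization; all inputs are placeholders.
import numpy as np
from itertools import combinations

def expected_cost(edges, q, sigma, K0, S_mod):
    r, dr = 0.5*(edges[:-1] + edges[1:]), np.diff(edges)
    K = 4*np.pi*dr*r**2*K0(q[:, None], r[None, :])/sigma[:, None]
    dS = np.sqrt(np.diag(np.linalg.inv(K.T @ K)))
    return (dS/S_mod(r)).sum()               # sum_j Delta S_j / S_j^mod

def best_binning(q, sigma, K0, S_mod, grid, nbins):
    best_c, best_edges = np.inf, None
    for interior in combinations(grid[1:-1], nbins - 1):
        edges = np.concatenate(([grid[0]], list(interior), [grid[-1]]))
        c = expected_cost(edges, q, sigma, K0, S_mod)
        if c < best_c:
            best_c, best_edges = c, edges
    return best_edges
\end{verbatim}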
The~proton source imaged using the optimized binning from the
correlation function in Fig.~\ref{simcor} is shown
in~Fig.~\ref{nocono}.
\begin{figure}
\begin{center}
\includegraphics[angle=0.,
width=0.72\textwidth]{sorm1.eps}
\end{center}
\caption{
Relative pp source function~$S$ restored (solid histogram)
through the optimized discretization from the correlation
function in Fig.~\protect\ref{simcor} (open symbols there),
together with the original source
function (dashed
line).
}
\label{nocono}
\end{figure}
\section{pp Sources}
Having tested the method, we apply it to the~75~MeV/nucleon
$^{14}$N + $^{27}$Al data by Gong {\em et al.}~\cite{gon91} shown in
Fig.~\ref{gong}. In terms of the radial wavefunctions $g$, the
angle-averaged $pp$ kernel is
\begin{equation}
K_0 (q,r)=\frac{1}{2}\sum_{j s \ell \ell'} (2j+1) (g_{j
s}^{\ell \ell'} (r))^2-1 \, .
\end{equation}
We calculate the wavefunctions by solving radial Schr\"odinger
equations with REID93~\cite{sto94} and Coulomb potentials.
The~sources restored in the three total momentum intervals are
shown in Fig.~\ref{pps}, together with sources obtained
directly from a~Boltzmann equation model~\cite{dan95} (BEM)
for heavy-ion reactions.
\begin{figure}
\begin{center}
\includegraphics[width=4.55in]{sorpp.eps}
\end{center}
\caption{
Relative source
for protons emitted from the $^{14}$N + $^{27}$Al
reaction at 75~MeV/nucleon, in the vicinity of $\theta_{\rm
lab} = 25^\circ$, within three intervals of total momentum
of 270--390~MeV/c (left panel), 450--780~MeV/c
(center panel), and 840--1230~MeV/c (right panel). Solid and
dotted lines
indicate, respectively, the source values extracted from
data~\protect\cite{gon91} and obtained within the
Boltzmann-equation calculation.
}
\label{pps}
\end{figure}
The sources become more focussed around $r=0$ as total momentum
increases. Now, the~value of the source as $r \rightarrow 0$ gives
information on the average density at freeze-out, on space-averaged
phase-space density, and on the
entropy per nucleon. The~freeze-out density may be
estimated from
\begin{equation}
\rho_{freeze} \simeq N_{\rm part} \times
\Sourcenovec{}{r\rightarrow 0} \, ,
\end{equation}
where $N_{\rm part}$ is the participant multiplicity. Using the
intermediate momentum range, we find
\begin{equation}
\rho_{freeze}
\approx
(17)(0.0015\,{\rm fm}^{-3}) \approx 0.026\,{\rm fm}^{-3} \approx 0.16 \, \rho_0 \, ,
\end{equation}
where $\rho_0 \approx 0.16\,{\rm fm}^{-3}$ is the normal nuclear density.
The space-averaged phase-space density may be estimated from
\begin{equation}
f(\vec{p})\approx\frac{(2\pi)^3}{2s+1}\spectra{P}
\Sourcenovec{\vec{P}}{{r}\rightarrow 0} \, .
\end{equation}
Using the intermediate momentum range we
get $\langle f \rangle \approx 0.23$ for this reaction.
The transport model reproduces the low-$r$ features of the
sources, including the increased focusing as the total momentum
increases. The~average freeze-out density obtained directly
within the model is $\rho_{freeze} \simeq 0.14 \rho_0$. Despite
the agreement at low~$r$ between the data and the model, we see
important discrepancies at large~$r$. I discuss these next.
An important quantity characterizing images is the portion
of the source below a~certain distance (e.g.\ the maximum
$r$ imaged):
\begin{equation}
\lambda(r_{max})=\int_{r<r_{max}} d^3 r \, S(\vec{r}) \, .
\label{lambda}
\end{equation}
If $r_{max}\rightarrow \infty$, then $\lambda$ approaches
unity. A value of $\lambda < 1$ signals that some of the
strength of~$S$ lies outside of the imaged region. The imaged
region is limited in practice by the available information on
details of~$C$ at very-low~$q$.
We can expect pronounced effects for secondary
decays or for long source lifetimes. If some particles
stem from decays of long-lived resonances,
they may be emitted far from any other
particles and contribute to $S$ at $r > r_{max}$.
Table~\ref{lpp} gives the integrals of the imaged sources
together with the integrals of the sources from BEM over the
same spatial region.
\begin{table}
\begin{center}
\begin{tabular}{|cr@{$\pm$}lcc|}\hline
\multicolumn{1}{|c}{$P$-Range} &
\multicolumn{3}{c}{$\lambda(r_{max})$} &
\multicolumn{1}{c|}{$r_{max}$} \\ \cline{2-4}
\multicolumn{1}{|c}{[MeV/c]} &
\multicolumn{2}{c}{restored} &
\multicolumn{1}{c}{BEM} &
\multicolumn{1}{c|}{[fm]} \\ \hline
270-390 & 0.69 & 0.15 & 0.98 & 20.0 \\
450-780 & 0.574 & 0.053 & 0.91 & 18.8 \\
840-1230 & 0.87 & 0.14 & 0.88 & 20.8 \\\hline
\end{tabular}
\end{center}
\caption{Integrals of sources from data and BEM in the three
intervals of total momentum.}
\label{lpp}
\end{table}
Significant strength is missing from the imaged sources in the
low and intermediate momentum intervals. BEM agrees with data
in the highest momentum interval but not in the two
lower-momentum intervals. In BEM there is no intermediate mass
fragment (IMF) production. The~IMFs might be produced in
excited states and, by decaying, contribute protons with low
momenta spread out over large spatial distances. Information
on this possibility can be obtained by examining the IMF
correlation functions.
\section{IMF Sources}
Because of the large charges ($Z \ge 3$), the~kernel in the case
of IMFs is dominated by Coulomb repulsion. With many
partial waves contributing, the kernel approaches the classical
limit~\cite{kim91}:
\begin{equation}
K_0 (q,r)=\theta(r-r_c) (1-r_c/r)^{1/2}-1 \, ,
\end{equation}
where
$r_c=2\mu Z_1 Z_2 e^2/q^2$ is the distance of closest
approach. There are no IMF correlation data available for the
same reaction used to measure the pp correlation data, so we use
data within the same beam energy range, i.e.\ the
$^{84}$Kr + $^{197}$Au data at 35, 55, and 70~MeV/nucleon
of Hamilton {\em et al.}~\cite{ham96}.
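For reference, this classical kernel is elementary to evaluate numerically. A small sketch with the $\hbar c$ conversion made explicit is given below; the reduced mass and charges are left as inputs.
\begin{verbatim}
# Classical Coulomb kernel for IMF pairs:
#   K0(q,r) = theta(r - rc) sqrt(1 - rc/r) - 1,  rc = 2 mu Z1 Z2 e^2/q^2
# Units: q in MeV/c, r in fm, mu in MeV/c^2; e^2 = alpha hbar-c.
import numpy as np

HBARC = 197.327              # MeV fm
ALPHA = 1.0/137.036          # fine-structure constant

def K0_classical(q, r, mu, Z1, Z2):
    rc = 2.0*mu*Z1*Z2*ALPHA*HBARC/q**2        # closest approach, in fm
    inside = np.clip(1.0 - rc/r, 0.0, None)
    return np.where(r > rc, np.sqrt(inside), 0.0) - 1.0
\end{verbatim}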
The extracted relative IMF sources are shown in Fig.~\ref{IMF}.
\begin{figure}
\begin{center}
\includegraphics[totalheight=3.3in]{fig9.eps}
\end{center}
\caption{
Relative source for IMFs emitted from
central
$^{84}$Kr + $^{197}$Au reactions from the data of
Ref.~\protect\cite{ham96} at 35 (dotted line), 55~(dashed line), and
70~MeV/nucleon (solid line). The insert shows the source
multiplied by~$r^2$. In both plots, the full image extends out to $90$~fm.
}
\label{IMF}
\end{figure}
The source integrals for the IMF sources are given in
Table~\ref{IMFt}. Interestingly, we are nearly capable of restoring
the complete IMF sources.
\begin{table}
\begin{center}
\begin{tabular}{|cr@{$\pm$}lr@{$\pm$}l|}\hline
\multicolumn{1}{|c}{Beam Energy} &
\multicolumn{2}{c}{$\lambda(90 \,{\rm fm})$} &
\multicolumn{2}{c|}{$\lambda(20 \, {\rm fm})$} \\
\multicolumn{1}{|c}{[MeV/A]} &
\multicolumn{2}{c}{ } &
\multicolumn{2}{c|}{ } \\ \hline
35 & 0.96 & 0.07 & 0.72 & 0.04 \\
55 & 0.97 & 0.06 & 0.78 & 0.03 \\
70 & 0.99 & 0.05 & 0.79 & 0.03 \\\hline
\end{tabular}
\end{center}
\caption{Comparison of the integrals of the midrapidity IMF
source function,
$\lambda(r_{max})$,
in central $^{84}$Kr
+ $^{197}$Au reactions at three beam energies,
for different truncation points, $r_{max}$.
The restored sources use the data of Ref.~\protect\cite{ham96}.}
\label{IMFt}
\end{table}
For the relative distances that are accessible using the pp
correlations ($\sim 20$~fm) we find only (70--80)\% of the IMF
sources. This is comparable to what we see for the lowest-momentum
pp source but above the intermediate-momentum proton source.
We should mention that we cannot expect complete quantitative
agreement, even if the data were from the
same reaction and pertained to the same particle-velocity
range. This is due partly to the fact that more protons than final
IMFs can stem from secondary decays.
\section{$\pi^-$ vs. $K^+$ Sources}
We end our discussion of imaging by presenting sources obtained
for pions and kaons from central Au + Au reactions at about
11~GeV/nucleon. This time we use the optimized discretization
technique rather than
the combination of approximate Coulomb corrections and the FT.
For both meson pairs the kernel $K_0$ is given by
a~sum over partial waves:
\begin{equation}
K_0 (q,r)=\sum_{\ell} \frac{(g_{\ell}(r))^2}{(2\ell+1)} -1 \, ,
\end{equation}
where the $g_{\ell}(r)$ stem from solving the radial Klein-Gordon
equation with strong and Coulomb interactions. In practice the
strong interactions had barely any effect on the kernels and the extracted
sources.
The data come from the reactions at 10.8~\cite{bar97} and 11.4~GeV/nucleon~\cite{von98}.
The respective $\pi^-$ and $K^+$ sources are displayed in Fig.~\ref{pikso}.
\begin{figure}
\begin{center}
\includegraphics[totalheight=2.70in]{piksou1.eps}
\end{center}
\caption{
Relative sources of~$\pi^-$ (circles) and of $K^+$ (triangles)
extracted
from central Au + Au data at
11.4~GeV/nucleon~\protect\cite{von98}, for $\pi^-$ and $K^+$,
and at 10.8~GeV/nucleon~\protect\cite{bar97}, for $\pi^-$.
Lines show Gaussian fits to the sources.
}
\label{pikso}
\end{figure}
The kaon source is far more compact than the pion source and
there are several effects that contribute to this difference.
First, kaons have lower scattering cross sections than pions,
making it easier for kaons to leave the system early. Second,
fewer kaons than pions descend from long-lived resonances.
Next, due to their higher mass, the average kaon has a lower
speed than the average pion, making
the kaons less sensitive to lifetime effects. Finally, the kaons
are more sensitive to collective motion than pions, enhancing the kaons'
space-momentum correlations.
Differences, qualitatively similar to those seen in Fig.~\ref{pikso},
in the spatial distributions of emission points for kaons and pions
were predicted long ago within RQMD by
Sullivan~{\em et al.}~\cite{sul93}. In the model, they were able to
separate
the different contributions to the source functions.
The~effects of long-lived resonances, mentioned above, are
apparent in the sources extracted from the data.
Thus,
Table~\ref{lampik} gives
\begin{table}
\begin{center}
\begin{tabular}{|cccr@{$\pm$}l|}
\hline
\multicolumn{1}{|c}{} &
\multicolumn{2}{c}{} &
\multicolumn{2}{c|}{} \\[-2.1ex]
\multicolumn{1}{|c}{} &
\multicolumn{1}{c}{$R_0$ [fm]} &
\multicolumn{1}{c}{$\bar{\lambda}$} &
\multicolumn{2}{c|}{$\lambda(35 {\rm fm})$} \\ \hline
$K^+$ (11.4 GeV/A) & 2.76 & 0.702 & 0.86 & 0.56 \\
$\pi^-$ (11.4 GeV/A) & 6.42 & 0.384 & 0.44 & 0.17 \\
$\pi^-$ (10.8 GeV/A) & 6.43 & 0.486 & 0.59 & 0.22 \\ \hline
\end{tabular}
\end{center}
\caption{
Parameters of Gaussian fits to the sources and integrals
over imaged regions for the central Au + Au reactions.
}
\label{lampik}
\end{table}
the source integrals over the imaged regions together with
parameters of the Gaussian fits to the sources,
\begin{equation}
S(r) = \frac{\bar{\lambda}}{(2\sqrt{\pi} R_0)^3} \exp{\left(-\left(
\frac{r}{2 R_0}\right)^2\right)} \, .
\end{equation}
The~errors are quite small for the fitted values. We find
$\bar{\lambda}_{\pi^-} < \bar{\lambda}_{K^+} < 1$
and~$\bar{\lambda} \lapproxeq \lambda$.
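Extracting $\bar{\lambda}$ and $R_0$ from an imaged source is a standard least-squares fit; a minimal \verb|scipy| sketch, with hypothetical data arrays \verb|r|, \verb|S| and \verb|dS| and hypothetical starting values, is:
\begin{verbatim}
# Sketch: fit the Gaussian parametrization of S(r) above.
import numpy as np
from scipy.optimize import curve_fit

def gauss_source(r, lam, R0):
    return lam/(2.0*np.sqrt(np.pi)*R0)**3*np.exp(-(r/(2.0*R0))**2)

# popt, pcov = curve_fit(gauss_source, r, S, sigma=dS, p0=[0.5, 5.0])
# lam_bar, R0 = popt
\end{verbatim}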
\section{Conclusions}
We have demonstrated that a~model-independent imaging of
reactions is possible. Specifically, we have carried out
one-dimensional
imaging of pion, kaon, proton, and IMF sources.
The~three-dimensional imaging of pion sources is in
progress. Our method of optimized discretization allows us to
investigate the sources on a~logarithmic scale up to
large distances. The sources generally contain information
on freeze-out phase-space density, entropy, spatial density,
lifetime and size of the freeze-out region, as well as
on resonance decays. The imaging gives us access to the spatial
structure required to extract that information.
\section*{Acknowledgment}
This work was partially supported by the National Science Foundation
under Grant PHY-9605207.
\section*{References}
\section{Introduction}
One of the most fascinating problems of our century is the possibility of
combining the principles of Quantum Mechanics with those of General
Relativity. The result of this combination is best known as Quantum Gravity.
However such a theory has yet to be developed, principally due to the UV
divergences that cannot be kept under control by any renormalization scheme.
J.A. Wheeler\cite{Wheeler} was the first to conjecture that fluctuations
of the metric have to appear at short distance scales. The collection of
such fluctuations gives the spacetime a kind of foam-like structure, whose
topology is constantly changing. In this foamy spacetime a fundamental
length comes into play: the Planck length. Its inverse, the Planck mass $m_p$%
, can be thought of as a natural cut-off. It is believed that in such a
spacetime, general relativity can be renormalized when a density of virtual
black holes is taken into account, coupled to $N$ fermion fields in a $%
1/N$ expansion\cite{CraneSmolin}. It is also argued that, when gravity is
coupled to $N$ conformally invariant scalar fields, the assumption that the
ground-state expectation value of the metric is flat space is false\cite
{HartleHorowitz}. However, instead of looking at gravity coupled to matter
fields, we will consider pure gravity. In this context two spherically
symmetric metrics are known which solve the equations of motion
without a cosmological constant: the Schwarzschild metric
and the Flat metric. We will focus our attention on these two metrics with
the purpose of examining the energy contribution to the vacuum fluctuation
generated by a collection of $N$ coherent wormholes. An
extension to the deSitter and the Schwarzschild-deSitter spacetime cases is
immediate. The paper is structured as follows: in section \ref{p2} we
briefly recall the results reported in Ref.\cite{Remo1}, and in section \ref{p3}
we generalize the result of section \ref{p2} to $N_w$ wormholes. We
summarize and conclude in section \ref{p4}.
\section{One wormhole approximation}
\label{p2}The reference model we will consider is an eternal black hole. The
complete manifold ${\cal M}$ can be thought of as composed of two wedges ${\cal %
M}_{+}$ and ${\cal M}_{-}$ located in the right and left sectors of a
Kruskal diagram whose spatial slices $\Sigma $ represent Einstein-Rosen
bridges with wormhole topology $S^2\times R^1$. The hypersurface $\Sigma $
is divided in two parts $\Sigma _{+}$ and $\Sigma _{-}$ by a bifurcation
two-surface $S_0$. We begin with the line element
\begin{equation}
ds^2=-N^2\left( r\right) dt^2+\frac{dr^2}{1-\frac{2m}r}+r^2\left( d\theta
^2+\sin ^2\theta d\phi ^2\right) \label{a1}
\end{equation}
and we consider the physical Hamiltonian defined on $\Sigma $%
\[
H_P=H-H_0=\frac 1{l_p^2}\int_\Sigma d^3x\left( N{\cal H}+N_i{\cal H}%
^i\right) +H_{\partial \Sigma ^{+}}+H_{\partial \Sigma ^{-}}
\]
\[
=\frac 1{l_p^2}\int_\Sigma d^3x\left( N{\cal H}+N_i{\cal H}^i\right)
\]
\begin{equation}
+\frac 2{l_p^2}\int_{S_{+}}^{}d^2x\sqrt{\sigma }\left( k-k^0\right) -\frac 2{%
l_p^2}\int_{S_{-}}d^2x\sqrt{\sigma }\left( k-k^0\right) ,
\end{equation}
where $l_p^2=16\pi G$. The volume term contains two constraints
\begin{equation}
\left\{
\begin{array}{l}
{\cal H}=G_{ijkl}\pi ^{ij}\pi ^{kl}\left( \frac{l_p^2}{\sqrt{g}}\right)
-\left( \frac{\sqrt{g}}{l_p^2}\right) R^{\left( 3\right) }=0 \\
{\cal H}^i=-2\pi _{|j}^{ij}=0
\end{array}
\right. , \label{a1a}
\end{equation}
where $G_{ijkl}=\frac 12\left( g_{ik}g_{jl}+g_{il}g_{jk}-g_{ij}g_{kl}\right)
$ and $R^{\left( 3\right) }$ denotes the scalar curvature of the surface $%
\Sigma $. By using the expression of the trace
\begin{equation}
k=-\frac 1{\sqrt{h}}\left( \sqrt{h}n^\mu \right) _{,\mu },
\end{equation}
with the normal to the boundaries defined continuously along $\Sigma $ as $%
n^\mu =\left( h^{yy}\right) ^{\frac 12}\delta _y^\mu $. The value of $k$
depends on the function $r,_y$, where we have assumed that the function $%
r,_y $ is positive for $S_{+}$ and negative for $S_{-}$. We obtain at either
boundary that
\begin{equation}
k=\frac{-2r,_y}r.
\end{equation}
The trace associated with the subtraction term is taken to be $k^0=-2/r$ for
$B_{+}$ and $k^0=2/r$ for $B_{-}$. Then the quasilocal energy with
subtraction terms included is
\begin{equation}
E_{{\rm quasilocal}}=E_{+}-E_{-}=\left( r\left[ 1-\left| r,_y\right| \right]
\right) _{y=y_{+}}-\left( r\left[ 1-\left| r,_y\right| \right] \right)
_{y=y_{-}}.
\end{equation}
Note that the total quasilocal energy is zero for boundary conditions
symmetric with respect to the bifurcation surface $S_0$ and this is the
necessary condition to obtain instability with respect to the flat space. A
little comment on the total Hamiltonian is useful before proceeding further. We are
looking at the sector of asymptotically flat metrics included in the space
of all metrics, where the Wheeler-DeWitt equation
\begin{equation}
{\cal H}\Psi =0
\end{equation}
is defined. In this sector the Schwarzschild metric and the Flat metric
satisfy the constraint equations $\left( \ref{a1a}\right) $. Here we
consider deviations from such metrics in a WKB approximation and we
calculate the expectation value following a variational approach where the
WKB functions are substituted with trial wave functionals. Then the
Hamiltonian referred to the line element $\left( \ref{a1}\right) $ is
\[
H=\int_\Sigma d^3x\left[ G_{ijkl}\pi ^{ij}\pi ^{kl}\left( \frac{l_p^2}{\sqrt{%
g}}\right) -\left( \frac{\sqrt{g}}{l_p^2}\right) R^{\left( 3\right) }\right]
.
\]
Instead of looking at perturbations on the whole manifold ${\cal M}$, we
consider perturbations at $\Sigma $ of the type $g_{ij}=\bar{g}_{ij}+h_{ij}$%
. $\bar{g}_{ij}$ is the spatial part of the background considered in eq.$%
\left( \ref{a1}\right) $. In Ref.\cite{Remo1}, we have defined $\Delta E\left(
m\right) $ as the difference of the expectation value of the Hamiltonian
approximated to second order calculated with respect to different
backgrounds which have the asymptotic flatness property. This quantity is
the natural extension to the volume term of the subtraction procedure for
boundary terms and is interpreted as the Casimir energy related to vacuum
fluctuations. Thus
\[
\Delta E\left( m\right) =E\left( m\right) -E\left( 0\right)
\]
\begin{equation}
=\frac{\left\langle \Psi \left| H^{Schw.}-H^{Flat}\right| \Psi \right\rangle
}{\left\langle \Psi |\Psi \right\rangle }+\frac{\left\langle \Psi \left|
H_{quasilocal}\right| \Psi \right\rangle }{\left\langle \Psi |\Psi
\right\rangle }.
\end{equation}
By restricting our attention to the graviton sector of the Hamiltonian
approximated to second order, hereafter referred to as $H_{|2}$, we define
\[
E_{|2}=\frac{\left\langle \Psi ^{\perp }\left| H_{|2}^1\right| \Psi ^{\perp
}\right\rangle }{\left\langle \Psi ^{\perp }|\Psi ^{\perp }\right\rangle },
\]
where
\[
\Psi ^{\perp }=\Psi \left[ h_{ij}^{\perp }\right] ={\cal N}\exp \left\{ -%
\frac 1{4l_p^2}\left[ \left\langle \left( g-\bar{g}\right) K^{-1}\left( g-%
\bar{g}\right) \right\rangle _{x,y}^{\perp }\right] \right\} .
\]
After having functionally integrated $H_{|2}$, we get
\begin{equation}
H_{|2}=\frac 1{4l_p^2}\int_\Sigma d^3x\sqrt{g}G^{ijkl}\left[ K^{-1\bot
}\left( x,x\right) _{ijkl}+\left( \triangle _2\right) _j^aK^{\bot }\left(
x,x\right) _{iakl}\right]
\end{equation}
The propagator $K^{\bot }\left( x,x\right) _{iakl}$ comes from a functional
integration and it can be represented as
\begin{equation}
K^{\bot }\left( \overrightarrow{x},\overrightarrow{y}\right) _{iakl}:=\sum_N%
\frac{h_{ia}^{\bot }\left( \overrightarrow{x}\right) h_{kl}^{\bot }\left(
\overrightarrow{y}\right) }{2\lambda _N\left( p\right) },
\end{equation}
where $h_{ia}^{\bot }\left( \overrightarrow{x}\right) $ are the
eigenfunctions of
\begin{equation}
\left( \triangle _2\right) _j^a:=-\triangle \delta _j^{a_{}^{}}+2R_j^a.
\end{equation}
This is the Lichnerowicz operator projected on $\Sigma $ acting on traceless
transverse quantum fluctuations and $\lambda _N\left( p\right) $ are
infinite variational parameters. $\triangle $ is the curved Laplacian
(Laplace-Beltrami operator) on a Schwarzschild background and $R_{j\text{ }%
}^a$ is the mixed Ricci tensor whose components are:
\begin{equation}
R_j^a=diag\left\{ \frac{-2m}{r_{}^3},\frac m{r_{}^3},\frac m{r_{}^3}\right\}
.
\end{equation}
After normalization in spin space and after a rescaling of the fields in
such a way as to absorb $l_p^2$, $E_{|2}$ becomes in momentum space
\begin{equation}
E_{|2}\left( m,\lambda \right) =\frac V{2\pi ^2}\sum_{l=0}^\infty
\sum_{i=1}^2\int_0^\infty dpp^2\left[ \lambda _i\left( p\right) +\frac{%
E_i^2\left( p,m,l\right) }{\lambda _i\left( p\right) }\right] , \label{a3}
\end{equation}
where
\begin{equation}
E_{1,2}^2\left( p,m,l\right) =p^2+\frac{l\left( l+1\right) }{r_0^2}\mp \frac{%
3m}{r_0^3}
\end{equation}
and $V$ is the volume of the system. $r_0$ is related to the minimum radius
compatible with the wormhole throat. We know that the classical minimum is
achieved when $r_0=2m$. However, it is likely that quantum processes come
into play at short distances, where the wormhole throat is defined,
introducing a {\it quantum} radius $r_0>2m$. The minimization with respect
to $\lambda $ leads to $\bar{\lambda}_i\left( p,l,m\right) =\sqrt{%
E_i^2\left( p,m,l\right) }$ and eq.$\left( \ref{a3}\right) $ becomes
\begin{equation}
E_{|2}\left( m,\lambda \right) =2\frac V{2\pi ^2}\sum_{l=0}^\infty
\sum_{i=1}^2\int_0^\infty dpp^2\sqrt{E_i^2\left( p,m,l\right) },
\end{equation}
with $p^2+\frac{l\left( l+1\right) }{r_0^2}>\frac{3m}{r_0^3}.$ Thus, in
the presence of the curved background, we get
\begin{equation}
E_{|2}\left( m\right) =\frac V{2\pi ^2}\frac 12\sum_{l=0}^\infty
\int_0^\infty dpp^2\left( \sqrt{p^2+c_{-}^2}+\sqrt{p^2+c_{+}^2}\right)
\end{equation}
where
\[
c_{\mp }^2=\frac{l\left( l+1\right) }{r_0^2}\mp \frac{3m}{r_0^3},
\]
while when we refer to the flat space, we have $m=0$ and $c^2=$ $\frac{%
l\left( l+1\right) }{r_0^2}$, with
\begin{equation}
E_{|2}\left( 0\right) =\frac V{2\pi ^2}\frac 12\sum_{l=0}^\infty
\int_0^\infty dpp^2\left( 2\sqrt{p^2+c^2}\right) .
\end{equation}
Since we are interested in the $UV$ limit, we will use a cut-off $\Lambda $
to keep the $UV$ divergence under control
\begin{equation}
\int_0^\infty \frac{dp}p\sim \int_0^{\frac \Lambda c}\frac{dx}x\sim \ln
\left( \frac \Lambda c\right) ,
\end{equation}
where $\Lambda \leq m_p.$ Note that in this context the introduction of a
cut-off at the Planck scale is quite natural if we look at a spacetime foam.
Thus $\Delta E\left( m\right) $ for high momenta becomes
\begin{equation}
\Delta E\left( m\right) \sim -\frac V{2\pi ^2}\left( \frac{3m}{r_0^3}\right)
^2\frac 1{16}\ln \left( \frac{r_0^3\Lambda ^2}{3m}\right) . \label{a4}
\end{equation}
We now look for the stationary points of $\widetilde{\Delta E}\left( m\right) =E\left(
0\right) -E\left( m\right) =-\Delta E\left( m\right) $. We obtain two values
for $m$: $m_1=0$, i.e.\ flat space, and $m_2=\Lambda ^2e^{-\frac 12}r_0^3/3$.
At $m_2$, the energy difference $\Delta E\left( m\right) $ attains its minimum, with $%
\widetilde{\Delta E}\left( m_2\right) =\frac V{64\pi ^2}\frac{\Lambda ^4}e$.
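This extremization is straightforward to reproduce symbolically; the following sketch checks the stationary point and the quoted value with \verb|sympy|.
\begin{verbatim}
# Sketch: stationary point of Delta-E-tilde(m) =
#   (V/(2 pi^2)) (3m/r0^3)^2 (1/16) ln(r0^3 Lambda^2/(3m)).
import sympy as sp

m, r0, Lam, V = sp.symbols('m r_0 Lambda V', positive=True)
dE = V/(2*sp.pi**2)*(3*m/r0**3)**2/16*sp.log(r0**3*Lam**2/(3*m))

m2 = Lam**2*r0**3*sp.exp(-sp.Rational(1, 2))/3
assert sp.simplify(sp.diff(dE, m).subs(m, m2)) == 0
assert sp.simplify(dE.subs(m, m2) - V*Lam**4/(64*sp.pi**2*sp.E)) == 0
\end{verbatim}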
Recall that $m=MG$, thus
\begin{equation}
M=G^{-1}\Lambda ^2e^{-\frac 12}r_0^3/3.
\end{equation}
When $\Lambda \rightarrow m_p$, then $r_0\rightarrow l_p.$ This means that
a Heisenberg uncertainty relation of the type $l_pm_p=1$ (in natural units)
has to be satisfied, then
\begin{equation}
M=m_p^2e^{-\frac 12}m_p^{-1}/3=\frac{m_p}{3\sqrt{e}}.
\end{equation}
\section{N$_{w}$ wormholes approximation}
\label{p3}
Consider $N_{w}$ wormholes and assume that there exists a
covering of $\Sigma $ such that $\Sigma =\cup _{i=1}^{N_{w}}\Sigma _{i}$,
with $\Sigma _{i}\cap \Sigma _{j}=\emptyset $ when $i\neq j$. Each $\Sigma
_{i}$ has the topology $S^{2}\times R^{1}$ with boundaries $\partial \Sigma
_{i}^{\pm }$ with respect to each bifurcation surface. On each surface $%
\Sigma _{i}$, quasilocal energy gives
\begin{equation}
E_{i\text{ }{\rm quasilocal}}=\frac{2}{l_{p}^{2}}\int_{S_{i+}}d^{2}x\sqrt{%
\sigma }\left( k-k^{0}\right) -\frac{2}{l_{p}^{2}}\int_{S_{i-}}d^{2}x\sqrt{%
\sigma }\left( k-k^{0}\right) ,
\end{equation}
and by using the expression of the trace
\begin{equation}
k=-\frac{1}{\sqrt{h}}\left( \sqrt{h}n^{\mu }\right) _{,\mu },
\end{equation}
we obtain at either boundary that
\begin{equation}
k=\frac{-2r,_{y}}{r},
\end{equation}
where we have assumed that the function $r,_{y}$ is positive for $S_{i+}$
and negative for $S_{i-}$. The trace associated with the subtraction term is
taken to be $k^{0}=-2/r$ for $B_{i+}$ and $k^{0}=2/r$ for $B_{i-}$. Here the
quasilocal energy with subtraction terms included is
\begin{equation}
E_{i\text{ }{\rm quasilocal}}=E_{i+}-E_{i-}=\left( r\left[ 1-\left|
r,_{y}\right| \right] \right) _{y=y_{i+}}-\left( r\left[ 1-\left|
r,_{y}\right| \right] \right) _{y=y_{i-}}.
\end{equation}
Note that the total quasilocal energy is zero for boundary conditions
symmetric with respect to {\it each} bifurcation surface $S_{0,i}$. We are
interested in a large number of wormholes, each of them contributing a
Hamiltonian of the type $H_{|2}$. If the number of wormholes is $N_{w}$, we
obtain (semiclassically, i.e., without self-interactions)
\begin{equation}
H_{tot}^{N_{w}}=\underbrace{H^{1}+H^{2}+\ldots +H^{N_{w}}}_{N_{w}\ {\rm terms}}.
\end{equation}
Thus the total energy for the collection is
\[
E_{|2}^{tot}=N_{w}H_{|2}.
\]
The same happens for the trial wave functional, which is the product of $%
N_{w} $ single-wormhole trial wave functionals. Thus
\[
\Psi _{tot}^{\perp }=\Psi _{1}^{\perp }\otimes \Psi _{2}^{\perp }\otimes
\ldots \ldots \Psi _{N_{w}}^{\perp }={\cal N}\exp N_{w}\left\{ -\frac{1}{%
4l_{p}^{2}}\left[ \left\langle \left( g-\bar{g}\right) K^{-1}\left( g-\bar{g}%
\right) \right\rangle _{x,y}^{\perp }\right] \right\}
\]
\[
={\cal N}\exp \left\{ -\frac{1}{4}\left[ \left\langle \left( g-\bar{g}%
\right) K^{-1}\left( g-\bar{g}\right) \right\rangle _{x,y}^{\perp }\right]
\right\} ,
\]
where we have rescaled the fluctuations $h=g-\bar{g}$ in such a way as to
absorb $N_{w}/l_{p}^{2}.$ Of course, if we want the trial wave functionals to
be independent of one another, the boundaries $\partial \Sigma ^{\pm }$
have to shrink as the number of wormholes $N_{w}$ grows;
otherwise overlapping terms could be produced. Thus, for $N_{w}$ wormholes,
we obtain
\[
H^{tot}=N_{w}H=\int_{\Sigma }d^{3}x\left[ G_{ijkl}\pi ^{ij}\pi ^{kl}\left(
N_{w}\frac{l_{p}^{2}}{\sqrt{g}}\right) -\left( N_{w}\frac{\sqrt{g}}{l_{p}^{2}%
}\right) R^{\left( 3\right) }\right]
\]
\[
=\int_{\Sigma }d^{3}x\left[ G_{ijkl}\pi ^{ij}\pi ^{kl}\left( \frac{%
l_{N_{w}}^{2}}{\sqrt{g}}\right) -\left( N_{w}^{2}\frac{\sqrt{g}}{%
l_{N_{w}}^{2}}\right) R^{\left( 3\right) }\right] ,
\]
where we have defined $l_{N_{w}}^{2}=l_{p}^{2}N_{w}$ with $l_{N_{w}}^{2}$
fixed and $N_{w}\rightarrow \infty .$ Thus, repeating the same steps of
section \ref{p2} for $N_{w}$ wormholes, we obtain
\begin{equation}
\Delta E_{N_{w}}\left( m\right) \sim -N_{w}^{2}\frac{V}{2\pi ^{2}}\left(
\frac{3m}{r_{0}^{3}}\right) ^{2}\frac{1}{16}\ln \left( \frac{%
r_{0}^{3}\Lambda ^{2}}{3m}\right) .
\end{equation}
Then at one loop the cooperative effects of wormholes behave as one single
{\it macroscopic }field multiplied by $N_{w}^{2}$; this is the consequence
of the coherency assumption. We have explored the consequences of this
result in Ref.\cite{Remo1}. Indeed, coming back to the single wormhole
contribution, we have seen that the black hole pair creation probability
mediated by a wormhole is energetically favored with respect to the
permanence of flat space, provided the boundary conditions are
symmetric with respect to the bifurcation surface, which is the throat of the
wormhole. In this approximation boundary terms give zero contribution and
the volume term is nonvanishing. As in the one-wormhole case, we now extremize
$\widetilde{\Delta E}_{N_{w}}\left( m\right) =\left( E\left(
0\right) -E\left( m\right) \right) _{N_{w}}=-\Delta E_{N_{w}}\left( m\right)
$. The nontrivial stationary point is again $\bar{m}=\Lambda ^{2}e^{-\frac{1}{2}%
}r_{0}^{3}/3$, at which $\Delta E_{N_{w}}$ attains its minimum:
\begin{equation}
\widetilde{\Delta E}\left( \bar{m}\right) =N_{w}^{2}\frac{V}{64\pi ^{2}}%
\frac{\Lambda ^{4}}{e}.
\end{equation}
The main difference from the one-wormhole case is that we have $N_{w}$
wormholes, each contributing the same amount of energy. Since $%
m=MN_{w}G=Ml_{N_{w}}^{2}$, we have
\begin{equation}
M=\left( l_{N_{w}}^{2}/N_{w}\right) ^{-1}\Lambda ^{2}e^{-\frac{1}{2}%
}r_{0}^{3}/3.
\end{equation}
When $\Lambda \rightarrow m_{p}$, then $r_{0}\rightarrow l_{p}$ and $%
l_{p}m_{p}=1$. Thus
\begin{equation}
M=\frac{\left( l_{N_{w}}^{2}/N_{w}\right) ^{-1}m_{p}^{-1}}{3\sqrt{e}}=N_{w}%
\frac{m_{N_{w}}}{3\sqrt{e}}
\end{equation}
So far, we have discussed the stable mode contribution. However, we have
discovered that for one wormhole unstable modes also contribute to the total
energy\cite{GPY,Remo1}. Since we are interested in a large number of
wormholes, the first question to answer is: what happens to the boundaries
when the wormhole number is enlarged? In the one wormhole case, the
existence of one negative mode is guaranteed by the vanishing of the
eigenfunction of the operator $\Delta _{2}$ at infinity, which is the same
space-like infinity of the quasilocal energy, i.e. we have the $ADM$
positive mass $M$ in a coordinate system of the universe where the observer
is present and the anti-$ADM$ mass in a coordinate system where the observer
is not there. When the number of wormholes grows, to keep the coherency
assumption valid, the space available for every single wormhole has to be
reduced to avoid overlapping of the wave functions. This means that boundary
conditions are not fixed at infinity, but at a certain finite radius and the
$ADM$ mass term is substituted by the quasilocal energy expression under the
condition of having symmetry with respect to each bifurcation surface. As $%
$N_{w}$ grows, the boundary radius $\bar{r}$ shrinks more and more and the
unstable mode disappears. This means that there will exist a certain radius $%
r_{c}$ below which no negative mode will appear, and there will exist a
given value $N_{w_{c}}$ above which the same effect will be produced. In
rigorous terms: $\forall N\geq N_{w_{c}}\ \exists $ $r_{c}$ $s.t.$ $\forall
\ r_{0}\leq r\leq r_{c},\ \sigma \left( \Delta _{2}\right) =\emptyset $.
This means that the system begins to be stable. In support of this idea, we
invoke the results discovered in Ref.~\cite{B.Allen}, where it is explicitly
shown that the restriction of spatial boundaries leads to a stabilization of
the system. Thus at the minimum, we obtain the typical energy density
behavior of the foam
\begin{equation}
\frac{\Delta E}{V}\sim -N_{w}^{2}\Lambda ^{4}
\end{equation}
\section{Conclusions and Outlooks}
\label{p4}
According to Wheeler's ideas about quantum fluctuations of the metric at the
Planck scale, we have used a simple model made of a large collection of
wormholes to investigate the vacuum energy contribution needed for the
formation of a foamy spacetime. This investigation has been made in a
semiclassical approximation where the wormholes are treated independently
of one another (coherency hypothesis). The starting point is the single
wormhole, whose energy contribution has the typical trend of the
gravitational field energy fluctuation. The wormhole considered is of the
Schwarzschild type and every energy computation has to be done having in
mind the reference space, i.e. flat space. When we examine the wormhole
collection, we find the same trend in the energy of the single case. This is
obviously the result of the coherency assumption. However, the single
wormhole cannot be taken as a model for a spacetime foam, because it
exhibits one negative mode. This negative mode is the key to the topology
change from a space without holes (flat space) to a space with a hole
inside (Schwarzschild space). However, things are different when we consider
a large number of wormholes $N_w$. Let us see what is going on: the
classical vacuum, represented by flat space, is stable under nucleation of a
single black hole, while it is unstable under a neutral pair creation with
the components residing in different universes divided by a wormhole. When
the topology change has been primed by means of a single wormhole, there will be
a considerable production of pairs mediated by their own wormholes. The
result is that the hole production will persist until the critical value $%
N_{w_c}$ is reached and spacetime enters the stable phase. If we
look at this scenario a little closer, we can see that it has the properties
of the Wheeler foam. Nevertheless, we have to explain why observations
measure a flat space structure. For this purpose, we have to recall that the
foamy spacetime structure should be visible only at the Planck scale, while
at greater scales it is likely that the flat structure could be recovered by
means of averages over the collective functional describing the {\it %
semiclassical} foam. Indeed if $\eta _{ij}$ is the spatial part of the flat
metric, ordinarily we should obtain
\begin{equation}
\left\langle \Psi \left| g_{ij}\right| \Psi \right\rangle =\eta _{ij},
\end{equation}
where $g_{ij}$ is the spatial part of the gravitational field. However in
the foamy representation we should consider, instead of the previous
expectation value, the expectation value of the gravitational field
calculated on wave functional representing the foam, i.e., to see that at
large distances flat space is recovered we should obtain
\begin{equation}
\left\langle \Psi _{foam}\left| g_{ij}\right| \Psi _{foam}\right\rangle
=\eta _{ij},
\end{equation}
where $\Psi _{foam}$ is a superposition of the single-wormhole wave
functional
\begin{equation}
\Psi _{foam}=\sum_{i=1}^{N_w}\Psi _i^{\perp }.
\end{equation}
This has to be attributed to the semiclassical approximation, which renders
this system non-interacting. However, things can change when we
will consider higher order corrections and the other terms of the action
decomposition, i.e. the spin one and spin zero terms. Nevertheless, we can
argue that only spin zero terms (associated with the conformal factor) will
be relevant, even if the part of the action which carries the physical
quantities is that discussed in this text, i.e., the spin two part of the
action related to the gravitons.
\section{Acknowledgments}
I wish to thank R. Brout, M. Cavagli\`{a}, C. Kiefer, D. Hochberg, G.
Immirzi, S. Liberati, P. Spindel and M. Visser for useful comments and
discussions.
\section{Open Charm Photoproduction at HERA}
Photoproduction of `open' charm (as opposed to $c\bar{c}$ bound
states) can take place via several processes at HERA. The most obvious
is photon-gluon fusion. The photon interacts directly with a gluon
from the proton via a $t$-channel charm quark, producing a high
transverse energy charm quark balanced by an anticharm (Fig.~1a).
At high enough transverse energies, the $c$ and $\bar{c}$ will each
lead to the formation of jets of hadrons. One hadron in each jet will
in general be charmed. Photon-gluon fusion is often assumed to be the
dominant, or indeed only, process.
Nevertheless, other possible production mechanisms exist. The photon
can fluctuate into a $q\bar{q}$ state which may be long lived on the
timescale of strong interactions. The $q\bar{q}$ state can thus form a
complex partonic structure. Partons from the photon can then undergo
hard scattering with partons from the proton - so called `Resolved
Photon' interactions. This allows the possibility of charm production
via gluon-gluon fusion (Fig.~1b), where one gluon comes from the
proton, the other from the photon.
\begin{figure}[ht]
\psfig{file=diag_c_bgf.eps,height=5.0cm,angle=270}\psfig{file=diag_c_ggf.eps,height=5.0cm,angle=270}\psfig{file=diag_c_res.eps,height=5.0cm,angle=270}
\caption{\it a) Photon Gluon fusion b) Gluon Gluon fusion c) Charm excitation.}
\end{figure}
This process is very similar to direct photoproduction, producing two
high $\ETJ$ jets each containing a charmed hadron. The difference is
that only a fraction of the photon's momentum enters into the jet
production, the rest being carried off in a photon remnant.
A further class of resolved processes can be imagined. What if the
parton structure evolved by the photon contains charm? In this case,
so-called `charm excitation' processes can take place (Fig.~1c).
This process also leads to two high $\ETJ$ jets, but only one of them
contains a charmed hadron. The second charmed hadron is carried off in
the photon remnant.
It is interesting to ask whether the resolved diagrams are
important. The contribution of the first is especially sensitive to
the gluon distribution inside the photon, whereas the second addresses
the question as to whether charm is somehow generated `inside' the
photon.
We should ask how charm could be generated `in' the photon. Might it
happen via $\gamma \rightarrow c\bar{c}$ or $g \rightarrow c\bar{c}$?
Is it perturbatively calculable? One has to be careful to define what
exactly is meant by charm inside the photon. It is important to note
that at NLO the division between `charm excitation' and `boson gluon
fusion' (and indeed between resolved and direct photoproduction in
general) will depend upon the choice of factorization scale. Moving
the factorization scale can turn a LO charm excitation diagram into a NLO
direct photoproduction diagram, as illustrated in Fig.~2. A similar
arbitrariness is present between charm generated via gluon splitting
or assigned to the gluon-gluon fusion process.
\begin{floatingfigure}[l]{13.1cm}
\psfig{file=res2dir_c.eps,width=5.5cm,angle=270}
\caption{\it NLO confusion}
\end{floatingfigure}
Thus the discussion of any results will to some extent depend upon
what approximations are used in the calculations to which the data is
being compared.
\subsection{Massless or Massive?}
Currently two different approximations are used in the calculation
of charm photoproduction at next-to-leading order.
\begin{itemize}
\item{\bf Resummed, or `Massless'.} This approach uses ${\cal
O}(\alpha_s^2)$ matrix elements for charm treated as a massless parton
over the threshold for its production. Charm is an active flavour in
the photon, present at a level dependent upon the choice of parton
distribution set. By allowing charm to be generated in the evolution
of the photon parton distribution, this approach resums logarithms of
$\ETJ /m_c$~\cite{cacc,kniehl}.
It is expected that this scheme will be a good approximation at $\ETJ
\gg m_c$.
\item{\bf Massive.} In this approach, ${\cal O}(\alpha_s^2)$ matrix
elements for massive charm are used. There is no charm content
assigned to the parton distributions inside the photon. No resummation
of $\ETJ /m_c$ logarithms is performed~\cite{frix}.
It is expected that this scheme will be a good approximation when
$\ETJ \approx m_c$.
\end{itemize}
In the jet measurements to be discussed below~\cite{zeusdstar}, $\ETJ
\approx 7$~GeV.
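As a rough orientation (a back-of-the-envelope estimate of ours, taking an
assumed charm mass $m_c \approx 1.5$~GeV), the logarithm being resummed is
then only moderately large,
$$\ln\frac{\ETJ}{m_c} \approx \ln\frac{7~{\rm GeV}}{1.5~{\rm GeV}} \approx 1.5\,,$$
so these measurements sit in an intermediate regime where neither
approximation is obviously preferred.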
\subsection{Measuring Charm}
The most commonly used method so far for tagging charm at HERA is the
$D^*$ tagging method~\cite{dstarmeth}. This technique exploits the fact
that the mass difference between the $D^*$ and the $D^0$ is small. Thus by
cutting on this reconstructed mass difference as well as on the $D^0$ mass, a
relatively pure sample of charmed events is obtained.
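Numerically (standard values, quoted here for orientation rather than taken
from the measurement), the mass difference
$$\Delta M = M(D^{*+}) - M(D^{0}) \approx 145.4~{\rm MeV}$$
lies only a few MeV above the charged pion mass of $\approx 139.6$~MeV, so the
slow pion in $D^{*+}\rightarrow D^{0}\pi^{+}_{s}$ has very little phase space
and the combinatorial background under the $\Delta M$ peak is correspondingly
small.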
\section{Inclusive $D^*$ Cross Section}
A sample of charm events is selected using the following cuts:
\begin{floatingfigure}{8cm}
\psfig{file=DESY-98-085_2.eps,width=6.5cm}
\caption{\it $d\sigma/dp_T(D^*)$.
The MRSG~\cite{MRSG} and GRV-G~HO~\cite{GRV} parton density functions
are used for the proton and the photon respectively.}
\end{floatingfigure}
$\bullet ~p_T(D^*) > 2.0$~GeV when the $D^0$ decays to $K\pi$, or
$p_T(D^*) > 4.0$~GeV when the $D^0$ decays to $K\pi\pi\pi$;
$\bullet ~ |\eta (D^*)| < 1.5$;
$\bullet ~ 130 < W_{\gamma p} < 280$~GeV, $W_{\gamma p}$ being the
photon-proton centre-of-mass energy;
$\bullet ~ Q^2 < 1$~GeV$^2$, i.e.\ quasi-real photon exchange
(photoproduction).
The differential cross section $d\sigma/dp_T(D^*)$ is shown in Fig.~3,
compared to various NLO QCD calculations.
It can be seen that even with an extreme choice of parameters such as
the charm mass, the massive scheme tends to lie below the data (dotted
line). In addition, there is a significant discrepancy between the massless
charm calculations of the two groups.
The differential cross section $d\sigma/d\eta(D^*)$ is shown in Fig.~4.
\begin{figure}
\begin{center}
\psfig{file=DESY-98-085_3.eps,width=13cm}
\end{center}
\caption{\it $d\sigma/d\eta(D^*)$.
The MRSG~\cite{MRSG} and GRV-G~HO~\cite{GRV} parton density functions
are used for the proton and the photon respectively.}
\end{figure}
Again, the massive calculations generally lie below the data,
especially in the forward direction, and the discrepancy between the two
different massless calculations is clear. In the massless scheme, this
cross section has a sensitivity to the parton distributions in the
photon which is of the same order as the other uncertainties at
present.
\section{Photoproduction of Charm in Jets}
Further information about the charm production mechanism can be
obtained by measuring jets in high $\ETJ$ photoproduction and looking
for charm inside the jets. This has been done, again using the $D^*$
tagging method, in a similar $W$ range and with $\ETJ > 6$~GeV,
$p_T(D^*) > 3$~GeV. The jets are defined using the $K_T$
algorithm~\cite{catani} in `inclusive' mode~\cite{ellis}.
Especially given the excess of data over the massive charm
calculations, it is of course interesting to separate direct and
resolved samples. This is possible using the variable~\cite{xgo}:
\[
x_\gamma^{\rm OBS} = \frac{\sum_{\rm jets}\ETJ\, e^{-\eta^{\rm jet}}}{2 E_{e}\, y}
\]
which is the fraction of the photon's momentum which participates in
the production of the two highest $\ETJ$ jets. Thus LO direct processes
have $x_\gamma^{\rm OBS} =1$ and LO resolved processes have lower $x_\gamma^{\rm OBS}$, although
$x_\gamma^{\rm OBS}$ itself is of course defined independently of the order of the
calculation.
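For illustration (the numbers here are invented, not taken from the data),
consider a two-jet event with $\ETJ = 7.0$ and $6.5$~GeV at $\eta^{\rm jet} =
-0.5$ and $-0.3$, with inelasticity $y=0.4$ and the HERA lepton beam energy
$E_e = 27.5$~GeV:
$$x_\gamma^{\rm OBS} = \frac{7.0\,e^{0.5} + 6.5\,e^{0.3}}{2 \times 27.5 \times
0.4} \approx \frac{11.5 + 8.8}{22} \approx 0.9\,,$$
i.e.\ a direct-like event; an appreciable photon remnant would lower the
numerator and hence $x_\gamma^{\rm OBS}$.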
Fig.~5 shows the energy flow around jets, compared to the expectation
from the HERWIG Monte Carlo. The energy flow in the rear (negative
$\Delta\eta$) region shows evidence for the presence of a photon remnant
in a significant fraction of the events at low $x_\gamma^{\rm OBS}$.
\begin{floatingfigure}{12cm}
\begin{center}
\psfig{file=DESY-98-085_5.eps,width=10cm}
\caption{\it Energy flow around jets}
\end{center}
\end{floatingfigure}
The cross section $d\sigma/dx_\gamma^{\rm OBS}$ is shown in Fig.~6. The cross
section at lower $x_\gamma^{\rm OBS}$ values is significant, indicating that LO
direct processes alone cannot describe charm production successfully.
In particular, the data are inconsistent with the LO Direct process in
HERWIG shown in the figure, even after the effects of parton showering
and hadronisation are included in the Monte Carlo. Such effects can
populate the low $x_\gamma^{\rm OBS}$ region even with direct events, but do not do
so sufficiently. In fact the data require a ($45 \pm 5$)\% LO resolved
contribution from HERWIG.
Furthermore, according to HERWIG this resolved contribution is almost
entirely charm excitation.
Also shown, in the lower half of Fig.~6, is a NLO massive calculation
of $d\sigma/dx_\gamma^{\rm OBS}$. The calculation lies below the data at low
$x_\gamma^{\rm OBS}$. It should be remembered that hadronisation is not included in
the NLO calculation and this may affect the comparison. Nevertheless,
the data suggest a larger resolved contribution than is present in the
calculation.
\section{Summary}
A significant cross section for the `resolved' photoproduction of
charm has been measured. The theory is `close but no cigar' in the
inclusive $D^*$ and charmed jet measurements, lying in general
somewhat below the data, particularly in the forward and low-$x_\gamma^{\rm OBS}$
regions.
Comparison to the HERWIG simulation, which includes LO matrix
elements, leading logarithmic parton shower and a hadronisation
model, requires a charm excitation contribution of about 45\% in the
kinematic regime measured here.
The parton distribution functions used in the theory comparisons do
not always represent the state-of-the-art in their massive quark
treatment, and our understanding should benefit from a comparison to
other parameterisations.
The data represent a challenge to the theory to truly understand charm
production `inside' the photon. This challenge is likely to get
tougher as more accurate measurements over wider kinematic regimes
become possible from both H1 and ZEUS with micro-vertex detectors, the
introduction of other tagging methods, and higher luminosity from the
coming HERA upgrade.

I would like to acknowledge the enormous
efforts of the ZEUS heavy flavour group, as well as the theory groups
who provided the calculations, many of whom will undoubtedly be
responsible for these coming advances.
\begin{figure}[ht]
\begin{center}
\psfig{file=DESY-98-085_6.eps,height=18cm}
\end{center}
\caption{\it $d\sigma/dx_\gamma^{\rm OBS}$.
The MRSG~\cite{MRSG} and GRV-G~HO~\cite{GRV} parton density functions
are used for the proton and the photon respectively.}
\end{figure}
Let $\,\Sigma_{g,n}\,$ be an oriented surface of genus $\,g\!\geq\!
1\,$ with $n$ boundary components and denote by $\,\mathcal{M}_{g,n}\,$
its mapping class group, that is to say the group of orientation preserving
diffeomorphisms of $\,\Sigma_{g,n}\,$ which are the identity on
$\,\partial\Sigma_{g,n}$, modulo isotopy:
$$\,\mathcal{M}_{g,n}=\pi_{0}\bigl(\hbox{Diff}^{+}(\Sigma_{g,n},
\partial\Sigma_{g,n})\bigr)\,.$$
For a simple closed curve $\alpha$ in $\,\Sigma_{g,n}$, denote by
$\tau_{\alpha}\,$ the Dehn twist along $\alpha$. If $\alpha$ and $\beta$ are
isotopic, then the associated twists are also isotopic: thus, we shall consider
curves up to isotopy. We shall use greek letters to denote them, and we
shall not distinguish a Dehn twist from its isotopy class.
It is known that $\,\mathcal{M}_{g,n}\,$ is generated by Dehn twists
\cite{Dehn,Lickorish1,Lickorish2}.\linebreak[4] Wajnryb
gave in \cite{Wajnryb} a presentation of
$\,\mathcal{M}_{g,1}\,$ and $\,\mathcal{M}_{g,0}\,$ with the minimal
possible number of twist generators. In $\,$\cite{Gervais}, the author
gave a presentation considering either all possible Dehn twists, or
just Dehn twists along non-separating curves. These two presentations
appear to be very symmetric, but infinite. The aim of this article is
to give a finite presentation of $\,\mathcal{M}_{g,n}$.
\vskip3mm\noindent
{\bf Notation.} Composition of diffeomorphisms in
$\,\mathcal{M}_{g,n}\,$ will be written from right to left. For
two elements $x$, $y$ of a multiplicative group, we will denote
indifferently by $x^{-1}$ or $\bar{x}$ the inverse of $x$ and by
$\,y(x)\,$ the conjugate $\,y\,x\,\bar{y}\,$ of $x$ by $y$.
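Two elementary consequences of this notation, used constantly in the
computations below, are worth recording:
$$\bigl(y(x)\bigr)^{-1}=y(\bar{x})\,,\qquad z\bigl(y(x)\bigr)=(z\,y)(x)\,.$$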
\eject
Next, considering the curves of figure 1, we denote by
$\mathcal{G}_{g,n}\,$ and $\mathcal{H}_{g,n}\,$ (we may on occasion omit the
subscript ``$g,n$'' if there is no ambiguity) the following sets of
curves in $\,\Sigma_{g,n}$:
$$\begin{array}{rcl}
\mathcal{G}_{g,n} & = & \{\beta,\beta_{1},\ldots,\beta_{g-1},\alpha_{1},\ldots,
\alpha_{2g+n-2},(\gamma_{i,j})_{1\leq i,j\leq 2g+n-2,i\not =j}\,\},\\ &&\\
\mathcal{H}_{g,n} & = & \{\alpha_{1},\beta,\alpha_{2},\beta_{1},\gamma_{2,4},\beta_{2},\ldots,
\gamma_{2g-4,2g-2},\beta_{g-1},\gamma_{1,2},\\ &&\hspace*{45mm}
\alpha_{2g},\ldots,\alpha_{2g+n-2},\delta_{1},\ldots,\delta_{n-1}\,\}
\end{array}$$
where $\,\delta_{i}=\gamma_{2g-2+i,2g-1+i}\,$ is the i$^{\hbox{\scriptsize th}}$
boundary component. Note that $\,\mathcal{H}_{g,n}\,$ is a subset of
$\,\mathcal{G}_{g,n}$.
\vskip3mm
Finally, a triple $\,(i,j,k)\!\in\!\{1,\ldots,2g+n-2\}^{3}\,$ will
be said to be {\em good} when:
$$\begin{array}{rl}
\hbox{i)} & (i,j,k)\!\not\in\!\bigl\{(x,x,x)\,/\,x\!\in\{1,\ldots,2g+n-2\}
\bigr\},\\
\hbox{ii)} & i\leq j\leq k\ \hbox{ or }\ j\leq k\leq i\ \hbox{ or }\
k\leq i\leq j\,.
\end{array}$$
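For instance, $(2,3,1)$ and $(1,1,2)$ are good triples, whereas $(1,3,2)$ is
not: condition ii) simply requires the triple to be non-decreasing up to
cyclic permutation, while condition i) only excludes the constant triples.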
\begin{center}
\courbes
figure 1
\end{center}
\vskip5mm\noindent
\begin{remark}
For $\,n=0\,$ or $\,n=1$, Wajnryb's generators are
the Dehn twists relative to the curves of $\mathcal{H}$.
\end{remark}
We will give a presentation of $\,\mathcal{M}_{g,n}\,$ taking as
generators the twists along the curves in $\mathcal{G}$. The
relations will be of the following types.
\vskip3mm\noindent
{\bf The braids:} If $\alpha$ and $\beta$ are two curves in
$\,\Sigma_{g,n}\,$ which do not intersect $\,$(resp. intersect in a
single point), then the associated Dehn twists satisfy the
relation $\,\tau_{\alpha}\tau_{\beta}=\tau_{\beta}\tau_{\alpha}\,$ (resp.
$\,\tau_{\alpha}\tau_{\beta}\tau_{\alpha}=\tau_{\beta}\tau_{\alpha}\tau_{\beta}$).
\vskip3mm\noindent
{\bf The stars:} Consider a subsurface of $\,\Sigma_{g,n}\,$ which is
homeomorphic to $\,\Sigma_{1,3}$. Then, if $\,\alpha_{1},\
\alpha_{2},\ \alpha_{3},\ \beta,\ \gamma_{1},\
\gamma_{2},\ \gamma_{3}\,$ are the curves described in figure 2,
one has in $\,\mathcal{M}_{g,n}\,$ the relation
$$(\tau_{\alpha_{1}}\tau_{\alpha_{2}}\tau_{\alpha_{3}}
\tau_{\beta})^{3}=\tau_{\gamma_{1}}\tau_{\gamma_{2}}\tau_{\gamma_{3}}\,.$$
Note that if $\gamma_{3}\,$ bounds a disc in $\,\Sigma_{g,n}$, then
this relation becomes
$$(\tau_{\alpha_{1}}\tau_{\alpha_{2}}\tau_{\alpha_{2}}
\tau_{\beta})^{3}=\tau_{\gamma_{1}}\tau_{\gamma_{2}}\,.$$
\begin{center}
\etoile
figure 2
\end{center}
\vskip5mm\noindent
{\bf The handles:} When a cylinder is pasted on two boundary components of
$\,\Sigma_{g-1,n+2}$, the twists along these two boundary curves
become equal in $\,\Sigma_{g,n}$.
\vskip3mm
\begin{theorem}\label{principaltheorem}
For all $\,(g,n)\!\in\!{\mathbf{N}}^{\ast}\!\times\!{\mathbf{N}}$, the
mapping class group $\,\mathcal{M}_{g,n}\,$ admits a presentation with
generators $b,\,b_{_{1}},\ldots,b_{_{g-1}},a_{_{1}},\ldots,a_{_{2g+n-2}},$
$(c_{_{i,j}})_{1\leq i,j\leq 2g+n-2,\,i\not=j}\,$ and relations
\begin{list}{}{\labelsep=4mm\labelwidth=13mm\leftmargin=24mm\parsep=5mm\topsep=5mm}
\item[(A)] {\rm ``handles'':} $c_{_{2i,2i+1}}=c_{_{2i-1,2i}}\,$ for
all $i$, $\,1\leq i\leq g-1$,
\item[(T)] {\rm ``braids'':} for all $\,x,y\,$ among the generators, $xy=yx$
if the associated curves are disjoint and $xyx\!=\!yxy$ if
the associated curves intersect transversally in a single point,
\item[(E$_{i,j,k}$)] {\rm ``stars'':} $\ c_{_{i,j}}c_{_{j,k}}c_{_{k,i}}
=(a_{_{i}}a_{_{j}}a_{_{k}}b)^{3}\,$ for all good triples
$\,(i,j,k)\,$, where $\,c_{_{l,l}}\!=\!1$.
\end{list}
\end{theorem}
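For orientation (this count is ours, read off from the statement rather than
part of it), the presentation has
$$1 + (g-1) + (2g+n-2) + (2g+n-2)(2g+n-3)$$
generators in all; for example, $(g,n)=(2,1)$ gives $1+1+3+6=11$ generators.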
\begin{remark}
It is clear that the handle relations are unnecessary: one has just to
remove $\,c_{_{2,3}},\ldots,c_{_{2g-2,2g-1}}\,$ from
$\,\mathcal{G}_{g,n}\,$ to eliminate them. But it is convenient for
symmetry and notation to keep these generators.
\end{remark}
Let $\,G_{g,n}\,$ denote the group with presentation given by
theorem~\ref{principaltheorem}. Since the set of generators for
$\,G_{g,n}\,$ that we consider here is parametrized by
$\,\mathcal{G}_{g,n}$, we will consider $\,\mathcal{G}_{g,n}\,$ as a
subset of $\,G_{g,n}$. Consequently, $\,\mathcal{H}_{g,n}\,$ will also
be considered as a subset of $\,G_{g,n}$.
\hfill\break\indent
The paper is organized as follows. In section~\ref{Generators}, we prove
that $\,G_{g,n}\,$ is generated by $\,\mathcal{H}_{g,n}$.
Section~\ref{negalun} is devoted to the proof of theorem~\ref{principaltheorem}
when $\,n=1$. Finally, we conclude the proof in section~\ref{finpreuve}
by proving that $\,G_{g,n}\,$ is isomorphic to $\,\mathcal{M}_{g,n}$.
\section{Generators for $G_{g,n}$ \label{Generators}}
\noindent
In this section, we prove the following proposition.
\begin{proposition} \label{generator}
$G_{g,n}\,$ is generated by $\,\mathcal{H}_{g,n}$.
\end{proposition}
\noindent
We begin by proving some relations in $G_{g,n}$.
\begin{lemma}\label{etoile}
For $\,i,j,k\!\in\! \{1,\ldots, 2g+n-2\}$, if $\,X_{1}=a_{_{i}}a_{_{j}},\ X_{2}=
bX_{1}b\,$ and $\,X_{3}=a_{_{k}}X_{2}a_{_{k}}$, then:
\begin{list}{(\roman{liste})}{\usecounter{liste}\parsep=2mm}
\item $X_{p}X_{q}=X_{q}X_{p}\,$ for all $\,p,q\!\in\!\{1,2,3\}$.
\item $(a_{_{i}}a_{_{j}}a_{_{k}}b)^{3}=X_{1}X_{2}X_{3}$,
\item $(a_{_{i}}a_{_{i}}a_{_{j}}b)^{3}=X_{1}^{2}X_{2}^{2}=
(a_{_{i}}a_{_{j}}b)^{4}=(a_{_{i}}b\,a_{_{j}})^{4}$,
\item $a_{_{i}},\,a_{_{j}},\,a_{_{k}}\,$ and $b$ commute with
$\,(a_{_{i}}a_{_{j}}a_{_{k}}b)^{3}$.
\end{list}
\end{lemma}
\begin{remark}\label{rem}
Combining the braid relations
and lemma~\ref{etoile}, we get\linebreak[4] $\,(E_{i,j,k})=(E_{j,k,i})=
(E_{k,i,j})\,$ and $\,(E_{i,i,j})=(E_{i,j,j})$.
\end{remark}
\proof {\it (i) } Using relations {\it $\,$(T)$\,$}, one has
$$\begin{array}{rcl}
a_{_{i}}\,X_{2} & = & a_{_{i}}\,b\,a_{_{i}}\,a_{_{j}}\,b \\
&=& b\,a_{_{i}}\,b\,a_{_{j}}\,b \\
&=& b\,a_{_{i}}\,a_{_{j}}\,b\,a_{_{j}} \\
&=& X_{2}\,a_{_{j}}\,,
\end{array}$$
and in the same way, $\,a_{_{j}}\,X_{2}=X_{2}\,a_{_{i}}$. Thus, we
get $\,X_{1}\,X_{2}=X_{2}\,X_{1}\,$ and $\,X_{1}\,X_{3}=X_{3}\,X_{1}\,$ since
$\,X_{1}\,a_{_{k}}=a_{_{k}}\,X_{1}$.
\noindent
On the other hand, the braid relations imply
$$\begin{array}{rcl}
b(X_{3}) & = &
b\,a_{_{k}}\,b\,a_{_{i}}\,a_{_{j}}\,b\,a_{_{k}}\,\bar{b} \\
&=& a_{_{k}}\,b\,a_{_{k}}\,a_{_{i}}\,a_{_{j}}\,\bar{a_{_{k}}}\,b\,a_{_{k}} \\
&=& X_{3}\,,
\end{array}$$
and we get $\,X_{2}\,X_{3}=X_{3}\,X_{2}$.
\vskip3mm\noindent
{\it (ii) } Using relations {\it $\,$(T)$\,$} and {\it $\,$(i)$\,$}, one
obtains:
$$
\begin{array}{rcl}
X_{1}X_{2}X_{3} & = & X_{1}X_{3}X_{2} \\
&=&
a_{_{i}}\,a_{_{j}}\,a_{_{k}}\,b\,a_{_{i}}\,a_{_{j}}\,b\,a_{_{k}}\,b\,
a_{_{i}}\,a_{_{j}}\,b\\
&=&
a_{_{i}}\,a_{_{j}}\,a_{_{k}}\,b\,a_{_{i}}\,a_{_{j}}\,a_{_{k}}\,b\,a_{_{k}}\,
a_{_{i}}\,a_{_{j}}\,b\\
&=& (a_{_{i}}a_{_{j}}a_{_{k}}b)^{3}.
\end{array}
$$
\vskip3mm\noindent
{\it (iii) } Replacing $a_{_{k}}$ by $a_{_{i}}$ in $X_{3}$, we get
$$X_{3}=a_{_{i}}\,X_{2}\,a_{_{i}}=a_{_{i}}\,a_{_{j}}\,X_{2}=X_{1}\,X_{2}.$$
Thus, using relations {\it $\,$(T)}, {\it $\,$(i)$\,$} and {\it
$\,$(ii)}, one has:
$$\begin{array}{rcl}
(a_{_{i}}a_{_{i}}a_{_{j}}b)^{3} & = & X_{1}X_{2}X_{1}X_{2}=X_{1}^{2}X_{2}^{2} \\
&=& a_{_{i}}\,a_{_{j}}\,b\,a_{_{i}}\,a_{_{j}}\,b\,a_{_{i}}\,
a_{_{j}}\,b\,a_{_{i}}\,a_{_{j}}\,b\,=\,(a_{_{i}}a_{_{j}}b)^{4} \\
&=& a_{_{i}}\,b\,a_{_{j}}\,b\,a_{_{i}}\,b\,
a_{_{j}}\,b\,a_{_{i}}\,b\,a_{_{j}}\,b \\
&=& a_{_{i}}\,b\,a_{_{j}}\,a_{_{i}}\,b\,a_{_{i}}\,
a_{_{j}}\,b\,a_{_{i}}\,a_{_{j}}\,b\,a_{_{j}} \\
&=&(a_{_{i}}b\,a_{_{j}})^{4}.
\end{array}$$
\vskip3mm\noindent
{\it (iv) } One has just to apply the star and braid relations.
\eproof
\begin{lemma} \label{lantern}
For all good triples $\,(i,j,k)$, one has in $\,G_{g,n}\,$ the relation
$$(L_{_{i,j,k}})\ \
a_{_{i}}\,c_{_{i,j}}\,c_{_{j,k}}\,a_{_{k}}=c_{_{i,k}}\,a_{_{j}}\,X\,a_{_{j}}\,
\bar{X}=c_{_{i,k}}\,\bar{X}\,a_{_{j}}\,X\,a_{_{j}}$$
where $\,X\!=\!b\,a_{_{i}}\,a_{_{k}}\,b$.
\end{lemma}
\begin{remark}
These relations are just the well-known {\em lantern} relations.
\end{remark}
\proof If $\,X_{1}\!=\!a_{_{i}}\,a_{_{k}}\,$ and
$\,X_{3}\!=\!a_{_{j}}\,X\,a_{_{j}}\,$, one has by lemma~\ref{etoile} and the
star relations $\,(E_{_{i,j,k}})\,$ and $\,(E_{_{i,k,k}})\,$:
$$X_{1}\,X\,X_{3}=c_{_{i,j}}\,c_{_{j,k}}\,c_{_{k,i}}\ \hbox{ and }\ X_{1}^{2}\,
X^{2}=c_{_{i,k}}\,c_{_{k,i}}\,.$$
\noindent
From this, we get, using the braid relations, that
$$\bar{c_{_{k,i}}}\,X_{1}\,X=c_{_{i,j}}\,c_{_{j,k}}\,\bar{X_{3}}=c_{_{i,k}}\,
\bar{X}\,\bar{X_{1}}\,,$$
that is to say, by lemma~\ref{etoile} and {\it (T)},
$$a_{_{i}}\,c_{_{i,j}}\,c_{_{j,k}}\,a_{_{k}}=c_{_{i,k}}\,\bar{X}\,a_{_{j}}\,X\,
a_{_{j}}=c_{_{i,k}}\,a_{_{j}}\,X\,a_{_{j}}\,\bar{X}\,.$$
\eproof
\begin{lemma}\label{ak}
For all $i,k$ such that $\,1\!\leq i\leq g-1\,$ and
$\,k\not=2i-1,2i$, one has in $\,G_{g,n}$
$$a_{_{k}}\,=\,b\,a_{_{2i}}\,b_{_{i}}\,a_{_{2i-1}}\,b\,
\bar{c_{_{2i,2i-1}}}\,a_{_{2i}}\,c_{_{2i,k}}(b_{_{i}})\,.$$
\end{lemma}
\vskip3mm
\proof If $\,X\!=\!b\,a_{_{2i-1}}\,a_{_{2i}}\,b$, one has by the
lantern relations
$$(L_{2i,k,2i-1}):\,\ a_{_{2i}}\,c_{_{2i,k}}\,c_{_{k,2i-1}}\,a_{_{2i-1}}=
c_{_{2i,2i-1}}\,\bar{X}\,a_{_{k}}\,X\,a_{_{k}}\,,$$
which implies
$$\bar{c_{_{2i,2i-1}}}\,a_{_{2i}}\,c_{_{2i,k}}=
\bar{X}\,a_{_{k}}\,X\,a_{_{k}}\,\bar{a_{_{2i-1}}}\,\bar{c_{_{k,2i-1}}}\,.$$
\vskip3mm\noindent
Thus, denoting $\,b\,a_{_{2i}}\,b_{_{i}}\,a_{_{2i-1}}\,b\,\bar{c_{_{2i,2i-1}}}\,
a_{_{2i}}\,c_{_{2i,k}}(b_{_{i}})\,$ by $y$, we can compute using the
relations {\it $\,$(T)}:
$$\begin{array}{rcl}
y & = & b\,a_{_{2i}}\,b_{_{i}}\,a_{_{2i-1}}\,b\,\bar{X}\,a_{_{k}}\,X\,a_{_{k}}\,
\bar{a_{_{2i-1}}}\,\bar{c_{_{k,2i-1}}}(b_{_{i}}) \\
&=& b\,a_{_{2i}}\,b_{_{i}}\,a_{_{2i-1}}\,b\,\bar{b}\,\bar{a_{_{2i-1}}}\,
\bar{a_{_{2i}}}\,\bar{b}\,a_{_{k}}\,b\,a_{_{2i-1}}\,a_{_{2i}}\,b\,
(b_{_{i}}) \\
&=& b\,\bar{b_{_{i}}}\,a_{_{2i}}\,b_{_{i}}\,a_{_{k}}\,b\,\bar{a_{_{k}}}\,
\bar{b_{_{i}}}\,(a_{_{2i}}) \\
&=& b\,a_{_{k}}\,\bar{b_{_{i}}}\,a_{_{2i}}\,\bar{a_{_{2i}}}(b) \\
&=& b\,\bar{b}(a_{_{k}}) \\
&=& a_{_{k}}.
\end{array}
$$
\eproof
\vskip3mm\noindent
{\bf Proof of proposition~\ref{generator}.\ \,}If $H$ denotes the subgroup
of $\,G_{g,n}\,$ generated by $\,\mathcal{H}_{g,n}\,$, we have to
prove that $\,\mathcal{G}_{g,n}\!\subset\! H$.
\vskip3mm\noindent
a) We first prove inductively that $\,a_{_{2i-1}},\,a_{_{2i}},\,c_{_{2i-1,2i}}\,$
and $\,c_{_{2i,2i-1}}\,$ are elements of $H$ for all $i$, $\,1\leq i\leq
g-1$.
For $\,i\!=\!1$, the elements $\,a_{_{1}},\,a_{_{2}}\,$ and
$\,c_{_{1,2}}\,$ are in $H$ by definition, and the
relation $\,(E_{1,2,2})\,$ gives $\,c_{_{2,1}}\!=\!(a_{_{1}}a_{_{2}}
a_{_{2}}b)^{3}\bar{c_{_{1,2}}}\in H$. So, suppose inductively that
$\,a_{_{2i-1}},\,a_{_{2i}},\,c_{_{2i-1,2i}},\,c_{_{2i,2i-1}}\,$ are elements
of $H\,$ ($i\leq g-2$) and let us prove that $\,a_{_{2i+1}},\,a_{_{2i+2}},\,
c_{_{2i+1,2i+2}},\,c_{_{2i+2,2i+1}}\,$ are also in $H$. Recall that by
the handle relations, one has $\,c_{_{2i,2i+1}}\!=\!
c_{_{2i-1,2i}}\!\in\! H$. Applying lemma~\ref{ak} respectively with
$\,k\!=\!2i+1\,$ and $\,k\!=\!2i+2$, we obtain
$$a_{_{2i+1}}\,=\,b\,a_{_{2i}}\,b_{_{i}}\,a_{_{2i-1}}\,b\,
\bar{c_{_{2i,2i-1}}}\,a_{_{2i}}\,c_{_{2i,2i+1}}(b_{_{i}})\in H\,,$$
$$a_{_{2i+2}}\,=\,b\,a_{_{2i}}\,b_{_{i}}\,a_{_{2i-1}}\,b\,
\bar{c_{_{2i,2i-1}}}\,a_{_{2i}}\,c_{_{2i,2i+2}}(b_{_{i}})\in H\,.$$
\vskip4mm\noindent
The star relations allow us to conclude the induction as follows:
$$(E_{_{2i,2i+2,2i+2}})\,:\ \ \
c_{_{2i,2i+2}}\,c_{_{2i+2,2i}}=(a_{_{2i}}\,a_{_{2i+2}}\,b)^{4},$$
which gives $\,c_{_{2i+2,2i}}\!\in\! H\,$
($\gamma_{2i,2i+2}\!\in\!\mathcal{H}_{g,n}\,$ by definition);
$$(E_{_{2i,2i+1,2i+2}})\,:\ \ \
c_{_{2i,2i+1}}c_{_{2i+1,2i+2}}c_{_{2i+2,2i}}=
(a_{_{2i}}a_{_{2i+1}}a_{_{2i+2}}b)^{3},$$
which gives $\,c_{_{2i+1,2i+2}}\!\!\in\! H$;
$$(E_{_{2i+1,2i+2,2i+2}})\,:\ \ \
c_{_{2i+1,2i+2}}\,c_{_{2i+2,2i+1}}=
(a_{_{2i+1}}\,a_{_{2i+2}}\,b)^{4},$$
which gives $\,c_{_{2i+2,2i+1}}\!\in\! H$.
\vskip7mm\noindent
b) By lemma~\ref{ak}, one has ($i=g-1$ and $k=2g-1$)
$$a_{_{2g-1}}=b\,a_{_{2g-2}}\,b_{_{g-1}}\,a_{_{2g-3}}\,b\,
\bar{c_{_{2g-2,2g-3}}}\,a_{_{2g-2}}\,c_{_{2g-2,2g-1}}(b_{_{g-1}}).$$
Recall that $\,c_{_{2g-2,2g-1}}\!=\!c_{_{2g-3,2g-2}}\!\in\! H$. Thus,
combined with the case a), this relation implies $\,a_{_{2g-1}}\!\in\! H$.
\vskip7mm\noindent
c) It remains to prove that $\,c_{_{i,j}}\in H\,$ for all $i,j$.
\vskip3mm
$\ast\ $ By definition of $H$ and the case a), one has $\,c_{_{i,i+1}}\in H\,$
for all $i$ such that $\,1\leq i\leq 2g+n-3$.
\vskip3mm
$\ast\ $ Let us show that $\,c_{_{1,j}}\,$ and $\,c_{_{j,1}}\,$ are
elements of $H$ for all $j$ such that $\,2\leq j\leq 2g+n-2$.
We have already seen that $\,c_{_{1,2}},\,c_{_{2,1}}\!\in\! H$.
Thus, suppose inductively that $\,c_{_{1,j}},c_{_{j,1}}\in H\,$ ($j\leq 2g+n-3$).
Using the star relations, one obtains:
\vskip3mm\noindent
\begin{center}
$(E_{_{1,j,j+1}})\,$: $\,c_{_{1,j}}\,c_{_{j,j+1}}\,c_{_{j+1,1}}=
(a_{_{1}}\,a_{_{j}}\,a_{_{j+1}}\,b)^{3}$, which gives $\,c_{_{j+1,1}}\in H$,
\vskip3mm\noindent
$(E_{_{1,j+1,j+1}})\,$: $\,c_{_{1,j+1}}\,c_{_{j+1,1}}=
(a_{_{1}}\,a_{_{j+1}}\,b)^{4}$, which gives $\,c_{_{1,j+1}}\in H$.
\end{center}
\vskip3mm
$\ast\ $ Now, fix $j$ such that $\,2\!\leq\! j\!\leq\! 2g+n-2\,$ and let
us show that $\,c_{_{i,j}},c_{_{j,i}}\in H\,$ for all $i$, $\,1\leq
i<j$. Once more, the star relations allow us to prove this using an
inductive argument:
\vskip3mm\noindent
\begin{center}
$(E_{_{i,i+1,j}})\,$: $\,c_{_{i,i+1}}\,c_{_{i+1,j}}\,c_{_{j,i}}=
(a_{_{i}}\,a_{_{i+1}}\,a_{_{j}}\,b)^{3}$, which gives $\,c_{_{i+1,j}}\in H$,
\vskip3mm\noindent
$(E_{_{i+1,j,j}})\,$: $\,c_{_{i+1,j}}\,c_{_{j,i+1}}=
(a_{_{i+1}}\,a_{_{j}}\,b)^{4}$, which gives $\,c_{_{j,i+1}}\in H$.
\end{center}
\eproof
\section{Proof of theorem~\ref{principaltheorem} for $n=1$ \label{negalun}}
\noindent
Let us recall Wajnryb's result:
\begin{theorem}[\cite{Wajnryb}]
$\ \mathcal{M}_{g,1}\,$ admits a presentation with generators\linebreak[4]
$\,\{\tau_{\alpha}\,/\alpha\!\in\!\mathcal{H}\}\,$
and relations
\begin{list}{(\Roman{liste})}{\usecounter{liste}\labelwidth=8mm
\labelsep=4mm\leftmargin=17mm\itemsep3mm}
\item $\,\tau_{\lambda}\tau_{\mu}\tau_{\lambda}=\tau_{\mu}\tau_{\lambda}\tau_{\mu}\,$
if $\lambda$ and $\mu$ intersect transversally in a single point, and
$\,\tau_{\lambda}\tau_{\mu}=\tau_{\mu}\tau_{\lambda}\,$ if $\lambda$ and $\mu$
are disjoint.
\item $(\tau_{\alpha_{1}}\tau_{\beta}\tau_{\alpha_{2}})^{4}=\tau_{\gamma_{1,2}}\,\theta\,$
where $\,\theta=\tau_{\beta_{1}}\tau_{\alpha_{2}}\tau_{\beta}\tau_{\alpha_{1}}
\tau_{\alpha_{1}}\tau_{\beta}\tau_{\alpha_{2}}\tau_{\beta_{1}}(\tau_{\gamma_{1,2}})$.
\item $\tau_{\alpha_{2}}\tau_{\alpha_{1}}\varphi\,\tau_{\gamma_{2,4}}=
\bar{t_{1}}\,\bar{t_{2}}\,\tau_{\gamma_{1,2}}\,t_{2}\,t_{1}\,\bar{t_{2}}\,
\tau_{\gamma_{1,2}}\,t_{2}\,\tau_{\gamma_{1,2}}\,\ $ where
\begin{center}
$\,t_{1}=\tau_{\beta}\tau_{\alpha_{1}}
\tau_{\alpha_{2}}\tau_{\beta}\,$, $\,t_{2}=\tau_{\beta_{1}}\tau_{\alpha_{2}}
\tau_{\gamma_{2,4}}\tau_{\beta_{1}}\,$,
\noindent $\,\varphi=\tau_{\beta_{2}}\tau_{\gamma_{2,4}}
\tau_{\beta_{1}}\tau_{\alpha_{2}}\tau_{\beta}\,\sigma(\omega)$,
$\,\sigma=\bar{\tau_{\gamma_{2,4}}}\,\bar{\tau_{\beta_{2}}}\,\bar{t_{2}}
(\tau_{\gamma_{1,2}})\,$
\noindent
and $\,\omega=\bar{\tau_{\alpha_{1}}}\,\bar{\tau_{\beta}}\,
\bar{\tau_{\alpha_{2}}}\,\bar{\tau_{\beta_{1}}}(\tau_{\gamma_{1,2}})$.
\end{center}
\end{list}
\end{theorem}
\begin{remark}
When $\,g\!=\!1$, one just needs the relations $\,${\it (I)}. The relations
$\,${\it (II)}$\,$ and $\,${\it (III)}$\,$ appear respectively for $\,g\!=\!2\,$
and $\,g\!=\!3$.
\end{remark}
\vskip3mm
Denote by $\,\Phi\!:\!G_{g,1}\!\rightarrow\! \mathcal{M}_{g,1}\,$ the map
which associates to each generator $a$ of $\,G_{g,1}\,$ the corresponding
twist $\tau_{\alpha}$. Since the relations {\it (A), (T)} and {\it
(E$_{i,j,k}$)} are satisfied in $\,\mathcal{M}_{g,1}$, $\Phi$ is a homomorphism.
\noindent
Now, consider $\,\Psi:\mathcal{M}_{g,1}\rightarrow G_{g,1}\,$ defined by
$\,\Psi(\tau_{\alpha})=a\,$ for all
$\,\alpha\in\mathcal{H}$.
\begin{lemma}\label{psi}
$\Psi$ is a homomorphism.
\end{lemma}
This lemma allows us to prove theorem~\ref{principaltheorem} for
$\,n=1$. Indeed, since $\,\mathcal{M}_{g,1}$ is generated by
$\,\{\tau_{\alpha}\,/\,\alpha\in\mathcal{H}_{g,1}\}$,
one has $\,\Phi\circ\Psi=Id_{_{\mathcal{M}_{g,1}}}$. On the other hand,
$\,\{a\,/\,\alpha\in\mathcal{H}_{g,1}\}\,$
generates $\,G_{g,1}\,$ by proposition~\ref{generator}, so
$\,\Psi\circ\Phi=Id_{_{G_{g,1}}}$.
\vskip5mm\noindent
{\bf Proof of lemma~\ref{psi}.\,} We have to show that the
relations {\it (I)}, {\it (II)} and {\it (III)} are satisfied in
$G_{g,1}$. Relations {\it (I)} are braid relations and are therefore
satisfied by {\it (T)}. Let us look at the relation {\it (II)}. The
star relation $\,(E_{_{1,2,2}})$, together with
lemma~\ref{etoile}, gives $\,(a_{_{1}}\,b\,a_{_{2}})^4=c_{_{1,2}}\,c_{_{2,1}}$.
Thus, relation {\it (II)} is satisfied in $G_{g,1}\,$ if and only
if $\,\Psi(\theta)=c_{_{2,1}}$. Let us compute:
$$\begin{array}{rcll}
\Psi(\theta) & = &
b_{_{1}}\,a_{_{2}}\,b\,a_{_{1}}\,a_{_{1}}\,b\,a_{_{2}}\,b_{_{1}}(c_{_{1,2}}) & \\
&=&b_{_{1}}\,a_{_{2}}\,b\,a_{_{1}}\,a_{_{1}}\,b\,a_{_{2}}\,\bar{c_{_{1,2}}}
(b_{_{1}}) & \hbox{by {\it (T)}}, \\
&=&b_{_{1}}\,a_{_{2}}\,b\,a_{_{1}}\,a_{_{1}}\,b\,a_{_{2}}\,(\bar{a_{_{1}}}\,
\bar{a_{_{1}}}\,\bar{a_{_{2}}}\,\bar{b})^{3}c_{_{2,1}}(b_{_{1}}) &
\hbox{by } \,(E_{_{1,1,2}}), \\
&=&b_{_{1}}\,\bar{b}\,\bar{a_{_{1}}}\,\bar{a_{_{1}}}\,\bar{b}\,\bar{a_{_{1}}}\,
\bar{a_{_{1}}}\,c_{_{2,1}}(b_{_{1}}) & \hbox{by lemma~\ref{etoile}}, \\
&=&b_{_{1}}\,\bar{b_{_{1}}}(c_{_{2,1}}) & \hbox{by {\it (T)}}, \\
&=& c_{_{2,1}}. &
\end{array}$$
\vskip3mm\noindent
Wajnryb's relation {\it (III)} is nothing but a lantern relation.
Via $\Psi$, it becomes in $\,G_{g,1}\,$
$$a_{_{2}}\,a_{_{1}}\,f\,c_{_{2,4}}=l\,m\,c_{_{1,2}}\ \ \ (\ast)$$
\vskip3mm\noindent
where
$\,m=\bar{b_{_{1}}}\,\bar{a_{_{2}}}\,\bar{c_{_{2,4}}}\,\bar{b_{_{1}}}
(c_{_{1,2}})$,
$\,l=\bar{b}\,\bar{a_{_{1}}}\,\bar{a_{_{2}}}\,\bar{b}(m)\,$ and
$\,f=b_{_{2}}\,c_{_{2,4}}\,b_{_{1}}\,a_{_{2}}\,b\,s(w)$, with
$\,s=\Psi(\sigma)=\bar{c_{_{2,4}}}\,\bar{b_{_{2}}}(m)\,$ and
$\,w=\Psi(\omega)=\bar{a_{_{1}}}\,\bar{b}\,\bar{a_{_{2}}}\,\bar{b_{_{1}}}
(c_{_{1,2}})$.
\vskip3mm\noindent
In $G_{g,1}$, the lantern relation $\,(L_{_{1,2,4}})\,$ yields
$$a_{_{1}}\,c_{_{1,2}}\,c_{_{2,4}}\,a_{_{4}}=
c_{_{1,4}}\,\bar{X}\,a_{_{2}}\,X\,a_{_{2}}\ \ \ (L_{_{1,2,4}})$$
where $\,X=b\,a_{_{1}}\,a_{_{4}}\,b$. To prove that the relation $\,(\ast)\,$
is satisfied in $\,G_{g,1}$, we will see that it is exactly the
conjugate of the relation $\,(L_{_{1,2,4}})\,$ by
$\,h=b_{_{2}}\,a_{_{4}}\,\bar{c_{_{4,1}}}\,\bar{b_{_{2}}}\,b
\,a_{_{2}}\,a_{_{1}}\,b\,b_{_{1}}\,c_{_{1,2}}\,a_{_{2}}\,b_{_{1}}$.
This will be done by proving the following seven equalities in
$\,G_{g,1}\,$:
$$\begin{array}{c}
\hbox{ 1) }\,h(a_{_{1}})=a_{_{2}}\ \ \ \hbox{ 2) }\,h(c_{_{1,2}})=a_{_{1}}
\ \ \ \hbox{ 3) }\,h(c_{_{2,4}})=f\ \ \ \hbox{ 4) }\,h(a_{_{4}})=c_{_{2,4}} \\ \\
\hbox{ 5) }\,h(c_{_{1,4}})=l\ \ \ \hbox{ 6) }\,h(a_{_{2}})=c_{_{1,2}}
\ \ \ \hbox{ 7) }\,h\bar{X}(a_{_{2}})=m.
\end{array}$$
\vskip3mm\noindent
1) Just applying the relations {\it (T)}, one obtains:
$$\begin{array}{rcl}
h(a_{_{1}}) & = & b_{_{2}}\,a_{_{4}}\,\bar{c_{_{4,1}}}\,\bar{b_{_{2}}}\,b
\,a_{_{2}}\,a_{_{1}}\,b\,b_{_{1}}\,c_{_{1,2}}\,a_{_{2}}\,b_{_{1}}(a_{_{1}}) \\
&=& b_{_{2}}\,a_{_{4}}\,\bar{c_{_{4,1}}}\,\bar{b_{_{2}}}\,b
\,a_{_{2}}\,a_{_{1}}\,\bar{a_{_{1}}}(b) \\
&=& b_{_{2}}\,a_{_{4}}\,\bar{c_{_{4,1}}}\,
\bar{b_{_{2}}}\,b\,\bar{b}(a_{_{2}}) \\
&=& a_{_{2}}\,.
\end{array}$$
\noindent
2) Using the relations {\it (T)} again, we get
$$
\begin{array}{rcl}
h(c_{_{1,2}}) & = & b_{_{2}}\,a_{_{4}}\,\bar{c_{_{4,1}}}\,\bar{b_{_{2}}}\,b
\,a_{_{2}}\,a_{_{1}}\,b\,b_{_{1}}\,c_{_{1,2}}\,a_{_{2}}\,b_{_{1}}(c_{_{1,2}}) \\
&=& b_{_{2}}\,a_{_{4}}\,\bar{c_{_{4,1}}}\,\bar{b_{_{2}}}\,b\,a_{_{2}}\,
a_{_{1}}\,b\,b_{_{1}}\,c_{_{1,2}}\,a_{_{2}}\,\bar{c_{_{1,2}}}(b_{_{1}}) \\
&=& b_{_{2}}\,a_{_{4}}\,\bar{c_{_{4,1}}}\,\bar{b_{_{2}}}\,b
\,a_{_{2}}\,a_{_{1}}\,b\,b_{_{1}}\,\bar{b_{_{1}}}(a_{_{2}}) \\
&=& b_{_{2}}\,a_{_{4}}\,\bar{c_{_{4,1}}}\,\bar{b_{_{2}}}\,b
\,a_{_{2}}\,a_{_{1}}\,\bar{a_{_{2}}}(b) \\
&=& b_{_{2}}\,a_{_{4}}\,\bar{c_{_{4,1}}}\,\bar{b_{_{2}}}\,b
\,\bar{b}(a_{_{1}}) \\
&=& a_{_{1}}\,.
\end{array}
$$
\noindent
3) The relation $\,(L_{_{2,3,4}})\,$ yields
$$a_{_{2}}\,c_{_{2,3}}\,c_{_{3,4}}\,a_{_{4}}=
c_{_{2,4}}\,\bar{Y}\,a_{_{3}}\,Y\,a_{_{3}}\ \ \ \hbox{ where }\
Y=b\,a_{_{2}}\,a_{_{4}}\,b.$$
Since $\,c_{_{2,3}}\!\!=\!\!c_{_{1,2}}\,$ by the handle relations, this
equality implies the following one:
$$\bar{c_{_{2,4}}}\,a_{_{2}}\,c_{_{1,2}}= \bar{Y}\,a_{_{3}}\,Y\,a_{_{3}}\,
\bar{a_{_{4}}}\,\bar{c_{_{3,4}}}\ \ \ \ \ (1).$$
\noindent
From this, we get:
$$\begin{array}{rcll}
h(c_{_{2,4}}) & = & b_{_{2}}\,a_{_{4}}\,\bar{c_{_{4,1}}}\,\bar{b_{_{2}}}\,b\,
a_{_{2}}\,a_{_{1}}\,b\,b_{_{1}}\,c_{_{1,2}}\,a_{_{2}}\,b_{_{1}}(c_{_{2,4}}) & \\
&=& b_{_{2}}\,a_{_{4}}\,\bar{c_{_{4,1}}}\,\bar{b_{_{2}}}\,b\,a_{_{2}}\,
a_{_{1}}\,b\,b_{_{1}}\,\bar{c_{_{2,4}}}\,c_{_{1,2}}\,a_{_{2}}(b_{_{1}})
& \hbox{by {\it (T)}} \\
&=& b_{_{2}}\,a_{_{4}}\,\bar{c_{_{4,1}}}\,\bar{b_{_{2}}}\,b\,a_{_{2}}\,
a_{_{1}}\,b\,b_{_{1}}\,\bar{Y}\,a_{_{3}}\,Y\,a_{_{3}}\,\bar{a_{_{4}}}\,
\bar{c_{_{3,4}}}(b_{_{1}}) & \hbox{by }\,(1) \\
&=& b_{_{2}}\,a_{_{4}}\,\bar{c_{_{4,1}}}\,\bar{b_{_{2}}}\,b\,a_{_{2}}\,
a_{_{1}}\,b\,b_{_{1}}\,\bar{b}\,\bar{a_{_{2}}}\,\bar{a_{_{4}}}\,\bar{b}\,
a_{_{3}}\,b\,a_{_{2}}\,a_{_{4}}\,b(b_{_{1}}) & \hbox{by {\it (T)}} \\
&=& b_{_{2}}\,a_{_{4}}\,\bar{c_{_{4,1}}}\,\bar{b_{_{2}}}\,b\,a_{_{1}}\,
\bar{b_{_{1}}}\,a_{_{2}}\,b_{_{1}}\,\bar{a_{_{4}}}\,a_{_{3}}\,b\,
\bar{a_{_{3}}}\,\bar{b_{_{1}}}(a_{_{2}}) & \hbox{by {\it (T)}} \\
&=& b_{_{2}}\,a_{_{4}}\,\bar{c_{_{4,1}}}\,\bar{b_{_{2}}}\,b\,a_{_{1}}\,
\bar{b_{_{1}}}\,a_{_{2}}\,\bar{a_{_{4}}}\,a_{_{3}}\,\bar{a_{_{2}}}(b)
& \hbox{by {\it (T)}} \\
&=& b_{_{2}}\,a_{_{4}}\,\bar{c_{_{4,1}}}\,b\,a_{_{1}}\,
a_{_{3}}\,\bar{b_{_{2}}}\,b(a_{_{4}}) & \hbox{by {\it (T)}} \\
&=& b_{_{2}}\,a_{_{4}}\,(\bar{a_{_{1}}}\,\bar{a_{_{3}}}\,\bar{a_{_{4}}}\,
\bar{b})^{3}\,c_{_{1,3}}\,c_{_{3,4}}\,b\,a_{_{1}}\,a_{_{3}}\,
\bar{b_{_{2}}}\,b(a_{_{4}}) & \hbox{by }\,(E_{_{1,3,4}}) \\
&=& b_{_{2}}\,\bar{a_{_{1}}}\,\bar{a_{_{3}}}\,\bar{b}\,
(\bar{a_{_{1}}}\,\bar{a_{_{3}}}\,\bar{a_{_{4}}}\,\bar{b})^{2}\,
b\,a_{_{1}}\,a_{_{3}}\,c_{_{3,4}}\,b\,a_{_{4}}(b_{_{2}})
& \hbox{by {\it (T)}} \\
&=& b_{_{2}}\,\bar{a_{_{1}}}\,\bar{a_{_{3}}}\,\bar{b}\,
\bar{a_{_{1}}}\,\bar{a_{_{3}}}\,\bar{b}\,\bar{a_{_{4}}}\,\bar{b}\,
b\,a_{_{4}}\,\bar{b_{_{2}}}(c_{_{3,4}})
& \hbox{by {\it (T)}} \\
&=& c_{_{3,4}} & \hbox{by {\it (T)}.} \\
\end{array}$$
Now, if $\,x\!=\!c_{_{1,2}}\,b_{_{1}}\,c_{_{2,4}}\,a_{_{2}}\,b_{_{1}}\,
b_{_{2}}\,c_{_{2,4}}\,\bar{a_{_{1}}}\,\bar{b}\,\bar{a_{_{2}}}\,
\bar{b_{_{1}}}(c_{_{1,2}})$, one has
$$f=b_{_{2}}\,c_{_{2,4}}\,b_{_{1}}\,a_{_{2}}\,b\,\bar{c_{_{2,4}}}\,
\bar{b_{_{2}}}\,\bar{b_{_{1}}}\,\bar{a_{_{2}}}\,\bar{c_{_{2,4}}}\,
\bar{b_{_{1}}}(x)\,.$$
First, let us compute $x$:
$$\begin{array}{rcll}
x & = & c_{_{1,2}}\,b_{_{1}}\,c_{_{2,4}}\,a_{_{2}}\,b_{_{1}}\,
b_{_{2}}\,c_{_{2,4}}\,\bar{a_{_{1}}}\,\bar{b}\,\bar{a_{_{2}}}\,
\bar{b_{_{1}}}(c_{_{1,2}}) & \\
&=& c_{_{1,2}}\,b_{_{1}}\,c_{_{2,4}}\, a_{_{2}}\,b_{_{1}}\,
b_{_{2}}\,c_{_{2,4}}\,c_{_{1,2}}\,\bar{a_{_{1}}}\,\bar{b}\,
\bar{a_{_{2}}}(b_{_{1}}) & \hbox{by {\it (T)}} \\
&=& c_{_{1,2}}\,b_{_{1}}\,c_{_{2,4}}\, a_{_{2}}\,b_{_{1}}\,
b_{_{2}}\,(a_{_{1}}\,a_{_{2}}\,a_{_{4}}\,b)^{3}\,\bar{c_{_{4,1}}}\,
\bar{a_{_{1}}}\,\bar{b}\,
\bar{a_{_{2}}}(b_{_{1}}) & \hbox{by }\,(E_{_{1,2,4}}) \\
&=& c_{_{1,2}}\,b_{_{1}}\,c_{_{2,4}}\, a_{_{2}}\,b_{_{1}}\,
b_{_{2}}\,(a_{_{1}}\,a_{_{2}}\,a_{_{4}}\,b)^{2}\,a_{_{1}}\,a_{_{2}}\,
a_{_{4}}\,b\,\bar{a_{_{1}}}\,\bar{b}\,
\bar{a_{_{2}}}(b_{_{1}}) & \hbox{by {\it (T)}} \\
&=& c_{_{1,2}}\,b_{_{1}}\,c_{_{2,4}}\, a_{_{2}}\,b_{_{1}}\,
b_{_{2}}\,(a_{_{1}}\,a_{_{2}}\,a_{_{4}}\,b)^{2}\,a_{_{4}}\,a_{_{2}}\,
\bar{b}\,a_{_{1}}\,b\,\bar{b}\,
\bar{a_{_{2}}}(b_{_{1}}) & \hbox{by {\it (T)}} \\
&=& c_{_{1,2}}\,b_{_{1}}\,c_{_{2,4}}\, a_{_{2}}\,b_{_{1}}\,
b_{_{2}}\,(a_{_{1}}\,a_{_{2}}\,a_{_{4}}\,b)^{2}\,a_{_{4}}\,
\bar{b}\,\bar{a_{_{2}}}\,b(b_{_{1}}) & \hbox{by {\it (T)}} \\
&=& c_{_{1,2}}\,b_{_{1}}\,c_{_{2,4}}\, a_{_{2}}\,b_{_{1}}\,
b_{_{2}}\,a_{_{1}}\,a_{_{2}}\,a_{_{4}}\,b\,a_{_{1}}\,a_{_{2}}\,b\,
a_{_{4}}\,b\,\bar{b}\,\bar{a_{_{2}}}(b_{_{1}}) & \hbox{by {\it (T)}} \\
&=& c_{_{1,2}}\,b_{_{1}}\,c_{_{2,4}}\, a_{_{2}}\,b_{_{1}}\,
b_{_{2}}\,a_{_{1}}\,a_{_{2}}\,a_{_{4}}\,b\,a_{_{1}}\,\bar{b}\,a_{_{2}}\,
b(b_{_{1}}) & \hbox{by {\it (T)}} \\
&=& c_{_{1,2}}\,b_{_{1}}\,c_{_{2,4}}\, b_{_{2}}\,a_{_{2}}\,b_{_{1}}\,
a_{_{2}}\,a_{_{4}}\,b\,a_{_{1}}\,b\,\bar{b}\,a_{_{2}}
(b_{_{1}}) & \hbox{by {\it (T)}} \\
&=& c_{_{1,2}}\,b_{_{1}}\,c_{_{2,4}}\, b_{_{2}}\,b_{_{1}}\,a_{_{2}}\,b_{_{1}}\,
a_{_{4}}\,b\,\bar{b_{_{1}}}(a_{_{2}}) & \hbox{by {\it (T)}} \\
&=& c_{_{1,2}}\,b_{_{1}}\,c_{_{2,4}}\, b_{_{2}}\,b_{_{1}}\,a_{_{2}}\,
a_{_{4}}\,\bar{a_{_{2}}}(b) & \hbox{by {\it (T)}} \\
&=& c_{_{1,2}}\,b_{_{1}}\,c_{_{2,4}}\, b_{_{2}}\,\bar{b}(a_{_{4}})
& \hbox{by {\it (T)}.} \\
\end{array}$$
Next, using the braid relations, we prove that $\,b_{_{1}},\ c_{_{2,4}},
\ b_{_{2}}\,$ and $\,a_{_{2}}\,$ commute with $x$:
$$b_{_{1}}(x) = b_{_{1}}\,c_{_{1,2}}\,b_{_{1}}\,c_{_{2,4}}\, b_{_{2}}\,
\bar{b}(a_{_{4}}) = c_{_{1,2}}\,b_{_{1}}\,c_{_{1,2}}\,c_{_{2,4}}\, b_{_{2}}\,
\bar{b}(a_{_{4}}) = x,$$
$$c_{_{2,4}}(x) = c_{_{1,2}}\,b_{_{1}}\,c_{_{2,4}}\,b_{_{1}}\, b_{_{2}}\,
\bar{b}(a_{_{4}}) = x,$$
$$b_{_{2}}(x) = c_{_{1,2}}\,b_{_{1}}\,b_{_{2}}\,c_{_{2,4}}\,b_{_{2}}\,
\bar{b}(a_{_{4}}) = c_{_{1,2}}\,b_{_{1}}\,c_{_{2,4}}\,b_{_{2}}\,c_{_{2,4}}\,
\bar{b}(a_{_{4}}) = x,$$
$$\begin{array}{rcll}
a_{_{2}}(x) & = & a_{_{2}}\,c_{_{1,2}}\,b_{_{1}}\,c_{_{2,4}}\,a_{_{2}}\,
b_{_{1}}\,b_{_{2}}\,c_{_{2,4}}\,\bar{a_{_{1}}}\,\bar{b}\,\bar{a_{_{2}}}\,
\bar{b_{_{1}}}(c_{_{1,2}}) & \\
&=& c_{_{1,2}}\,b_{_{1}}\,a_{_{2}}\,b_{_{1}}\,c_{_{2,4}}\,
b_{_{1}}\,b_{_{2}}\,c_{_{2,4}}\,\bar{a_{_{1}}}\,\bar{b}\,\bar{a_{_{2}}}\,
\bar{b_{_{1}}}(c_{_{1,2}}) & \hbox{by {\it (T)}} \\
&=& c_{_{1,2}}\,b_{_{1}}\,a_{_{2}}\,c_{_{2,4}}\,b_{_{1}}\,c_{_{2,4}}\,
b_{_{2}}\,c_{_{2,4}}\,\bar{a_{_{1}}}\,\bar{b}\,\bar{a_{_{2}}}\,
\bar{b_{_{1}}}(c_{_{1,2}}) & \hbox{by {\it (T)}} \\
&=& c_{_{1,2}}\,b_{_{1}}\,a_{_{2}}\,c_{_{2,4}}\,b_{_{1}}\,b_{_{2}}\,
c_{_{2,4}}\,b_{_{2}}\,\bar{a_{_{1}}}\,\bar{b}\,\bar{a_{_{2}}}\,
\bar{b_{_{1}}}(c_{_{1,2}}) & \hbox{by {\it (T)}} \\
&=& c_{_{1,2}}\,b_{_{1}}\,a_{_{2}}\,c_{_{2,4}}\,b_{_{1}}\,b_{_{2}}\,
c_{_{2,4}}\,\bar{a_{_{1}}}\,\bar{b}\,\bar{a_{_{2}}}\,
\bar{b_{_{1}}}(c_{_{1,2}}) & \hbox{by {\it (T)}} \\
&=& x. &
\end{array}$$
\noindent
To conclude, we get:
$$\begin{array}{rcll}
f & = & b_{_{2}}\,c_{_{2,4}}\,b_{_{1}}\,a_{_{2}}\,b\,\bar{c_{_{2,4}}}\,
\bar{b_{_{2}}}\,\bar{b_{_{1}}}\,\bar{a_{_{2}}}\,\bar{c_{_{2,4}}}\,
\bar{b_{_{1}}}(x) & \\
&=& b_{_{2}}\,c_{_{2,4}}\,b_{_{1}}\,a_{_{2}}\,b(x) & \\
&=& b_{_{2}}\,c_{_{2,4}}\,b_{_{1}}\,a_{_{2}}\,b\,c_{_{1,2}}\,b_{_{1}}\,
c_{_{2,4}}\,b_{_{2}}\,\bar{b}(a_{_{4}}) & \\
&=& b_{_{2}}\,c_{_{2,4}}\,b_{_{1}}\,a_{_{2}}\,c_{_{1,2}}\,b_{_{1}}\,
c_{_{2,4}}\,\bar{a_{_{4}}}(b_{_{2}}) & \hbox{by {\it (T)}} \\
&=& b_{_{2}}\,c_{_{2,4}}\,\bar{a_{_{4}}}\,\bar{b_{_{2}}}\,b_{_{1}}\,
a_{_{2}}\,c_{_{1,2}}\,b_{_{1}}(c_{_{2,4}}) & \hbox{by {\it (T)}} \\
&=& b_{_{2}}\,(a_{_{1}}\,a_{_{2}}\,a_{_{4}}\,b)^{3}\,\bar{c_{_{1,2}}}\,
\bar{c_{_{4,1}}}\,\bar{a_{_{4}}}\,\bar{b_{_{2}}}\,b_{_{1}}\,a_{_{2}}\,
c_{_{1,2}}\,b_{_{1}}(c_{_{2,4}}) & \hbox{by }\,(E_{_{1,2,4}}) \\
&=& b_{_{2}}\,(a_{_{1}}\,a_{_{2}}\,a_{_{4}}\,b)^{3}\,\bar{a_{_{4}}}\,
\bar{c_{_{4,1}}}\,\bar{b_{_{2}}}\,\bar{c_{_{1,2}}}\,b_{_{1}}\,
c_{_{1,2}}\,a_{_{2}}\,b_{_{1}}(c_{_{2,4}}) & \hbox{by {\it (T)}} \\
&=& b_{_{2}}\,(a_{_{1}}\,a_{_{2}}\,b)^{2}\,a_{_{4}}\,b\,a_{_{1}}\,a_{_{2}}\,b\,
\bar{c_{_{4,1}}}\,\bar{b_{_{2}}}\,\bar{c_{_{1,2}}}\,b_{_{1}}\,
c_{_{1,2}}\,a_{_{2}}\,b_{_{1}}(c_{_{2,4}}) & \hbox{by lemma~\ref{etoile}}\\
&=& (a_{_{1}}\,a_{_{2}}\,b)^{2}\,b_{_{2}}\,a_{_{4}}\,\bar{c_{_{4,1}}}\,
\bar{b_{_{2}}}\,b\,a_{_{1}}\,a_{_{2}}\,b\,
b_{_{1}}\,c_{_{1,2}}\,\bar{b_{_{1}}}\,a_{_{2}}\,
b_{_{1}}(c_{_{2,4}}) & \hbox{by {\it (T)}} \\
&=& (a_{_{1}}\,a_{_{2}}\,b)^{2}\,b_{_{2}}\,a_{_{4}}\,\bar{c_{_{4,1}}}\,
\bar{b_{_{2}}}\,b\,a_{_{2}}\,a_{_{1}}\,b\,
b_{_{1}}\,c_{_{1,2}}\,a_{_{2}}\,
b_{_{1}}\,\bar{a_{_{2}}}(c_{_{2,4}}) & \hbox{by {\it (T)}} \\
&=& (a_{_{1}}\,a_{_{2}}\,b)^{2}\,h(c_{_{2,4}}) & \\
&=& (a_{_{1}}\,a_{_{2}}\,b)^{2}(c_{_{3,4}}) & \\
&=& c_{_{3,4}} & \hbox{by {\it (T)}\,.} \\
\end{array}$$
Finally, we have proved that $\,h(c_{_{2,4}})=c_{_{3,4}}=f$.
\vskip3mm\noindent
4) We can compute $\,h(a_{_{4}})\,$ as follows:
$$\begin{array}{rcll}
h(a_{_{4}}) & = & b_{_{2}}\,a_{_{4}}\,\bar{c_{_{4,1}}}\,\bar{b_{_{2}}}\,b
\,a_{_{2}}\,a_{_{1}}\,b\,b_{_{1}}\,c_{_{1,2}}\,a_{_{2}}\,b_{_{1}}(a_{_{4}}) &\\
&=& b_{_{2}}\,a_{_{4}}\,\bar{c_{_{4,1}}}\,\bar{b_{_{2}}}\,b
\,a_{_{2}}\,a_{_{1}}\,b(a_{_{4}}) & \hbox{by {\it (T)}} \\
&=& b_{_{2}}\,a_{_{4}}\,(\bar{a_{_{1}}}\,\bar{a_{_{2}}}\,\bar{a_{_{4}}}\,
\bar{b})^{3}\,c_{_{1,2}}\,c_{_{2,4}}\,\bar{b_{_{2}}}\,b
\,a_{_{2}}\,a_{_{1}}\,b(a_{_{4}}) & \hbox{by }\,(E_{_{1,2,4}}) \\
&=& b_{_{2}}\,c_{_{2,4}}\,\bar{a_{_{1}}}\,\bar{a_{_{2}}}\,
\bar{b}\,\bar{a_{_{1}}}\,\bar{a_{_{2}}}\,\bar{a_{_{4}}}\,
\bar{b}\,\bar{a_{_{1}}}\,\bar{a_{_{2}}}\,\bar{a_{_{4}}}\,
\bar{b}\,\bar{b_{_{2}}}\,b\,a_{_{2}}\,
a_{_{1}}\,b(a_{_{4}}) & \hbox{by {\it (T)}} \\
&=& b_{_{2}}\,c_{_{2,4}}\,\bar{a_{_{1}}}\,\bar{a_{_{2}}}\,
\bar{b}\,\bar{a_{_{1}}}\,\bar{a_{_{2}}}\,\bar{b}\,\bar{a_{_{4}}}\,
\bar{b}\,\bar{b_{_{2}}}\,b(a_{_{4}}) & \hbox{by {\it (T)}} \\
&=& b_{_{2}}\,c_{_{2,4}}\,\bar{a_{_{1}}}\,\bar{a_{_{2}}}\,
\bar{b}\,\bar{a_{_{1}}}\,\bar{a_{_{2}}}\,\bar{b}\,\bar{a_{_{4}}}\,
a_{_{4}}(b_{_{2}}) & \hbox{by {\it (T)}} \\
&=& b_{_{2}}\,c_{_{2,4}}(b_{_{2}}) & \hbox{by {\it (T)}} \\
&=& c_{_{2,4}} & \hbox{by {\it (T)}.} \\
\end{array}$$
\noindent
5) For $\,h(c_{_{1,4}})$, we have:
$$\begin{array}{rcll}
h(c_{_{1,4}}) & = & b_{_{2}}\,a_{_{4}}\,\bar{c_{_{4,1}}}\,\bar{b_{_{2}}}\,b\,
a_{_{2}}\,a_{_{1}}\,b\,b_{_{1}}\,c_{_{1,2}}\,a_{_{2}}\,b_{_{1}}(c_{_{1,4}}) & \\
&=& b_{_{2}}\,a_{_{4}}\,\bar{c_{_{4,1}}}\,b\,a_{_{2}}\,a_{_{1}}\,
b\,b_{_{1}}\,a_{_{2}}\,\bar{b_{_{2}}}(c_{_{1,4}}) & \hbox{by {\it (T)}} \\
&=& b_{_{2}}\,a_{_{4}}\,\bar{a_{_{4}}}\,\bar{b}\,\bar{a_{_{2}}}\,
\bar{a_{_{1}}}\,\bar{b}\,\bar{a_{_{4}}}\,\bar{a_{_{2}}}\,\bar{a_{_{1}}}\,
c_{_{1,2}}\,c_{_{2,4}}\,b_{_{1}}\,a_{_{2}}\,\bar{b_{_{2}}}(c_{_{1,4}})
& \hbox{by }\,(E_{_{1,2,4}}) \\
&=& \bar{b}\,\bar{a_{_{2}}}\,\bar{a_{_{1}}}\,\bar{b}\,\bar{a_{_{2}}}\,
c_{_{1,2}}\,b_{_{2}}\,c_{_{2,4}}\,b_{_{1}}\,\bar{a_{_{4}}}\,\bar{a_{_{1}}}\,
a_{_{2}}\,c_{_{1,4}}(b_{_{2}}) & \hbox{by {\it (T)}} \\
&=& \bar{b}\,\bar{a_{_{2}}}\,\bar{a_{_{1}}}\,\bar{b}\,\bar{a_{_{2}}}\,
c_{_{1,2}}\,b_{_{2}}\,c_{_{2,4}}\,b_{_{1}}\,c_{_{1,2}}\,c_{_{2,4}}\,\bar{X}\,
\bar{a_{_{2}}}\,X(b_{_{2}}) & \hbox{by }\,(L_{_{1,2,4}}) \\
&=& \bar{b}\,\bar{a_{_{2}}}\,\bar{a_{_{1}}}\,\bar{b}\,\bar{a_{_{2}}}\,
c_{_{1,2}}\,b_{_{2}}\,c_{_{2,4}}\,b_{_{1}}\,c_{_{2,4}}\,\bar{b}\,
\bar{a_{_{1}}}\,\bar{a_{_{4}}}\,\bar{b}\,\bar{a_{_{2}}}\,b\,a_{_{4}}(b_{_{2}})
& \hbox{by {\it (T)}} \\
&=& \bar{b}\,\bar{a_{_{2}}}\,\bar{a_{_{1}}}\,\bar{b}\,\bar{a_{_{2}}}\,
c_{_{1,2}}\,b_{_{2}}\,b_{_{1}}\,c_{_{2,4}}\,b_{_{1}}\,\bar{b}\,
\bar{a_{_{1}}}\,\bar{a_{_{4}}}\,a_{_{2}}\,\bar{b}\,\bar{a_{_{2}}}\,
a_{_{4}}(b_{_{2}}) & \hbox{by {\it (T)}} \\
&=& \bar{b}\,\bar{a_{_{2}}}\,\bar{a_{_{1}}}\,\bar{b}\,\bar{a_{_{2}}}\,
c_{_{1,2}}\,b_{_{1}}\,b_{_{2}}\,c_{_{2,4}}\,b_{_{1}}\,\bar{b}\,
\bar{a_{_{1}}}\,a_{_{2}}\,b\,\bar{a_{_{4}}}\,\bar{b}(b_{_{2}})
& \hbox{by {\it (T)}} \\
&=& \bar{b}\,\bar{a_{_{2}}}\,\bar{a_{_{1}}}\,\bar{b}\,\bar{a_{_{2}}}\,
c_{_{1,2}}\,b_{_{1}}\,b_{_{2}}\,c_{_{2,4}}\,b_{_{1}}\,\bar{b}\,
\bar{a_{_{1}}}\,a_{_{2}}\,b\,b_{_{2}}(a_{_{4}}) & \hbox{by {\it (T)}} \\
&=& \bar{b}\,\bar{a_{_{2}}}\,\bar{a_{_{1}}}\,\bar{b}\,\bar{a_{_{2}}}\,
c_{_{1,2}}\,b_{_{1}}\,c_{_{2,4}}\,b_{_{2}}\,c_{_{2,4}}\,b_{_{1}}\,\bar{b}\,
\bar{a_{_{1}}}\,a_{_{2}}\,\bar{a_{_{4}}}(b) & \hbox{by {\it (T)}} \\
&=& \bar{b}\,\bar{a_{_{2}}}\,\bar{a_{_{1}}}\,\bar{b}\,\bar{a_{_{2}}}\,
c_{_{1,2}}\,b_{_{1}}\,c_{_{2,4}}\,b_{_{2}}\,c_{_{2,4}}\,\bar{b}\,\bar{a_{_{1}}}\,
\bar{a_{_{4}}}\,\bar{b}\,b_{_{1}}(a_{_{2}}) & \hbox{by {\it (T)}} \\
&=& \bar{b}\,\bar{a_{_{2}}}\,\bar{a_{_{1}}}\,\bar{b}\,\bar{a_{_{2}}}\,
c_{_{1,2}}\,b_{_{1}}\,c_{_{2,4}}\,b_{_{2}}\,c_{_{2,4}}\,\bar{b}\,\bar{a_{_{1}}}\,
\bar{a_{_{4}}}\,\bar{b}\,\bar{a_{_{2}}}(b_{_{1}}) & \hbox{by {\it (T)}} \\
&=& \bar{b}\,\bar{a_{_{2}}}\,\bar{a_{_{1}}}\,\bar{b}\,\bar{a_{_{2}}}\,
c_{_{1,2}}\,b_{_{1}}\,c_{_{2,4}}\,b_{_{2}}\,a_{_{1}}\,a_{_{4}}\,a_{_{2}}\,X\,
\bar{c_{_{1,2}}}\,\bar{c_{_{4,1}}}(b_{_{1}})
& \hbox{by }\,(E_{_{1,2,4}}) \\
&=& \bar{b}\,\bar{a_{_{2}}}\,\bar{a_{_{1}}}\,\bar{b}\,c_{_{1,2}}\,
\bar{a_{_{2}}}\,b_{_{1}}\,a_{_{2}}\,\bar{c_{_{1,2}}}\,c_{_{2,4}}(b_{_{1}})
& \hbox{by {\it (T)}} \\
&=& \bar{b}\,\bar{a_{_{2}}}\,\bar{a_{_{1}}}\,\bar{b}\,c_{_{1,2}}\,
b_{_{1}}\,a_{_{2}}\,\bar{b_{_{1}}}\,\bar{c_{_{1,2}}}\,\bar{b_{_{1}}}(c_{_{2,4}})
& \hbox{by {\it (T)}} \\
&=& \bar{b}\,\bar{a_{_{2}}}\,\bar{a_{_{1}}}\,\bar{b}\,c_{_{1,2}}\,
b_{_{1}}\,a_{_{2}}\,\bar{c_{_{1,2}}}\,\bar{b_{_{1}}}\,
\bar{c_{_{1,2}}}(c_{_{2,4}}) & \hbox{by {\it (T)}} \\
&=& \bar{b}\,\bar{a_{_{2}}}\,\bar{a_{_{1}}}\,\bar{b}\,\bar{b_{_{1}}}\,
c_{_{1,2}}\,b_{_{1}}\,a_{_{2}}\,\bar{b_{_{1}}}(c_{_{2,4}})
& \hbox{by {\it (T)}} \\
&=& \bar{b}\,\bar{a_{_{2}}}\,\bar{a_{_{1}}}\,\bar{b}\,\bar{b_{_{1}}}\,
c_{_{1,2}}\,\bar{a_{_{2}}}\,b_{_{1}}\,a_{_{2}}(c_{_{2,4}})
& \hbox{by {\it (T)}} \\
&=& \bar{b}\,\bar{a_{_{2}}}\,\bar{a_{_{1}}}\,\bar{b}\,\bar{b_{_{1}}}\,
\bar{a_{_{2}}}\,\bar{c_{_{2,4}}}\,c_{_{1,2}}(b_{_{1}})
& \hbox{by {\it (T)}} \\
&=& \bar{b}\,\bar{a_{_{2}}}\,\bar{a_{_{1}}}\,\bar{b}\,\bar{b_{_{1}}}\,
\bar{a_{_{2}}}\,\bar{c_{_{2,4}}}\,\bar{b_{_{1}}}(c_{_{1,2}})
& \hbox{by {\it (T)}} \\
&=& l\,.
\end{array}$$
\noindent
6) By the relations {\it (T)}, one has
$$\begin{array}{rcl}
h(a_{_{2}}) & = & b_{_{2}}\,a_{_{4}}\,\bar{c_{_{4,1}}}\,\bar{b_{_{2}}}\,b
\,a_{_{2}}\,a_{_{1}}\,b\,b_{_{1}}\,c_{_{1,2}}\,a_{_{2}}\,b_{_{1}}(a_{_{2}}) \\
&=& b_{_{2}}\,a_{_{4}}\,\bar{c_{_{4,1}}}\,\bar{b_{_{2}}}\,b\,a_{_{2}}\,a_{_{1}}\,
b\,b_{_{1}}\,c_{_{1,2}}\,a_{_{2}}\,\bar{a_{_{2}}}(b_{_{1}}) \\
&=& b_{_{2}}\,a_{_{4}}\,\bar{c_{_{4,1}}}\,\bar{b_{_{2}}}\,b
\,a_{_{2}}\,a_{_{1}}\,b\,b_{_{1}}\,\bar{b_{_{1}}}(c_{_{1,2}}) \\
&=& c_{_{1,2}}\,.
\end{array}$$
\noindent
7) It remains to compute $\,h\bar{X}(a_{_{2}})$; we first determine $\,h(b)$.
Using the braid relations, one gets
$$\begin{array}{rcl}
h(b) & = & b_{_{2}}\,a_{_{4}}\,\bar{c_{_{4,1}}}\,\bar{b_{_{2}}}\,b
\,a_{_{2}}\,a_{_{1}}\,b\,b_{_{1}}\,c_{_{1,2}}\,a_{_{2}}\,b_{_{1}}(b) \\
&=& b_{_{2}}\,a_{_{4}}\,\bar{c_{_{4,1}}}\,\bar{b_{_{2}}}\,b
\,a_{_{2}}\,a_{_{1}}\,b\,b_{_{1}}\,\bar{b}(a_{_{2}}) \\
&=& b_{_{2}}\,a_{_{4}}\,\bar{c_{_{4,1}}}\,\bar{b_{_{2}}}\,b
\,a_{_{2}}\,\bar{a_{_{2}}}(b_{_{1}}) \\
&=& b_{_{1}}\,.\\
\end{array}$$
\vskip3mm\noindent
Thus, since $\,h(X)=h(b)\,h(a_{_{1}})\,h(a_{_{4}})\,h(b)=
b_{_{1}}\,a_{_{2}}\,c_{_{2,4}}\,b_{_{1}}\,$ and $\,a_{_{2}}\,$ commutes with
$\,c_{_{2,4}}$, one has
$\,h\bar{X}(a_{_{2}})=\bar{b_{_{1}}}\,\bar{a_{_{2}}}\,\bar{c_{_{2,4}}}\,
\bar{b_{_{1}}}(c_{_{1,2}})=m$.
\vskip3mm\noindent
This concludes the proof of lemma~\ref{psi}.
\eproof
\section{Proof of theorem~\ref{principaltheorem} \label{finpreuve}}
We will proceed by induction on $n$. To do this, we need the following exact
sequence (see \cite{Birman,Harer}):
\diagram[size=1.5em]
1 & \rto & \mathbf{Z}\times\pi_{1}(\Sigma_{g,n-1},p) & \rto^{f_{1}} &
\mathcal{M}_{g,n} & \rto^{f_{2}} & \mathcal{M}_{g,n-1} & \rto & 1\ . \\
\enddiagram
Here, $f_{2}\,$ is defined by capping $\delta_{n}\,$ off with a disc
centred at $p$ and by extending each map over the disc by the identity, and
$f_{1}\,$ by sending each $\,k\in\mathbf{Z}\,$ to
$\,\tau_{\delta_{n}}^k\,$ and each $\,\alpha\in\pi_{1}(\Sigma_{g,n-1},p)\,$
to the spin map $\,\tau_{\alpha'}\tau_{\alpha''}^{-1}\,$ $\,$($\alpha'$ and $\alpha''$
are two curves in $\Sigma_{g,n-1}\,$ which are separated by $\delta_{n}\,$
and such that $\,\alpha'=\alpha''=\alpha\,$ in $\Sigma_{g,n-1}$).
Let us denote by $\,a'_{_{1}},\ldots,a'_{_{2g+n-3}},b',b'_{_{1}},\ldots,
b'_{_{g-1}},(c'_{_{i,j}})_{1\leq i\not= j\leq2g+n-3}\,$ the \linebreak[4]
generators of $\,G_{g,n-1}\,$ corresponding to the curves in
$\,\mathcal{G}_{g,n-1}$. We define $\,g_{2}:G_{g,n}\rightarrow G_{g,n-1}\,$ by
$$
\begin{array}{rcll}
g_{2}(a_{_{i}}) & = & a'_{_{i}} & \hbox{ for all }\, i\not= 2g+n-2 \\
g_{2}(a_{_{2g+n-2}}) & = & a'_{_{1}} & \\
g_{2}(b) & = & b' & \\
g_{2}(b_{_{i}}) & = & b'_{_{i}} & \hbox{ for }\, 1\leq i\leq g-1 \\
g_{2}(c_{_{i,j}}) & = & c'_{_{i,j}}& \hbox{ for }\, 1\leq i,j\leq 2g+n-3 \\
g_{2}(c_{_{i,2g+n-2}}) & = & c'_{_{i,1}}& \hbox{ for }\, 2\leq i\leq 2g+n-3 \\
g_{2}(c_{_{2g+n-2,j}}) & = & c'_{_{1,j}}&\hbox{ for }\, 2\leq j\leq 2g+n-3 \\
g_{2}(c_{_{1,2g+n-2}}) & = & (a'_{_{1}}\,b'\,a'_{_{1}})^{4} & \\
g_{2}(c_{_{2g+n-2,1}}) & = & 1\,.&
\end{array}
$$
\vskip3mm
\begin{lemma}
For all $\,(g,n)\!\in\!{\mathbf{N}}^{\ast}\!\times\!{\mathbf{N}}^{\ast}$,
$g_{2}\,$ is a homomorphism.
\end{lemma}
\proof We have to prove that the relations in $\,G_{g,n}\,$ are
satisfied in $\,G_{g,n-1}\,$ via $g_{2}$. Since for all $i$ such
that $\,1\leq i\leq g-1$, one has $\,g_{2}(c_{_{2i,2i+1}})\!=\!c'_{_{2i,2i+1}}\,$
and $\,g_{2}(c_{_{2i-1,2i}})\!=\!c'_{_{2i-1,2i}}\,$, this is clear for the
handle relations.
So, let $\lambda$, $\mu$ be two elements of $\,\mathcal{G}_{g,n}\,$ which
do not intersect (resp. intersect transversally in a single point).
If $\,l\,$ and $\,m\,$ are the associated elements of $\,G_{g,n}$, we have
to prove that
\[
(\bullet)\left\{\begin{array}{c}
\,g_{2}(l)g_{2}(m)=g_{2}(m)g_{2}(l) \\
\Bigl(\hbox{resp. }\ g_{2}(l)g_{2}(m)g_{2}(l)=g_{2}(m)g_{2}(l)g_{2}(m)\Bigr).
\end{array}\right.\]
\vskip3mm\noindent
When $\lambda$ and $\mu$ are distinct from $\,\gamma_{_{2g+n-2,1}}\,$ and
$\,\gamma_{_{1,2g+n-2}}$, these relations are precisely braid relations
in $\,G_{g,n-1}$. If not, $\lambda$ and $\mu$ do not intersect in a
single point. Thus, it remains to consider the cases where $\,\lambda=\gamma_{_{1,2g+n-2}}\,$
or $\,\gamma_{_{2g+n-2,1}}\,$ and $\,\mu\in\mathcal{G}_{g,n}\,$ is a curve
disjoint from $\lambda$. For $\,\lambda=\gamma_{_{2g+n-2,1}}$, one has
$\,g_{2}(l)=1\,$ and the relation $\,(\bullet)\,$ is satisfied in
$\,G_{g,n-1}$. So, suppose that $\,\lambda\!=\!\gamma_{_{1,2g+n-2}}$. Then,
we have $\,g_{2}(l)\!=\!(a'_{_{1}}\,b'\,a'_{_{1}})^{4}$. The
curves in $\,\mathcal{G}_{g,n}\,$ which are disjoint from $\lambda$
are
$\,\beta,\beta_{1},\ldots,\beta_{g-1},\alpha_{1},\alpha_{2g+n-2},\gamma_{2g+n-2,1}\,$
and $\,(\gamma_{i,j})_{1\leq i<j\leq 2g+n-2}$. Let us look at the
different cases:
\begin{list}{--}{\leftmargin10mm\parsep=2mm\topsep=4mm}
\item By lemma~\ref{etoile}, $\,b'=g_{2}(b)\,$ and $\,a'_{_{1}}=
g_{2}(a_{_{1}})=g_{2}(a_{_{2g+n-2}})\,$ commute with $\,
(a'_{_{1}}\,b'\,a'_{_{1}})^{4}=g_{2}(l)$.
\item For all $i$, $\,1\leq i\leq g-1$,
$\,b'_{_{i}}=g_{2}(b_{_{i}})\,$ commutes with $\,
(a'_{_{1}}\,b'\,a'_{_{1}})^{4}$ by the braid relations
in $\,G_{g,n-1}$.
\item For all $\,i,j\,$ such that $\,1\leq i<j\leq 2g+n-2$, one has
$\,g_{2}(c_{_{i,j}})=c'_{_{i,j}}\,$ if $\,j\not = 2g+n-2$, and
$\,g_{2}(c_{_{i,j}})=c'_{_{i,1}}\,$ otherwise. In all cases, one
has that $\,g_{2}(c_{_{i,j}})g_{2}(l)=g_{2}(l)g_{2}(c_{_{i,j}})\,$ by the
braid relations in $\,G_{g,n-1}$.
\end{list}
\vskip3mm
Now, let us look at the star relations. For $\,i,j,k\!\not =\! 2g+n-2$,
$\,(E_{_{i,j,k}})$ is sent by $g_{2}\,$ to
$\,(E'_{_{i,j,k}})$, the star relation in $\,G_{g,n-1}\,$ involving
the same curves. For all $i,j$ such that $\,2\leq i\leq j<2g+n-2$,
$\,(E_{_{i,j,2g+n-2}})\,$ is sent to $\,(E'_{_{i,j,1}})$. Next, for
$\,2\leq j<2g+n-2$, $\,(E_{_{1,j,2g+n-2}})\,$ is sent to $\,(E'_{_{1,1,j}})$.
Finally, since $\,g_{2}(c_{_{2g+n-2,1}})=1\,$ and
$\,g_{2}(c_{_{1,2g+n-2}})=(a'_{_{1}}b'a'_{_{1}})^{4}$, the
relation $\,(E_{_{1,1,2g+n-2}})\,$ is satisfied in $\,G_{g,n-1}\,$
via $g_{2}\,$ by lemma~\ref{etoile}. This concludes the proof by remark~\ref{rem}.
\eproof
\hfill\break\indent
Since the relations {\it (T)}, {\it (A)} and {\it
(E$_{i,j,k}$)} are satisfied in $\,\mathcal{M}_{g,n}\,$ (see \cite{Gervais}),
one has a homomorphism $\,\Phi_{g,n}:G_{g,n}\rightarrow \mathcal{M}_{g,n}\,$
which associates to each $\,a\in\mathcal{G}_{g,n}\,$ the corresponding twist
$\,\tau_{\alpha}$. Since we view $\,\Sigma_{g,n}\,$ as a subsurface of
$\,\Sigma_{g,n-1}$, we have $\,\Phi_{g,n-1}\circ g_{2}\!=\!f_{2}\circ\Phi_{g,n}$.
Thus, we get the following commutative diagram:
\diagram[size=2.5em]
1 & \rto & \ker g_{2} & \rto & G_{g,n} & \rTo^{g_{2}} &&
G_{g,n-1} & \rto & 1 \\
& & \dto_{h_{g,n}} & & \dto_{\Phi_{g,n}} & &&
\dto_{\Phi_{g,n-1}} & &\\
1 & \rto & \mathbf{Z}\times\pi_{1}(\Sigma_{g,n-1},p) & \rTo^{f_{1}} &
\mathcal{M}_{g,n} & \rto^{f_{2}} && \mathcal{M}_{g,n-1} & \rto & 1 \\
\enddiagram
\vskip3mm\noindent
where $h_{g,n}\,$ is induced by $\Phi_{g,n}$.
\begin{proposition}\label{h}
$h_{g,n}\,$ is an isomorphism for all $\,g\!\geq\!1$ and $\,n\!\geq\! 2$.
\end{proposition}
In order to prove this proposition, we will first give a system of
generators for $\ker g_{2}$. Thus, we consider the following elements of
$\,\ker g_{2}$:
$$\begin{array}{c}
x_{_{0}}=a_{_{1}}\bar{a_{_{2g+n-2}}},\ \ x_{_{1}}=b(x_{_{0}}),\ \
x_{_{2}}=a_{_{2}}(x_{_{1}}),\ \ x_{_{3}}=b_{_{1}}(x_{_{2}}),\\ \\
\hbox{for }\, 2\leq i\leq g-1,\
\,x_{_{2i}}=c_{_{2i-2,2i}}(x_{_{2i-1}})\ \hbox{ and }\
x_{_{2i+1}}=b_{_{i}}(x_{_{2i}}),\\ \\
\hbox{and for }\,2g\leq k\leq 2g+n-3,\ \
x_{_{k}}=a_{_{k}}(x_{_{1}})\,.
\end{array}$$
\begin{remark}
If $\,g\!=\!1$, one has just to consider
$\,x_{_{0}},x_{_{1}},x_{_{2}},\ldots,x_{_{n-1}}$.
\end{remark}
\begin{lemma}\label{normal}
For all $\,(g,n)\!\in\!{\mathbf{N}}^{\ast}\!\times\!{\mathbf{N}}^{\ast}$,
$\,\ker g_{2}\,$ is normally generated by $\,d_{_{n}}\,$ and $\,x_{_{0}}$.
\end{lemma}
\proof Let us denote by $K$ the subgroup of $\,G_{g,n}\,$ normally
generated by $\,d_{_{n}}\,$ and $\,x_{_{0}}$. Since
$\,g_{2}(d_{_{n}})\!=\! 1\,$ and $\,g_{2}(a_{_{2g+n-2}})\!=\!
g_{2}(a_{_{1}})$, one has $\,K\!\subset\!\ker g_{2}$. In order to
prove the equality, we shall prove that $\,g_{2}\,$ induces a
monomorphism $\,\widetilde{g_{2}}\,$ from $\,G_{g,n}/K\,$ to
$\,G_{g,n-1}$.
\noindent
Define $\,k:G_{g,n-1}\rightarrow G_{g,n}/K\,$ by
$$
\begin{array}{rcll}
k(b') & = & \widetilde{b} & \\
k(b'_{_{i}}) & = & \widetilde{b_{_{i}}} & \hbox{ for }\, 1\leq i\leq g-1 \\
k(a'_{_{i}}) & = & \widetilde{a_{_{i}}} &
\hbox{ for all }i,\,\ 1\leq i\leq 2g+n-3 \\
k(c'_{_{i,j}}) & = & \widetilde{c_{_{i,j}}} &
\hbox{ for all }\,i\not= j,\,\ 1\leq i,j\leq 2g+n-3
\end{array}
$$
where, for $\,x\!\in\! G_{g,n}$, $\,\widetilde{x}\,$ denotes the class of
$x$ in $\,G_{g,n}/K$. Pasting a pair of pants to $\,\gamma_{2g+n-3,1}\,$ allows
us to view $\,\Sigma_{g,n-1}\,$ as a subsurface of $\,\Sigma_{g,n}$,
and $\,\mathcal{G}_{g,n-1}\,$ as a subset of $\,\mathcal{G}_{g,n}$.
Thus, $k$ is clearly a homomorphism. Let us prove that
$\,k\!\circ \widetilde{g_{2}}\!=\!Id$.
\noindent
Denote by $H$ the subgroup of $\,G_{g,n}/K\,$ generated by $\,\{\widetilde{b},
\widetilde{b_{_{1}}},\ldots,\widetilde{b}_{_{g-1}},$
$\widetilde{a_{_{1}}},\ldots,\widetilde{a}_{_{2g+n-3}},
(\widetilde{c}_{_{i,j}})_{_{1\leq i\not= j\leq 2g+n-3}}\}$.
Since, by definition of $\,g_{2}\,$ and $k$, one has $\,k\circ
\widetilde{g_{2}}(\widetilde{x})\!=\!\widetilde{x}\,$ for all
$\,\widetilde{x}\!\in\!H$, we just need to prove that\linebreak[4]
$\,G_{g,n}/K=H$. We know that $\,G_{g,n}/K\,$ is generated by
$\,\{\widetilde{x}\,/\,x\!\in\!\mathcal{G}_{g,n}\}$; thus, the
following computations allow us to conclude.
\vskip3mm
\begin{list}{--}{\leftmargin10mm\itemsep3mm}
\item $\widetilde{a}_{_{2g+n-2}}\!=\,\widetilde{a_{_{1}}}$.
\item $\widetilde{c}_{_{2g+n-2,1}}\!=\!\widetilde{d_{_{n}}}\!=\!1$.
\item By the star relation $\,(E_{_{1,1,2g+n-2}})$, one has
$$\widetilde{c}_{_{1,2g+n-2}}\!=\!(\widetilde{a_{_{1}}}\,
\widetilde{a_{_{1}}}\,\widetilde{a}_{_{2g+n-2}}\,\widetilde{b})^{3}\,
\bar{\widetilde{c}}_{_{2g+n-2,1}}\,=\,(\widetilde{a_{_{1}}}\,
\widetilde{a_{_{1}}}\,\widetilde{a_{_{1}}}\,\widetilde{b})^{3}\,.$$
\item For $\,2\!\leq\! i\!\leq\! 2g+n-3$, one has by the lantern relation
$\,(L_{_{2g+n-2,1,i}})$:
$$a_{_{2g+n-2}}\,c_{_{2g+n-2,1}}\,c_{_{1,i}}\,a_{_{i}}=c_{_{2g+n-2,i}}\,
a_{_{1}}\,X\,a_{_{1}}\,\bar{X}$$
where $\,X\!=\!b\,a_{_{2g+n-2}}\,a_{_{i}}\,b$.
This relation implies the following one by $\,${\it (T)}:
$$\begin{array}{rcl}
c_{_{2g+n-2,i}} & = & c_{_{1,i}}\,a_{_{i}}\,X\,\bar{a_{_{1}}}\,\bar{X}\,
\bar{a_{_{1}}}\,a_{_{2g+n-2}}\,c_{_{2g+n-2,1}}\\
&=& c_{_{1,i}}\,X\,\bar{x_{_{0}}}\,\bar{X}\,\bar{x_{_{0}}}\,d_{_{n}}\,,
\end{array}$$
which yields $\,\widetilde{c}_{_{2g+n-2,i}}\!=\!\widetilde{c}_{_{1,i}}$.
\item In the same way, using the lantern relation $\,(L_{_{i,2g+n-2,1}})$,
one proves that $\,\widetilde{c}_{_{i,2g+n-2}}\!=\!
\widetilde{c}_{_{i,1}}\,$ for $\,2\!\leq\! i\!\leq\! 2g+n-3$.
\end{list}
\eproof
\begin{lemma}\label{gen-ker g2}
For all $\,(g,n)\!\in\!{\mathbf{N}}^{\ast}\!\times\!{\mathbf{N}}^{\ast}$,
$\,\ker g_{2}\,$ is generated by\linebreak[4] $\,d_{_{n}}\!=\!c_{_{2g+n-2,1}}\,$ and
$\,x_{_{0}},\ldots,x_{_{2g+n-3}}$.
\end{lemma}
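As a consistency check (for $n\geq 2$, the case treated in
proposition~\ref{h}): $\,\Sigma_{g,n-1}\,$ then has non-empty boundary, so
$\,\pi_{1}(\Sigma_{g,n-1},p)\,$ is a free group of rank $2g+n-2$, and the
$1+(2g+n-2)$ generators $\,d_{_{n}},x_{_{0}},\ldots,x_{_{2g+n-3}}\,$ match the
two factors of $\,\mathbf{Z}\times\pi_{1}(\Sigma_{g,n-1},p)$.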
\proof By lemma~\ref{normal}, $\ker g_{2}\,$ is normally generated by
$\,d_{_{n}}\,$ and $\,x_{_{0}}$. Furthermore, by the braid relations,
$\,d_{_{n}}\,$ is central in $\,G_{g,n}$. Thus, denoting by $K$ the subgroup
generated by $\,d_{_{n}},x_{_{0}},\ldots,x_{_{2g+n-3}}$, we have to
prove that $\,gx_{_{0}}g^{-1}\!\in\! K\,$ for all $\,g\!\in\!G_{g,n}$.
To do this, it is enough to show that $K$ is a normal subgroup of $\,G_{g,n}$.
By proposition~\ref{generator}, $\,G_{g,n}\,$ is
generated by $\,\mathcal{H}_{g,n}\!=\!\{a_{_{1}},b,a_{_{2}},b_{_{1}},\ldots,$
$b_{_{g-1}},c_{_{2,4}},\ldots,c_{_{2g-4,2g-2}},c_{_{1,2}},a_{_{2g}},\ldots,
a_{_{2g+n-2}},d_{_{1}},\ldots,d_{_{n-1}}\}$. Since, by the braid relations,
$\,d_{_{1}},\ldots,d_{_{n-1}}\,$ are central in $\,G_{g,n}$, we
have to prove that $\,y(x_{_{k}})\,$ and $\,\bar{y}(x_{_{k}})\,$ are
elements of $K$ for all $k$, $\,0\leq k\leq
2g+n-3$, and all $\,y\!\in\! \mathcal{E}\,$ where
$\,\mathcal{E}=\mathcal{H}_{g,n}\!\setminus\!\{d_{_{1}},\ldots,d_{_{n-1}}\}$.
\vskip3mm\noindent
$\ast$ \underline{Case 1: $\,k\!=\!0$}.
\vskip3mm\begin{list}{--}{\leftmargin7mm\itemsep3mm}
\item $\,b(x_{_{0}})=x_{_{1}}$.
\item We prove, using relations {\it (T)}, that
$\,\bar{b}(x_{_{0}})=x_{_{0}}\,\bar{x_{_{1}}}\,x_{_{0}}$:
$$\begin{array}{rcl}
x_{_{0}}\,\bar{x_{_{1}}}\,x_{_{0}} & = &
a_{_{1}}\,\bar{a_{_{2g+n-2}}}\,b\,a_{_{2g+n-2}}\,
\bar{a_{_{1}}}\,\bar{b}\,a_{_{1}}\,\bar{a_{_{2g+n-2}}} \\
&=& a_{_{1}}\,b\,a_{_{2g+n-2}}\,\bar{b}\,b\,
\bar{a_{_{1}}}\,\bar{b}\,\bar{a_{_{2g+n-2}}} \\
&=& \bar{b}\,a_{_{1}}\,b\,\bar{b}\,\bar{a_{_{2g+n-2}}}\,b \\
&=& \bar{b}(x_{_{0}})\,.
\end{array}$$
\item For $\,y\!\in\!\mathcal{E}\!\setminus\!\{b\}$, one has
$\,y(x_{_{0}})\!=\bar{y}(x_{_{0}})\!=\!x_{_{0}}\,$ by the braid
relations.
\end{list}
\vskip5mm\noindent
$\ast$ \underline{Case 2: $\,k\!=\!1$}.
\vskip3mm\begin{list}{--}{\leftmargin7mm\itemsep3mm}
\item $a_{_{1}}(x_{_{1}})$ \parbox[t]{73mm}{$=a_{_{1}}\,b\,a_{_{1}}\,
\bar{a_{_{2g+n-2}}}\,\bar{b}\,\bar{a_{_{1}}}=b\,a_{_{1}}\,b\,
\bar{a_{_{2g+n-2}}}\,\bar{b}\,\bar{a_{_{1}}}$ $=b\,a_{_{1}}\,
\bar{a_{_{2g+n-2}}}\,\bar{b}\,a_{_{2g+n-2}}\,\bar{a_{_{1}}}=x_{_{1}}\,
\bar{x_{_{0}}}\,$,}
\item[] $\bar{a_{_{1}}}(x_{_{1}})$ \parbox[t]{73mm}{$=\bar{a_{_{1}}}\,b\,a_{_{1}}\,
\bar{a_{_{2g+n-2}}}\,\bar{b}\,a_{_{1}}=b\,a_{_{1}}\,\bar{b}\,
\bar{a_{_{2g+n-2}}}\,\bar{b}\,a_{_{1}}$ $=b\,a_{_{1}}\,
\bar{a_{_{2g+n-2}}}\,\bar{b}\,\bar{a_{_{2g+n-2}}}\,a_{_{1}}=x_{_{1}}\,
x_{_{0}}\,$.}
\item $a_{_{2g+n-2}}(x_{_{1}})$ \parbox[t]{60mm}{$=a_{_{2g+n-2}}\,b\,a_{_{1}}\,
\bar{a_{_{2g+n-2}}}\,\bar{b}\,\bar{a_{_{2g+n-2}}}$
$=a_{_{2g+n-2}}\,b\,a_{_{1}}\,\bar{b}\,\bar{a_{_{2g+n-2}}}\,\bar{b}$
$=a_{_{2g+n-2}}\,\bar{a_{_{1}}}\,
b\,a_{_{1}}\,\bar{a_{_{2g+n-2}}}\,\bar{b}=\bar{x_{_{0}}}\,x_{_{1}}\,$,}
\item[] $\bar{a_{_{2g+n-2}}}(x_{_{1}})$ \parbox[t]{60mm}{$=\bar{a_{_{2g+n-2}}}\,
b\,a_{_{1}}\,\bar{a_{_{2g+n-2}}}\,\bar{b}\,a_{_{2g+n-2}}$
$=\bar{a_{_{2g+n-2}}}\,
b\,a_{_{1}}\,b\,\bar{a_{_{2g+n-2}}}\,\bar{b}$
$=\bar{a_{_{2g+n-2}}}\,a_{_{1}}\,
b\,a_{_{1}}\,\bar{a_{_{2g+n-2}}}\,\bar{b}=x_{_{0}}\,x_{_{1}}\,$.}
\item One has $\bar{b}(x_{_{1}})\!=\!x_{_{0}}\,$, and by the braid relations,
$\,b(x_{_{1}})\!=\!x_{_{1}}\,\bar{x_{_{0}}}\,x_{_{1}}\,$:
$$\begin{array}{rcl}
x_{_{1}}\,\bar{x_{_{0}}}\,x_{_{1}} & = & b\,a_{_{1}}\,
\bar{a_{_{2g+n-2}}}\,\bar{b}\,\bar{a_{_{1}}}\,a_{_{2g+n-2}}\,b\,
a_{_{1}}\,\bar{a_{_{2g+n-2}}}\,\bar{b} \\
&=& b\,\bar{a_{_{2g+n-2}}}\,\bar{b}\,\bar{a_{_{1}}}\,b\,\bar{b}\,
a_{_{2g+n-2}}\,b\,a_{_{1}}\,\bar{b} \\
&=& b\,b\,\bar{a_{_{2g+n-2}}}\,\bar{b}\,b\,a_{_{1}}\,\bar{b}\,\bar{b} \\
&=& b(x_{_{1}}).
\end{array}$$
\item For $\,i\!\in\!\{2,2g,2g+1,\ldots,2g+n-3\}$, we have
$\,a_{_{i}}(x_{_{1}})\!=\!x_{_{i}}\,$ and $\,\bar{a_{_{i}}}(x_{_{1}})\!=
\!x_{_{1}}\,\bar{x_{_{i}}}\,x_{_{1}}\,$:
$$\begin{array}{rcll}
x_{_{1}}\,\bar{x_{_{i}}}\,x_{_{1}} & = &
b\,x_{_{0}}\,\bar{b}\,a_{_{i}}\,b\,\bar{x_{_{0}}}\,
\bar{b}\,\bar{a_{_{i}}}\,b\,x_{_{0}}\,\bar{b} & \\
&=& b\,x_{_{0}}\,a_{_{i}}\,b\,\bar{a_{_{i}}}\,\bar{x_{_{0}}}\,
a_{_{i}}\,\bar{b}\,\bar{a_{_{i}}}\,x_{_{0}}\,\bar{b} &
\hbox{by {\it (T)}} \\
&=& b\,a_{_{i}}\,x_{_{0}}\,b\,\bar{x_{_{0}}}\,\bar{b}\,x_{_{0}}\,
\bar{a_{_{i}}}\,\bar{b} & \hbox{by case 1} \\
&=& b\,a_{_{i}}\,x_{_{0}}\,\bar{x_{_{1}}}\,x_{_{0}}\,
\bar{a_{_{i}}}\,\bar{b} & \\
&=& b\,a_{_{i}}\,\bar{b}\,x_{_{0}}\,b\,
\bar{a_{_{i}}}\,\bar{b} & \hbox{by case 1} \\
&=& \bar{a_{_{i}}}\,b\,a_{_{i}}\,x_{_{0}}\,
\bar{a_{_{i}}}\,\bar{b}\,a_{_{i}} & \hbox{by {\it (T)}} \\
&=& \bar{a_{_{i}}}(x_{_{1}}) & \hbox{by case 1}.
\end{array}$$
\item Each $\,y\!\in\!\{b_{_{1}},\ldots,b_{_{g-1}},c_{_{2,4}},\ldots,
c_{_{2g-4,2g-2}},c_{_{1,2}}\}\,$ commutes with $x_{_{1}}\,$ by the
braid relations, so $\,y(x_{_{1}})\!=\bar{y}(x_{_{1}})\!=\!x_{_{1}}\,$.
\end{list}
\vskip5mm\noindent
$\ast$ \underline{Case 3: $\,k\!\in\!\{2,2g,\ldots,2g+n-3\}$}.
\vskip3mm\begin{list}{--}{\leftmargin7mm\itemsep3mm}
\item By the braid relations and the preceding cases, we have:
$$a_{_{1}}(x_{_{k}})=a_{_{k}}\,a_{_{1}}(x_{_{1}})=a_{_{k}}\,
x_{_{1}}\,\bar{x_{_{0}}}\,\bar{a_{_{k}}}=x_{_{k}}\,\bar{x_{_{0}}}\,,$$
$$\bar{a_{_{1}}}(x_{_{k}})=a_{_{k}}\,\bar{a_{_{1}}}(x_{_{1}})=a_{_{k}}\,
x_{_{1}}\,x_{_{0}}\,\bar{a_{_{k}}}=x_{_{k}}\,x_{_{0}}\,,$$
$$a_{_{2g+n-2}}(x_{_{k}})=a_{_{k}}\,a_{_{2g+n-2}}(x_{_{1}})=a_{_{k}}\,
\bar{x_{_{0}}}\,x_{_{1}}\,\bar{a_{_{k}}}=\bar{x_{_{0}}}\,x_{_{k}}\,,$$
$$\bar{a_{_{2g+n-2}}}(x_{_{k}})=a_{_{k}}\,\bar{a_{_{2g+n-2}}}(x_{_{1}})=
a_{_{k}}\,x_{_{0}}\,x_{_{1}}\,\bar{a_{_{k}}}=x_{_{0}}\,x_{_{k}}\,.$$
\item It follows from the braid relations and case 2 that
$$b(x_{_{k}})=b\,a_{_{k}}\,b(x_{_{0}})=a_{_{k}}\,b\,
a_{_{k}}(x_{_{0}})=a_{_{k}}\,b(x_{_{0}})=x_{_{k}}\,,$$
and we get also $\,\bar{b}(x_{_{k}})\!=\!x_{_{k}}\,$.
\item For $\,k\!\not =\! 2$, one has $\,b_{_{1}}(x_{_{k}})\!=\!
\bar{b_{_{1}}}(x_{_{k}})\!=\!x_{_{k}}\,$ by the braid relations.
When $\,k\!=\!2$, we get $\,b_{_{1}}(x_{_{2}})\!=\!x_{_{3}}\,$ and
$\,\bar{b_{_{1}}}(x_{_{2}})\!=\!x_{_{2}}\,\bar{x_{_{3}}}\,x_{_{2}}\,$:
$$\begin{array}{rcll}
x_{_{2}}\,\bar{x_{_{3}}}\,x_{_{2}} & = & a_{_{2}}\,x_{_{1}}\,
\bar{a_{_{2}}}\,b_{_{1}}\,a_{_{2}}\,\bar{x_{_{1}}}\,\bar{a_{_{2}}}\,
\bar{b_{_{1}}}\,a_{_{2}}\,x_{_{1}}\,\bar{a_{_{2}}} & \\
&=& a_{_{2}}\,x_{_{1}}\,b_{_{1}}\,a_{_{2}}\,\bar{b_{_{1}}}\,
\bar{x_{_{1}}}\,b_{_{1}}\,\bar{a_{_{2}}}\,\bar{b_{_{1}}}\,x_{_{1}}\,
\bar{a_{_{2}}} & \hbox{by {\it (T)}} \\
&=& a_{_{2}}\,b_{_{1}}\,x_{_{1}}\,\bar{x_{_{2}}}\,x_{_{1}}\,
\bar{b_{_{1}}}\,\bar{a_{_{2}}} & \hbox{by case 2} \\
&=& a_{_{2}}\,b_{_{1}}\,\bar{a_{_{2}}}\,x_{_{1}}\,a_{_{2}}
\bar{b_{_{1}}}\,\bar{a_{_{2}}} & \hbox{by case 2} \\
&=& \bar{b_{_{1}}}\,a_{_{2}}\,b_{_{1}}\,x_{_{1}}\,\bar{b_{_{1}}}\,
\bar{a_{_{2}}}\,b_{_{1}} & \hbox{by {\it (T)}} \\
&=& \bar{b_{_{1}}}(x_{_{2}}) & \hbox{by case 2}\,.
\end{array}$$
\item Each $\,y\!\in\!\{b_{_{2}},\ldots,b_{_{g-1}},c_{_{2,4}},\ldots,
c_{_{2g-4,2g-2}},c_{_{1,2}}\}\,$ commutes with $\,x_{_{k}}\,$ for
$\,k\!=\!2,2g,\ldots, 2g+n-3\,$ by
the braid relations. Therefore, we get
$\,y(x_{_{k}})\!=\!\bar{y}(x_{_{k}})\!=\!x_{_{k}}\,$.
\item Let $\,i\!\in\!\{2,2g,\ldots,2g+n-3\}$. Suppose
first that $\,i\!\geq\!k$. Then, if $\,m_{_{k}}\!=\!\bar{x_{_{1}}}
(a_{_{k}})$, we have
$$a_{_{i}}(x_{_{k}})=a_{_{i}}\,a_{_{k}}\,x_{_{1}}\,\bar{a_{_{k}}}\,
\bar{a_{_{i}}}=a_{_{i}}\,x_{_{1}}\,m_{_{k}}\,
\bar{a_{_{i}}}\,\bar{a_{_{k}}}\,.$$
By the braid relations, one has
$$m_{_{k}}=b\,\bar{a_{_{1}}}\,a_{_{2g+n-2}}\,\bar{b}(a_{_{k}})=b\,
\bar{a_{_{1}}}\,a_{_{2g+n-2}}\,a_{_{k}}(b)=b\,a_{_{2g+n-2}}\,a_{_{k}}\,
b(a_{_{1}})$$
and the lantern relation $\,(L_{_{2g+n-2,1,k}})\,$ says that
$$a_{_{2g+n-2}}\,c_{_{2g+n-2,1}}\,c_{_{1,k}}\,a_{_{k}}=c_{_{2g+n-2,k}}\,
a_{_{1}}\,Y\,a_{_{1}}\,\bar{Y}$$
where $\,Y=b\,a_{_{2g+n-2}}\,a_{_{k}}\,b$. Thus, we get
$$m_{_{k}}=Y(a_{_{1}})=\bar{a_{_{1}}}\,\bar{c_{_{2g+n-2,k}}}\,
a_{_{2g+n-2}}\,c_{_{2g+n-2,1}}\,c_{_{1,k}}\,a_{_{k}}\,,$$
which implies by the braid relations $\,m_{_{k}}a_{_{i}}\!=\!a_{_{i}}
m_{_{k}}\,$ since $\,i\!\geq\!k$. From this, one obtains
$$a_{_{i}}(x_{_{k}})=a_{_{i}}\,x_{_{1}}\,\bar{a_{_{i}}}\,m_{_{k}}\,
\bar{a_{_{k}}}=a_{_{i}}\,x_{_{1}}\,\bar{a_{_{i}}}\,\bar{x_{_{1}}}\,
a_{_{k}}\,x_{_{1}}\,\bar{a_{_{k}}}=x_{_{i}}\,\bar{x_{_{1}}}\,x_{_{k}}\,.$$
In particular, we have $\,x_{_{k}}\!=\!x_{_{1}}\,\bar{x_{_{i}}}\,
a_{_{i}}\,x_{_{k}}\,\bar{a_{_{i}}}\,$ and so:
$$\begin{array}{rcll}
\bar{a_{_{i}}}(x_{_{k}}) & = & \bar{a_{_{i}}}\,x_{_{1}}\,\bar{x_{_{i}}}\,
a_{_{i}}\,x_{_{k}}\,\bar{a_{_{i}}}\,a_{_{i}} & \\
&=& \bar{a_{_{i}}}\,x_{_{1}}\,a_{_{i}}\,
\bar{a_{_{i}}}\,\bar{x_{_{i}}}\,a_{_{i}}\,x_{_{k}} & \\
&=& x_{_{1}}\,\bar{x_{_{i}}}\,x_{_{1}}\,
\bar{x_{_{1}}}\,x_{_{k}} & \hbox{by case 2} \\
&=& x_{_{1}}\,\bar{x_{_{i}}}\,x_{_{k}}\,. &
\end{array}$$
Conclusion: $\,\left\{ \begin{array}{ccl}
a_{_{i}}(x_{_{k}})=x_{_{i}}\,\bar{x_{_{1}}}\,x_{_{k}}\,, &
\bar{a_{_{i}}}(x_{_{k}})=x_{_{1}}\,\bar{x_{_{i}}}\,x_{_{k}} &
\hbox{if } \,i\geq k, \\
a_{_{i}}(x_{_{k}})=x_{_{k}}\,\bar{x_{_{1}}}\,x_{_{i}} \,, &
\bar{a_{_{i}}}(x_{_{k}})= x_{_{k}}\,\bar{x_{_{i}}}\,x_{_{1}} &
\hbox{if } \,i\leq k.
\end{array}\right.$
(These four formulas are mutually inverse; a mechanical check is
sketched right after this list.)
\end{list}
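\vskip3mm\noindent
As a quick sanity check of the table above, the four formulas can be
verified to be mutually inverse ($\,\bar{a_{_{i}}}\circ a_{_{i}}=
\mathrm{id}\,$) by free reduction alone, without invoking any further
relation of $\,G_{g,n}$. The following sketch (in Python; the encoding
of $\,x_{_{j}}\,$ as the integer $j\!+\!1$ and the cut-off $N$ are ours,
for illustration only) performs this check:
\begin{verbatim}
def inv(w):                       # inverse of a word
    return [-g for g in reversed(w)]

def reduce_word(w):               # free reduction: cancel g, -g pairs
    out = []
    for g in w:
        if out and out[-1] == -g:
            out.pop()
        else:
            out.append(g)
    return out

def apply_hom(phi, w):            # extend phi from generators to words
    out = []
    for g in w:
        out += phi[g] if g > 0 else inv(phi[-g])
    return reduce_word(out)

def action(i, inverse=False):
    # images of x_0, x_1, x_k under a_i (resp. its inverse), cases 1-3
    x0, x1, xi = 1, 2, i + 1
    phi = {x0: [x0], x1: [xi] if not inverse else [x1, -xi, x1]}
    for k in range(2, N):
        xk = k + 1
        if not inverse:
            phi[xk] = [xi, -x1, xk] if i >= k else [xk, -x1, xi]
        else:
            phi[xk] = [x1, -xi, xk] if i >= k else [xk, -xi, x1]
    return phi

N = 8                             # test with x_0,...,x_7 and i in 2..7
for i in range(2, N):
    f, fbar = action(i), action(i, inverse=True)
    for k in range(1, N):
        assert apply_hom(fbar, apply_hom(f, [k + 1])) == [k + 1]
print("case-3 conjugation table is self-consistent")
\end{verbatim}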
\vskip5mm\noindent
$\ast$ \underline{Case 4: $\,k\!=\!3$}.
\vskip3mm
\begin{list}{--}{\leftmargin7mm\itemsep3mm}
\item By the braid relations and the preceding cases, we have:
$$a_{_{1}}(x_{_{3}})=b_{_{1}}\,a_{_{1}}(x_{_{2}})=b_{_{1}}\,
x_{_{2}}\,\bar{x_{_{0}}}\,\bar{b_{_{1}}}=x_{_{3}}\,\bar{x_{_{0}}}\,,$$
$$\bar{a_{_{1}}}(x_{_{3}})=b_{_{1}}\,\bar{a_{_{1}}}(x_{_{2}})=b_{_{1}}\,
x_{_{2}}\,x_{_{0}}\,\bar{b_{_{1}}}=x_{_{3}}\,x_{_{0}}\,,$$
$$a_{_{2g+n-2}}(x_{_{3}})=b_{_{1}}\,a_{_{2g+n-2}}(x_{_{2}})=b_{_{1}}\,
\bar{x_{_{0}}}\,x_{_{2}}\,\bar{b_{_{1}}}=\bar{x_{_{0}}}\,x_{_{3}}\,,$$
$$\bar{a_{_{2g+n-2}}}(x_{_{3}})=b_{_{1}}\,\bar{a_{_{2g+n-2}}}(x_{_{2}})=
b_{_{1}}\,x_{_{0}}\,x_{_{2}}\,\bar{b_{_{1}}}=x_{_{0}}\,x_{_{3}}\,.$$
\item The relations $\,${\it (T)}$\,$ and case 3 prove that
$$b(x_{_{3}})=b\,b_{_{1}}(x_{_{2}})=b_{_{1}}(x_{_{2}})=x_{_{3}}=
\bar{b}(x_{_{3}}),$$
and
$$a_{_{2}}(x_{_{3}})=a_{_{2}}\,b_{_{1}}\,a_{_{2}}(x_{_{1}})=
b_{_{1}}\,a_{_{2}}\,b_{_{1}}(x_{_{1}})=b_{_{1}}\,a_{_{2}}(x_{_{1}})
=x_{_{3}}=\bar{a_{_{2}}}(x_{_{3}}).$$
\item One has $\,\bar{b_{_{1}}}(x_{_{3}})\!=\!x_{_{2}}\,$. On the
other hand, we get
$$\begin{array}{rcll}
b_{_{1}}(x_{_{3}}) & = & b_{_{1}}\,x_{_{2}}\,\bar{b_{_{1}}}\,
\bar{x_{_{2}}}\,b_{_{1}}\,x_{_{2}}\,\bar{b_{_{1}}} & \hbox{by case 3} \\
&=& x_{_{3}}\,\bar{x_{_{2}}}\,x_{_{3}}\,. & \\
\end{array}$$
\item Using the braid relations and case 3, we get $\,\bar{c_{_{2,4}}}
(x_{_{3}})\!=\!x_{_{3}}\,\bar{x_{_{4}}}\,x_{_{3}}\,$:
$$\begin{array}{rcl}
x_{_{3}}\,\bar{x_{_{4}}}\,x_{_{3}} & = & b_{_{1}}\,x_{_{2}}\,
\bar{b_{_{1}}}\,c_{_{2,4}}\,b_{_{1}}\,\bar{x_{_{2}}}\,\bar{b_{_{1}}}\,
\bar{c_{_{2,4}}}\,b_{_{1}}\,x_{_{2}}\,\bar{b_{_{1}}} \\
&=& b_{_{1}}\,x_{_{2}}\,c_{_{2,4}}\,b_{_{1}}\,\bar{c_{_{2,4}}}\,
\bar{x_{_{2}}}\,c_{_{2,4}}\,\bar{b_{_{1}}}\,\bar{c_{_{2,4}}}\,x_{_{2}}\,
\bar{b_{_{1}}} \\
&=& b_{_{1}}\,c_{_{2,4}}\,x_{_{2}}\,\bar{x_{_{3}}}\,x_{_{2}}\,
\bar{c_{_{2,4}}}\,\bar{b_{_{1}}} \\
&=& b_{_{1}}\,c_{_{2,4}}\,\bar{b_{_{1}}}\,x_{_{2}}\,b_{_{1}}\,
\bar{c_{_{2,4}}}\,\bar{b_{_{1}}} \\
&=& \bar{c_{_{2,4}}}\,b_{_{1}}\,c_{_{2,4}}\,x_{_{2}}\,\bar{c_{_{2,4}}}\,
\bar{b_{_{1}}}\,c_{_{2,4}} \\
&=& \bar{c_{_{2,4}}}(x_{_{3}}).
\end{array}$$
On the other hand, we have
$\,c_{_{2,4}}(x_{_{3}})\!=\!x_{_{4}}\,$.
\item The braid relations ensure that $\,y(x_{_{3}})\!\!=\!\!\bar{y}(x_{_{3}})
\!\!=\!\!
x_{_{3}}\,$ for all \linebreak[4]$\,y\!\in\!\{b_{_{2}},\ldots,b_{_{g-1}},
c_{_{4,6}},\ldots,c_{_{2g-4,2g-2}}\}$.
\item For each $\,i\!\in\!\{2g,\ldots,2g+n-3\}$, one has by case 3
$$a_{_{i}}(x_{_{3}})=b_{_{1}}\,a_{_{i}}(x_{_{2}})=
b_{_{1}}\,x_{_{i}}\,\bar{x_{_{1}}}\,
x_{_{2}}\,\bar{b_{_{1}}}=x_{_{i}}\,\,\bar{x_{_{1}}}\,x_{_{3}}\,$$
and
$$\bar{a_{_{i}}}(x_{_{3}})=b_{_{1}}\,\bar{a_{_{i}}}(x_{_{2}})=
b_{_{1}}\,x_{_{1}}\,\bar{x_{_{i}}}\,x_{_{2}}\,\bar{b_{_{1}}}=
x_{_{1}}\,\bar{x_{_{i}}}\,x_{_{3}}\,.$$
\item Finally, we shall prove that $\,c_{_{1,2}}(x_{_{3}})\!=\!x_{_{3}}\,
\bar{x_{_{2}}}\,x_{_{1}}\,\bar{x_{_{0}}}\,d_{_{n}}\,$.
\vskip3mm\noindent
The lantern relation $\,(L_{_{2g+n-2,1,2}})\,$ says
$$a_{_{2g+n-2}}\,c_{_{2g+n-2,1}}\,c_{_{1,2}}\,a_{_{2}}=c_{_{2g+n-2,2}}\,
\bar{X}\,a_{_{1}}\,X\,a_{_{1}}=c_{_{2g+n-2,2}}\,a_{_{1}}\,X\,a_{_{1}}
\,\bar{X}$$
where $\,X\!=\!b\,a_{_{2}}\,a_{_{2g+n-2}}\,b$, that is to say
$\,$($d_{_{n}}\!=\!c_{_{2g+n-2,1}}$):
$$a_{_{2g+n-2}}\,c_{_{1,2}}\,\bar{a_{_{1}}}=c_{_{2g+n-2,2}}\,
\bar{a_{_{2}}}\,\bar{d_{_{n}}}\,\bar{X}\,a_{_{1}}\,X\ \ \
(\star)$$
and
$$c_{_{2g+n-2,2}}\,\bar{c_{_{1,2}}}=X\,\bar{a_{_{1}}}\,\bar{X}\,
\bar{a_{_{1}}}\,a_{_{2}}\,d_{_{n}}\,a_{_{2g+n-2}}\ \ \
(\star\star).$$
Then, one can compute
$$\begin{array}{rcll}
\bar{x_{_{3}}}(c_{_{1,2}}) & = & b_{_{1}}\,a_{_{2}}\,b\,a_{_{2g+n-2}}\,
\bar{a_{_{1}}}\,\bar{b}\,\bar{a_{_{2}}}\,\bar{b_{_{1}}}(c_{_{1,2}}) & \\
&=& b_{_{1}}\,a_{_{2}}\,b\,a_{_{2g+n-2}}\,c_{_{1,2}}\,\bar{a_{_{1}}}\,
\bar{b}\,\bar{a_{_{2}}}(b_{_{1}}) & \hbox{by {\it (T)}} \\
&=& b_{_{1}}\,a_{_{2}}\,b\,c_{_{2g+n-2,2}}\,\bar{a_{_{2}}}\,
\bar{d_{_{n}}}\,\bar{X}\,a_{_{1}}\,X\,\bar{b}\,
\bar{a_{_{2}}}(b_{_{1}}) & \hbox{by }\,(\star) \\
&=& b_{_{1}}\,a_{_{2}}\,b\,c_{_{2g+n-2,2}}\,\bar{a_{_{2}}}\,
\bar{d_{_{n}}}\,\bar{X}\,a_{_{1}}\,b\,a_{_{2g+n-2}}(b_{_{1}}) & \\
&=& b_{_{1}}\,c_{_{2g+n-2,2}}\,\bar{b}\,a_{_{2}}\,b\,\bar{X}(b_{_{1}})
& \hbox{by {\it (T)}} \\
&=& b_{_{1}}\,c_{_{2g+n-2,2}}\,\bar{b}\,a_{_{2}}\,b\,\bar{b}\,
\bar{a_{_{2}}}\,\bar{a_{_{2g+n-2}}}\,\bar{b}(b_{_{1}}) & \\
&=& b_{_{1}}\,\bar{b_{_{1}}}(c_{_{2g+n-2,2}}) & \hbox{by {\it (T)}} \\
&=& c_{_{2g+n-2,2}}\,. &
\end{array}$$
Thus, we get
$$\begin{array}{rcll}
c_{_{1,2}}(x_{_{3}}) &= & c_{_{1,2}}\,x_{_{3}}\,\bar{c_{_{1,2}}} & \\
&=& x_{_{3}}\,\bar{x_{_{3}}}\,c_{_{1,2}}\,x_{_{3}}\,\bar{c_{_{1,2}}} & \\
&=& x_{_{3}}\,c_{_{2g+n-2,2}}\,\bar{c_{_{1,2}}} & \\
&=& x_{_{3}}\,X\,\bar{a_{_{1}}}\,\bar{X}\,\bar{a_{_{1}}}\,a_{_{2}}\,
a_{_{2g+n-2}}\,d_{_{n}} & \hbox{by }\,(\star\star) \\
&=& x_{_{3}}\,b\,a_{_{2}}\,a_{_{2g+n-2}}\,b\,\bar{a_{_{1}}}\,\bar{b}\,
\bar{a_{_{2}}}\,\bar{a_{_{2g+n-2}}}\,\bar{b}\,
a_{_{2}}\,\bar{x_{_{0}}}\,d_{_{n}} & \\
&=& x_{_{3}}\,b\,a_{_{2g+n-2}}\,a_{_{2}}\,\bar{a_{_{1}}}\,\bar{b}\,
a_{_{1}}\,\bar{a_{_{2}}}\,\bar{a_{_{2g+n-2}}}\,\bar{b}\,
a_{_{2}}\,\bar{x_{_{0}}}\,d_{_{n}} & \\
&=& x_{_{3}}\,b\,\bar{x_{_{0}}}\,\bar{b}\,
\bar{a_{_{2}}}\,b\,x_{_{0}}\,\bar{b}\,
a_{_{2}}\,\bar{x_{_{0}}}\,d_{_{n}} & \hbox{by {\it (T)}} \\
&=& x_{_{3}}\,\bar{x_{_{1}}}\,\bar{a_{_{2}}}\,x_{_{1}}\,a_{_{2}}\,
\bar{x_{_{0}}}\,d_{_{n}} & \\
&=& x_{_{3}}\,\bar{x_{_{1}}}\,x_{_{1}}\,\bar{x_{_{2}}}\,x_{_{1}}\,
\bar{x_{_{0}}}\,d_{_{n}} & \hbox{by case 2}\\
&=& x_{_{3}}\,\bar{x_{_{2}}}\,x_{_{1}}\,
\bar{x_{_{0}}}\,d_{_{n}}\,. &
\end{array}$$
It follows from this that
$$\bar{c_{_{1,2}}}(x_{_{3}})\!=\bar{c_{_{1,2}}}\,c_{_{1,2}}\,x_{_{3}}\,
\bar{c_{_{1,2}}}\,\bar{d_{_{n}}}\,x_{_{0}}\,\bar{x_{_{1}}}\,x_{_{2}}\,
c_{_{1,2}}=x_{_{3}}\,\bar{d_{_{n}}}\,x_{_{0}}\,\bar{x_{_{1}}}\,x_{_{2}}\,
.$$
\end{list}
\vskip5mm\noindent
$\ast$ \underline{Case 5: $\,k\!\in\!\{4,5,\ldots,2g-1\}$}.
\vskip3mm\noindent
In order to simplify the notation, let us denote
$$e_{_{3}}=b_{_{1}}\,,\ \,e_{_{4}}=c_{_{2,4}}\,,\ \,e_{_{5}}=b_{_{2}}\,,\ \,
\ldots\,,\ \,e_{_{2g-2}}=c_{_{2g-4,2g-2}}\,,\ \ e_{_{2g-1}}=b_{_{g-1}}\,,$$
so that, for $\,i\!\in\!\{3,\ldots,2g-1\}$,
$\,x_{_{i}}\!=\!e_{_{i}}(x_{_{i-1}})$.
\vskip3mm
\begin{list}{--}{\leftmargin7mm\itemsep3mm}
\item Then, one has by the braid relations and the case 4:
$$a_{_{1}}(x_{_{k}})=e_{_{k}}\,e_{_{k-1}}\cdots
e_{_{4}}\,a_{_{1}}(x_{_{3}})=e_{_{k}}\cdots e_{_{4}}
x_{_{3}}\,\bar{x_{_{0}}}\,\bar{e_{_{4}}}\cdots\bar{e_{_{k}}}=x_{_{k}}\,
\bar{x_{_{0}}}\,.$$
Likewise, we get
$$\,\bar{a_{_{1}}}(x_{_{k}})=x_{_{k}}\,x_{_{0}}\,,\ \ \
a_{_{2g+n-2}}(x_{_{k}})=\bar{x_{_{0}}}\,x_{_{k}}\,,\ \ \
\bar{a_{_{2g+n-2}}}(x_{_{k}})=x_{_{0}}\,x_{_{k}}\,,$$
$$\hbox{and }\ \,b(x_{_{k}})=\bar{b}(x_{_{k}})=x_{_{k}}=
a_{_{2}}(x_{_{k}})=\bar{a_{_{2}}}(x_{_{k}})\,.$$
\item For $\,i\!\in\!\{3,4,\ldots,2g-1\},\ \,i\!<\!k$, one obtains,
using the braid relations, $\,e_{_{i}}(x_{_{k}})\!=\bar{e_{_{i}}}(x_{_{k}})=
x_{_{k}}\,$:
\begin{center}
$e_{_{i}}(x_{_{k}})$ \parbox[t]{98mm}{$=e_{_{k}}\cdots e_{_{i}}\,
e_{_{i+1}}\,e_{_{i}}\cdots e_{_{3}}(x_{_{2}})=e_{_{k}}\cdots e_{_{i+1}}\,
e_{_{i}}\,e_{_{i+1}}\cdots e_{_{3}}(x_{_{2}})$
$=e_{_{k}}\cdots e_{_{3}}(x_{_{2}})=x_{_{k}}\,.$}
\end{center}
\noindent
For $\,i\!>\!k+1$, $\,e_{_{i}}\,$ commutes with
$\,e_{_{k}}\,,\ldots,\,e_{_{4}}\,$ and $\,x_{_{3}}\,$, thus we also have
$$e_{_{i}}(x_{_{k}})=\bar{e_{_{i}}}(x_{_{k}})=x_{_{k}}\,\
(i>k+1)\ \ \ (\ast).$$
\item One has $\,e_{_{k+1}}(x_{_{k}})\!=\!x_{_{k+1}}\,$. Let us prove by
induction on $k$ that $\,\bar{e_{_{k+1}}}(x_{_{k}})\!=\!x_{_{k}}\,
\bar{x_{_{k+1}}}\,x_{_{k}}\,$. We have seen in case 4 that this equality
holds for $\,k\!=\!3$. Suppose it is true for
$k\!-\!1$, $\,4\!\leq\!k\leq 2g-2$. Then, we get:
$$\begin{array}{rcl}
x_{_{k}}\,\bar{x_{_{k+1}}}\,x_{_{k}} & = & e_{_{k}}\,x_{_{k-1}}\,
\bar{e_{_{k}}}\,e_{_{k+1}}\,e_{_{k}}\,\bar{x_{_{k-1}}}\,\bar{e_{_{k}}}\,
\bar{e_{_{k+1}}}\,e_{_{k}}\,x_{_{k-1}}\,\bar{e_{_{k}}} \\
&=& e_{_{k}}\,x_{_{k-1}}\,e_{_{k+1}}\,e_{_{k}}\,\bar{e_{_{k+1}}}\,
\bar{x_{_{k-1}}}\,e_{_{k+1}}\,\bar{e_{_{k}}}\,\bar{e_{_{k+1}}}\,
x_{_{k-1}}\,\bar{e_{_{k}}} \ \ \hbox{by {\it (T)}} \\
&=& e_{_{k}}\,e_{_{k+1}}\,x_{_{k-1}}\,e_{_{k}}\,\bar{x_{_{k-1}}}\,
\bar{e_{_{k}}}\,x_{_{k-1}}\,\bar{e_{_{k+1}}}\,
\bar{e_{_{k}}}\ \ \ \ \hbox{by }\ (\ast) \\
&=& e_{_{k}}\,e_{_{k+1}}\,x_{_{k-1}}\,\bar{x_{_{k}}}\,
x_{_{k-1}}\,\bar{e_{_{k+1}}}\,\bar{e_{_{k}}} \\
&=& e_{_{k}}\,e_{_{k+1}}\,\bar{e_{_{k}}}\,x_{_{k-1}}\,e_{_{k}}\,
\bar{e_{_{k+1}}}\,\bar{e_{_{k}}} \ \ \ \hbox{by inductive
hypothesis} \\
&=& \bar{e_{_{k+1}}}\,e_{_{k}}\,e_{_{k+1}}\,x_{_{k-1}}\,\bar{e_{_{k+1}}}\,
\bar{e_{_{k}}}\,e_{_{k+1}} \ \ \ \ \hbox{by {\it (T)}} \\
&=& \bar{e_{_{k+1}}}\,e_{_{k}}\,x_{_{k-1}}\,\bar{e_{_{k}}}\,e_{_{k+1}}
\ \ \ \ \hbox{by }\ (\ast) \\
&=& \bar{e_{_{k+1}}}(x_{_{k}}).
\end{array}$$
\item This last relation implies $\,x_{_{k}}\!=\!x_{_{k-1}}\,\bar{e_{_{k}}}\,
\bar{x_{_{k-1}}}\,e_{_{k}}\,x_{_{k-1}}\,$. Thus, we get
$$e_{_{k}}(x_{_{k}})=e_{_{k}}\,x_{_{k-1}}\,\bar{e_{_{k}}}\,
\bar{x_{_{k-1}}}\,e_{_{k}}\,x_{_{k-1}}\,\bar{e_{_{k}}}=x_{_{k}}\,
\bar{x_{_{k-1}}}\,x_{_{k}}\,.$$
On the other hand, one has $\,\bar{e_{_{k}}}(x_{_{k}})\!=\!x_{_{k-1}}\,$.
\item For $\,i\!\in\!\{2g,\ldots,2g+n-3\}$, we have, by the braid
relations and cases 2, 3 and 4:
$$a_{_{i}}(x_{_{k}})=e_{_{k}}\cdots e_{_{4}}\,a_{_{i}}(x_{_{3}})=
e_{_{k}}\cdots e_{_{4}}\,x_{_{i}}\,\bar{x_{_{1}}}\,x_{_{3}}\,
\bar{e_{_{4}}}\cdots\bar{e_{_{k}}}=x_{_{i}}\,\bar{x_{_{1}}}\,x_{_{k}}\,,$$
and likewise, we get $\,\bar{a_{_{i}}}(x_{_{k}})\!=x_{_{1}}\,
\bar{x_{_{i}}}\,x_{_{k}}\,$.
\item Finally, since $\,c_{_{1,2}}(x_{_{3}})\!=\!x_{_{3}}\,\bar{x_{_{2}}}\,
x_{_{1}}\,\bar{x_{_{0}}}\,d_{_{n}}$, it follows from the braid
relations and the preceding cases that $\,c_{_{1,2}}(x_{_{k}})\!=\!x_{_{k}}\,
\bar{x_{_{2}}}\,x_{_{1}}\,\bar{x_{_{0}}}\,d_{_{n}}$. In the
same way, we get $\,\bar{c_{_{1,2}}}(x_{_{k}})\!=\!x_{_{k}}\,
\bar{d_{_{n}}}\,x_{_{0}}\,\bar{x_{_{1}}}\,x_{_{2}}$.
\end{list}
\eproof
\vskip3mm\noindent
{\bf Proof of proposition~\ref{h}.\ \,} If $\,\pi:\mathbf{Z}\times
\pi_{1}(\Sigma_{g,n-1},p)\rightarrow
\pi_{1}(\Sigma_{g,n-1},p)\,$ denotes the projection, the loops
$\,\pi\circ h_{g,n}(x_{_{0}}),\ldots,\pi\circ
h_{g,n}(x_{_{2g+n-3}})\,$ form a basis of the free group
$\,\pi_{1}(\Sigma_{g,n-1},p)$. Thus $F$, the subgroup of $\,\ker g_{2}\,$
generated by $\,x_{_{0}},\ldots,x_{_{2g+n-3}}$, is free of rank
$2g+n-2$, and the restriction of $\,\pi\circ h_{g,n}\,$ to this subgroup is an
isomorphism.
\noindent
Now, for every element $x$ of $\,\ker g_{2}$, there exist by
lemma~\ref{gen-ker g2} an integer $k$ and an element $f$ of $F$ such
that $\,x\!=\!d_{_{n}}^k\,f\,$ ($d_{_{n}}\,$ is central in $\,\ker
g_{2}\,$). Then, one has $\,h_{g,n}(x)\!=\!\bigl(k,\pi\circ
h_{g,n}(x)\bigr)\,$ and therefore $\,h_{g,n}\,$ is one-to-one. But $h_{g,n}\,$
is also onto. This concludes the proof.
\eproof
\noindent
{\bf Proof of theorem~\ref{principaltheorem}.\ \,}
In section~\ref{negalun}, we proved that $\Phi_{g,1}\,$ is an isomorphism.
Thus, by the five-lemma, proposition~\ref{h} and an inductive argument, $\Phi_{g,n}\,$ is
an isomorphism for all $\,n\geq 1$. In order to conclude the proof,
it remains to look at the case $\,n\!=\!0$.
Since all spin maps are conjugate in $\,\mathcal{M}_{g,1}$, $\,\ker f_{2}\,$ is
normally\linebreak[4] generated by $\,\tau_{\delta_{1}}\,$ and
$\,\tau_{\alpha_{1}}\tau_{\alpha_{2g-1}}^{-1}$. Thus, considering once more
the commutative diagram
\diagram[size=2.5em]
1 & \rTo & \ker g_{2} & \rTo & G_{g,1} & \rTo^{g_{2}} && G_{g,0} & \rTo & 1 \\
& & \dTo_{h_{g,1}} & & \dTo_{\Phi_{g,1}}^{\approx} & &&
\dTo_{\Phi_{g,0}} & &\\
1 & \rTo & \mathbf{Z}\times\pi_{1}(\Sigma_{g,0},p) & \rTo^{f_{1}} & \mathcal{M}_{g,1} &
\rTo^{f_{2}} && \mathcal{M}_{g,0} & \rTo & 1 \\
\enddiagram
\vskip3mm\noindent
and recalling that $\,\ker g_{2}\,$ is normally generated by
$\,d_{_{1}}\,$ and $\,a_{_{1}}\,\bar{a_{_{2g-1}}}\,$
(lemma~\ref{normal}), we conclude
that $\,h_{g,1}\,$ is still an isomorphism. Hence
$\Phi_{g,0}\,$ is an isomorphism.
\eproof
\vskip7mm\noindent
{\bf Acknowledgement. } This paper originates from discussions I had with
Catherine Labru\`ere. I want to thank her.
\bibliographystyle{amsplain}
\section{Motivation in the current situation}
\bigskip
Recently published strong indications of
atmospheric neutrino oscillations
\cite{Superka}
have rekindled the interest in accelerator experiments
that could study the same range of
parameter space. The results of SuperKamiokande
are interpreted as oscillations of muon neutrinos into
neutrinos that are not $\nu_e$s. Roughly
speaking, the measured mixing
angle is close to maximal: $\sin^2 2 \theta > 0.8$,
and $\Delta m^2$ is in the range $5\times 10^{-4}$ to $6\times 10^{-3}$
eV$^2$, all at 90\% confidence.
The SuperK mass (squared) range is one order
of magnitude below the previous Kamiokande observations
\cite{ka}, just what is needed to render the oft-discussed
long baseline experiments --such as MINOS \cite{MINOS}
or a CERN to Gran Sasso \cite{CGS}
project-- hardly capable of covering the whole parameter range
of interest.
The solar neutrino deficit is interpreted either as MSW (matter
enhanced) oscillations \cite{MSW} or as vacuum oscillations
\cite{osc} that deplete
the original $\nu_e$s, presumably in favour of
$\nu_\mu$s. The corresponding mass differences
--$10^{-5}$ to $10^{-4}$ eV$^2$ or some $ 10^{-10}$
eV$^2$--
are significantly below the range deduced from
atmospheric observations. Currently discussed terrestrial
experiments have no direct access to the solar mass range(s).
A straight section in a high intensity muon storage ring is
an excellent putative source of neutrinos \cite{muring}:
a {\it neutrino factory}.
The normalization, energy and angular spectrum of the
$\nu_\mu+\bar\nu_e$ or $\bar\nu_\mu+\nu_e$
beams would be known to
very high precision. The relative amounts of (forward-moving)
electron neutrinos can be tuned by varying the muon
polarisation.
With a very intense but not
unrealistic proton accelerator (with some 100 times the
current of the present CERN-PS)
it is possible to dream of neutrino beams two orders of
magnitude more intense than existing ones \cite{muring,Dydak}.
For the sake of illustration, we shall consider as
a {\it ``reference set-up''} the neutrino
beams resulting from the decay of $n_\mu=2\times 10^{20}$ $\mu^+$s
and/or $\mu^-$s in a straight section of an $E_\mu=20$ GeV
muon accumulator ring pointing at an experiment with
a 10 kT target, some 732 km downstream,
roughly the distance from CERN to Gran Sasso or from
Fermilab to the Soudan Lab. Most of our figures are for
the ``reference baseline'' $L=732$ km, but
we specify the scaling
laws that relate the results at different energies and distances.
When considering
the possibility of detecting the production of $\tau$s
we use the example of the Opera proposal \cite{Opera}:
a one kTon target with a $\tau$-detection efficiency
(weighted with the branching ratio of observable channels) of 35\%.
Appearance experiments (e.g. $\tau$ production in a
high-energy beam from
$\mu$ decay) are more sensitive and potentially more convincing
than disappearance experiments.
Given the current solar and atmospheric results, one must
unavoidably analyze the prospects of neutrino oscillations
in a neutrino factory in a three-generation mixing scenario.
As it turns out, this scenario brings to the fore the importance
of appearance channels other than $\tau$ production, e.g.,
the production of ``wrong sign'' muons, a channel for which
there would be no beam-induced background at a neutrino factory.
We discuss the physics backgrounds in Section 6, rather
briefly, as we cannot embark on a more thorough discussion of this
issue without a specific detector in mind.
Our emphasis is not on the traditional and very well studied
$\tau$-appearance channel, but on the wrong sign muons,
which are more specific to a neutrino factory. We choose the
most conservative scenario regarding the neutrino masses:
$\Delta m_{23}^2$ is given by the SuperK observations,
and $\Delta m_{12}^2$ by the ensemble of solar experiments
(disregarding one of the latter or accepting the results of LSND
\cite{LSND}
opens the way to larger mass differences and oscillatory signals).
We devote the next Section to a two-by-two mixing scenario
in order to illustrate the differences with the three-by-three
case, to which we return thereafter.
\section{Generalities in a two-family context}
\bigskip
Interpret the atmospheric neutrino data as
$\nu_\mu \leftrightarrow \nu_\tau$ oscillations with
a mixing angle $\sin^2 ( 2 \theta_{23}) \sim 1$ and
$5\times 10^{-4}$
eV$^2$
$<\Delta m_{23}^2<6\times 10^{-3}$
eV$^2$.
In a two-family scenario the oscillation probability is:
\begin{equation}
P(\nu_\mu\rightarrow\nu_\tau)=
\sin^2 (2 \theta_{23})\,
\sin^2\left({\kappa\,\Delta m_{23}^2\,L\over E_\nu}\right) \, ,
\label{twofamprob}
\end{equation}
with $\kappa=1/4$ in natural units, or $\kappa=1.27$ when $L$ is
expressed in km, $E_\nu$ in GeV and $\Delta m_{23}^2$ in eV$^2$.
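
Numerically, Eq.(\ref{twofamprob}) is elementary to evaluate; the
following minimal sketch (in Python; the function name and the
parameter values are ours, for illustration only) makes the
conversion of units explicit:
\begin{verbatim}
import math

def p_mutau(sin2_2th23, dm2_eV2, L_km, E_GeV):
    # Eq. (twofamprob) with kappa = 1.27, valid when L is in km,
    # E_nu in GeV and dm2 in eV^2.
    return sin2_2th23 * math.sin(1.27 * dm2_eV2 * L_km / E_GeV) ** 2

# reference baseline and a typical nu_mu energy of 14 GeV:
print(p_mutau(1.0, 3e-3, 732.0, 14.0))   # ~ 0.039
\end{verbatim}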
The mass splitting regions $\Delta m_{12}^2$
preferred by solar neutrino observations are such that
$\Delta m_{12}^2\,L/E_\nu$ would be very small in long baseline
experiments on Earth. If $\nu_e \leftrightarrow \nu_\mu$
oscillations are described by the $(23)\to (12)$ analogue of
Eq.(\ref{twofamprob}), oscillations between the first two
generations would be unobservable in terrestrial experiments.
Though well known to be an oversimplification \cite{FL},
a mixing of two generations at a time is often
assumed, leading to potentially misleading conclusions.
With
stored $\mu^-$s one has a $\nu_\mu+\bar \nu_e$ beam.
The observable $\nu_\mu\to \nu_\tau$ oscillation signals are:
\begin{eqnarray}
\mu^- \rightarrow e^-\, & \nu_\mu & \, \bar{\nu}_e\, ;
\nonumber\\
& \; &\bar{\nu}_e \rightarrow \bar{\nu}_e \rightarrow e^+ \;\; {\rm normalization,}
\nonumber\\
& \nu_\mu &
\rightarrow \nu_\mu\rightarrow \mu^- \;\;\;\;\; {\rm disappearance,}
\nonumber\\
& \nu_\mu & \rightarrow \nu_\tau \rightarrow \tau^- \;\;\;\;\; {\rm appearance.}
\label{nocharges}
\end{eqnarray}
In the absence of dominant backgrounds, the statistical sensitivity
--that we define throughout as the smallest effect
that can be excluded with 90\% confidence--
is very different for appearance and disappearance
processes. In the case of $\nu_\mu$-disappearance
and for $N_\mu$ expected
events, the fractional sensitivity in the measurement
of
the flux- and cross-section-weighted probability
$\bar P(\nu_\mu\rightarrow\nu_\tau)$ is
$1.65/\sqrt{N_\mu}$.
For $\nu_\tau$ appearance, there being no $\nu_\tau$
contamination in the beam, the non-observation of
$\tau$ events would establish a 90\% limit
$\bar P(\nu_\mu\rightarrow\nu_\tau)<2.44/N_\tau$, with
$N_\tau$ the number of events to be expected, should all
$\nu_\mu$s be transmogrified into $\nu_\tau$s.
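
The difference between the two sensitivities is best appreciated
numerically; in the sketch below (ours; 1.65 is the one-sided 90\%
Gaussian fluctuation, while 2.44 is the 90\% upper limit on a
background-free Poisson mean when no candidate is observed) the
appearance channel wins by more than two orders of magnitude for the
event samples discussed below:
\begin{verbatim}
import math

def disappearance_sensitivity(n_mu):
    # smallest oscillation probability excluded at 90% C.L. by a
    # deficit in n_mu expected (otherwise unoscillated) events
    return 1.65 / math.sqrt(n_mu)

def appearance_sensitivity(n_tau):
    # 90% C.L. limit when no tau candidate is seen, n_tau being the
    # events expected for full nu_mu -> nu_tau transmutation
    return 2.44 / n_tau

print(disappearance_sensitivity(2.2e5))   # ~ 3.5e-3
print(appearance_sensitivity(2.2e5))      # ~ 1.1e-5
\end{verbatim}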
The neutrino fluxes at a neutrino factory have simple analytical
forms\footnote{We expect the
$\nu$ beam divergence to be dominated by the $\mu$-decay kinematics \cite{muring}.}.
Let $y=E_\nu/E_\mu$ be the fractional neutrino energy.
For unpolarized
muons of either polarity, and neglecting corrections of order
$m_\mu^2/E_\mu^2$, the normalized fluxes of forward-moving
neutrinos are:
\begin{eqnarray}
F_{\nu_\mu,\bar\nu_\mu}(y) &\simeq& 2 \, y^2 \, (3-2 y)
\Theta(y)\,\Theta(1-y)\, ,
\cr
F_{\nu_e,\bar\nu_e}(y) &\simeq& 12\,y^2\, (1- y)
\Theta(y)\,\Theta(1-y) \, ,
\end{eqnarray}
and, for each produced neutrino type, the forward flux
from $n_\mu$ $\mu$-decays is:
\begin{equation}
{dN_\nu\over dy\, dS}\Bigm|_{\theta \simeq 0}
\simeq{E^2_\mu\; n_\mu\over \pi\, m_\mu^2\,L^2}\;F_\nu (y)\; .
\label{flux}
\end{equation}
The above expressions are valid at a forward-placed detector of
transverse dimensions much smaller than the beam aperture.
In the absence of oscillations
one can use Eq.(\ref{flux}) and the charged-current
inclusive cross sections per nucleon on an approximately
isoscalar target ($\sigma_\nu\sim 0.67\times 10^{-38}\, E_\nu$
cm$^2$/GeV, $\sigma_{\bar\nu}\sim 0.34\times 10^{-38}\, E_{\bar\nu}$
cm$^2$/GeV \cite{Boehm})
to compute the number of neutrino interactions.
For the reference set-up and baseline defined in Section 1,
one expects some $2.2\times 10^{5}$
$\mu^-$ ($1.1\times 10^{5}$
$\mu^+$) and $9.6\times 10^4$ $e^+$ ($1.9\times 10^5$ $e^-$)
events in a beam
from $\mu^-$ ($\mu^+$) decay \cite{muring}. In our calculations
we make a cut $E_\nu>5$ GeV to eliminate inefficiently observed
low energy interactions. This affects the quoted numbers only
at the few per cent level.
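
The quoted event numbers follow from a one-dimensional integration of
Eq.(\ref{flux}) times the linear cross sections; the sketch below
(ours; it assumes a target fully contained in the beam and reproduces
the quoted numbers to within roughly ten per cent) illustrates the
computation:
\begin{verbatim}
import math

N_A, M_MU = 6.022e23, 0.10566      # nucleons/gram; muon mass in GeV

def cc_events(E_mu, L_km, n_mu, kton, flavour, E_cut=5.0):
    # charged-current events: flux of Eq. (flux) times sigma0 * E_nu
    sigma0 = 0.67e-38 if flavour in ('numu', 'nue') else 0.34e-38
    F = ((lambda y: 2*y*y*(3 - 2*y)) if flavour in ('numu', 'numubar')
         else (lambda y: 12*y*y*(1 - y)))
    norm = E_mu**2 * n_mu / (math.pi * M_MU**2 * (L_km * 1e5)**2)
    nucleons = kton * 1e9 * N_A
    steps, integral = 1000, 0.0
    for i in range(steps):
        y = (i + 0.5) / steps
        if y * E_mu > E_cut:
            integral += F(y) * sigma0 * (y * E_mu) / steps
    return norm * nucleons * integral

print(cc_events(20., 732., 2e20, 10., 'numu'))     # ~ 2.4e5
print(cc_events(20., 732., 2e20, 10., 'nuebar'))   # ~ 1.0e5
\end{verbatim}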
In Fig.~\ref{fig:mutau} we show the
[${\sin^2(2 \theta_{23}),\Delta m_{23}^2}$] sensitivity to $\mu$-disappearance,
basing it on the measurement of $N_\mu$, the total (energy-integrated)
number of muon
events.
By assumption,
$\nu_e$s do not observably oscillate over terrestrial baselines
so that,
in a two-generation scenario, the results would be identical
if extracted from the ratio $N_\mu/N_e$, as in a recent
discussion of an experiment at a $\nu$-factory \cite{bcr1}.
For a $\tau$ detector we refer to Opera \cite{Opera},
in its version described in the introduction. With use
of the cross section $\sigma(\nu_\tau\to \tau)$ given in
\cite{AlbJar}, we also report the
$\tau$-appearance statistical sensitivity in Fig.~\ref{fig:mutau}, basing it
on the expectation for $N_\tau/N_\mu$. In practice, the search for
$\tau$ events is affected by a steadfast charm background.
For our reference beam, detector and baseline,
the moral from this brief analysis of the two-family scenario is that a 10 kTon experiment capable of telling muons
from electrons (or from neutral currents) would be insufficient
to cover the SuperK mass range. The smaller detector we considered, capable
of telling $\tau$ events from the rest, would also barely suffice.
\begin{figure}[htb]
\centering
\mbox{\epsfig{file=2fam.eps,width=4.in,height=4.in}}
\caption{Sensitivity reach in the [$\sin^2 (2 \theta_{23}),\Delta m_{23}^2$]
plane, at 90\% confidence, for our reference beam and detectors
and $L=732$ km.
Continuous (dashed) boundaries correspond to $\mu$ disappearance
($\tau$ appearance). The small region close to $\sin^2 (2 \theta_{23})=1$
is the SuperK domain.}
\label{fig:mutau}
\end{figure}
To study the oscillatory signal
of the two-family scenario of Eq.(\ref{nocharges}) there
is no advantage in
measuring the charges of the produced charged leptons:
for a stored
$\mu^-$ beam one expects charged current neutrino
interactions leading to positrons
and negatively charged heavier leptons, as in Eq.(\ref{nocharges}). For Majorana neutrinos this is not strictly correct, but
the specific wrong-sign and CP-violating effects are
suppressed by an insurmountable factor $m_\nu/E_\nu$.
In a three-neutrino mixing scenario, contrariwise,
measuring charges could
be extremely useful and CP-violation effects are not suppressed
by the mentioned factor.
\section{Three-family mixing.}
\bigskip
The mixing between $\nu_e$, $\nu_\mu$ and $\nu_\tau$
is described by a conventional Kobayashi-Maskawa matrix
$V$ relating flavour to mass eigenstates
(we are assuming throughout this note
that {\it neutrino fluctuat nec mergitur:} there are no
transitions to sterile neutrinos).
For Dirac neutrinos\footnote{For Majorana neutrinos
fewer phases are reabsorbable by
field redefinitions and the mixing matrix is of the form
$V'=V\;V_{_{\rm{M}}}$ with
$V_{_{\rm{M}}}=\rm{Diag}\, (e^{i\alpha},e^{i\beta},1)$. The effects
of these extra phases are of order $m_\nu/E_\nu$.}
and in an obvious notation:
\begin{equation}
\left(\matrix{\nu_e \cr \nu_\mu \cr\nu_\tau}\right)
= \left(\matrix{
c_{12}c_{13} & c_{13}s_{12} & s_{13} \cr
-c_{23}s_{12}e^{i\delta} -c_{12}s_{13}s_{23}
& c_{12}c_{23}e^{i\delta} -s_{12}s_{13}s_{23}
& c_{13}s_{23} \cr
s_{23}s_{12}e^{i\delta} -c_{12}c_{23}s_{13}
& -c_{12}s_{23}e^{i\delta} -c_{23}s_{12}s_{13}
& c_{13}c_{23} \cr}
\right)
\left(\matrix{\nu_1 \cr \nu_2 \cr\nu_3}\right).
\label {CKM}
\end{equation}
Without loss of generality, we choose the convention in
which all Euler angles lie in the first quadrant:
$0<\theta_{ij}<\pi/2$, while the CP-phase is unrestricted:
$0<\delta<2\,\pi$.
Define
\begin{equation}
W_{\alpha\beta}^{jk}\equiv \,
[V_{\alpha j}V_{\beta j}^* V_{\alpha k}^*V_{\beta k}]
\end{equation}
and
\begin{equation}
\Delta_{jk}\equiv\frac{ \Delta m_{jk}^2}{2\,E_\nu}\; .
\label{Deltas}
\end{equation}
The transition probabilities between
different flavours are:
\begin{equation}
P(\nu_\alpha\rightarrow \nu_\beta)\,=\,
-4\; \sum_{k>j}\,{\rm Re}[W_{\alpha\beta}^{jk}]\,
\sin^2\left({\Delta_{jk}\,L\over 2}\right)
\,\pm\, 2 \,
\sum_{k>j}\, {\rm Im}[W_{\alpha\beta}^{jk}]\, \sin(\Delta_{jk}\,L)
\label{reim}
\end{equation}
with the plus (minus) sign referring to neutrinos (antineutrinos).
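
Eq.(\ref{reim}) is straightforward to implement for an arbitrary
mixing matrix; in the sketch below (ours) a Kronecker delta is added
so that the same expression also covers survival probabilities, and
the numerical factor $2.534$ converts $\Delta_{jk}\,L$ to eV$^2$, km
and GeV units:
\begin{verbatim}
import numpy as np

def p_osc(V, dm2, L_km, E_GeV, a, b, anti=False):
    # Eq. (reim); V is the 3x3 mixing matrix of Eq. (CKM) and
    # dm2[(j, k)] = m_k^2 - m_j^2 in eV^2.
    P = 1.0 if a == b else 0.0
    sign = -1.0 if anti else 1.0
    for j in range(3):
        for k in range(j + 1, 3):
            W = V[a, j] * V[b, j].conj() * V[a, k].conj() * V[b, k]
            phase = 2.534 * dm2[(j, k)] * L_km / E_GeV  # Delta_jk L
            P += (-4.0 * W.real * np.sin(0.5 * phase) ** 2
                  + sign * 2.0 * W.imag * np.sin(phase))
    return P
\end{verbatim}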
Let us adopt, from solar and atmospheric experiments,
the indication that
$|\Delta m_{12}^2|\ll |\Delta m_{23}^2|$, that Barbieri
{\it et al.} \cite{Barb} have
dubbed the ``minimal scheme''. Though this mass hierarchy may
not be convincingly established, the minimal scheme suffices
for our purpose of delineating the main capabilities of
a $\nu$ factory (we have to deviate from minimality only
in the discussion of CP violation).
The difference between neutrino
propagation in vacuum and in matter turns out not to have an
important effect on the sensitivity limits that we discuss in this
Section (for a fixed baseline of $732$ km). They are relevant at larger
distances. We postpone their discussion to the next Section,
though the figures introduced anon do take the
matter effects into account.
Atmospheric or terrestrial experiments have
an energy range such that $\Delta m^2\, L/E_\nu\ll 1$ for the smaller
($\Delta m_{12}^2$)
but not necessarily for the larger ($\Delta m_{23}^2$) of these mass gaps.
Even then, solar and atmospheric (or terrestrial) experiments
are not (provided
$\theta_{13}\neq 0$)
two separate two-generation mixing effects. In the minimal scheme
solar effects
are accurately described by three parameters
($\theta_{12}$, $\Delta m_{12}^2$ and $\theta_{13}$),
while the terrestrial effects of interest here depend on
$\theta_{23}$, $\Delta m_{23}^2$ and $\theta_{13}$:
\begin{eqnarray}
P(\nu_e\rightarrow\nu_\mu)&=& \sin^2(\theta_{23})\,
\sin^2(2\theta_{13})\,\sin^2\left({\Delta_{23}\, L\over 2}\right)
\cr
P(\nu_e\rightarrow\nu_\tau)&=& \cos^2(\theta_{23})\,
\sin^2(2\theta_{13})\,\sin^2\left({\Delta_{23}\, L\over 2}\right)
\cr
P(\nu_\mu\rightarrow\nu_\tau)&=& \cos^4(\theta_{13})\,
\sin^2(2\theta_{23})\,\sin^2\left({\Delta_{23}\, L\over 2}\right)\; .
\label{todasprobs}
\end{eqnarray}
In the minimal scheme CP and T violation effects can be neglected,
so that $P(\bar\nu_\alpha\to\bar\nu_\beta)=P(\nu_\alpha\to\nu_\beta)$
and $P(\nu_\beta\to\nu_\alpha)=P(\nu_\alpha\to\nu_\beta)$. With this
information, Eqs.(\ref{todasprobs}) and unitarity one can construct
all relevant oscillation amplitudes, e.g. $P(\nu_\mu\to\nu_\mu)$.
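
In the minimal scheme the bookkeeping is thus reduced to three
parameters; a minimal sketch (ours, with angles in radians and
$1.267=2.534/2$):
\begin{verbatim}
import math

def p_minimal(th23, th13, dm2_23, L_km, E_GeV):
    # Eqs. (todasprobs); osc = sin^2(Delta_23 L / 2)
    osc = math.sin(1.267 * dm2_23 * L_km / E_GeV) ** 2
    p_emu   = math.sin(th23)**2 * math.sin(2*th13)**2 * osc
    p_etau  = math.cos(th23)**2 * math.sin(2*th13)**2 * osc
    p_mutau = math.cos(th13)**4 * math.sin(2*th23)**2 * osc
    # unitarity plus P(mu->e) = P(e->mu) give the survival probability
    p_mumu = 1.0 - p_emu - p_mutau
    return p_emu, p_etau, p_mutau, p_mumu
\end{verbatim}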
The approximate analysis of the SuperK data
by Barbieri {\it et al.}
\cite{Barb} results (for the range of
$\Delta m_{23}^2$ advocated by the SuperK collaboration)
in the restrictions $\theta_{23}=45\pm 15^0$ and
$\theta_{13}\sim 0\div 45^0$, with a preferred value
around $13^0$. Fogli {\it et al.} conclude \cite{Fogli},
after a more thorough analysis and with equal
confidence, that $\theta_{13}<23^0$, while their range
of $\Delta m_{23}^2$ is a little narrower than the one
obtained by the SuperK team \cite{Superka}.
We shall present results for the range of angles advocated in \cite{Barb}
and the range of masses of \cite{Superka},
simply because they are the widest.
All mixing probabilities in Eq.(\ref{todasprobs})
have the same sinusoidal dependence
on $\Delta m^2_{23}\, L/E_\nu$, entering into the description of a plethora of
channels:
\begin{eqnarray}
\mu^- \rightarrow e^-\, & \nu_\mu & \, \bar{\nu}_e\, ;
\nonumber\\
& \; & \bar{\nu}_e \rightarrow \bar{\nu}_e \rightarrow e^+ \;\; {\rm disappearance,}
\nonumber\\
& \; & \bar{\nu}_e \rightarrow \bar{\nu}_\mu \rightarrow \mu^+ \;\; {\rm appearance,}
\nonumber\\
& \; & \bar{\nu}_e \rightarrow \bar{\nu}_\tau \rightarrow \tau^+ \;\; {\rm appearance}
\;\;\; (\tau^+ \rightarrow \mu^+;\; e^+)\, ,
\nonumber\\
& \nu_\mu &
\rightarrow \nu_\mu\rightarrow \mu^- \;\;\;\;\; {\rm disappearance,}
\nonumber\\
& \nu_\mu & \rightarrow \nu_e \rightarrow e^- \;\;\;\;\; {\rm appearance,}
\nonumber\\
& \nu_\mu & \rightarrow \nu_\tau \rightarrow \tau^- \;\;\;\;\; {\rm appearance}
\;\;\; (\tau^- \rightarrow \mu^-;\, e^-)\, .
\label{charges}
\end{eqnarray}
The wrong sign channels of $\mu^+$, $\tau^+$ and $e^-$ appearance
are the good news, relative to the two-generation analysis
of Eqs.(\ref{nocharges}).
We extract results on the sensitivity to oscillations
from observable numbers
of muons, and not from ratios such as the number of muons
divided by the number of electrons, which are so useful in the analysis of
atmospheric neutrinos. Our conclusions would be essentially
identical, were we to draw them from the customary ratios.
Yet, we refer directly to muon numbers not only because the
neutrino-factory flux would be very well understood
(obviating the main reason to take ratios), but also
because the physics of three-generation mixing leads us to advocate
the advantages of an experiment capable of measuring the
charge of muons. It is likely that a relatively large experiment
of this kind would compromise the possibility of efficiently
distinguishing electron- from neutral-current events.
Naturally, a complementary experiment on the same beam,
capable of observing electrons with precision, would be useful
\cite{CR,Thom}.
In Fig.~\ref{fig:t23m} we show the sensitivity reach, in the
[$\sin^2 (\theta_{23}),\Delta m_{23}^2$] plane
for various values of $\theta_{13}$, for $L=732$ km,
for our reference set-up and for stored $\mu^-$s.
We have chosen to
illustrate the disappearance observable
$N_\mu\equiv N[\mu^+ +\mu^-]$ and the appearance measurement $N[\mu^+]$ (the effects of the small $\mu^+$ contamination from
$\bar\nu_e\to\bar\nu_\tau$ oscillations, $\tau^+$ production and
$\tau^+\to\mu^+$ decay, are negligible).
Figure~\ref{fig:t23m} conveys an important point: for stored
$\mu^-$s the observation of $\mu^+$ appearance
is very superior to a measurement (such as the depletion
of the total number of muons) in which
the charges of the produced leptons are not measured.
This is true for all $\theta_{13}$ bigger than a few degrees.
This angle is very unconstrained by current measurements.
Notice that the SuperK domain would be covered for any
$\sin^2(\theta_{13})> 3.6 \times 10^{-3}$ by the appearance channel, while the disappearance
measurement would fall short of this motivating goal.
All these statements refer to statistical sensitivities, in the
absence of the backgrounds discussed in Section 6.
Fig.~\ref{fig:mutau} and its comparison with
Fig.~\ref{fig:t23m}
convey our point regarding the benefits of muon-charge identification.
We are showing results only for stored $\mu^-$s. The
wrong-sign muon results are slightly superior for the polarity
we do not show: if it is positive, and for equal numbers of decays,
the unoscillated numbers of expected
electron events (and of potential wrong-sign muons) are
roughly twice as large. The $\mu$-disappearance results, on
the other hand, are slightly weaker for a $\mu^+$ beam.
\begin{figure}[htb]
\centering
\mbox{\epsfig{file=3fammatter1.eps,width=4.in,height=4.in}}
\caption{Sensitivity reach in the plane
$[\sin^2 \theta_{23},\Delta m_{23}^2]$
at 90\% confidence, for our reference set-up, a $\mu^-$-decay
beam and $L=732$ km. Matter effects are taken into account.
The discontinuous lines
correspond to the appearance observable
$N[\mu^+]$ (at $\theta_{13}=40,13,5^0$) and
the full lines correspond to the disappearance
observable $N_\mu$ at $\theta_{13}=0,40^0$.
The rectangle is the approximate domain allowed by
SuperK data.}
\label{fig:t23m}
\end{figure}
In Fig.~\ref{fig:t13m} we show the sensitivity reach, in the
[$\sin^2(\theta_{13}),\Delta m_{23}^2$] plane
for the extremal values of
$\theta_{23}\sim 30^0,\, 45^0$ allowed by the
SuperK data. In Fig.~\ref{fig:t23t13} we show the sensitivity reach
in the plane $[\sin^2\theta_{23},\sin^2\theta_{13}]$.
\begin{figure}[htb]
\centering
\mbox{\epsfig{file=3fammatter2.eps,width=4.in,height=4.in}}
\caption[]{Sensitivity reach in the plane
$[\sin^2 \theta_{13},\Delta m_{23}^2]$, at 90\% confidence, for the same conditions as in Fig.~\ref{fig:t23m}.
The continuous (dashed) lines correspond to
$\theta_{23}=45^0\, (30^0)$.
The lines covering the most
(least) ground are for the appearance (disappearance) observable $N[\mu^+]$ ($N_\mu$). The rectangular domain is the approximate region allowed
by SuperK data.}
\label{fig:t13m}
\end{figure}
The overall conclusion of this analysis in terms of the
mixing of three generations is that the capability of detecting
``wrong-charge'' muons would be extremely useful in giving access
to the study of a large region of the
($\theta_{13}$, $\theta_{23}$, $\Delta m_{23}^2$) parameter space.
\begin{figure}[htb]
\centering
\mbox{\epsfig{file=3fammatter3.eps,width=4.in,height=4.in}}
\caption{Sensitivity reach in the
[$\sin^2\theta_{23}, \sin^2\theta_{13}$] plane at 90\% confidence,
for the same conditions as in Fig.~\ref{fig:t23m}. The dashed and
dotted lines correspond to the
appearance observable $N[\mu^+]$
with $\Delta m_{23}^2= 2 \times 10^{-3}$ eV$^2$, and
$\Delta m_{23}^2= 10^{-3}$ eV$^2$, respectively.
The regions interior to the continuous and dot-dashed lines are
exclusion domains stemming from
the disappearance observable, $N_\mu$,
with $\Delta m_{23}^2= 2\times 10^{-3}$ eV$^2$, and
$\Delta m_{23}^2= 10^{-3}$ eV$^2$, respectively.
}
\label{fig:t23t13}
\end{figure}
\section{Matter effects and scaling laws}
Of all neutrino species, only $\nu_e$ and $\bar\nu_e$ have charged-current elastic scattering amplitudes on electrons.
This, it is well
known, induces effective ``masses'' squared $\mu^2=\pm\, 2\,E_\nu\, A$,
where the signs refer to $\nu_e$ and $\bar\nu_e$ and
$A=\sqrt{2}\, G_F\, n_e$, with $n_e$ the ambient electron
number density \cite{MSW}. Matter effects
\cite{MSW,CPtoda} are important
if $A$ is comparable to, or bigger than, the quantity $\Delta_{jk}= \Delta m_{jk}^2/(2\,E_\nu)$ of Eq.(\ref{Deltas}) for some mass
difference and neutrino energy. In the minimal scheme,
where $\Delta m_{12}^2$ is neglected relative to $\Delta m_{23}^2$,
the question is the relative size of $A$ and
$\Delta_{23}\simeq \Delta_{13}$ (we assume $\Delta m_{23}^2=
m_3^2-m_2^2$ to be positive, otherwise the roles of neutrinos
and antineutrinos are to be inverted in what follows).
For the Earth's crust,
with density $\rho\sim 2.8$ g/cm$^3$
and roughly equal numbers of protons, neutrons and electrons,
$A\sim 10^{-13}$ eV. The typical neutrino energies we are considering are tens of GeVs.
For $E_\nu=12$ GeV (the average $\bar\nu_e$ energy in the decay of $E_\mu=20$ GeV muons)
$A\simeq\Delta_{23}$ for
$\Delta m^2_{23}=2.4\times 10^{-3}$ eV$^2$.
This means that $A\gg \Delta_{23}$
for the lower $\Delta m^2$ values in
Figs.~\ref{fig:t23m},\ref{fig:t13m}
while the opposite is true at the other end of the relevant mass domain.
Thus, the matter effects that we have so far neglected are dominant in
the most relevant portion
of the domain of interest: the lower mass scales. Yet, as we
proceed to show,
matter effects are practically irrelevant
(except in the analysis of CP-violation effects) in long baseline
experiments with $L<3000$ km. They only begin to have a sizeable
impact at even larger distances\footnote{This refers to the
approximate assessment of sensitivities, not to the analysis of eventual
results: in the Sun or on Earth, Nature may well have chosen parameter values
for which matter effects are relevant.}.
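
The quoted numbers are easy to reproduce; in the sketch below (ours;
the electron fraction $Y_e=0.5$ is an assumption for crust-like
matter) the factor $(\hbar c)^3$ converts $G_F\,n_e$ to an energy:
\begin{verbatim}
import math

G_F   = 1.16637e-5    # Fermi constant, GeV^-2
HBARC = 1.97327e-14   # hbar c, GeV cm
N_A   = 6.022e23      # Avogadro's number

def A_matter_eV(rho, Y_e=0.5):
    # A = sqrt(2) G_F n_e for matter of density rho (g/cm^3)
    n_e = Y_e * rho * N_A                 # electrons per cm^3
    return math.sqrt(2.) * G_F * n_e * HBARC**3 * 1e9

A = A_matter_eV(2.8)                      # Earth's crust
print(A)                                  # ~ 1.1e-13 eV
print(2 * 12e9 * A)                       # ~ 2.6e-3 eV^2: dm2_23 at
                                          # which A equals Delta_23
\end{verbatim}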
Define
\begin{equation}
B\equiv \sqrt{\left[\Delta_{23}\,\cos(2\theta_{13})-A\right]^2
+\left[\Delta_{23}\,\sin(2\theta_{13})\right]^2}
\label{B}
\end{equation}
and
\begin{equation}
\sin(2\,\theta_M)\equiv{\Delta_{23}\,\sin(2\theta_{13})/ B}\, ,
\label{thetamatter}
\end{equation}
where $\theta_M$ is to be taken in the first (second) quadrant
if $\Delta_{23}\,\cos(2\theta_{13})-A$ is positive (negative).
The transition probability governing the appearance of wrong sign
muons is, in the minimal scheme, in the presence of matter
effects, and in the approximation of constant $n_e$ \cite{Yasuda}:
\begin{equation}
P(\nu_e\rightarrow\nu_\mu)\simeq s^2_{23}\,
\sin^2(2\theta_M)\,\sin^2\left({B\, L/ 2}\right)
\label{probmatt1}
\end{equation}
which, for $A=0$, reduces to the corresponding vacuum result:
the first of Eqs.(\ref{todasprobs}).
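
A minimal implementation of Eqs.(\ref{B})-(\ref{probmatt1}) (ours;
$A$ is to be reversed in sign for antineutrinos and set to zero for
the vacuum limit, and the quadrant prescription for $\theta_M$ is
immaterial here since only $\sin^2(2\theta_M)$ enters):
\begin{verbatim}
import math

def p_emu_matter(th23, th13, dm2_23, L_km, E_GeV, A_eV=1.1e-13):
    # Eq. (probmatt1) for constant electron density
    delta = dm2_23 / (2.0 * E_GeV * 1e9)            # Delta_23 in eV
    B = math.hypot(delta * math.cos(2*th13) - A_eV,
                   delta * math.sin(2*th13))        # Eq. (B)
    sin2_2thM = (delta * math.sin(2*th13) / B) ** 2 # Eq. (thetamatter)
    L = L_km * 1e3 * 5.068e6                        # km -> 1/eV
    return math.sin(th23)**2 * sin2_2thM * math.sin(B * L / 2.0)**2
\end{verbatim}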
For $B\, L/2$ sufficiently small, it is a good approximation
to expand the last sine in Eq.(\ref{probmatt1}) and to use
Eq.(\ref{thetamatter}) to obtain:
\begin{equation}
P(\nu_e\rightarrow\nu_\mu)\sim s^2_{23}\,
\sin^2(2\theta_{13})\,\left[\Delta_{23}\,L/2\right]^2 \, ,
\label{probmatt11}
\end{equation}
which coincides with the expansion for small
$\Delta_{23}\,L/2=\Delta m^2_{23}\, L/(4\,E_\nu)$
of the vacuum result in Eqs.(\ref{todasprobs}), even when
matter dominates and $B\simeq A$ (at a distance of $L=732$ km, $A\,L/2\sim 0.2$).
In practice,
and after integration over the neutrino flux and cross section,
the above approximations are excellent in that part
of the appearance sensitivity contours of
Figs.~\ref{fig:t23m}-\ref{fig:t23t13}
that are roughly ``straight diagonal'' lines of slope $-1$. There,
$s_{23}\, \sin(2\theta_{13})\,\Delta m^2_{23}$ is
approximately constant. In this region the
results with and without matter effects are indistinguishable
and (for equal number of events) the sensitivity contours from $\nu_e\to\nu_\mu$
and $\bar\nu_e\to\bar\nu_\mu$ transitions would also coincide.
For sufficiently large $\Delta m^2_{23}$, matter effects are negligible.
In Figs.~\ref{fig:t23m},\ref{fig:t13m} this occurs in the portion of
the limits that are approximately ``straight vertical'' lines, for which the
oscillating factors in Eqs.(\ref{probmatt1},\ref{probmatt11}) average
to 1/2. All in all, only the wiggly regions in the
sensitivity boundaries distinguish
matter from vacuum, neutrinos from antineutrinos. The differences
are not large (factors of order two). All of the above also applies to the
disappearance-channel results shown in the same figures.
The preceding discussion was made in the context of the relatively
``short'' long baseline of 732 km and for $E_\mu=20$ GeV. How do
our results scale to other distances and
stored-muon energies? (The scaling laws differ somewhat from similar
ones for neutrinos from $\pi$ and $K$ decay.)
We are considering detectors at a sufficiently
long distance (or otherwise sufficiently small in transverse
dimensions) for the neutrino beam that bathes them to be
transversally uniform. For a fixed number of decaying muons
(independent of $E_\mu$) the forward neutrino flux
varies
as $E_\mu^2\,L^{-2}$, see Eq.(\ref{flux}). The neutrino cross sections at moderate energy
are roughly linear in the neutrino (or parent-muon) energy. For
$L<3000$ km, $\sin^2(A\, L/2)\sim(A\, L/2)^2$ is a good approximation
(better than 25\% and rapidly deteriorating for increasing $L$)
and the vacuum-like result of Eq.(\ref{probmatt11}) is applicable.
Entirely analogous considerations apply to the probability
$P(\nu_\mu\to\nu_\mu)$, whose explicit form in the minimal scheme
\cite{Yasuda} we have not written. All this implies that the
``straight diagonal'' parts of the
appearance contours in Figs.~\ref{fig:t23m}-\ref{fig:t23t13}
scale as $s_{23}\, \sin(2\theta_{13})\,\Delta m^2_{23}\propto
E_\mu^{-1/2}$, with no $L$ dependence. For $L>3000$ km, this
sensitivity (still in the approximation of constant $n_e$)
is weakened by an extra $L$-dependent factor so that, for any distance,
the appearance sensitivity at the low-mass end
scales as:
\begin{equation}
s_{23}\, \sin(2\theta_{13})\,\Delta m^2_{23}\propto
E_\mu^{-1/2}\;(A\, L/2)/\big|\sin(A\, L/2)\big|\; .
\label{sensitivity}
\end{equation}
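
The size of this matter-induced loss of sensitivity is quickly
estimated (a sketch, ours, with the crust value of $A$):
\begin{verbatim}
import math

def matter_loss(L_km, A_eV=1.07e-13):
    # the factor (A L/2)/|sin(A L/2)| of Eq. (sensitivity)
    x = A_eV * L_km * 1e3 * 5.068e6 / 2.0
    return x / abs(math.sin(x))

for L in (732, 3000, 6000, 9000):
    print(L, round(matter_loss(L), 2))   # 1.01, 1.12, 1.63, 3.78
\end{verbatim}
confirming that the degradation is negligible below $\sim 3000$ km.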
For the ``straight vertical'' parts of the appearance boundaries
in Figs.~\ref{fig:t23m},\ref{fig:t13m} the oscillation probabilities
average to 50\% and the scaling law is $s_{23}\, \sin(2\theta_{13})
\propto L\, E_\mu^{-3/2}$.
For a disappearance channel the putative signal must compete with
the statistical uncertainty in the background, and the $E_\mu$ and $L$
dependences are not those of an appearance channel. Moreover, the
scaling laws for our $N[\mu^+ +\mu^-]$ contours
are not very simple functions of the mixing angles.
For $L<3000$ km
their ``straight diagonal'' portions
in Figs.~\ref{fig:t23m},\ref{fig:t13m} scale up and down as
$\Delta m^2\propto E_\mu^{1/4}\,L^{-1/2}$.
The ``straight vertical'' parts of these limits move right and left as
$\sin \theta\propto L^{1/2}\, E_\mu^{-3/4}$. For $L>3000$ km
the scaling laws for disappearance are more involved.
In Fig.~\ref{fig:lejos} we compare results
for $L=732$ and 6000 km. Only the disappearance
channel at large $\sin^2\theta$ benefits from the larger
distance. For the more attractive wrong-sign $\mu$-appearance
channel there is no advantage to a very long baseline.
\begin{figure}[htb]
\centering
\mbox{\epsfig{file=3fammatter1_6000.eps,width=4.in,height=4.in}}
\caption[]{Sensitivity reach in the plane
$[\sin^2 \theta_{23},\Delta m_{23}^2]$
at 90\% confidence, for our reference set-up, a $\mu^-$-decay
beam and $L=732,\, 6000$ km.
The discontinuous (continuous) lines
correspond to the appearance (disappearance) observable
$N[\mu^+]$ ($N[\mu^++\mu^-]$).
We chose $\theta_{13}=40^0$ for appearance,
$\theta_{13}=0$ for disappearance. }
\label{fig:lejos}
\end{figure}
\section{T and CP violation?}
The beams from a hypothetical neutrino factory would be so
intense and well understood that one may daydream about
measuring CP violation in the very clean environment of a
$\mu$-decay beam. Standard-model CP-violation effects, as is well
known in the quark sector, entail an unavoidable reference to
all three families. They would consequently vanish in the
minimal scheme that we have been considering, insofar as
the mass difference $\Delta m^2_{12}$ is neglected. With the
inclusion of this difference the parameter space (two mass gaps,
three angles, one CP-odd phase) becomes so large that
its conscientious exploration
would, in our current nescient state,
be premature. We will simply
give some examples of the size of the effects that one could,
rather optimistically, expect.
CP-related observables often involve the comparison between measurements
in the two charge-conjugate modes of the factory.
One example is the asymmetry \cite{Nicola}
\begin{equation}
A_{e \mu}^{CP}\equiv\frac{P(\nu_e\rightarrow \nu_\mu)-
P(\bar\nu_e\rightarrow \bar\nu_\mu)}{P(\nu_e\rightarrow
\nu_\mu)+
P(\bar\nu_e\rightarrow \bar\nu_\mu)}\,
\label{CPodd}
\end{equation}
which would, in vacuum, be a CP-odd observable. The voyage through
our CP-uneven planet, however, induces a non-zero
$A_{e \mu}^{CP}$ even if CP is conserved, since $\nu_e$
and $\bar\nu_e$ are differently affected by the ambient
electrons \cite{Arafcp}.
In a neutrino factory $A_{e \mu}^{CP}$
would be measured by first
extracting $P(\nu_e\rightarrow \nu_\mu)$ from the produced
(wrong-sign) $\mu^-$s in a beam from $\mu^+$ decay and
$P(\bar\nu_e\rightarrow \bar\nu_\mu)$ from the charge conjugate
beam and process. Even if the fluxes are very well
known, this requires a good knowledge of the cross section
ratio $\sigma(\bar\nu_\mu\to\mu^+)/\sigma(\nu_\mu\to\mu^-)$, which
may be gathered in a short-baseline experiment. To obtain the
genuinely CP-odd quantity of interest, the matter effects
must be subtracted with sufficient precision. But we shall see that
the truly serious limitation
is the small statistics inherent to appearance channels.
The T-odd asymmetry \cite{Rusos}
\begin{equation}
A_{e \mu}^{T}\equiv\frac{P(\nu_e\rightarrow \nu_\mu)-
P(\nu_\mu\rightarrow \nu_e)}{P(\nu_e\rightarrow \nu_\mu)+
P(\nu_\mu\rightarrow \nu_e)}\;
\label{CPt}
\end{equation}
is ``cleaner'' than the CP-odd one, in that a non-zero
value for it cannot be induced by matter effects.
As a consequence of CPT-invariance the two asymmetries,
in vacuum,
are identical:
$A_{e \mu}^{T}[{\rm vac}]=A_{e \mu}^{CP}[{\rm vac}]$.
The $T$-odd asymmetry
is very difficult to measure in practice.
In a $\mu^-$-generated beam
the extraction of $P(\nu_\mu\rightarrow \nu_e)$ would require
a measurement of electron charge, the $e^++e^-$ number involving
also $P(\bar\nu_e\rightarrow \bar\nu_e)$. It is not easy to measure
the electron charge in a large, high-density experiment.
The complete expressions for $A^{CP}_{e \mu}$ in the presence of
matter are rather elaborate and we do not reproduce
them here. To illustrate the size of the effects,
in Table 1 we give the values of various asymmetries at
$L=732$ km, with a fixed neutrino energy $E_\nu=7$ GeV,
with maximal CP violation, $\delta=90^0$,
and with various parameter values chosen in their currently
allowed domains.
The Table reports the vacuum asymmetry
$A^{CP}_{e\mu}[{\rm vac}]$,
the calculated expectation $A^{CP}_{e\mu}(0)$
for the apparent CP-odd asymmetry induced
by matter, and the genuine CP-odd asymmetry in matter:
\begin{equation}
{{\cal A}_{e\mu}}(\delta)=
A^{CP}_{e\mu}(\delta)-A^{CP}_{e\mu}(0) \; ,
\label{CPfixed}
\end{equation}
in which the matter effect is subtracted.
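
Although we do not reproduce the closed-form expressions, the entries
of Table 1 can be checked by direct numerical evolution in matter of
constant density; the sketch below (ours; it assumes the crust value
$A\simeq 1.07\times 10^{-13}$ eV, the phase convention of
Eq.(\ref{CKM}) and the parameters of the last row of Table 1, and
should reproduce the size of the corresponding entries):
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

def V_mix(th12, th13, th23, delta):
    # mixing matrix in the convention of Eq. (CKM)
    c12, s12 = np.cos(th12), np.sin(th12)
    c13, s13 = np.cos(th13), np.sin(th13)
    c23, s23 = np.cos(th23), np.sin(th23)
    e = np.exp(1j * delta)
    return np.array(
        [[c12*c13,                  c13*s12,                  s13],
         [-c23*s12*e - c12*s13*s23, c12*c23*e - s12*s13*s23,  c13*s23],
         [s23*s12*e - c12*c23*s13,  -c12*s23*e - c23*s12*s13, c13*c23]])

def p_matter(V, dm2_21, dm2_31, L_km, E_GeV, a, b, anti=False):
    # exact P(a -> b) = |<b|exp(-iHL)|a>|^2 at constant density
    A = 1.07e-13                                    # eV
    if anti:
        V, A = V.conj(), -A
    H = V @ np.diag([0., dm2_21, dm2_31]) @ V.conj().T / (2*E_GeV*1e9)
    H = H + np.diag([A, 0., 0.])
    S = expm(-1j * H * (L_km * 1e3 * 5.068e6))      # L in 1/eV
    return abs(S[b, a]) ** 2

th12, th13, th23 = np.arcsin(np.sqrt(0.5)), np.radians(13), np.radians(45)
dm2_21, dm2_31 = 1e-4, 1e-4 + 1e-3                  # eV^2

def a_cp(delta):                                    # e = 0, mu = 1
    V = V_mix(th12, th13, th23, delta)
    p  = p_matter(V, dm2_21, dm2_31, 732., 7., 0, 1)
    pb = p_matter(V, dm2_21, dm2_31, 732., 7., 0, 1, anti=True)
    return (p - pb) / (p + pb)

print(a_cp(0.0), a_cp(np.pi/2) - a_cp(0.0))
\end{verbatim}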
\vskip .5cm
\centerline{
\vbox{\tabskip=0pt \offinterlineskip
\def\noalign{\hrule}{\noalign{\hrule}}
\halign to367pt{\strut#& \vrule#\tabskip=1em plus2em
&\hfil#& \vrule#
&\hfil#& \vrule#
&\hfil#& \vrule#
& \hfil#\hfil& \vrule#
&\hfil#& \vrule#
&\hfil#& \vrule#
&\hfil#& \vrule#
\tabskip=0pt\cr\noalign{\hrule}
&&\omit
$\sin^2\theta_{12}$
&&
\omit $\theta_{13}$
&&
\omit \hfil $\Delta m^2_{12}$ \hfil
&&
\omit \hfil $A^{CP}_{e\mu}[{\rm vac}]$ \hfil
&&
\omit \hfil ${\cal A}_{e\mu}$ \hfil
&&
\omit \hfil $A^{CP}_{e\mu}(0)$ \hfil
&\cr\noalign{\hrule}
&& 0.5
&& 13$^0$
&& $10^{-5}$
&&\hfil $-5.9\, 10^{-3}$ \hfil
&&\hfil $-5.5\, 10^{-3}$ \hfil
&&\hfil $ 1.6\, 10^{-2}$ \hfil
&\cr\noalign{\hrule}
&& $5\, 10^{-3}$
&& 30$^0$
&& $10^{-4}$
&&\hfil $-3.4\, 10^{-3}$ \hfil
&&\hfil $-3.2\,10^{-3}$ \hfil
&&\hfil 9.8 $10^{-3}$ \hfil
&\cr\noalign{\hrule}
&& 0.5
&& 30$^0$
&& $10^{-4}$
&& $-2.6\, 10^{-2}$
&& \hfil $-2.5\, 10^{-2}$ \hfil
&& \hfil $7.8\, 10^{-3}$ \hfil
&\cr\noalign{\hrule}
&& 0.5
&& 13$^0$
&& $10^{-4}$
&&\hfil $-5.6\, 10^{-2}$ \hfil
&&\hfil $-5.4\, 10^{-2}$ \hfil
&&\hfil $1.4\, 10^{-2}$ \hfil
&\cr\noalign{\hrule}}}
}
\vskip.2cm
\noindent{Table 1: The CP asymmetries defined in the text,
at $L=732$ km, for $\delta=\pi/2$,
$\theta_{23}=45^0$,
$\Delta m^2_{23}=10^{-3}$ eV$^2$, $E_\nu=7$ GeV and
choices of other parameters compatible
with solar and atmospheric data.}
\vskip.2cm
With no further ado, Table 1 conveys the message that,
if $\Delta m^2_{21}$ is indeed as small as the ensemble of
solar neutrino experiments would imply, the CP-odd
effects are only sizeable in a small domain of parameter
space, exemplified here by the last two rows of the table.
Is that region amenable to empirical scrutiny?
A first question concerns the
relative size of the measured and the theoretically subtracted
terms. For the subtraction procedure to be useful $\theta_{23}$, $\theta_{13}$,
$\Delta m^2_{23}$ and the density profile traversed by the beam
must be known with sufficient precision
for the error in the subtracted term not to dominate the result.
At the distance of $L=732$ km used to construct Table 1, this
does not seem to be a problem: for the parameter values
of the last two rows, the subtractions are small enough
that a precision of a factor of two in their determination would
suffice.
A second question on the observability of CP-violation is that of statistics.
In practice, for our reference set-up, there would be too few events to
exploit the explicit $E_\nu$ dependence of the CP-odd effect.
To construct a realistic CP-odd observable, consider the neutrino-energy
integrated quantity:
\begin{equation}
{\bar A}^{CP}_{e\mu} = \frac{\bigl\{N[\mu^-]/N_o[e^-]\bigr\}_{+}
- \bigl\{N[\mu^+]/N_o[e^+]\bigr\}_{-}}{\bigl\{N[\mu^-]/N_o[e^-]\bigr\}_{+}
+ \bigl\{N[\mu^+]/N_o[e^+]\bigr\}_{-}}\; ,
\label{intasy}
\end{equation}
where the sign of the decaying muons
is indicated by a subscript,
$N[\mu^+]$ $(N[\mu^-])$ are the measured number of wrong-sign muons, and
$N_o[e^+]$ $(N_o[e^-])$ are the expected number of $\bar{\nu}_e (\nu_e)$
charged current interactions in the absence of
oscillations\footnote{ In the analogue
energy-integrated T-odd asymmetry, the T-even contributions to its
numerator do not cancel,
due to the different energy distributions of $\nu_e$s and $\nu_\mu$s in
the beam.}.
The genuine CP-odd asymmetry is
${\overline {\cal A}}_{e\mu}(\delta)=
\bar A_{e\mu}^{CP}(\delta) -\bar A_{e\mu}^{CP}(0)$, the
flux- and cross-section-weighted version of Eq.(\ref{CPfixed}).
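
The statistical error of $\,\bar A_{e\mu}^{CP}$ is easily estimated;
in the sketch below (ours; valid for a small asymmetry, with only the
Poisson errors of the wrong-sign muon counts retained and the
normalizations $N_o[e^\pm]$ taken as perfectly known):
\begin{verbatim}
import math

def asymmetry_significance(a_genuine, n_mu_plus, n_mu_minus):
    # standard error propagation on Eq. (intasy) for |A| << 1
    sigma = 0.5 * math.sqrt(1.0/n_mu_plus + 1.0/n_mu_minus)
    return abs(a_genuine) / sigma

# e.g. a 5% genuine asymmetry with ~1000 wrong-sign muons per polarity
print(asymmetry_significance(0.05, 1000, 1000))   # ~ 2.2 sigma
\end{verbatim}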
In Fig.~\ref{fig:CP} we give the signal-to-statistical-noise
ratio for $\big|{\overline {\cal A}}_{e\mu}(\pm \pi/2)\big|$
as a function of distance for our
standard set-up, for $E_\mu=10,20$ GeV
and for the parameters in the last row of Table 1.
The number of ``standard deviations'' is seen not to
exceed $\sim 2$ at any distance. Moreover, for very
long baselines, the relative size of the theoretically subtracted
term $\bar A_{e\mu}^{CP}(0)$ increases very rapidly,
as shown in Fig.~\ref{fig:subtract}.
We have examined other parameter values within the limits
of the scenario we have adopted for neutrino masses and mixing
angles\footnote{The CP-violation effects are much bigger
for the larger mass differences that become possible if
the results of some solar neutrino experiment are disregarded.
We have not pursued this option.}. As an example,
increasing $\Delta m^2_{23}$ from $10^{-3}$ to $6 \times 10^{-3}$
eV$^2$, with the other parameters fixed as in Table 1, increases
the maximum number of standard deviations to $\sim 3.5$
(at $L\sim 3000$ km) but the relative size of the theoretically
subtracted term at that distance increases by an order of magnitude
relative to what it is in Fig.~\ref{fig:subtract}.
The conclusion is that,
if the neutrino mass differences are those indicated by
solar and atmospheric observations and the physics is
that of three standard families, there is little hope
to observe CP-violation with the beams and detectors
we have described.
\begin{figure}[htb]
\centering
\mbox{\epsfig{file=cperror_m.eps,width=4.in,height=4.in}}
\caption{Signal over statistical uncertainty in a measurement
of CP asymmetries as a function of distance, with the continuous
(dashed) lines corresponding to
$E_\mu=20\, (10)$ GeV. The chosen CKM parameters are
those of the last row of Table 1. The lower four curves describe
$\big|{\overline {\cal A}}_{e\mu}(\pm \pi/2)\big|$ over its statistical
error. The upper two curves are vacuum results for the same
CP phase(s).
}
\label{fig:CP}
\end{figure}
\begin{figure}[htb]
\centering
\mbox{\epsfig{file=cperror_mb.eps,width=4.in,height=4.in}}
\caption{Ratio of the subtracted term $\bar A_{e\mu}^{CP}(0)$
relative to the genuine CP asymmetry ${\overline {\cal A}}_{e\mu}(\pi /2)$,
as a function of distance, with the continuous
(dashed) lines corresponding to
$E_\mu=20\, (10)$ GeV. The chosen CKM parameters are
those of the last row of Table 1.}
\label{fig:subtract}
\end{figure}
\section{Observables and
backgrounds in $\pi$- and $\mu$-decay beams.}
In a search for $\tau$ appearance, a $\mu$-decay beam, apart from
its conceivably higher intensity,
would not have overwhelming advantages relative to a conventional
$\pi+K$ decay beam; the contamination of $\nu_\tau$ from $D_s$
decay in the $\pi$ beam is known to be small, witness
the fact that the third generation
neutrino has not yet been ``seen''. The background from the charmed
particles produced by the other neutrino types would be equally
challenging in a conventional or a $\nu$-factory beam. We briefly
compare
these beams for oscillation studies other than $\tau$ appearance.
The $\nu_\mu$ beams from $\pi$ decay have a contamination
of $\nu_e$s from $K_{e3}$ decays. A small contamination
of the wrong helicity neutrinos (e.g. $\bar\nu_\mu$ in
a predominantly $\nu_\mu$ beam) is also unavoidable, due
to limitations of the charge-separation and focusing
system. It is difficult to understand
these beams theoretically to better than 10\% precision.
With a $\pi^+$ decay beam one can measure neutral currents
and the production of electrons and muons, whose charge
measurement is immaterial; that is, a total of three
observables, one of which (electron events) is beset
by background problems. Ideally
beams of opposite polarity add information,
but the comparison of $\nu_\mu$ and $\bar\nu_\mu$ disappearance channels
for a study of CP-violation would be even more
demanding than for the $\nu$-factory wrong-sign $\mu$-appearance
examples discussed in the previous section.
The number of useful observables in an experiment with a
$\nu_\mu+\bar\nu_e$ beam from $\mu^-$ decay is
larger than for a $\pi$ decay beam.
Assume that one or various aligned experiments are capable of
distinguishing $\mu^+$, $\mu^-$, $e^++e^-$, and neutral current
events. One of these observables ($\mu^+$ appearance) is a tell-tale
signal of oscillations. From the other three observables one
can extract information on oscillation probabilities with
errors associated only with statistics, backgrounds, efficiencies and
cross sections, but with very small flux uncertainties. In total,
for each polarity, a $\mu$-decay facility could measure four channels
other than $\tau$ appearance. In principle this is sufficient to
determine (or severely constrain) two of
the three Euler angles ($\theta_{23}$ and $\theta_{13}$)
of the neutrino-mixing matrix in Eq.(\ref{CKM})
and (with a measurement of $E_\nu$) the
neutrino mass splitting $\Delta m^2_{23}$.
With a conventional $\pi$-decay beam such a
program would be out of reach\footnote{Charged pions and
kaons decay two orders of magnitude faster
than muons. Only if there was time, in a brief pion lifetime, to clean
up a pion beam of its kaon contamination by some electromagnetic
gymnastics, would a ``pion factory'' compete with a $\mu$-decay
race-track as a candidate neutrino factory.}.
The backgrounds to a wrong-sign $\mu$ signal are not associated
with the beam, but with the numerous decay processes that can
produce or fake such muons. Pions masquerading as muons can
be ranged out with great efficiency, particularly in competition
with the generally energetic primary muon from the leptonic vertex.
Muonic charged currents are not the most threatening background,
since one would also have to miss the right-sign muon.
Electronic charged currents may singly produce charmed
particles, but the decays of the latter lead to muons of the ``right'' sign.
In any case, the
background from charm production and subsequent muonic
decay can be easily suppressed or studied by lowering
$E_\mu$ below the canonical 20 GeV we have been using. At
1/4 the stored muon energy the statistical appearance sensitivity
would be reduced by a factor of 2, while charm production would be
almost completely kinematically forbidden (this is
an extreme example, in that it might jeopardize muon recognition).
Neutral current events
in which a hadron decays into a muon early or straight enough
are presumably the main
hazard. Experience with NOMAD --admittedly not a coarse-grained
very large device-- demonstrates that an `isolation' cut in the transverse
momentum of the muon candidate relative to the direction
of the hadronic jet is extremely efficient
\cite{bcr2}. In these events, an additional cut of the missing
transverse momentum
(carried mainly by the outgoing neutrino in the neutral-current
leptonic vertex) relative to the muon plus hadrons would also
help. Even detectors as coarse-grained as MINOS \cite{MINOS}
or NICE \cite{NICE}
have jet-direction
reconstruction capabilities and could implement similar cuts.
Without a specific detector in mind and
considerable simulation toil we cannot answer the question of
how large the above backgrounds would be. A question that we can
answer is how small they would have to be not to interfere
with the signal. For our standard set-up and an unoptimized
$E_\mu=$ 20 GeV, there would be a grand total of a few times $10^5$
events for $n_\mu=2\times 10^{20}$ $\mu$ decays at $L=732$ km.
To compete with a limiting appearance signal of a few
wrong-sign muons may be difficult.
At some 10 times larger $L$ the
low-mass edge of the sensitivity domain
would change very little, as shown in
Eq.(\ref{sensitivity}) and Fig.~\ref{fig:lejos},
while the background would be reduced by two orders of magnitude,
a level at which it would not represent a challenge.
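The two orders of magnitude quoted here follow from simple flux
scaling; schematically (a sketch, ignoring detector and spectral
details),
\[
N_{\rm tot}\propto \frac{1}{L^{2}},\qquad
N_{\rm osc}\propto \frac{1}{L^{2}}\,
\sin^{2}\!\left(\frac{\Delta m^{2}L}{4E_\nu}\right)
\simeq \frac{1}{L^{2}}\left(\frac{\Delta m^{2}L}{4E_\nu}\right)^{2}
={\rm const.}\;,
\]
so a tenfold increase in $L$ suppresses the total
(background-generating) event rate by $10^{2}$, while the
small-$\Delta m^2$ appearance signal is essentially unchanged.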
The overall optimization of the signal-to-noise ratio is
a multi-parameter task that we cannot engage in.
\section{Summary.}
The inevitable conclusion of a description of
atmospheric and solar neutrino data
as two independent two-by-two neutrino-mixing effects is that
the only hope to corroborate the atmospheric results with
artificial beams is based on
long baseline experiments looking for $\tau$ appearance
or $\mu$ depletion. These experiments would have great difficulty
in covering the parameter space favoured by SuperK.
If the same data are analysed in a three-generation mixing
scenario, the conclusions are very different:
long baseline experiments
searching for $\nu_e\leftrightarrow \nu_\mu$
transitions regain interest,
since these oscillations (even if primarily responsible
for the long-distance solar effect) will in general also occur over the shorter
range implied by the atmospheric data.
We have studied $\nu_\mu\leftrightarrow \nu_e$ oscillations
in the context of a neutrino factory. Rather than concentrating
on the $\nu_\mu\to \nu_\tau$ process, the observation of which
is notoriously difficult, we have outlined the possibilities
opened by experiments searching, not only for an
unexpected $e/\mu$ production ratio, but very preferably
for the appearance of ``wrong
sign'' muons\footnote{In principle, but not in practice,
the search for wrong-sign $e$s would be equally useful.}:
$\mu^\pm$s in a beam from decaying $\mu^\mp$s.
We have not dealt in detail with the problem of backgrounds.
A neutrino factory
may provide beams clean and intense enough, not only to
corroborate the strong indication for neutrino oscillations
gathered by the SuperK
collaboration, but also to launch a program of precision
neutrino-oscillation physics.
The number of useful observables is sufficient to determine
or very significantly constrain
the parameters $\theta_{23}$ and $\theta_{13}$
and $\Delta m^2_{23}$ of a standard three-generation mixing
scheme. Only if the neutrino mass differences
are much larger than we have assumed would
a neutrino factory serve to measure the remaining mixing
parameters of the very clean neutrino-mixing sector.
It is instructive to compare the current programs to measure the CKM
mixing matrices in the quark and lepton sectors. Considerable
effort is being invested, sometimes in duplicate, to improve our
knowledge of the quark sector case, mainly via better studies
of $B$-decay. Even though non-zero neutrino masses are barely
established, the neutrino sector of the theory can be convincingly
argued to herald physics well beyond the standard model \cite{Wil}.
It is in this perspective --with dedicated $B$-physics experiments
and beauty factories in the background-- that a neutrino
factory should be discussed.
All by itself, as part of a muon-collider
complex or even as a step in its R\&D, a neutrino factory
seems to be a must.
\section{Acknowledgements}
We acknowledge useful conversations with B. Autin,
L. Camilleri, L. Di Lella,
J. Ellis, J. G\'omez-Cadenas, O. Mena, P. Picchi, F. Pietropaolo,
C. Quigg, J. Steinberger, P. Strolin and J. Terr\'on.
M. B. G. thanks the CERN Theory Division for
hospitality during the initial stage of this work; her work was partially
supported as well by CICYT project AEN/97/1678.
\section{Introduction}
{\it To disavow an error is to invent retroactively.}\\
\hspace*{2in} ---Johann Wolfgang von Goethe
In a classical information system the basic error is represented by a
$0$ becoming a $1$ or vice versa.
The characterization of such errors is in terms of
an error rate, $\epsilon$,
associated with such flips.
The correction of such errors is achieved by appending
check bits to a block of information bits.
The redundancy provided by the check bits can be
exploited to determine the location of errors using the
method of syndrome decoding.
These codes are characterized by a certain capacity
for error-correction per block.
Errors at a rate less than the capacity of the
code are {\it completely} corrected.
Now let us look at a quantum system.
Consider a single cell in a quantum register.
The error here can be due to a random unitary
transformation or by entanglement with the environment.
These errors cannot be defined in a graded sense because
of the group property of unitary matrices
and the many different ways in which the entanglements
can be expressed.
Let us consider just the first type of error,
namely that of random unitary transformation.
If the qubit is in the state $| 0\rangle$, it can
become
$a |0\rangle + b | 1 \rangle$.
Likewise, the state $| 00\rangle$ can become
$a | 00\rangle + b | 01 \rangle + c | 10 \rangle + d | 11\rangle $.
In the initialization of the qubit a similar
error can occur\cite{Ka98b}.
If the initialization process consists of collapsing
a random qubit to the basis state $| 0\rangle$, the definition
of the
basis direction can itself have a small error associated
with it.
This error is analog and so, unlike error in classical
digital systems, it cannot be controlled.
In almost all cases, therefore, the qubits will have
superposition states, although the degree of superposition
may be very low.
From another perspective, classical error-correction codes
map the information bits into codewords in a higher
dimensional space so that if just a few errors occur in the
codeword, their location can, upon decoding, be identified.
This identification is possible because the errors perturb
the codewords, {\it locally}, within small spheres.
Quantum errors, on the other hand, perturb the information
bits, in a {\it nonlocal} sense, to a superposition of many states, so the concept of
controlling all errors by using a higher dimensional
codeword space cannot be directly applied.
According to the positivist understanding of
quantum mechanics, it is essential to speak
from the point of view of the observer
and not ask about any intrinsic information in
a quantum state\cite{Ka98a}.
Let's consider, therefore, the representation of
errors by means of particles in a
register of $N$ states.
We could consider errors to be
equivalent to either $n$ bosons or fermions.
Bosons, in a superpositional state
follow the Bose-Einstein statistics.
The probability of each pattern will
be given by
\begin{equation}
\frac{1}{\left( \begin{array}{c}
N + n - 1 \\ n
\end{array}
\right) }.
\end{equation}
So for a register of two cells, where the patterns $01$ and $10$
cannot be told apart, we can only distinguish between 3 states:
$00$, $01$ or $10$, and $11$.
Each of these will have a probability of $\frac{1}{3}$.
To the extent this distribution departs from
that of classical mechanics, it represents nonlocality at
work.
If the particles are fermions, then they are
indistinguishable,
and with $n$ error objects in $N$ cells, we have
each with the probability
\begin{equation}
\frac{1}{\left( \begin{array}{c}
N \\ n
\end{array}
\right) }.
\end{equation}
Once the error states have been identified with particles,
these statistics will be manifested by any group
of such particles.
If the cells are isolated then their histories
cannot be described by a single unitary transformation.
Like the particles, the errors will also be
subject to the same statistics.
These statistics imply that the errors will not
be independent, an assumption that is basic to
the error-correction schemes examined in the
literature.
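The counting behind these expressions is easily made concrete (a
small numerical sketch added for illustration; it is not part of
the original argument):
\begin{verbatim}
from math import comb

def bose_patterns(N, n):
    # distinguishable placements of n bosonic errors in N cells
    return comb(N + n - 1, n)

def fermi_patterns(N, n):
    # distinguishable placements of n fermionic errors in N cells
    return comb(N, n)

print(bose_patterns(3, 1), fermi_patterns(3, 1))  # 3 3: equal for n = 1
print(bose_patterns(3, 2), fermi_patterns(3, 2))  # 6 3: differ for n > 1
\end{verbatim}
Each admissible pattern carries the uniform probability given by
the two expressions above, which is precisely the departure from
classical, independent-error counting.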
To summarize, important characteristics of quantum errors
that must be considered
are {\it component proliferation},
{\it nonlocal effects} and {\it amplitude error}.
All of these have no parallel in the classical case.
Furthermore,
quantum errors are analog and so the system cannot
be shielded below a given error rate. Such shielding
is possible for
classical digital systems.
We know that a computation need not require
any expenditure of energy if it is cast in the form
of a reversible process.
A computation which is not reversible must involve
energy dissipation.
Considering conservation of
information+energy to be a fundamental principle, a
correction of random errors in the qubits
by unitary transformations, without any expenditure
of energy, violates this principle.
Can we devise error-correction coding for
quantum systems? To examine this,
consider the problem of protein-folding, believed to
be NP-complete,
which is, nevertheless, solved efficiently by
Nature.
If a quantum process is at the basis of this
amazing result, then it is almost certain that
reliable or fault-tolerant quantum computing must exist
but, paying heed to the above-mentioned conservation law,
it appears such computing will require some lossy operations.
In this note we examine the currently
investigated models of quantum error-correction
from the point of view of their limitations.
We also consider how quantum errors affect
a computation in comparison with classical errors.
\section{Representing quantum errors}
{\it Sed fugit interea, fugit inreparabile tempus.\\
But meanwhile it is flying, irretrievable time is flying.}\\
\hspace*{2in} ---Virgil
Every unitary matrix can be transformed by a suitable
unitary matrix into a diagonal matrix with all its
elements of unit modulus.
Since the reverse is also true, quantum errors can play havoc.
The general unitary transformation representing errors
for a qubit is:
\begin{equation}
\frac{1}{\sqrt {||e_1||^2 + ||e_2||^2}} \left[ \begin{array}{cc}
e_1^* & e_2^* \\
e_2 & -e_1 \\
\end{array} \right] .
\end{equation}
These errors ultimately change the probabilities of
the qubit being decoded as a $0$ and as a $1$.
From the point of view of the user, when the quantum state
has collapsed to one of its basis states,
it is correct to speak of an error rate.
But such an error rate cannot be directly applied to
the quantum state itself.
Unlike the classical digital case, quantum errors cannot
be completely eliminated because they are essentially analog
in nature.
The unitary matrix (3) represents
an infinite number of cases of error.
The error process is an analog process,
and so, in general, such errors cannot be corrected.
From the point of view of the qubits, it is a
nonlocal process.
If it is assumed that the error process can be represented
by a small rotation and the initial
state is either a $0$ or a $1$, then this rotation
will generate a superposition of the two states
but the relative amplitudes will be different
and these could be exploited in some specific situations
to determine the starting state.
But, obviously, such examples represent trivial cases.
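As a concrete instance (a schematic example added for clarity): a
pure rotation error through an angle $\theta$ acting on
$| 0\rangle$ gives
\[
| 0\rangle \;\rightarrow\; \cos\theta \,| 0\rangle
+ \sin\theta \,| 1 \rangle ,
\]
so a measurement in the computational basis returns the wrong value
with probability $\sin^2\theta$. For small $\theta$ the relative
amplitudes do single out the starting state, but since $\theta$ is
a continuous variable, no threshold below which the error vanishes
can be guaranteed.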
The error process may be usefully represented
by a process of quantum diffusions and phase
rotations.
Shor\cite{Sh95} showed how the decoherence
in a qubit could be corrected by a system
of triple redundancy coding where each qubit is
encoded into nine qubits as follows:
\[|0\rangle \rightarrow \frac{1}{2\sqrt 2}
( |000\rangle + | 111\rangle )
( |000\rangle + | 111\rangle )
( |000\rangle + | 111\rangle )\],
\begin{equation}
|1\rangle \rightarrow \frac{1}{2\sqrt 2}
( |000\rangle - | 111\rangle )
( |000\rangle - | 111\rangle )
( |000\rangle - | 111\rangle ).
\end{equation}
Shor considers the decoherence process to be one where
a qubit decays into a weighted amplitude
superposition of its basis states.
In parallel to the assumption of independence of
noise in classical information theory,
Shor
assumes that only one qubit out of the
total of nine decoheres.
Using
a Bell basis, Shor then shows that one can determine the
error and correct it.
But this system does not work if more than one
qubit is in error.
Since quantum error is analog, each qubit will be
in some error and so this scheme will, in practice,
not be useful in {\it completely} eliminating
errors.
The question of decoherence, or error, must be considered as
a function of time.
One may use the exponential function $\lambda e^{-\lambda t}$
as a measure of the decoherence probability of the
amplitude of the qubit.
The measure of decoherence that has taken place by time $t$
will then be given by the probability, $p_t$:
\begin{equation}
p_t = 1 - \lambda e^{-\lambda t}.
\end{equation}
In other words, by time $t$, the amplitude of the
qubit would have decayed to a fraction
$(1 - \lambda e^{-\lambda t})$ of its original value.
At any time $t$, there is a $100 \%$ chance that the
probability amplitude of the initial state will
be a fraction $\alpha_k < 1$ of the
initial amplitude.
If we consider a rotation error in each qubit through angle $\theta$,
there exists some $\theta_k$ so that the probability
\begin{equation}
Prob ( \theta > \theta_k) \rightarrow 1.
\end{equation}
This means that we cannot represent the qubit error
probability by an assumed value $p$ as was done
by Shor in analogy with the classical case.
In other words, there can be no guarantee of
eliminating decoherence.
\section{Recently proposed error-correction codes}
{\it The fox knows many things---the hedgehog one {\em big} one.}\\
\hspace*{2in} ---Archilochus
The recently proposed models of
quantum error-correction codes assume
that the error in the
qubit state $a |0\rangle + b | 1 \rangle$
can be either a bit flip $ |0 \rangle \leftrightarrow | 1 \rangle$,
a phase flip between the relative phases of $| 0\rangle$
and $| 1 \rangle $, or both \cite{St96,Sh96,Pr97}.
In other words, the errors are supposed to take the pair
of amplitudes $(a,b)$
to either $(b,a)$, $(a, -b)$, or $(-b,a)$.
But these three cases represent a vanishingly small subset
of all the random unitary transformations associated
with arbitrary error.
These are just three of the infinity of rotations
and diffusions that the
qubit can be subject to.
The assumed errors,
which are all local,
do not, therefore, constitute a distinguished set on
any physical basis.
In one proposed error-correction code,
each of
the states
$ |0\rangle $ or $ | 1 \rangle$
is represented by
a 7-qubit code, where the
strings of the codewords represent the
codewords of the single-error correcting
Hamming code, the details of which we don't
need to get into here.
The code for $| 0 \rangle$ has an even number of
$1$s and the code for $|1 \rangle$ has an odd number
of $1$s.
\[| 0\rangle_{code} = \frac{1}{\sqrt8} (|0000000\rangle + |0001111\rangle+
|0110011\rangle + |0111100\rangle\\\]
\begin{equation}
+ |1010101\rangle
+|1011010\rangle +| 1100110\rangle + |1101001\rangle),
\end{equation}
\[|1\rangle_{code} = \frac{1}{\sqrt8} (|1111111\rangle + |1110000\rangle+
|1001100\rangle + |1000011\rangle \\\]
\begin{equation}
+ |0101010\rangle
+|0100101\rangle +| 0011001\rangle + |0010110\rangle).
\end{equation}
As mentioned before, the errors are assumed to be either in terms of
phase-flips or bit-flips.
Now further ancilla bits---three in total---are appended to compute the
syndrome values.
The bit-flips, so long as limited to one in each group, can
be computed directly from the syndrome.
The phase-flips are likewise computed, but only after a change of
the bases has been performed.
Without going into the details of these steps, which are
a straightforward generalization of classical error
correction theory, it is clear that the assumption
of single phase and bit-flips is very restrictive.
In reality, errors in the 7-qubit words will generate a
superposition state of 128 sequences,
rather than the 16 sequences of equations (7) and (8), together with
16 other sequences of one-bit errors, where the errors
in the amplitudes are limited to the phase-flips mentioned
above.
{\it All kinds of bit-flips}, as well as
modifications of the amplitudes, will be a part of
the quantum state.
We can represent the state, with the appropriate
phase shifts associated with
each of the 128 component states, as follows:
\begin{equation}
|\phi\rangle = e^{i \theta_{1} } a_1 |0000000\rangle +
e^{i \theta_{2} } a_2 |0000001\rangle + \cdots +
e^{i \theta_{N} } a_N |1111111\rangle .
\end{equation}
While the amplitudes of the newly generated components
will be small, they would, nevertheless, have a
non-zero error probability.
These components cannot be corrected by the code
and will, therefore, contribute to a residual
error probability.
The amplitudes implied by (9) will, for the 16 sequences
of the original codeword
after the error has enlarged the set, be somewhat different from
the original values.
So if we speak just of the 16 sequences,
the amplitudes cannot be preserved without error.
Furthermore, the phase errors in (9) cannot be corrected.
These phases are of crucial importance in
many recent quantum algorithms.
It is normally understood that in classical systems, if the
error rate is smaller than a certain value, the error-correction
system will correct the errors.
In the quantum error-correction systems, this important
criterion is violated.
Only certain specific errors are corrected; others, even
if smaller, are not.
In summary,
the proposed models are based on a
local error model while real errors
are nonlocal where we must consider the issues of
component proliferation and amplitude errors.
These codes are not capable of completely
correcting small errors that cause
new entangled component states to be created.
\section{The sensitivity to errors}
The nonlocal nature of the quantum errors is seen
clearly in the sensitivity characteristics of these
errors.
Consider that some data sets related to a problem are being
simultaneously processed by
a quantum machine.
Assume that by some process of phase switching and diffusion
the amplitude of the desired solution out of the entire set is slowly
increased at the expense of the others.
Nearing the end of the computation, the sensitivity of the
computations to errors will increase dramatically,
because the errors will, proportionately, increase for
the smaller amplitudes.
To see it differently, it will be much harder to reverse the
computation if the change in the amplitude or phase
is proportionally greater.
This means that the
``cost'' of quantum error-correction will depend on the
state of the computing system.
Even in the absence of errors, the sensitivity
will change as the state evolves,
a result, no doubt, due to the nonlocal nature of quantum errors.
These errors can be considered to be present
at the stage of state preparation and through
the continuing interaction with the environment
and also due to the errors in the applied
transformations to the data.
In addition, there may exist nonlocal correlations
of qubits with those in the environment. The
effect of such correlations will be unpredictable.
Quantum errors
cannot be localized. For example,
when speaking of rotation errors, there always exists some
$\theta_k > 0$ so that $Prob (\theta > \theta_k) \rightarrow 1$.
When doing numerical calculations on a computer, it is
essential to have an operating regime that provides
reliable, fault-tolerant processing.
Such regimes exist in classical computing.
But the models currently under examination for
quantum computing cannot eliminate
errors completely.
The method of
syndrome decoding, adapted from the
theory of classical error-correcting codes,
appears not to be the answer to the problem of fault-tolerant
quantum computing.
New approaches to error-correction need to be investigated.
\section{Conclusions}
Nonlocality, related both to the evolution of the
quantum information system and errors,
defines a context in which error-correction based
on syndrome decoding will not work.
How should error-correction be defined then?
Perhaps through a system akin to associative
learning in spin glasses.
\section*{References}
\begin{enumerate}
\bibitem{Ka98a}
S. Kak, ``Quantum information in a distributed apparatus.''
{\it Foundations of Physics} 28, 1005 (1998).
\bibitem{Ka98b}
S. Kak, ``On initializing quantum registers and quantum gates.''
LANL e-print quant-ph/9805002.
\bibitem{Pr97}
J. Preskill,
``Fault-tolerant quantum computation.''
LANL e-print quant-ph/9712048.
\bibitem{Sh95}
P.W. Shor, ``Scheme for reducing decoherence in quantum computer memory,''
{\it Phys. Rev. A} 52, 2493 (1995).
\bibitem{Sh96}
P.W. Shor, ``Fault-tolerant quantum computation,''
LANL e-print quant-ph/9605011.
\bibitem{St96}
A.M. Steane, ``Error correcting codes in quantum theory,''
{\it Phys. Rev. Lett.} 77, 793 (1996).
\end{enumerate}
\end{document}
\section{Introduction}
The interest in
low-mass white dwarfs has increased recently because they appear frequently
as a binary component, especially in several millisecond pulsar systems.
Detailed evolutionary calculations of possible binary scenarios existed only
for isolated cases or limited mass ranges with
$ M > 0.2$~M$_{\odot}$ (Kippenhahn et al.\ 1967, 1968, Refsdal \& Weigert 1969,
Giannone et al.\ 1970, Iben \& Tutukov 1986, Castellani et al.\ 1994). Only
recently also calculations for $ M < 0.2$~M$_{\odot}$ have been presented
(Alberts et al.\ 1996, Sarna et al.\ 1998).
The need for more extended sets of white dwarf models with helium cores prompted
several authors to generate them from ad hoc assumed, simplified
starting configurations which appear not to be consistent with
evolutionary considerations (Althaus \& Benvenuto 1997, Benvenuto \& Althaus
1998, Hansen \& Phinney 1998). These calculations are based on the implicit
assumption that the contraction time to a {\em real\/} white dwarf structure
is short compared to the cooling time itself. Since also the size of
any unprocessed hydrogen-rich envelope can only be guessed, no definitive
statement on the importance of residual hydrogen burning can be made.
Thus, we felt it necessary to provide a grid of evolutionary models for
low-mass white dwarfs with structures being as consistent as possible with their expected
evolutionary history, which can be used with
confidence for interpreting observational data. In our study we aimed at addressing
the following questions in a systematic way:
\begin{itemize}
\item How large are the masses of the outer, still unburned hydrogen-rich
envelopes on top of the helium cores? Since the size of a white
dwarf depends critically on the mass content of its unprocessed
envelope, this question is closely related to the mass-radius
relation of white dwarfs.
\item Can the cooling properties of low-mass white dwarfs be reconciled
with estimated spin-down ages of millisecond pulsars?
\item Are simplified model calculations useful in interpreting
observational data?
\end{itemize}
\section{The evolutionary computations}
We used an evolutionary code with the following
basic input physics (Bl\"ocker 1995):
Nuclear burning was accounted for by a nucleosynthesis network
inclu\-ding 31 isotopes and 74 reactions up to carbon burning.
Radiative opacities were taken from Iglesias et al.\ (1992),
complemented with those of Iglesias \& Rogers (1996) and
Alexander \& Ferguson (1994) for the low temperature range,
all for $ (Y,Z) = (0.28, 0.02)$.
Convection was treated according to the mixing length theory.
The mixing length parameter was chosen to be $\alpha = 1.7$,
calibrated by computing a solar model.
Coulomb corrections of the equation of state have been taken from
Slattery et al.\ (1982).
The outcome of the mass transfer in a close binary system was simulated in the
following simple way: the evolution of a 1 M$_{\odot}$ model
was calculated up to the tip of the red giant branch (RGB),
and depending on the desired final mass, high mass loss
was switched on at the appropriate positions. When the model started to leave
the RGB, mass loss was virtually switched off (cf.\ Iben \& Tutukov 1986,
Castellani et al.\ 1994). More details of our calculations are given in
Driebe et al.\ (1998).
Our method ensures that these red-giant remnants (= pre-white dwarf models)
have a structure which is
consistent with their previous evolutionary history, with an electron
degenerate helium core and a (mainly) unprocessed envelope.
The following evolution of the models across the Hertzsprung-Russell diagram
and down the cooling path depends only on their actual structure (and
mass-loss for larger luminosities), and not on the details
of previous heavy mass-loss episodes during the supposed binary evolution,
provided the mass donor regains its thermal equilibrium before it finally
shrinks below its Roche lobe.
\begin{figure}[th]
\plotone{driebet1.ps}
\caption{ \label{fig1}
Hertzsprung-Russell diagram with complete evolutionary tracks of RGB remnants
with different masses (from top: 0.414, 0.331, 0.300, 0.259, 0.234,
0.195, 0.179~M$_{\odot}$). The long-dashed curve shows the
evolutionary track of the 1 M$_{\odot}$ model we used for creating
the remnants by mass loss.
The short-dashed loops outline the very rapid redward excursions of the 0.259 and
0.234~M$_{\odot}$ models caused by hydrogen shell flashes.
}
\end{figure}
Fig.~\ref{fig1} illustrates the result of our calculations in the
Hertzsprung-Russell diagram, encompassing remnant masses from well below
0.2 up to above 0.4~M$_{\odot}$. The two sequences between 0.2 and
0.3~M$_{\odot}$ experienced typical thermal instabilities of thin burning shells
when the CNO cycle shuts off (cf.\ Kippenhahn et al.\ 1968, Iben \& Tutukov
1986, Castellani et al.\ 1994). The latter authors find CNO flashes for
masses above 0.3 M$_{\odot}$ only for Pop.\ II compositions.
\section{Structures and cooling properties}
We found a steep correlation between the remnant masses and the sizes of
their hydrogen-rich envelopes, ranging from $5\cdot 10^{-2}$ M$_{\odot}$ for our
0.179 M$_{\odot}$ model, down to $2\cdot 10^{-3}$ M$_{\odot}$ for
0.414 M$_{\odot}$. Our envelope masses agree, for the mass range
in common, $ M > 0.3$ M$_{\odot}$, with those given in Castellani et al.\
(1994). For $ M < 0.2$ M$_{\odot}$ they agree in mass
{\em and\/} helium enrichment with those of the Sarna et al.\ (1998) models.
It should be emphasized that these evolutionary envelope masses are
larger, for a given
remnant mass, than those adopted recently by Benvenuto \& Althaus (1998) and
Hansen \& Phinney (1998).
\begin{figure}[ht]
\plotone{driebet2.ps}
\caption{ \label{fig4}
$\log g-\log T_{\rm eff}$ diagram with evolutionary tracks of white
dwarfs with different masses (from top) of 0.179, 0.195, 0.234, 0.259,
0.300, 0.331, 0.414 M$_{\odot}$ with helium cores, and of 0.524,
0.605, 0.696, 0.836, 0.940 M$_{\odot}$ with carbon-oxygen cores.
Isochrones are given between 0.3 and 10 Gyr, and the position of the
PSR J1012+5307 companion is also indicated (van Kerkwijk et
al.\ 1996), together with the 6~Gyr isochrone (dotted).
}
\end{figure}
Hydrogen burning continues to increase the helium core at the expense of
the envelope, and the pp cycle
takes over on the cooling branch and completely determines the cooling rate
for the lower-mass models (Webbink 1975, Castellani et al.\ 1994). The
continued burning at the base of the envelope reduces its mass considerably
along the cooling path, leading to a complicated dependence of the white
dwarf's size on total mass and effective temperature. Using our evolutionary
mass-radius relationships instead of the ones available in the literature,
larger white dwarf masses would follow for given surface gravities and
effective temperatures (see Driebe et al.\ 1998 for more details).
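The direction of this shift follows directly from the definition of
the surface gravity (a schematic remark added for clarity):
\[
g=\frac{GM}{R^{2}} \quad\Longrightarrow\quad
M=\frac{g\,R^{2}(M,T_{\rm eff})}{G},
\]
so, at fixed measured $\log g$ and $T_{\rm eff}$, whichever
mass-radius relation predicts the larger radius at a given mass
yields the larger inferred mass; the thick, partly burning
hydrogen envelopes of the evolutionary models inflate the radii in
exactly this sense.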
At the hot end of the
cooling branch the hydrogen luminosity can exceed the gravothermal
contribution to the white dwarf's energy budget by up to a factor of 100.
Even at effective temperatures as low as 5\,000~K the pp cycle still
dominates in the models with $ M < 0.2$~M$_{\odot}$.
Since the evolution along the cooling sequence is slowed down accordingly,
the isochrones for helium dwarfs differ in shape from those for CO dwarfs:
they are shifted and turned over to the left (Fig.~\ref{fig4}).
The position of the PSR J1012+5307 white-dwarf companion is met by our
0.195 M$_{\odot}$ track for a cooling age of 6 Gyr, which
agrees well with the spin-down age estimate of the pulsar, 7 Gyr. It should be
mentioned that this result is in close agreement with that of Sarna et al.\
(1998) who made a rather detailed modeling of this particular system (see
also Alberts et al.\ 1996).
\section{Comparison with non-evolutionary models}
\begin{figure}[ht]
\plotone{driebet3.ps}
\caption{\label{fig5}
$\log g-\log T_{\rm eff}$ diagram with a 0.195~M$_{\odot}$
evolutionary and non-evolutionary white-dwarf track as computed by
us with identical chemical structures, and with three
non-evolutionary tracks taken from Althaus \& Benvenuto (1997).
Selected evolutionary ages are marked, and the
position of the PSR J1012+5307 companion as well.
}
\end{figure}
It is very instructive to compare the behaviour of our evolutionary models
with that of non-evolutionary ones, as is done in Fig.~\ref{fig5}
(see also Bl\"ocker et al.\ 1997).
Despite the completely different initial structures of the two sets of
non-evolutionary models shown in the figure, and somewhat
different physical assumptions as well, the cooling properties are
identical, and the models predict an age of only 0.4 Gyr for the
PSR J1012+5307 companion, at variance with the pulsar's spin-down age. Note
also that the structure of our non-evolutionary model does not approach that of the
evolutionary model before an effective temperature of about 5\,000 K is
reached.
From a thorough comparison between evolutionary and non-evolutionary
models (cf. Bl\"ocker et al.\ 1997) we can make
the following safe conclusions:
\begin{itemize} \item
Evolutionary white dwarf models are more compact than non-evolutionary
models of the same mass and chemical structure.
\item
Envelope masses are inversely correlated with the white-dwarf mass.
\item
At lower masses, hydrogen burning via the pp cycle controls the pace
of cooling.
\item
The thermo-mechanical structures of low-mass non-evolutionary models do
not converge with those of evolutionary models within a
reasonable time.
\end{itemize}
Given these facts, the use of non-evolutionary helium white-dwarf models to
interpret observational data appears not to be advisable.
\acknowledgments
F.H. and T.B. thank the DFG for financial support (grants Scho 394/13 and
Ko 738/12).
\section{Introduction}
The production mechanism\footnotetext{Talk at the Fourth Workshop
on Quantum Field Theory Under the Influence of External Conditions,
Leipzig, 14--18 September, 1998} of the intense flashes of light
which occur at the end of
bubble collapse in sonoluminescence remains mysterious.\cite{review}
A particularly intriguing possibility, put forth by Schwinger,
was that the Casimir effect in some dynamical manifestation was
responsible.\cite{js1,js2,js3,js4,js5} This idea was extended first
by Eberlein,\cite{eberlein} and later by Carlson, Liberati, and
others.\cite{carlson,visser}
Let us start by reviewing the relevant numbers for sonoluminescent
light emission. Typically, a bubble of air in water is held in the
node of an acoustic standing wave with an overpressure of about 1
atmosphere,
at a frequency of 20 kHz. The bubble goes from a maximum radius of $\sim
4\times
10^{-3}$ cm to a minimum radius of $\sim4\times 10^{-4}$ cm with a
time
scale $\tau_c$ of $10^{-5}$ s. The flash of light, which occurs near
minimum
radius, has a time scale $\tau_f$ of less than $10^{-11}$ s, and is
characterized by the emission of $10^6$ optical photons, so about
$10$ MeV of
light energy is emitted per flash.
It seems likely that the adiabatic approximation should be valid. If
the
flash scale is not orders of magnitude less than $10^{-11}$ s, that
scale is
long compared to the optical time scale, $\tau_o\sim10^{-15}$ s. In
that case,
we can immediately test the Casimir idea. The Casimir energy of a
dielectric
ball in vacuum is equivalent to that of a bubble in a dielectric
medium, and
has recently been definitively evaluated.\cite{brevik,barton} The
Casimir
energy of a dilute ball, of dielectric constant $\epsilon$,
$|\epsilon-1|\ll1$, of radius $R$
is
\begin{equation}
E={23\over1536\pi R}(\epsilon-1)^2,
\label{casball}
\end{equation}
which may be alternatively calculated by summing the van der Waals
energies
between the molecules that make up the medium.\cite{milton}
This value is 10 orders of magnitude too small to be relevant, as
well
as of the wrong sign. This is hardly a surprising result, since the
magnitude
of the effect is what one would expect from dimensional
considerations.
However, others have come to an opposite conclusion. In particular,
Schwinger,
\cite{js2} without relying on detailed calculations, asserted that
the
`dielectric energy, relative to that of the vacuum' was
\begin{equation}
E_c=-\int{(d{\bf r})(d{\bf k})\over(2\pi)^3}{1\over2}
k\left(1-{1\over
\epsilon({\bf r})^{1/2}}\right).
\label{casbulk}
\end{equation}
Although he argued this was true for slow variation in the dielectric
constant, he applied it to a hole of radius $a$ with a dielectric
medium,
therefore with a discontinuous boundary:
\begin{equation}
E_c={R^3\over12\pi}K^4\left(1-{1\over\epsilon^{1/2}}\right).
\label{casbulk2}
\end{equation}
Here $K$ represents an ultraviolet cutoff, which
Schwinger took to be $K\sim 2\times 10^5$ cm$^{-1}$, which gives a
sufficient energy, $E_c\sim 6$ MeV, to be relevant.
This conclusion is supported by the work of Carlson et
al.,\cite{carlson}
who obtain the identical result.
Why is there a discrepancy of the conclusion of these authors
with the result given in Eq.~(\ref{casball})?
The answer is simple. The term that Schwinger \cite{js2} and Carlson
et
al.\cite{carlson} keep is indeed present as a quartically divergent
term if one simply sums normal modes. But this is an intrinsic
contribution
to the self-energy of the dielectric medium. It was quite properly
subtracted off at the outset in the first paper on the Casimir energy
of
a dielectric ball,\cite{me} as it was in Schwinger's own detailed
papers
on the Casimir effect.\cite{jscas} A detailed analysis of this issue
is given
in Ref.~\cite{milton}. As Barton has noted, such divergent
volume and surface terms
`would be combined with other contributions to the bulk and to the
surface
energies of the material, and play no further role if one uses the
measured values.' \cite{barton} In other words, they serve to
renormalize
the phenomenological parameters of the model.
Further support for the irrelevance of the bulk energy comes from the
above-noted identity between the dilute Casimir energy and the van
der
Waals energy.\cite{brevik,barton,milton}
This would seem {\it prima facie\/} evidence that the
finite remainder is unambiguously determined. Note that the summed
van der
Waals energy must go like $(\epsilon-1)^2$, not the $\epsilon-1$
behavior
that Eq.~(\ref{casbulk}) displays.
\section{Acceleration and Temperature}
It seems plausible that the dynamical Casimir effect is closely
allied
with the so-called Unruh effect, \cite{unruh} wherein an accelerated
observer, with acceleration $a$, sees a bath of photons with
temperature $T$,
\begin{equation}
T={a\over2\pi}.
\label{unform}
\end{equation}
Indeed, the observed radiation in sonoluminescence is consistent with
the tail of a blackbody spectrum, with temperature $\sim$20,000
K.\footnote{The
temperature may be even higher. If so, $\tau_f$ is correspondingly
reduced.}
That is,
$kT$ is about 1 eV. Let us, rather naively, apply this to the
collapsing
bubble, where $a=d^2 R/dt^2\sim R/\tau_f^2$, where $\tau_f$ is some
relevant
time scale for the flash. We then have
\begin{equation}
kT\sim{R\over(c\tau_f)^2}\hbar c,
\end{equation}
or
\begin{equation}
1 \,\mbox{eV}\sim {10^{-3} \mbox{cm}\, 2\times 10^{-5}
\mbox{eV-cm}\over
\tau_f^2(3\times
10^{10}\mbox{cm\,s}^{-1})^2}\sim{10^{-29}\mbox{eV}\over\tau_f^2
(\mbox{s}^2)}.
\end{equation}
That is, $\tau_f\sim10^{-15}$ s, which seems implausibly short; it
implies a
characteristic velocity $R/\tau_f\sim10^{12}$ cm/s $\gg c$. It is
far shorter
than the upper limit to the flash duration, $10^{-11}$ s. Indeed, if
we
use the latter in the Unruh formula (\ref{unform}) we get a
temperature
about 1 milli Kelvin! This conclusion seems consistent with that of
Eberlein,\cite{eberlein} who indeed stressed the connection with
the Unruh effect, but whose numbers required superluminal velocities.
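Both numbers are easy to reproduce (a numerical sketch added here;
it keeps the $2\pi$ that the rough estimate above drops, so the
agreement is at the order-of-magnitude level):
\begin{verbatim}
import math

hbar_c = 1.97e-5     # eV cm
c = 3.0e10           # cm / s
k_B = 8.617e-5       # eV / K
R = 1.0e-3           # cm, characteristic bubble radius

def unruh_kT(tau):
    # kT = hbar a / (2 pi c) with a = R / tau**2
    return hbar_c * (R / tau**2) / (2.0 * math.pi * c**2)

tau_1eV = math.sqrt(hbar_c * R / (2.0 * math.pi * c**2))
print(tau_1eV)                   # ~1.9e-15 s for kT = 1 eV
print(unruh_kT(1.0e-11) / k_B)   # ~4e-4 K for tau_f = 1e-11 s
\end{verbatim}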
However, we must remain open to the possibility that discontinuities,
as in a shock, could allow changes on such short time scales without
requiring superluminal speeds. Indeed, Liberati et al.,\cite{visser}
following Schwinger's earlier suggestion,\cite{js1,js3} indeed assume
an extremely short time scale, so that rather than the adiabatic
approximation
discussed above being valid, a sudden approximation is more
appropriate.
We therefore turn to an analysis of that situation.
\section{Instantaneous collapse and photon production}
The picture offered by Liberati et al.\cite{visser} is that of the
abrupt
disappearance of the bubble at $t=0$, as shown in Fig.~\ref{fig1}.
\begin{figure}
\centering
\begin{picture}(200,100)
\thicklines
\put(100,0){\line(0,1){100}}
\thinlines
\put(50,50){\circle{20}}
\put(110,0){\line(1,1){90}}
\put(130,0){\line(1,1){70}}
\put(150,0){\line(1,1){50}}
\put(170,0){\line(1,1){30}}
\put(190,0){\line(1,1){10}}
\put(110,20){\line(1,1){80}}
\put(110,40){\line(1,1){60}}
\put(110,60){\line(1,1){40}}
\put(110,80){\line(1,1){20}}
\put(70,0){\line(1,1){20}}
\put(50,0){\line(1,1){40}}
\put(30,0){\line(1,1){60}}
\put(10,0){\line(1,1){40}}
\put(60,50){\line(1,1){30}}
\put(0,10){\line(1,1){40}}
\put(50,60){\line(1,1){40}}
\put(0,30){\line(1,1){70}}
\put(0,50){\line(1,1){50}}
\put(0,70){\line(1,1){30}}
\put(0,90){\line(1,1){10}}
\put(50,-10){\makebox(0,0){$t=0-$}}
\put(150,-10){\makebox(0,0){$t=0+$}}
\end{picture}
\caption{The sudden collapse of an otherwise static bubble.}
\label{fig1}
\end{figure}
On the face of it, this picture seems preposterous---the bubble
simply
disappears
and water is created out of nothing. It is no surprise that a large
energy
release would occur in such a case. Further, the static Casimir
effect
calculations employed in Ref.~\cite{visser} are invalid in this
instantaneously
changing model. Therefore, rather than computing Bogoliubov
coefficients
from the overlap of states belonging to two static configurations,
let us
follow the original methodology of Schwinger,\cite{js1,js3}
which is essentially equivalent.
As in Schwinger's papers, let us confine our attention to the
electric (TM)
modes. They are governed by the time-dependent Green's function
satisfying
\begin{equation}
(\partial_0\epsilon(x)\partial_0-\nabla^2)G(x,x')=\delta(x-x').
\end{equation}
The photon production is given by the effective two-photon source
\begin{equation}
\delta(JJ)=i\delta G^{-1}=i\partial_0\delta\epsilon(x)\partial_0.
\label{jj}
\end{equation}
The effectiveness for producing a photon in the momentum element
centered
about $\bf k$ is
\begin{equation}
J_k=\sqrt{{(d{\bf k})\over(2\pi)^3}{1\over2\omega}}\int(dx)
e^{-i({\bf k\cdot
r}-\omega t)}J(x),\quad \omega=|{\bf k}|.
\label{jk}
\end{equation}
Let us follow Schwinger and consider one complete cycle of
disappearance
and re-appearance of the bubble, which we assume disappears for a
time $\tau_c$:
For a bubble centered at the origin, the dielectric constant as a
function
of time within the volume of the bubble is then taken to be
\begin{equation}
r<R:\quad \epsilon(r)=1+(\epsilon'-1)\eta(\tau_c/2-|t|).
\end{equation}
Here $\epsilon'$ is the dielectric constant of all space when the
bubble is gone. The dielectric constant of the region outside the
volume occupied by the bubble is
\begin{equation}
r>R:\quad
\epsilon(r)=\epsilon+(\epsilon'-\epsilon)\eta(\tau_c/2-|t|).
\end{equation}
Here $\epsilon$ is the dielectric constant outside the bubble when
the
bubble is present.
Occurring here is the unit step function,
\begin{equation}
\eta(x)=\left\{\begin{array}{cc}
1,&x>0,\\
0,&x<0.\end{array}\right.
\end{equation}
Clearly, this model is based on the assumption
that the disappearance time is short compared to the complete cycle
time of bubble collapse and re-expansion.
In the spirit of a first approximation, let us suppose all the
dielectric
constants are nearly unity, that is, that we are dealing with dilute
media. Let us further assume, appropriate to the instantaneous
approximation,
that the medium is a gas, which is capable of instantaneously filling
the
bubble. Then because the deviation of the dielectric constant from
unity is proportional to the matter number density $N$,
\begin{equation}
\epsilon-1=4\pi N\alpha,
\end{equation}
where $\alpha$ is the {\it constant\/} molecular polarizability,
matter conservation implies
\begin{equation}
(\epsilon'-1)V=(\epsilon-1)(V-v),
\end{equation}
where $V$ is the volume of all space, and $v$ is the volume of the
bubble.
Thus the change of the dielectric constant inside the bubble, and
outside,
respectively, is
\begin{eqnarray}
\delta\epsilon_{\rm in}&=&(\epsilon'-1)\eta(\tau_c/2-|t|),\nonumber\\
\delta\epsilon_{\rm out}&=&(\epsilon'-\epsilon)\eta(\tau_c/2-|t|)
=-(\epsilon'-1){v\over V-v}\eta(\tau_c/2-|t|).
\end{eqnarray}
The latter term here appears to be very small, and was therefore
disregarded
in Ref.~\cite{js1,js3,visser}. However, we will see that the
inclusion of
this term could be significant.
{}From Eqs.~(\ref{jj}) and (\ref{jk}), the two-photon production
amplitude is
proportional to ($v\ll V$)
\begin{eqnarray}
J_kJ_{k'}&=&\sqrt{{(d{\bf k})\over(2\pi)^3}{(d{\bf k'})\over(2\pi)^3}
{1\over2\omega2\omega'}}\int(d{\bf r})\int_{-\tau_c/2}^{\tau_c/2}
dt \,e^{-i({\bf k+k')\cdot
r}+i(\omega+\omega')t}(-i\omega\omega')\nonumber\\
&&\times(\epsilon'-1)\left[\eta(R-r)-{v\over
V}\eta(r-R)\right]\nonumber\\
&\propto&(\epsilon'-1)\int_{-\tau_c/2}^{\tau_c/2}
dt\,e^{i(\omega+\omega')t}
(-i\omega\omega')\bigg[\int_{\rm in}(d{\bf r})e^{-i({\bf k+k')\cdot
r}}
\nonumber\\
&&\qquad\mbox{}-{v\over V}
\int_{\rm out}(d{\bf r})e^{-i({\bf k+k')\cdot r}}\bigg].
\label{twophoton}
\end{eqnarray}
The probability of emitting two photons is proportional to the
square
of this amplitude. For sufficiently short wavelengths, $\lambda\ll
R$,
the square of the quantity in square brackets in
Eq.~(\ref{twophoton})
is the product of $(2\pi)^3\delta({\bf k+k'})$ and $v$, that is, if
the
exterior contribution is negligible,
\begin{equation}
|J_kJ_{k'}|^2\propto(\epsilon'-1)^2\omega^2\sin^2\omega\tau_c\,
\delta({\bf k+k'})v.
\label{prob}
\end{equation}
This is the same result found by Schwinger,\cite{js1,js3} and by
Liberati
et al.\cite{visser} However, if as is plausible, the effective
exterior
volume $V$ is not much bigger that the volume of the bubble $v$, a
larger
contribution results. Indeed, a careful discretized version of the
momentum
integrals in Eq.~(\ref{twophoton}) gives in general for the factor
multiplying the delta function in Eq.~(\ref{prob})
$v(1+v/V)^2$. The interference is {\em
constructive}, not destructive as I erroneously
claimed in my Leipzig talk, and negligible
as $V\to\infty$. Taking the latter limit (but remembering that there
might
be up to a factor of 4 enhancement), and, appropriate for
$\tau_c/\tau_o
\gg1$, replacing $\sin^2\omega\tau_c\to1/2$, we obtain the
probability
of emitting a pair with momenta $\bf k$ and $\bf -k$ just as given by
Schwinger \cite{js3} (this now includes the equal contribution from
the
magnetic modes):
\begin{equation}
P_{\gamma\gamma}=v{(d{\bf
k})\over(2\pi)^3}\left(\epsilon-1\over4\right)^2,
\quad |\epsilon-1|\ll1.
\end{equation}
[For $|\epsilon-1|$ not small, Schwinger \cite{js3} generalized this
to
\begin{equation}
P_{\gamma\gamma}=2v{(d{\bf
k})\over(2\pi)^3}\ln{\epsilon^{1/4}+\epsilon^{-1/4}
\over2}.
\end{equation}
The numerical effect of this correction is not significant for a
first
estimate.]
The total number of photon pairs emitted is then, if dispersion is
ignored,
\begin{equation}
N=\left(4\pi\over3\right)^2\left(R\over\Lambda\right)^3
\left(\epsilon-1\over4
\right)^2,
\label{nophoton}
\end{equation}
where the cutoff wavelength is given by $K=2\pi/\Lambda$. Such a
divergent
result should be regarded as suspect.\footnote{Although it is not
clear how
this is to be related to the divergent energy (\ref{casbulk2}),
Schwinger obtained both in Ref.~\cite{js3} as the imaginary and real
parts,
respectively, of a complex action.}
It was Eberlein's laudable goal
\cite{eberlein} to put this type of argument on a sounder footing.
Nevertheless, if we put in plausible numbers, $\sqrt{\epsilon}=4/3$,
$R=4\times
10^{-3}$ cm, and, as in Schwinger's earlier estimate,
$\Lambda=3\times
10^{-5}$ cm, we obtain the required $N\sim 10^6$ photons per flash.
The problem with this estimate is one of time and length scales---for
the
instantaneous approximation to be valid, the flash time $\tau_f$ must
be
much less than the period of optical photons, $\tau_o\sim10^{-15}$ s.
This is consistent with the discussion in \S2, and acknowledged by
Liberati et al.\cite{visser} On the other hand, the collapse time
$\tau_c\sim
10^{-5}$ s is vastly longer than $\tau_f$, and is therefore totally
irrelevant to the photon production mechanism. The flash occurs near
minimum
radius, and thus the appropriate value of $R$ in Eq.~(\ref{nophoton})
would seem to be at least an order of magnitude smaller, $R\sim
10^{-4}$ cm.
This would lead to $N<10^3$ photon pairs, totally insufficient.
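A direct evaluation of the photon-number estimate (a numerical
check added here) confirms both statements:
\begin{verbatim}
import math

def n_pairs(R, Lam, eps):
    # N = (4 pi/3)**2 (R/Lambda)**3 ((eps-1)/4)**2
    return (4.0 * math.pi / 3.0) ** 2 * (R / Lam) ** 3 \
           * ((eps - 1.0) / 4.0) ** 2

eps = (4.0 / 3.0) ** 2                # sqrt(eps) = 4/3
print(n_pairs(4.0e-3, 3.0e-5, eps))   # ~1.6e6 pairs at maximum radius
print(n_pairs(1.0e-4, 3.0e-5, eps))   # ~25 pairs at R ~ 1e-4 cm
\end{verbatim}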
\section{Conclusions}
We conclude by stating that the Casimir model of sonoluminescence
remains
`unproven.' The static Casimir effect can be applied only in the
adiabatic
approximation, where it seems clearly irrelevant. The instantaneous
approximation grafted onto static configurations seems logically
deficient,
and again numerically irrelevant unless implausible parameters are
adopted. What is still needed is a dynamical calculation of the
Casimir
effect. The burden of proof is on the proponents of this mechanism.
\section*{Acknowledgments}
I thank Iver Brevik, Gabriel Barton, and Michael Bordag for useful
conversations. This work was supported in part by a grant from the
U.S. Department of Energy.
\section*{References}
\section{Principles of cosmoparticle physics}
CosmoParticle Physics studies the mutual relationship and the fundamental physical grounds of Cosmology
and Particle Physics \cite{1}. It provides a unified treatment of the basic laws of the Universe and
elementary particles, establishes mutual correspondence between them and probes the fundamental
nature of micro- and macro-worlds through the proper combination of their indirect physical, astrophysical
and cosmological effects. It offers a nontrivial way out of the vicious circle of problems to which
fundamental physics is led by its one-dimensional development.
Cosmoparticle physics is now being formed into a selfconsistent new science, following internal
basic principles in its future development. This development revives the tradition of Natural
philosophy and of universal knowledge, the tradition of considering the world in its universal
completeness and unity.
Cosmoparticle physics reproduces in the largest and smallest scales the general feature of the
fundamental physics: the mutual correspondence between microscopic and macroscopic
descriptions, say, between thermodynamics, atomic theory, hydrodynamics and kinetics, or between
the fundamental macroscopic and microscopic quantities, e.g., between the Avogadro number and
the mass of the proton. However, at the level of fundamental cosmology and particle physics this
correspondence acquires a new quality of unity.
That is why the first basic principle of cosmoparticle physics is the idea of a world system, treating
in a unified framework the foundations of macro- and micro-physics. The second principle
assumes that the world system establishes a strict, quantitatively definite mutual correspondence
between fundamental cosmological, astrophysical and micro-physical laws, i.e. postulates a
quantitatively definite correspondence between the structures at macro- and micro-levels. Finally,
the third principle assumes that the set of world system parameters does not exceed the number of
its macro- and micro-scopic signatures.
One may easily see that the first principle simply postulates the existence of a world system,
whereas the two other principles specify its necessary properties. The crucial point in this approach
is the multidimensional solution offered by cosmoparticle physics to the problems that both cosmology
and particle theory face. It may be shown that this approach naturally embeds all the widely
known existing trends in studying links between cosmology and particle physics, such as
astroparticle physics, theories of everything, particle astrophysics, cosmoarcheology.
Here we would like to specify some new types of links, which follow with necessity from the basic
principles of cosmoparticle physics and lie outside these widely discussed trends.
\section{Unified models of cosmology and particle physics}
Intensive efforts to construct the finite Theory of Everything, undertaken over the last decade on the basis of
Superstring models, have not led, unfortunately, to an extensive theoretical framework putting
together modern cosmology and particle physics into a detailed and quantitatively definite
picture. The point is that the space of classical string vacua has a very large degeneracy, and there
is no objective criterion that distinguishes a particular string vacuum among the numerous
possibilities. The mathematical complexity is multiplied by the enormous variety of possible
embeddings of the Standard model (SM) of particle interactions into the structure of superstring
models. Indeed, the guiding principle of superstring phenomenology is very simple: it is to
reproduce the SM within the effective low energy field theory of a string model. Since only general
features such as the gauge group, number of families, etc. are considered, it leads to numerous
possibilities for embedding the SM in superstring phenomenology. For example \cite{kaku}, within
the framework of perturbative heterotic superstring, the total rank of the gauge group (for $N=1$,
space-time supersymmetric models) can be as large as $22$. After the SM
$SU(3)_C\bigotimes SU(2)_W\bigotimes U(1)_Y $
symmetry with the rank $4$ is reproduced, the rank of the residual gauge symmetry can be still as
large as $18$. Taking into account that the number of models grows (roughly) as a factorial of the
rank of the residual gauge symmetry, it becomes clear that we need additional arguments to restrict
the number of models. One of them is to use grand unification and to embed the SM symmetry
within a simple gauge group $G\supset SU(3)_C\bigotimes SU(2)_W\bigotimes U(1)_Y$. To break
the grand unified gauge group $G$ down to that of the SM an adjoint representation of Higgs fields
must be present in effective field theory among the light degrees of freedom. In perturbative
heterotic superstring such states in the massless spectrum are compatible with $N=1$
supersymmetry and chiral fermions only if the grand unified gauge group is realized via a current
algebra at level $k>1$ (see \cite{2}). This condition leads to reduction of the total rank of the gauge
group, and, therefore, restricts the number of possible models. However, for example, for a grand
unified gauge group $G=SO(10)$ with $k=3$, the rank of the residual gauge symmetry can be still
as large as $7$. Thus even grand unification constraint allows unacceptable amount of SM
embedding. In the case of more sophisticated and extensive string models the ambiguity grows,
making virtually impossible to use the main advantage of the string theory -- to calculate all the
fundamental macro- and microphysical quantities from the first principles.
Moreover, however extensive string models may be, they do not represent the most general embedding
of particle physics and the physics of space-time. The following motivations illustrate the possible
form of such a general framework.
Events are the basic elements of space-time in relativistic theory. The intervals between them maintain
the geometry of space-time. So it seems physically meaningful to treat the material processes causing
the events together with the space-time in which they take place. But such mutual dependence should
formally correspond to a specific structure of the world, in which the unified treatment of internal degrees
of freedom (reduced to gauge symmetries) and space-time coordinates may not be completely
covered by string theory. Some more general mathematical framework may be appropriate, e.g.
the invariant formulation of the apparatus of fiber bundle theory (see \cite{3} and Refs. therein),
treating space-time and internal variables on an equal footing and making it possible to fix the true
symmetry of fundamental interactions and the geometry of space-time from exact solutions for the
functional integral. The realization of such a program can lead to a true, physically self-consistent
theory of space-time, elementary particles and fundamental natural forces. As a step in this
direction, the elaboration of unified models of cosmology and particle physics is important.
Such models treat physically self-consistent complete cosmological scenarios. Physical
self-consistency means that the physical grounds for inflation, baryosynthesis and dark matter are
considered in a unified theoretical framework on the basis of a single particle model, and the
degree of completeness is the accuracy with which the astronomical observational data are
reproduced in the considered cosmological scenario. The degree of completeness of the
cosmological model should depend on the properties of the physical model only.
The easiest way to construct cosmologically self-consistent particle models is to extend the SM by
adding to its $SU(3)_C\bigotimes SU(2)_W\bigotimes U(1)_Y $
symmetry some other global or local gauge symmetries, or by embedding the SM symmetry group
into a more general gauge group. As a result, the extended gauge model contains new particles and
fields related to the new symmetries added to the standard model. In most cases, the masses of the new
particles and the strength of the new interactions, mediated by the new fields, correspond to superhigh energy
scales, inaccessible to direct experimental tests at accelerators. At best, experimental high-energy
physics can put lower limits on some parameters related to these scales. The only possibility is to
elaborate a system of indirect physical, astrophysical and cosmological constraints on the free
parameters of the "hidden" sector of the particle model, to fix them and to specify the cosmological
scenario following from this choice.
The strategy of the cosmoparticle physics approach to unified models of cosmology and particle
physics can be stipulated as follows:
\begin{enumerate}
\item A physically motivated choice of an extended gauge particle model.
\item A test of its cosmological self-consistency -- a study of its ability to reproduce cosmological
and astrophysical phenomena and effects.
\item Determination of the free parameters of the "hidden" sector of the particle model, or of a set
of constraints on them, from the combination of indirect cosmological, astrophysical and experimental
physical restrictions.
\item Elaboration of a complete, quantitatively definite cosmological scenario.
\item Formulation of a system of indirect experimental physical and astronomical effects,
providing a detailed test of the physical model and of the cosmological scenario based on it.
\item Estimation of the completeness of this scenario.
\end{enumerate}
Cosmoparticle physics puts the traditional methods of observational astronomy and experimental
physics into a nontrivial, multidimensional system of links, thus substantially enriching the
collaboration between physics and astronomy established by astroparticle physics.
\section{The system of links between astronomical observations and laboratory physics
experiments}
Links between particle physics and cosmology are generally viewed by astroparticle physics as a
system of linear relations. Thus, the claim \cite{4} that the electron neutrino mass is about $30\,eV$
immediately leads to cosmological consequences, since Big Bang cosmology predicts a primordial
neutrino background with a concentration equal to $3/11$ of that of the relic photons. By
multiplying the neutrino mass by the concentration of the cosmological neutrino background one
immediately finds that the massive neutrino density should dominate in the modern Universe and
that gravitational instability in the nonrelativistic gas of massive neutrinos should play the dominant
role in the formation of the large scale structure of the Universe. Primordial massive neutrinos were
identified with hot dark matter, one of the three classes of elementary particle
dark matter (DM) candidates.
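The arithmetic behind this statement is simple (a rough sketch in Python; the adopted relic photon
density $n_\gamma \approx 411\,$cm$^{-3}$ and critical density
$\rho_c \approx 1.05\times10^4\,h^2\,$eV\,cm$^{-3}$ are standard reference values, not results of the
present paper):
\begin{verbatim}
# Relic-neutrino density parameter implied by a neutrino mass.
def omega_nu(m_nu_eV, h=0.5):
    n_gamma = 411.0                  # relic photons per cm^3
    n_nu = (3.0 / 11.0) * n_gamma    # per species (nu + nubar)
    rho_crit = 1.054e4 * h**2        # critical density, eV/cm^3
    return m_nu_eV * n_nu / rho_crit

print(omega_nu(30.0))   # m = 30 eV -> Omega ~ 1.3: neutrinos dominate
print(omega_nu(0.45))   # m = 0.45 eV -> Omega ~ 0.02 (cf. the limit below)
\end{verbatim}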
In general, hot DM refers to low-mass neutral particles that were still in thermal equilibrium after
the QCD phase transition. Hot DM particles have a cosmological number density roughly
comparable to that of the microwave background photons, which implies an upper limit on their mass
of a few tens of $eV$. Neutrinos are the standard example of hot DM, although other possibilities such
as Majorons are discussed in the literature. Majorons are the pseudo-Goldstone bosons connected
with the Majorana nature of the neutrino mass. A Majorana neutrino mass corresponds to lepton
number violation. In this case lepton number violating processes, such as nuclear neutrinoless
double beta decay, can take place. If at least two types of neutrino are massive and the neutrino states of
definite mass do not coincide with the states of definite lepton number, neutrino oscillations
should take place. In matter, resonant enhancement of neutrino oscillations can take place, which
may be the solution of the solar neutrino puzzle at very small values of the neutrino mass-squared
difference, $\delta m^2\simeq10^{-6}eV^2$. The detailed analysis of all these cross-disciplinary links,
undertaken by astroparticle physics, could not however lead to any definite conclusion, in view of the
evident troubles of the simple model of massive electron neutrinos in its confrontation with the
observational and experimental data.
The successive experimental measurements of the electron neutrino mass in studies of the beta
spectrum of tritium led to ambiguous results, not confirming the original claims of a value of
$\simeq 30\,eV$. The upper limit on the electron neutrino mass is roughly $10\,eV\div 15\,eV$; a more
precise limit cannot be given, since unexplained effects have resulted in a negative value of
$m(\nu_e)^2$ in recent tritium beta decay experiments. The 90\% C.L. upper limit on an effective
Majorana neutrino mass is $0.65\,eV$ from the Heidelberg-Moscow $^{76}Ge$ neutrinoless $2\beta$
decay experiment \cite{5}. The upper limits from accelerator experiments on the masses of the
other neutrinos are $m(\nu_{\mu})<0.17\,MeV$ and $m(\nu_{\tau})<24\,MeV$ (95\% C.L.). The
events that appear to represent $\bar\nu_{\mu}\to\bar\nu_e$ oscillations followed by
$\bar\nu_e+p\to n+e^+$, $n+p\to D+\gamma$, with coincident detection of $e^+$ and the
$2.2\,MeV$ neutron-capture $\gamma$ ray in the Liquid Scintillator Neutrino Detector (LSND)
experiment at Los Alamos, suggest that
$\Delta m^2_{e\mu }=\mid m(\nu_{\mu})^2- m(\nu_{e})^2\mid >0$ \cite{6}. Comparison with
exclusion plots from other experiments implies a lower limit
$\Delta m^2_{e\mu }=\mid m(\nu_{\mu})^2- m(\nu_{e})^2\mid >0.2\,eV^2$, implying in turn a
lower limit $m_{\nu}\ge 0.45\,eV$, or $\Omega_{\nu}\ge0.02(0.5/h)^2$. More data and analysis are
needed from LSND's $\nu_{\mu}\to\nu_e$ channel before the initial hint \cite{7} that
$\Delta m^2_{\mu e}\approx 6\,eV^2$ can be confirmed. Recent Super-Kamiokande data, following
the Kamiokande data \cite{8}, show that the deficit of $E>1.3\,GeV$ atmospheric $\nu_{\mu}$
increases with zenith angle. These data suggested that the $\nu_{\mu}\to\nu_{\tau}$ oscillation length
is comparable to the height of the atmosphere, implying that
$\Delta m^2_{\tau\mu}\simeq 10^{-3}eV^2$ -- which in turn implies that if either $\nu_{\mu}$
or $\nu_{\tau}$ has a large enough mass ($\ge 1\,eV$) to be a hot dark matter particle, then they
must be nearly degenerate in mass, i.e., the hot dark matter mass is shared between these two
neutrino species. However, the deficit of atmospheric $\nu_{\mu}$ even at small zenith angles,
corresponding to paths much smaller than the oscillation length, casts serious doubt on this
interpretation of the Super-Kamiokande and Kamiokande data \cite{9}. At
$\Omega_{\nu}\simeq 1$, neutrino free streaming strongly suppresses adiabatic fluctuations at
scales smaller than galaxy superclusters ($\simeq 10^{15}M_{\odot}$). With the use of the COBE
upper limit, hot DM with adiabatic fluctuations would hardly lead to any structure formation at all.
The proper choice of a possible solution for this problem -- the transition to more complicated cases, hot
DM plus some sort of seeds, such as cosmic strings (see for example \cite{10}), or to the other class of
dark matter candidates, corresponding to the cold DM (CDM) scenario -- has in fact no fundamental
grounds in the framework of astroparticle physics. Moreover, the physical grounds for neutrino
instability or for CDM particles are not alternatives to the ones for a neutrino rest mass, and from the
physical viewpoint the general case should account for all these possibilities. Cold DM consists of
particles for which the scale of free streaming is very small, and its existence leads to strong
dynamical effects on galaxy scales.
The development of CDM models and their troubles in the framework of astroparticle physics seem
to confirm the general wisdom on the true complexity of the world system. The two best motivated
sorts of cold DM remain supersymmetric particles (WIMPs) and axions.
Supersymmetry underlies almost all new ideas in particle physics, including superstrings. There are
two key features of supersymmetry that make it especially relevant to DM: $R$-parity and the
connection between supersymmetry breaking and the electroweak scale. The $R$-parity of any
particle is $R\equiv (-1)^{3B+L+2S}$, where $L$, $B$, and $S$ are its lepton number, baryon
number, and spin. In most versions of supersymmetry, $R$-parity is exactly conserved. This has
the powerful consequence that the lightest $R$-odd particle -- often called the "lightest
supersymmetric partner" (LSP) -- must be stable, for there is no lighter $R$-odd particle for it to
decay into. The LSP is thus a natural candidate to be the dark matter. In the standard version of
supersymmetry, there is an answer to the deep puzzle of why there should be such a large difference in
mass between the GUT scale $M_{GUT}\simeq 10^{16}GeV$ and the electroweak scale
$M_W=80GeV$. Since both gauge symmetries are supposed to be broken by Higgs bosons, which
moreover must interact with each other, the natural expectation would be that
$M_{GUT}\simeq M_W$,
or that $M_W$ is induced by radiative corrections, $M_W\sim\alpha M_{GUT}$. The
supersymmetric answer to this "gauge hierarchy" problem is that the masses of the weak bosons
$W^{\pm}$ and all other light particles are zero until supersymmetry itself breaks. Thus, there is a
close relationship between the masses of the supersymmetric partner particles and the electroweak
scale. Since the abundance of the LSP is determined by its annihilation in the early Universe, and
the corresponding cross section involves exchanges of weak bosons or supersymmetric particles -- all
of which have electromagnetic-strength couplings and masses $\simeq M_W$ -- the cross section
will be $\sigma\simeq e^2s/M^4_W$ (where $s$ is the square of the center of mass energy), i.e.,
comparable to that of typical weak interaction processes. This in turn has the remarkable
consequence that the modern density of LSPs can be close to the critical density, i.e.
$\Omega_{LSP}\simeq 1$. The LSP is in most cases a spin-$1/2$ Majorana particle called the
"neutralino", which represents a linear combination of the photino (supersymmetric partner of the
photon), zino (partner of the $Z^0$), Higgsinos (partners of the two Higgs bosons associated with
electroweak symmetry breaking in supersymmetric theory), and axinos (partner of the axion).
Neutralinos are Weakly Interacting Massive Particles (WIMPs) with masses from tens to hundreds of
GeV, and thus are natural candidates for the cold DM.
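For orientation, the standard freeze-out estimate underlying this statement can be sketched as follows
(the benchmark $\Omega h^2 \approx 3\times10^{-27}\,{\rm cm^3\,s^{-1}}/\langle\sigma v\rangle$ is
the textbook approximation for a thermal relic, quoted here as an assumption rather than a result of
the present paper):
\begin{verbatim}
# Thermal-relic abundance estimate:
#   Omega h^2 ~ 3e-27 cm^3/s / <sigma v>.
def omega_wimp_h2(sigma_v_cm3_s):
    return 3.0e-27 / sigma_v_cm3_s

# A weak-scale annihilation cross section gives a relic density
# of the order of the critical one.
print(omega_wimp_h2(1.0e-26))    # -> ~0.3
\end{verbatim}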
The prediction of the invisible axion follows from another line of theoretical argumentation, related to
the solution of the strong CP violation problem in QCD. Searches for axion emission in $\mu$,
$K$ decays and nuclear decays put a lower limit on the scale of axion physics. Constraints on stellar
energy losses due to axion emission put this limit even higher: up to $10^6GeV$ in the case of the
archion and up to $10^8GeV$ for the bulk of the other invisible axion models. In cosmology,
primordial coherent axion field oscillations were found to behave, with respect to gravitational
instability, as a gas of very heavy particles, making the invisible axion a popular CDM candidate.
Experimental searches for cosmic and Solar axion fluxes are under way, based on the predicted
effect of axion-photon conversion in a time-varying electromagnetic field.
In the framework of astroparticle physics it is not possible to find physical motivations for which
candidate CDM particle -- neutralino or axion -- is preferable. From the particle physics
viewpoint both candidates are important, since both supersymmetry and the invisible axion solution
are necessary to remove internal inconsistencies of the standard model: supersymmetry removes the
quadratic divergence of the Higgs boson mass in the electroweak theory, and the axion cures the strong
CP violation problem of QCD. Astroparticle physics has no theoretical tools to find the proper combination
of these two hypothetical phenomena. Moreover, recent analysis of the observational data on the
large scale structure and of the anisotropy of the thermal electromagnetic background finds troubles in
the simple CDM model and favors more sophisticated dark matter scenarios, such as mixed cold+hot
dark matter (see for example \cite{11}). This calls for special methods to deal with the
multiparameter space of physical and cosmological parameters, which astroparticle physics does not
possess.
Together with the proper combination of studies of the cosmological large scale structure, relic
radiation, nucleosynthesis, and tests of inflation, baryosynthesis and dark matter models,
cosmoparticle physics invokes such forms of cross-disciplinary studies as cosmoarcheology and
experimental physical cosmology.
\section{Cosmoparticle approach to the problem of fermion masses and mixing}
The problem of fermion families is one of the key problems in modern particle physics. It has
different aspects, questioning the origin of family replication, the quark and lepton mass spectrum and
mixing pattern, CP violation in weak interactions, CP conservation in strong interactions, the
suppression of flavor changing neutral currents (FCNC), the pattern of neutrino masses and oscillations,
etc. Thus a particle model of fermion families should offer the solution to all these problems.
The SM is successful in describing various experimental data (see for example
\cite{12}) and can be considered a minimal necessary element of any theory of flavor. In the SM
the three families, sharing the same quantum numbers under the
$SU(3)_C\bigotimes SU(2)_W\bigotimes U(1)_Y $ gauge symmetry, are introduced as an anomaly
free set of chiral left-handed fermions $q_i=(u_i,d_i)$, $u^c_i$, $d^c_i$;
$l_i=(\nu_i,e_i)$, $e^c_i$,
where $i=1,2,3$ is a family index. In the SM the masses of the fermions and of the $W^{\pm}$, $Z$ gauge
bosons have a common origin in the Higgs mechanism. Quarks and charged leptons get masses
through the Yukawa couplings to the Higgs doublet $\phi$:
\begin{equation}
\label{1}
L_{Yuk}=\lambda^u_{ij}q_iCu^c_j\tilde\phi +\lambda^d_{ij}q_iCd^c_j\phi +
\lambda^e_{ij}l_iCe^c_j\phi\qquad (\tilde\phi =i\tau_2\phi^*)
\end{equation}
So, the fermion masses are related to the weak scale
$\langle\phi\rangle =v=174\,GeV$. However, the Yukawa constants are arbitrary:
$\hat\lambda^{u,d,e}$ are in general complex $3\times 3$ matrices. To reproduce the masses of the
quarks and leptons one has to put in by hand $27$ values of these matrix elements. The SM contains
no renormalizable couplings that could generate the neutrino masses:
\begin{equation}
\label{2}
L_{\nu}=\frac{\lambda^{\nu}_{ij}}{M}(l_i\tilde\phi )C(l_j\tilde\phi ),\qquad
\lambda^{\nu}_{ij}=\lambda^{\nu}_{ji}
\end{equation}
where $M\gg v$ is the regulator mass, which depends on the mechanism of generation of the
neutrino mass (\ref{2}). The matrices of coupling constants and the corresponding fermion mass matrices
$\hat m^f=\hat\lambda^fv$ $(f=u,d,e)$ and $\hat m^{\nu}=\hat\lambda^{\nu}(v^2/M)$
can be reduced to diagonal form by the unitary transformations $V_f$ and $V_{\nu}$. Hence,
quarks are mixed in the charged current interactions, and these mixings are determined by the Cabibbo-
Kobayashi-Maskawa (CKM) matrix. The CKM matrix is parameterized by three mixing angles and a
CP-violating phase. In the case of massive neutrinos, a similar mixing matrix emerges also in the
lepton sector.
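To make this parameter counting concrete, the CKM matrix in the standard parameterization can be
written down explicitly (a sketch; the angle and phase values below are merely illustrative):
\begin{verbatim}
import numpy as np

def ckm(th12, th23, th13, delta):
    # Standard parameterization: three mixing angles, one CP phase.
    s12, c12 = np.sin(th12), np.cos(th12)
    s23, c23 = np.sin(th23), np.cos(th23)
    s13, c13 = np.sin(th13), np.cos(th13)
    ep, em = np.exp(1j * delta), np.exp(-1j * delta)
    return np.array([
        [c12 * c13, s12 * c13, s13 * em],
        [-s12 * c23 - c12 * s23 * s13 * ep,
          c12 * c23 - s12 * s23 * s13 * ep, s23 * c13],
        [ s12 * s23 - c12 * c23 * s13 * ep,
         -c12 * s23 - s12 * c23 * s13 * ep, c23 * c13]])

V = ckm(0.227, 0.041, 0.0037, 1.2)   # illustrative values
print(np.allclose(V @ V.conj().T, np.eye(3)))  # unitarity -> True
\end{verbatim}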
The fermion family puzzle consists in the following phenomena:
\begin{itemize}
\item the mass spectrum of the quarks and charged leptons is spread over five orders of magnitude,
from MeVs to hundreds of GeVs;
\item the weak transitions dominantly occur inside the families and are suppressed between different
families, whereby the SM exhibits the natural suppression of the flavor changing neutral currents
(FCNC), both in the gauge boson and Higgs exchanges;
\item the Yukawa constants in (\ref{1}) are generally complex; the observed CP-violating phenomena can be
explained by the CKM mechanism with a sufficiently large CP-phase $\simeq 1$. However, at the
same time it induces the strong CP violation problem (see for example \cite{13}): the overall phase
of the Yukawa matrices gives an effective contribution to the vacuum $\Theta$-term in QCD and thus
induces P and CP violation in strong interactions. On the other hand, measurements of the
dipole electric moment of the neutron impose the strong bound $\Theta <10^{-9}$;
\item the experimental data show some ambiguous indications for neutrino masses and mixing.
\end{itemize}
The fermion mass and mixing problem can be formulated as the problem of the matrices of Yukawa
couplings $\hat\lambda^f$, which remain arbitrary in the SM. There is no explanation of the
origin of the observed hierarchy between their eigenvalues, of why $\hat\lambda^u$ and
$\hat\lambda^d$ are small, of the origin of the complex structure needed for CP violation
in weak interactions, or of why the $\Theta$-term is vanishingly small in spite of the complex Yukawa
matrices. It is attractive to think that at some scale above the electroweak scale there exists a more
fundamental theory which would allow one to calculate the Yukawa couplings, or at least to fix the
relationships between them.
The structure of the mass matrix can be related to a spontaneously broken horizontal symmetry
between the fermion families. Consider, for example, a model with all quark and lepton states
transforming as triplets
$f_{\alpha}=(q,l,u^c,d^c,e^c)_{\alpha}$ of the horizontal $SU(3)_H$ symmetry \cite{14}
($\alpha =1,2,3$ is a family index). Such a horizontal symmetry does not allow quarks and leptons
to have renormalizable Yukawa couplings. Thus, fermion mass generation is possible only after
the $SU(3)_H$ breaking, through higher-order (non-renormalizable) operators (HOPs) involving
some "horizontal" Higgses inducing this breaking at the scale $V_H\gg v$. This suggests that the
observed mass hierarchy may emerge due to a hierarchy in the $SU(3)_H$ breaking. Full
$SU(3)_H$ breaking is achieved by introducing the horizontal scalars: a sextet
$\chi_3^{\{\alpha\beta\}}$
and two other sextets or triplets
$\chi_{1,2}^{[\alpha\beta ]}\simeq\varepsilon^{\alpha\beta\gamma}\chi_{\gamma}$.
The pattern of their $3\times 3$ VEV matrix can be chosen so that the first sextet VEV is acquired
by the $(3,3)$ component, and in the sextets (or triplets) $\chi_2$ and $\chi_1$ the smaller VEVs
$V_{23}$ and $V_{12}$ are acquired by the $(2,3)$ and $(1,2)$ (or first and third) components.
The VEVs follow the hierarchy $V_{33}\gg V_{23}\gg V_{12}$, which is stable with respect to radiative
corrections. Thus, in the context of the $SU(5)\otimes SU(3)_H$ theory with fermions in the
representations $(\bar 5+10)_{\alpha}$, the relevant HOPs \cite{14} can be induced through
renormalizable interactions, as a result of integrating out the effects of hypothetical superheavy
particles (see, for example, \cite{15,16}). In other words, the quark and lepton masses can be
induced through their mixing with superheavy $F$-fermions, in direct analogy to the see-saw
mechanism of neutrino mass generation. In this case the VEV pattern of the Higgs multiplets $\chi$ is
reflected in the Yukawa matrices, and the fermion mass hierarchy follows the hierarchy of the
$SU(3)_H$ symmetry breaking. There are two possible choices for the representation of the $F$-fermions
and, respectively, one can generate two types of pattern of the Yukawa mass matrices
\cite{17,18}. The first case corresponds to a direct hierarchy pattern. In particular, the VEV pattern
leads directly to the Fritzsch texture. Another possibility is the inverse hierarchy. In the latter case
the VEV pattern is inverted in the fermion mass structure (see \cite{17,18,16} for more detail).
Thus, the horizontal $SU(3)_H$ symmetry is attractive since it unifies all families. For the solution
of the strong CP problem one can introduce Peccei-Quinn (PQ) type symmetries \cite{19},
which in addition could further restrict the mass matrix structure. In particular, in the horizontal
$SU(3)_H$ symmetry models the PQ symmetry can be naturally related to the phase
transformation of the horizontal scalars $\chi$ \cite{17,18}.
Consider as an example the application of the cosmoparticle physics approach (Section 2) to the
problem of fermion flavours. The strategy can be stipulated as follows.
{\bf Step 1.} The class of physically motivated extensions of the SM is considered, namely, the class
of gauge models with horizontal family symmetry.
{\bf Step 2.} The inevitable consequences of the chosen class of models, relevant to cosmological
and astrophysical phenomena and effects, are the following:
\begin{itemize}
\item the existence of a specific type of invisible axion (the archion), which is simultaneously a
Majoron and a familon \cite{17,18};
\item the existence of horizontal scalars $\chi$ with a superhigh energy scale of VEVs;
\item the existence of Majorana neutrino masses with a hierarchy of neutrino masses;
\item the nonconservation of lepton number, $\Delta L=2$;
\item the instability of the heavier neutrinos with respect to decays into lighter neutrinos and archions;
\item the Dirac see-saw mechanism and the singlet scalar $\eta$ connected with it.
\end{itemize}
{\bf Step 3.} One introduces the main free parameter $V_H$ of the hidden sector of the considered
model, namely, the scale of horizontal $SU(3)_H$ symmetry breaking. The set of indirect
cosmological, astrophysical and experimental physical restrictions on the hidden sector is derived
from the following phenomena:
\begin{itemize}
\item from the analysis of data on flavour nondiagonal transitions (for example $\mu\to ea$ and $K\to\pi a$,
where $a$ is the archion) \cite{20,21};
\item from astrophysical estimates of stellar energy losses due to archion emission \cite{18};
\item from the analysis of the influence of archion emission on the time scale and energetics of the
neutrino flux from a collapsing star \cite{18};
\item from the analysis of the inhomogeneities generated by the large scale modulation of coherent
axion field oscillations \cite{22,23,24};
\item from the analysis of primordial black hole formation in the second order phase transitions
connected with the three stages of horizontal $SU(3)_H$ symmetry breaking, which take place at the
inflationary stage \cite{25,24};
\item from the effect of nonthermal horizontal symmetry restoration at the post-inflationary dust-like
stage \cite{24,26}.
\end{itemize}
Taking together all the limits imposed by these phenomena, it is possible to extract two narrow
windows for the value of the parameter $V_H$: the "low" energy branch $V_6$
\cite{23,27} and the "high" energy branch $V_{10}$ \cite{24}.
{\bf Step 4.} With the use of the above restrictions one can elaborate a physically motivated full
cosmological model based on the chosen horizontal extension of the SM. This model has been
called the model of "horizontal" unification (MHU) \cite{23,24}.
\begin{itemize}
\item The MHU solves the problems of the SM connected with the family problem and the strong CP
violation problem in QCD; it predicts a qualitatively new type of invisible axion (the archion)
\cite{17,18,28}; it predicts the neutrino masses and flavour nondiagonal neutrino transitions with
emission of an archion.
\item The MHU predicts the following history of the Universe:
\begin{itemize}
\item The early Universe starts from an inflationary stage \cite{23,24}, driven by the inflaton field
$\eta$, which is a singlet under all gauge groups. The VEV of this field plays the role of the
universal energy scale in the Dirac see-saw mechanism for the generation of the masses of charged
fermions \cite{16,17,23,24}. When the inflationary stage is finished, the inflaton field decays through
the interactions assumed by the Dirac see-saw mechanism \cite{23,24}. This leads to reheating of the
Universe and, consequently, to the transition to the standard Friedmann cosmology.
\item The reheating temperature is sufficiently high for the generation of the observed baryon
asymmetry. The baryogenesis mechanism in the MHU combines the nonperturbative electroweak
$(B+L)$ nonconservation at high temperatures with $\Delta L=2$ nonequilibrium transitions
induced by the Majorana neutrino interaction \cite{23}. The mechanism can provide inhomogeneous
baryosynthesis and even lead to the existence of antimatter domains in a baryon asymmetrical Universe
\cite{24}.
\item There are two possible scenarios of large scale structure (LSS) formation:
\begin{itemize}
\item The hierarchical decay scenario (HDS) \cite{23,21}, realized at the "low" energy scale ($V_6$).
In the HDS the LSS formation takes place in a succession of stages of dominance of unstable
neutrinos and their relativistic decay products.
\item Mixed stable dark matter, realized at the "high" energy scale ($V_{10}$) \cite{24}. The
formation of the LSS in this case occurs under the conditions of dominance of coherent oscillations of
the axion field and massive stable neutrinos (see \cite{24} for more detail).
\end{itemize}
\end{itemize}
\end{itemize}
{\bf Step 5.} The system of detailed indirect tests of the MHU and of the MHU-based cosmological
scenario can use the following signatures:
\begin{itemize}
\item the MHU predicts flavour nondiagonal decays of leptons, mesons and hyperons (see \cite{21,27}
for more detail);
\item the MHU predicts the level of $K\to\bar K$, $B\to\bar B$ oscillations \cite{21};
\item astronomical searches for invisible axions (see for example \cite{29}) and their two-photon
decays;
\item experimental searches for solar axions (see for example \cite{30});
\item experimental searches for the force violating the Equivalence Principle which is connected
with the existence of the invisible axion (see for example \cite{31}).
\end{itemize}
{\bf Step 6.} The estimation of the completeness of the obtained scenario is necessary to determine the
direction of further extensions of the considered approach. In other words, the elaborated
cosmological model should incorporate the cosmological consequences of other extensions of
the SM, such as GUTs and SUSY. In particular, the estimation of the completeness of the MHU can be
obtained by comparing the predicted consequences of the MHU-based scenario of inflation,
baryosynthesis and LSS formation with the astronomical observations (see \cite{24} for more
detail).
To conclude, the development of cosmology and particle physics and the nontrivial tests of their
foundations by combinations of indirect evidence follow the laws of cosmoparticle physics, which will
unify, on the basis of its principles, the existing trends in studies of the mutual relationship of elementary
particles and the Universe, widely represented in the present proceedings.
\section{Introduction}
FG~Vir (=HD 106384) is a $\delta$~Scuti variable near the end of
its main-sequence evolution. 435 hours of photometric
measurements by the Delta Scuti Network determined 24 statistically
significant frequencies from 9.20 to 34.12~c/d (106 to 395~$\mu$Hz).
Details of this campaign as well as references to earlier measurements
and results can be found in Breger et al. (1998). The large number of
detected pulsation modes makes this star an excellent candidate for
asteroseismological investigations. This requires the identification of
the observed pulsation frequencies with specific pulsation modes.
While the problem is rather complex, considerable progress has been
achieved, as shown by Breger et al. (1995), Guzik et al. (1998) and
Viskum et al. (1998).
\begin{table*}
\begin{center}
\caption{Pulsation frequencies of FG Vir}
\begin{tabular}{lcccc}
\hline
\noalign{\smallskip}
\multicolumn{3}{c}{Frequency}& 1995 V amplitude & $Q$ value \\
& c/d & $\mu$Hz& mmag & days\\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
\multicolumn{3}{l}{Statistically significant frequencies}\\
\noalign{\smallskip}
$f_1$ & 12.716 & 147.2 & 21.1 & .0323\\
$f_2$ & 24.228 & 280.4 & 4.5 & .0170\\
$f_3$ & 23.403 & 270.9 & 4.1 & .0176\\
$f_4$ & 21.052 & 243.7 & 3.7 & .0195\\
$f_5$ & 19.868 & 230.0 & 3.5 & .0207\\
$f_6$ & 12.154 & 140.7 & 3.5 & .0338\\
$f_7$ & 9.656 & 111.8 & 3.4 & .0426\\
$f_8$ & 9.199 & 106.5 & 3.1 & .0447\\
$f_9$ & 19.228 & 222.5 & 1.5 & .0214\\
$f_{10}$ & 20.288 & 234.8 & 1.3 & .0203\\
$f_{11}$ & 24.200 & 280.1 & 1.3 & .0170\\
$f_{12}$ & 16.074 & 186.0 & 1.0 & .0256\\
$f_{13}$ & 34.119 & 394.9 & 1.0 & .0121\\
$f_{14}$ & 21.232 & 245.7 & 1.0 & .0194\\
$f_{15}$ & 11.110 & 128.6 & 0.9 & .0370 \\
$f_{16}$ = 2$f_1$& 25.432 & 294.4 & 0.9 & .0162\\
$f_{17}$ & 33.056 & 382.6 & 0.6 & .0124 \\
$f_{18}$ & 21.551 & 249.4 & 0.8 & .0191 \\
$f_{19}$ & 28.140 & 325.7 & 0.6 & .0146\\
$f_{20}$ & 11.195 & 129.6 & 0.7 & .0367 \\
$f_{21}$ & 24.354 & 281.9 & 0.6 & .0169\\
$f_{22}$ & 11.870 & 137.4 & 0.4 & .0346 \\
$f_{23}$ = $f_1+f_7$ & 22.372 & 258.9 & 0.5 & .0184\\
$f_{24}$ = $f_3-f_1$ & 10.687 & 123.7 & 0.5 & .0385\\
\noalign{\smallskip}
\multicolumn{3}{l}{Probable frequencies}\\
\noalign{\smallskip}
$f_{25}$ & 25.37 & 293.7 & 0.4 & .0162\\
$f_{26}$ & 25.18 & 291.4 & 0.4 & .0163\\
$f_{27}$ & 29.50 & 341.4 & 0.4 & .0139\\
$f_{28}$ & 18.16 & 210.2 & 0.4 & .0226\\
$f_{29}$ & 19.65 & 227.4 & 0.4 & .0209\\
$f_{30}$ & 31.92 & 369.4 & 0.4 & .0129\\
$f_{31}$ & 20.83 & 241.1 & 0.4 & .0197\\
$f_{32}$ & 12.79 & 148.1 & 0.4 & .0322\\
\noalign{\smallskip}
\hline
\end{tabular}\newline
\end{center}
\end{table*}
The pulsation mode identification from observed frequencies
requires accurate determinations of the basic
parameters of the star. From the available $uvby\beta$ photometry,
Mantegazza et al. (1994) derived $T_{\rm eff} = 7500\, $K and
$\log \, g = 3.95$. Correcting for a misprint in the literature changes
$\log \, g$ to 3.9. We can now improve these
values further by including the accurate Hipparcos parallax,
which predicts $M_{\rm V} = 1.95 \pm 0.13\, $mag.
This is slightly fainter than the value of $1.71 \pm 0.25$ mag
predicted by $uvby\beta$ photometry and leads to a corresponding shift
in $\log \, g$ to 4.00.
These values are in exact agreement with those derived by Viskum et al. (1998).
We estimate the uncertainties to be $\pm 100\, $K in temperature
and $\pm 0.1$ in $\log \, g$.
The values of the pulsation constants $Q$ can be estimated from the
following empirical relation:
\[\log Q_{i} = -6.456 + \log P_{i} + 0.5 \log g + 0.1\,M_{\rm bol} +
\log T_{\rm eff}.\]
\noindent
The constant,
--6.456, in the above formula is based on solar values
of $ M_{\rm bol} = 4.75\, $mag, $B.C. = - 0.08\, $mag,
$T_{\rm eff} = 5770\, $K and $\log \, g = 4.44$.
If the $Q$ values are calculated from $uvby\beta$ photometry,
the observational uncertainties of these parameters
lead to an uncertainty in $Q$ of about 18\%.
The corresponding $Q$ values are shown in Table~1.
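As a simple numerical check, the relation above can be transcribed directly (a minimal sketch in
Python; the adopted $M_{\rm bol} = 1.95$~mag is our assumption, consistent with the Hipparcos
$M_{\rm V}$ quoted above and a small bolometric correction):
\begin{verbatim}
import math

# Pulsation constant from the empirical relation quoted above:
#   log Q = -6.456 + log P + 0.5 log g + 0.1 M_bol + log T_eff,
# with the period P in days.
def pulsation_constant(freq_cd, logg=4.00, m_bol=1.95, t_eff=7500.0):
    period = 1.0 / freq_cd                      # period in days
    log_q = (-6.456 + math.log10(period) + 0.5 * logg
             + 0.1 * m_bol + math.log10(t_eff))
    return 10.0 ** log_q

print(round(pulsation_constant(12.154), 4))     # f6 -> ~0.0338
print(round(pulsation_constant(12.716), 4))     # f1 -> ~0.0323
\end{verbatim}
The returned values reproduce the $Q$ entries of Table~1 for $f_6$ and $f_1$.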
\begin{figure*}
\centering
\includegraphics*[width=178mm]{Bppg1.eps}
\caption{
Histograms of frequency spacing between all specified pulsation modes.
{\em Top left:} The diagram demonstrates that for high orders the patterns
of frequency spacing clearly show adjacent radial orders ($\sim$ 4~c/d)
and the effects of rotational splitting, which is extremely asymmetric even
at $V_{\rm rot} = 45$~km/s.
{\em Top right:} The frequency spacing predicted from model~1
for $\ell = 1$ in the observed frequency range of 11 -- 35~c/d.
Note that the patterns from adjacent orders and rotational
splitting are still present. {\em Bottom panels:} Observed frequency
spacings in the observed range from 11 to 35~c/d. Although these are
a mixture of $\ell$ = 0, 1 and 2 modes, the effects of adjacent
radial orders and a small peak in the range of rotational splitting
can be seen.
To demonstrate that the results are not sensitive to which observed
frequencies are included, two different choices (see text) are shown}
\end{figure*}
\section{Regularities of frequency spacing}
The values of the observed frequencies and the regularities in their
patterns can be an excellent initial tool for mode identifications,
if enough frequencies are excited and detected.
For high-order, low-degree p-mode pulsation,
the different radial orders show uniform frequency spacing,
with a mode of order $n$ and of degree $\ell$ being shifted from the
corresponding mode ($n$, $\ell + 1$) by half of the frequency difference
between the ($n$,$\ell$) and ($n+1$,$\ell$) modes (Vandakurov 1967).
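In the familiar asymptotic form (a sketch valid only for $n \gg \ell$, quoted here for orientation)
this pattern reads
\[
\nu_{n,\ell} \approx \Delta\nu \left( n + \frac{\ell}{2} + \epsilon \right),
\qquad
\nu_{n,\ell+1} - \nu_{n,\ell} \approx \frac{\Delta\nu}{2},
\]
where $\Delta\nu$ is the frequency separation between consecutive radial orders and $\epsilon$ is a
slowly varying phase term.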
In $\delta$~Scuti stars, the excited pulsation modes are of low order
($n$ up to 7), so that the asymptotic relations do not apply exactly.
Nevertheless, they also show some regularities.
Additionally g-modes invade the p-mode region and decrease
the spacing in a small frequency region of about two radial orders.
This effect, known as avoided crossing
(Osaki 1975, Aizenman, Smeyers \& Weigert 1977), complicates the theoretical
frequency spectra, but can provide information about the stellar interior
(Dziembowski \& Pamyatnykh 1991).
Moreover, stellar rotation splits multiplets
and this splitting is non-symmetric, if second-order effects of rotation
and effects of rotational mode coupling are taken into account
(Dziembowski \& Goode 1992, Soufi et al. 1998, Pamyatnykh et al. 1998).
Nevertheless, the spacing of adjacent radial orders as well as
the rotational splitting is still regular enough to be detectable, if
complete multiplets are excited and identified.
We will demonstrate this by using a pulsation model of a 1.85$\, M_\odot$
star with $T_{\rm eff} = 7515\, $K, $\log \, g = 3.99$, and
$V_{\rm rot} = 45\, $km/s. This model will be referred to as model~1.
The parameters for the model were not chosen at random,
but can be regarded as an estimate for FG~Vir.
To investigate the period regularities, Winget et al. (1991) have successfully
applied the method of the Fourier transform of the period spacing to the star
PG~1159+035. This method requires coherence over a large frequency range.
Handler et al. (1997) also found frequency regularities from Fourier
transformations of the frequency spectrum of the unevolved $\delta$~Scuti
star XX~Pyx. Since strict equidistant frequency or period spacing is not
expected for FG~Vir, the method is not optimal for this $\delta$~Scuti star.
Instead, we use a method which does not require such a coherence: an
examination of a histogram of the observed frequency differences between all
detected frequencies.
In such a diagram, regularities in the frequency spacing of adjacent radial
orders of modes with the same degree,
$\ell$, should show up as a peak. Furthermore, modes of different
degree are shifted in frequency relative to each other, but would still
have similar patterns and, therefore, contribute to the peaks in the histogram.
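A minimal sketch of this construction (in Python; the truncated frequency list is a placeholder for
the full set of values taken from Table~1):
\begin{verbatim}
import numpy as np
import matplotlib.pyplot as plt

# Detected frequencies in c/d (placeholder subset; in practice the
# full list of p-mode frequencies from Table 1 would be used).
freqs = np.array([12.716, 24.228, 23.403, 21.052, 19.868, 12.154])

# All pairwise differences between distinct detected frequencies.
pairs = np.triu_indices(len(freqs), k=1)
diffs = np.abs(freqs[:, None] - freqs[None, :])[pairs]

# Histogram of the spacings: peaks near ~4 c/d mark consecutive
# radial orders, peaks below ~1 c/d mark rotational splitting.
plt.hist(diffs, bins=np.arange(0.0, 25.0, 0.25))
plt.xlabel('Frequency difference (c/d)')
plt.ylabel('Number of pairs')
plt.show()
\end{verbatim}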
The frequency spacing is examined in Fig.~1 with both the theoretically
predicted and observed spacings. Pulsation models show a typical frequency
spacing of $\Delta f \approx 4\, $c/d for adjacent radial orders of p-modes,
independent of the degree of the modes.
The leftmost peaks in the top panels of Fig.~1
are caused by rotationally split multiplets. A similar diagram for $\ell = 2$
(not plotted separately) does not show such strong peaks in the expected
region. The reason is that both the presence of g-modes in addition to the
p-modes and non-equidistant rotational splitting significantly disturb the
regularity in the distribution of quadrupole mode frequencies
(see Fig.~7 below.) As a result, the combined pattern
of frequency spacings for all $\ell = 0 - 2$ modes becomes much less clear.
Moreover, due to the fact that only low-order oscillations are present
in this frequency range, there is no additional peak at
$\Delta f \approx 2$ c/d as might be expected from the asymptotic
spacing between p-modes of adjacent degrees (see Fig.~7 for more details).
Next, we turn to the observed frequency spacing for the 24 certain and
8 probable frequency detections of FG~Vir (Table~1). The most
cautious approach would be to use the 24 certain frequencies
with a few exceptions: the 2$f_1$ term at 25.4~c/d (reflecting
the departure from a pure sinusoidal light curve shape of $f_1$), the
two combination frequencies (the pulsation models cannot yet predict
which combinations and resonances are excited), and the two low-frequency
modes for which the p-mode character can definitely be excluded
under the assumption that $f_6$ is the radial fundamental mode.
To show that the agreement between the theoretically
predicted and observed frequency spacing is not based on the choice
of frequencies, the analysis was repeated by including the 8 additional
'probable' modes listed in Table 1.
To conclude, the theoretical and observed frequency spacings agree quite
well. In particular, for frequency differences in the 0 -- 5~c/d range,
two features near 3.9 and 0.8~c/d stand out, suggesting an
identification with the spacing of successive radial orders and
rotational splitting, respectively.
\section{Pulsation mode identifications from photometric phase differences}
The relative phase difference between the temperature
and radius variations of a pulsating star leads to an observable
phase difference between the light curves at different wavelengths.
The sizes of these phases differences depend not only on the properties
of the star, but also on the type of pulsation mode. The observed
phase difference can then be used for mode typing. This was
already pointed out by Watson (1988).
Garrido et al. (1990) presented detailed calculations and predictions
for $\delta$ Scuti stars. They find that measurements through different
filters of the
Str\"omgren $uvby$ system provide discrimation between radial and low-order
nonradial pulsation, i.e. help determine the $\ell$ value \footnote
{It is necessary to note that rotational mode coupling
may enlarge the overlap between
modes of different $\ell$ values in the amplitude-phase diagrams, as
discussed by Pamyatnykh et al. (1998):
for example, a quadrupole mode coupled by rotation with the
closest radial mode may be shifted in such a diagram
towards the region occupied by dipole modes.}.
\begin{table*}
\caption{Phase differences and mode identifications of FG Vir}
\begin{tabular}{lcccccc}
\hline
\noalign{\smallskip}
\multicolumn{3}{c}{Frequency}& \multicolumn{2}{c}{Phase differences
in degrees} & \multicolumn{2}{c}{ Pulsation degree, $\ell$}\\
& c/d & $\mu$Hz& \multicolumn{2}{c}{$\phi_v - \phi_y$} & Spectroscopy & Photometry\\
& & & 1995 & 1995/6 & Viskum et al. (1998) & Present\\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
$f_1$ & 12.716 & 147.2 & --1.0 $\pm$ 0.2 & --1.3 $\pm$ 0.2 & 1 & 1\\
$f_2$ & 24.228 & 280.4 & --3.0 $\pm$ 1.1 & --3.8 $\pm$ 1.1 & 1 & 1, 2\\
$f_3$ & 23.403 & 270.9 & --0.6 $\pm$ 1.2 & --1.1 $\pm$ 1.2 & 0 & 0, 1\\
$f_4$ & 21.052 & 243.7 & --5.7 $\pm$ 1.4 & --7.1 $\pm$ 1.5 & 2 & 2\\
$f_5$ & 19.868 & 230.0 & --4.9 $\pm$ 1.4 & --5.5 $\pm$ 1.5 & 2 & 2\\
$f_6$ & 12.154 & 140.7 & +6.8 $\pm$ 1.4 & +5.5 $\pm$ 1.4 & 0 & 0\\
$f_7$ & 9.656 & 111.8 & --2.2 $\pm$ 1.4 & --2.7 $\pm$ 1.4 & 2 & 1, 2\\
$f_8$ & 9.199 & 106.5 & --4.4 $\pm$ 1.6 & --7.6 $\pm$ 1.7 & 2 & 2\\
\noalign{\smallskip}
\multicolumn{3}{l}{Number of hours $y/v$}& 412/292 & 494/374\\
\noalign{\smallskip}
\hline
\end{tabular}\newline
\end{table*}
\begin{figure}
\centering
\includegraphics*[bb=24 30 279 510,width=88mm]{Bppg2.eps}
\caption{Diagnostic diagram to determine $\ell$ values of FG Vir from
Str\"omgren $v$ and $y$ colors. The axes represent amplitude ratios and phase
differences. Measurements are shown by crosses with error bars, while
the four-sided loops represent the models (see text).
The three panels represent the pulsation modes with different values
of the pulsation constant, $Q$}
\end{figure}
We have chosen the $v$ and $y$ filters to provide a relatively large baseline
in wavelength. The $u$ filter was not used by us because of the very large
potential for systematic observational errors. Details of
the measurements
can be found in Breger et al. (1998). The phase differences were determined
in the following manner: The values of the 24 known and well-determined
frequencies were optimized by making a common
solution of the available $y$ data from 1992 - 1996, while allowing
for the amplitude
variability of $f_3$. As discussed in Breger et al., all CCD measurements
were given a
weight of 0.19. With these optimized frequencies, for the year 1995
the best amplitudes
and phases were calculated from the available 412 hours
of $y$ and 292 hours of $v$ data.
Separate trial solutions indicate that the resulting phase differences
are relatively insensitive to the weights adopted. For the year 1996,
an additional 82 hours
of $uvby$ photometry are available (Viskum et al. 1998). The data were
combined with the larger data set from 1995 while allowing for
variable amplitudes of $f_3$. We note
that the calculated uncertainties of the phase differences
are not reduced by including the additional data: the reason
is that the 1995 data have smaller deviations, e.g., 4 vs. 6 mmag
in $v$.
\begin{figure}
\centering
\includegraphics*[bb= 82 37 725 511, width = 87mm]{Bppg3.eps}
\caption{Comparison of the equivalent-width and photometric methods
to determine
$\ell$ values. Radial pulsation ($\ell$=0) can be found in the lower right,
$\ell$=1 in the middle, while $\ell$=2 is found near the top left.
The diagram shows that the two methods are in agreement, but also demonstrates
that some of the agreement may be accidental once the error bars
are taken into consideration}
\end{figure}
The resulting phase differences are shown in Table 2.
The phase errors (in degrees)
were estimated from the formula $\sigma(\phi) = 180/\pi \cdot \sigma (m)/a$,
where $a$ is the amplitude and $\sigma (m)$
is the uncertainty of each data point (average deviation
per point from the fit).
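A literal transcription of this error estimate reads (a sketch in Python; the quadrature combination
of the $v$ and $y$ contributions for a phase {\em difference} is our assumption, and the input
numbers are merely illustrative):
\begin{verbatim}
import math

def phase_error_deg(sigma_m, amplitude):
    # sigma(phi) = 180/pi * sigma(m)/a, with sigma(m) and the
    # amplitude a in the same units (e.g. mmag).
    return (180.0 / math.pi) * sigma_m / amplitude

def phase_diff_error_deg(sig_v, amp_v, sig_y, amp_y):
    # Assumed: independent v and y errors add in quadrature.
    return math.hypot(phase_error_deg(sig_v, amp_v),
                      phase_error_deg(sig_y, amp_y))

print(phase_diff_error_deg(0.05, 21.1, 0.05, 21.1))  # ~0.2 degrees
\end{verbatim}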
We can now compare the observed phase differences with theoretical
modelling in order to determine the $\ell$ values. The ATLAS9 models of
Kurucz (1993) were used to construct a model atmosphere for FG Vir.
Garrido et al. (1990) presented calculations using values of $\Phi^T$
ranging from 90$\deg$ to 135$\deg$. For FG Vir, Viskum (1997) determined a
smaller range, viz. $\Phi^T = 126\deg \pm 20\deg$. This allowed us to
refine the calculations, although the results are very similar.
Another required constant, the deviation from adiabaticity, R,
has been changed slightly from the value used by Garrido et al. (1990).
Values of 0.20 (instead of 0.25) to 1.00 were used. This change was
indicated by measurements of high-amplitude $\delta$ Scuti stars.
The theoretical predictions are shown in Fig.~2 together with the
observations. The importance of considering the dependence on the
pulsation constant, $Q$, can be seen for $Q$ = 0.02, where one can
even find negative values of the phase difference for radial modes,
although the separation between radial and nonradial modes is always maintained.
Our best mode identifications based on Str\"omgren photometry are shown in Table 2.
We obtained five unambiguous $\ell$ values, while for three further modes we cannot
distinguish between two adjacent $\ell$ values. The frequency $f_6$, shown in the
middle panel, is situated to the right of the $\ell$ = 0 region by 1.9 $\sigma$.
We note that the deviation is caused by only one subset of data (the CCD
measurements from Siding Spring Observatory, see Stankov et al. 1998),
without which a phase shift of +3.7 $\deg$ is found. Irrespective of which
of the two values for $f_6$ is accepted, an identification with radial pulsation is
consistent within the statistical uncertainties.
We can now compare the results from the photometric method
with those derived from a promising new technique of examining
the equivalent width variations of selected lines.
Bedding et al. (1996) have shown that
for low degree pulsation, the $\ell$-values of pulsation modes
can be inferred from simultaneous observations of several selected
absorption lines combined with simultaneous photometric observations.
Viskum et al. (1998) have applied this method to the star FG~Vir.
In particular, the equivalent-width changes of the H$\alpha$ and
H$\beta$ lines turned out to be good discriminators.
In their paper, $\ell$ identifications have been presented
for the eight dominant modes.
We note that on the observational side the photometric and
spectroscopic methods are independent. However, both methods
rely on similar model-atmosphere calculations, so that
they cannot be considered to be completely independent of each other.
The agreement between the photometric and spectroscopic mode determinations
is remarkable. It appears prudent to examine the comparison of the
results of the two methods in more detail, especially with
consideration of the (unavoidable) observational uncertainties.
In order to compare independent parameters with each other,
we pick the amplitude ratio of
A(H$\alpha$)/A(FeI) given by Viskum et al. (1998).
The comparison is shown in Fig.~3, where the numbers next to the
points refer to the frequency numbering in Table 1. The figure shows
that some of the excellent agreement may be accidental once the
observational uncertainties are considered. Nevertheless, the
viability of both methods to determine $\ell$ values has been
demonstrated and for at least six modes the $\ell$ values have
been observationally determined. These determinations now need
to be used as input for pulsation models.
\section{Pulsation models for FG~Vir}
Since the initial discovery of multiperiodicity of FG Vir, several studies
attempted to fit the observed and theoretical frequency spectra of
the star, viz. Breger et al. (1995), Guzik, Templeton \& Bradley (1998),
and Viskum et al. (1998). We will now calculate new models utilizing the
newly discovered pulsation frequencies and mode identifications.
\subsection{Method of computation}
To compute models of FG~Vir we used a standard stellar evolution code
which was developed in its main parts by
B.~Paczy\'nski, M.~Koz{\l}owski and R.~Sienkiewicz (private communication).
The same code was used in our recent studies
of period changes in $\delta$~Scuti stars (Breger \& Pamyatnykh 1998)
and in a seismological study of XX~Pyx (Pamyatnykh et al. 1998).
These two papers include detailed descriptions of the model computations,
so that the present description can be brief.
For the opacities, we used the latest version of the OPAL or the OP tables
(Iglesias \& Rogers 1996 and Seaton 1996, respectively)
supplemented with the low-temperature data of Alexander \& Ferguson (1994).
In all computations the OPAL equation of state was used (Rogers et al. 1996).
The computations were performed starting with
chemically uniform models on the ZAMS,
assuming typical Population I values of hydrogen
abundance, $X$, and heavy element abundance, $Z$. The initial heavy element
mixture of Grevesse \& Noels (1993) was adopted.
In some models, the possibility of overshooting from the convective core
was taken into account.
The overshooting distance, $d_{\rm over}$, was chosen
to be $0.2 \, H_{\rm p}$,
where $H_{\rm p}$ is the local pressure scale height at the edge
of the convective core.
Examples of evolutionary tracks for $\delta$~Scuti
models computed with and without overshooting are given
in Breger \& Pamyatnykh (1998).
In the stellar envelope the standard mixing-length theory of convection
with the mixing-length parameter $\alpha$ = 1.0 or 2.0 was used.
As we will see below, the choice of the mixing-length
parameter $\alpha $ has only a small effect on our models, because they are too
hot to have an effective energy transfer by convection in the stellar
envelope.
In all computations we assumed uniform (solid-body) stellar
rotation and conservation of global angular momentum during evolution
from the ZAMS. These assumptions were chosen due to their simplicity.
The influence of rotation on the evolutionary tracks of $\delta$~Scuti
models was demonstrated by Breger \& Pamyatnykh (1998).
We studied models of FG~Vir with equatorial rotational velocities from
approximately 30 to 90 km/s (on the ZAMS, the values are 5--10 km/s higher).
This range is consistent with the values of $v \sin i = 21 \pm 1\, $km/s
and $i = 31\deg \pm 5\deg$ found by Mantegazza et al. (1994) and an
equatorial velocity of $33 \pm 2\, $km/s obtained by Viskum et al. (1998).
At such low rotational velocities, the evolutionary tracks are located
very close to those for non-rotating stellar models.
The main effect of rotation to be considered
is the splitting of multiplets in the oscillation frequency spectra.
This splitting is non-symmetric even for slowly rotating stars, if
second-order effects are included.
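Schematically (a sketch only; sign conventions differ between the cited papers), the frequencies of a
rotationally split multiplet then take the form
\[
\nu_{n\ell m} \approx \nu_{n\ell} + m\,(1 - C_{n\ell})\,\nu_{\rm rot}
+ D_{n\ell m}\,\frac{\nu_{\rm rot}^2}{\nu_{n\ell}},
\]
where $\nu_{\rm rot}$ is the rotation frequency, $C_{n\ell}$ is the Ledoux constant, and the
second-order coefficients $D_{n\ell m}$ depend on $m^2$, so that the $m = \pm |m|$ components are
shifted in the same direction and the multiplet becomes asymmetric.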
The linear nonadiabatic analysis of low-degree oscillations ($\ell$\,$\leq$\,4)
was performed using the code developed by Dziembowski (1977).
In the modern version of the code, effects of slow stellar rotation
on oscillation frequencies are taken into account up to second order
in the rotational velocity (Dziembowski \& Goode 1992, Soufi et~al. 1998).
\subsection{Model constraints using oscillation data}
The models for FG~Vir were constructed with the observed mode
$f_6$ (12.154~c/d)
being identified with the radial fundamental mode (={\bf F}) (see Section~3).
Note that this determines the mean density of
all possible models of FG~Vir: with the pulsation constant
of about 0.032 -- 0.034~days,
which is typical for $\delta$~Scuti variables, we obtain
$\bar\rho/\bar\rho_{\odot} \approx$ 0.15--0.17.
A considerably more accurate value of the density will be obtained
later in this section.
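This estimate follows directly from the definition of the pulsation constant,
$Q = P\sqrt{\bar\rho/\bar\rho_{\odot}}$ with $P$ in days (a minimal sketch in Python):
\begin{verbatim}
# Mean density implied by identifying f6 with the radial
# fundamental mode: rho/rho_sun = (Q * f)^2 for f in c/d.
f6 = 12.154                      # observed frequency of f6, c/d
for Q in (0.032, 0.034):         # typical delta Scuti F-mode range
    print(Q, (Q * f6) ** 2)      # -> ~0.151 and ~0.171
\end{verbatim}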
We started with the construction of evolutionary tracks of 1.75 -- 1.95
$M_{\odot}$ models for initial abundances $X=0.70$ and $Z=0.02$
and using OPAL opacities. No overshooting from the convective core was allowed.
The initial equatorial rotational velocity on the ZAMS was chosen
to be 50 km/s.
With our assumption of conservation of global angular momentum,
the equatorial rotational velocity is decreasing during the
MS--evolution from 50 km/s at the ZAMS to about 40--41 km/s at the TAMS
(Terminal--Age--Main--Sequence).
The evolutionary tracks are shown in Fig.~4 together with the range in
effective temperature and gravity of FG~Vir (see Introduction)
derived from photometric calibrations
($T_{\rm eff} = 7500 \pm 100$~K and $\log \, g= 4.00 \pm 0.10$).
This range requires MS models and constrains the mass of the models to
1.75--1.95~$M_{\odot}$. The position of the models whose {\bf F}-mode
frequency is 12.154~c/d is also shown. This further constrains
the mass to 1.815--1.875~$M_{\odot}$. We stress that this strong seismological
mass constraint depends on an accurate effective temperature determination.
The identification of $f_6 \equiv $ {\bf F}
agrees well with the gravity estimate for FG~Vir. This provides no
additional constraints on the stellar mass since the
lines of constant frequencies are approximately parallel
to those of constant gravity, as shown in the lower panel of Fig.~4.
For a given family of stellar models, the radial fundamental pulsation
constant, $Q$, is constant with a quite high accuracy due to the homologous
structure of models of different masses.
For $M$ = 1.80 -- 1.90 $M_{\odot}$ in the range
$\log \, T_{\rm eff}$ = 3.869 -- 3.881 ($T_{\rm eff}$ = 7400 -- 7600 K),
the pulsation constant is $Q$ = 0.0326 with a relative accuracy of about 0.2\%.
The accuracy is higher still by approximately one order of magnitude
if we consider only models based on $f_6 \equiv $ {\bf F}.
For these models we determine a mean
density of $\bar{\rho}/\bar{\rho_{\odot}} = 0.1570 \pm 0.0001$.
However, such an extremely high accuracy is based on
a fixed choice of input physics: stellar opacity,
initial chemical composition, rotational velocity and parameters of convection.
We will see in the next subsection that changing these parameters
results in a significantly larger spread of the mean density of the FG~Vir models.
\begin{figure}
\centering
\includegraphics*[width=88mm]{Bppg4.eps}
\caption{Evolutionary tracks of 1.75--1.95 $M_{\odot}$ standard models.
The equatorial rotational velocity
on the ZAMS was chosen to be 50~km/s. On the TAMS
(at the turn-off points to the left),
equatorial velocities are of about 40--41~km/s.
Dashed lines show effective temperature and $\log g$ ranges
of FG~Vir from photometric calibrations.
The thin solid line connects models whose radial fundamental mode
frequency is 12.154~c/d.
The rotational velocity of the models along this line is about 45~km/s}
\end{figure}
The strict constraint on mass is demonstrated in Fig.~5, where the
changes of the radial and dipole frequencies during the MS evolution are shown.
Another important result is the good agreement of the predicted
frequency range of unstable modes with the observed frequency range of
9--34~c/d. This agreement is an independent qualitative argument in favour
of the proposed models for FG~Vir.
An additional test shows that should we identify $f_6$
with the first radial overtone instead of the {\bf F}-mode, we cannot
achieve agreement between the theoretical and observed frequency ranges:
in the corresponding models of $\approx 2.0\, M_{\odot}$ the instability
occurs in the frequency range of 8--30~c/d. The tendency in models of higher
mass to shift the instability range to lower frequencies
can also be seen in Fig.~5. There is an even stronger argument
against these higher-mass models: their luminosities are too high
($\log (L/L_{\odot}) \sim 1.3$)
to be consistent with both the photometric calibrations and the
Hipparcos parallax.
\begin{figure*}
\centering
\includegraphics*[width=150mm]{Bppg5.eps}
\caption{Main-sequence evolution of low-order frequency spectra of radial
and dipole oscillations of stellar models with masses 1.80, 1.85
and 1.90~$M_{\odot}$.
In each panel, the leftmost and rightmost points correspond to the ZAMS and
to the TAMS models, respectively.
Large filled circles denote unstable modes.
For simplicity, for $\ell = 1$ only axisymmetric ($m = 0$) components
of the dipole multiplets are shown.
Rectangular boxes mark the observational frequency and
effective temperature range of FG Vir.
The vertical line in each panel denotes
a model whose radial fundamental frequency ({\bf F}-mode) fits
the observational frequency $f_6$ = 12.154 c/d.
Only models with masses 1.815--1.875~$M_{\odot}$
fit the allowed temperature range}
\end{figure*}
A number of gravity modes must be excited in FG~Vir,
if the assumption of $f_6 \equiv $ {\bf F} is true, because the two lowest
frequencies are more than 25\% lower than the {\bf F}-mode.
Moreover, during the MS-evolution the frequencies of low-order g-modes
increase and approach consecutively
the frequencies of low-order
p-modes resulting in mode interactions and avoided crossings
(see Unno et al. 1989 and references therein). The frequency spectrum
is much more complicated than in the case of pure p- or g-modes, as
shown in Fig.~5 for dipole modes.
The avoided crossing phenomenon takes place approximately in the middle of
the observed frequency interval. Therefore, most of the excited modes at
these and at lower frequencies are of mixed character: they behave like
p-modes in the envelope and like g-modes in the interior. In the
1.85~$M_{\odot}$ model with
$\log \, T_{\rm eff} = 3.876$, modes $g_1$, $p_2$ and $p_3$ are of
mixed character. The frequencies of modes at avoided crossing are sensitive
to the structure of the deep stellar interior. Consequently,
the detection of these modes is important for testing convective
overshooting theories (Dziembowski \& Pamyatnykh 1991).
Avoided crossings for quadrupole modes in the models of FG~Vir
occur close to the upper border of the observed frequency interval and also
close to the {\bf F}-mode.
This means that most of $\ell =2$
p-modes in the interval already interacted with gravity modes and are of
mixed character.
\subsection{Effects of different input parameters on the FG~Vir models}
The 1.85 $M_{\odot}$ model for FG~Vir, which was discussed in the
previous subsection, will be referred to as the standard or reference model
with the input parameters: $X=0.70$, $Z=0.02$, $d_{\rm over}=0$,
$\alpha = 1.0 $, $V_{\rm rot,ZAMS}=50 $~km/s and
OPAL opacities.
To examine the effects of varying input parameters on the predicted
frequency spectrum, all these and the stellar mass were varied,
under the condition that ${\bf F} \equiv f_6$.
\begin{table*}
\begin{center}
\caption{Parameters of FG~Vir models with ${\textbf F} \equiv f_6$.
The symbols have their usual meaning (see text). For the opacity,
$\kappa$, the OPAL, OP or artificially
modified OPAL data were used.
$p_1 / p_4$ is the ratio of the frequency of the radial fundamental mode, $f(p_1)$,
to that of the third overtone, $f(p_4)$}
\begin{tabular}{|cccccccccccccc|}
\hline
Model & $M/M_{\odot}$ & $X$ & $Z$ & $\log T_{\rm eff}$ & $\log L$
& $R/R_{\odot}$ & $\log \, g$ & $V_{\rm rot}$ & $\alpha$ & $d_{\rm over}$
& $\kappa$ & $\bar\rho/\bar\rho_{\odot}$ & $p_1 / p_4$\\
\hline
1 & 1.85 & 0.70 & 0.02 & 3.8760 & 1.1690 & 2.274 & 3.988 & 45 & 1.0 & 0.0 & OPAL &
0.1571 & 0.5236\\
2 & 1.82 & 0.70 & 0.02 & 3.8701 & 1.1406 & 2.261 & 3.985 & 45 & 1.0 & 0.0 & OPAL &
0.1571 & 0.5233\\
3 & 1.85 & 0.70 & 0.02 & 3.8761 & 1.1696 & 2.274 & 3.989 & 31 & 1.0 & 0.0 & OPAL &
0.1560 & 0.5231\\
4 & 1.85 & 0.70 & 0.02 & 3.8756 & 1.1676 & 2.274 & 3.983 & 67 & 1.0 & 0.0 & OPAL &
0.1570 & 0.5248\\
5 & 1.85 & 0.70 & 0.02 & 3.8753 & 1.1656 & 2.272 & 3.976 & 90 & 1.0 & 0.0 & OPAL &
0.1563 & 0.5266\\
\hline
6 & 1.85 & 0.70 & 0.02 & 3.8760 & 1.1691 & 2.274 & 3.988 & 45 & 2.0 & 0.0 & OPAL &
0.1570 & 0.5236\\
\hline
7 & 1.85 & 0.70 & 0.02 & 3.8796 & 1.1837 & 2.275 & 3.987 & 45 & 1.0 & 0.2 & OPAL &
0.1558 & 0.5231\\
\hline
8 & 1.85 & 0.65 & 0.03 & 3.8734 & 1.1562 & 2.267 & 3.990 & 45 & 1.0 & 0.0 & OPAL &
0.1584 & 0.5227\\
\hline
9 & 2.00 & 0.70 & 0.03 & 3.8748 & 1.1844 & 2.327 & 4.001 & 46 & 1.0 & 0.0 & OPAL &
0.1574 & 0.5231\\
\hline
10 & 1.72 & 0.70 & 0.02 & 3.8754 & 1.1507 & 2.233 & 3.972 & 45 & 1.0 & 0.0 & OP &
0.1542 & 0.5262\\
\hline
11 & 1.95 & 0.70 & 0.02 & 3.8712 & 1.1600 & 2.301 & 4.000 & 45 & 1.0 & 0.0 & mod.OPAL &
0.1588 & 0.5204\\
12 & 1.95 & 0.70 & 0.02 & 3.8746 & 1.1738 & 2.301 & 4.002 & 32 & 1.0 & 0.2 & mod.OPAL &
0.1597 & 0.5195\\
\hline
\end{tabular}
\end{center}
\end{table*}
The changes introduced by using different opacities or non-standard chemical
composition were mainly compensated by changes in mass, in order
to fulfill the only identification we use.
The main characteristics of twelve models of that series are given in Table~3.
Model 2 differs from model 1 (our reference model) in mass;
models 3, 4 and 5 differ in rotational velocity; model 6 versus model 1
demonstrates the effect of changing the mixing-length parameter $\alpha$;
model 7 versus model 1 shows the effect of overshooting;
models 8 and 9 have non-standard chemical composition;
finally, models 10, 11 and 12 differ from model 1 in opacity (additionally,
overshooting is taken into account in model 12).
Note the significantly larger spread in stellar mass between the different
models (1.72--2.00 $M_{\odot}$) than the interval
of 1.815--1.875 $M_{\odot}$ obtained for the standard choice of input
parameters discussed in the previous subsection.
The same is true for the mean density range:
in Table~3 it varies between $\bar\rho/\bar\rho_{\odot}$ = 0.1542 and 0.1597
(or between 0.1558 and 0.1584 when using only OPAL opacities). This spread
is at least one order of magnitude larger than for the standard input data.
Nevertheless, this seismic estimate of the mean density, which is based both on
the well-determined effective temperature and the one mode identification we are
using, provides a strong constraint on possible FG Vir models \footnote
{For the multiperiodic $\delta$~Scuti-type star XX~Pyxidis, for example,
there is no observational information about mode identification.
Therefore it was necessary to consider a large number of models with very
different mean densities (Pamyatnykh et al. 1998).
}.
We note that despite the quite different stellar masses of 1.7 -- 2.0 $M_{\odot}$
(see Table~3), the evolutionary tracks for all 12 models in their MS-part
lie well inside the region of the 1.80 -- 1.90 $M_{\odot}$ tracks of the standard set.
Including a luminosity estimate of $\log (L/L_{\odot}) \approx 1.1 - 1.2$
from the trigonometric parallax determined by Hipparcos, all MS model
tracks pass through the error box, as well as through the error box in the
$\log g$-$\log T_{\rm eff}$-diagram.
On the contrary, none of the post-MS
models fits such a combination of parameters.
\subsection{The problem of the radial frequency ratio}
Viskum et al. (1998) identified $f_3$ as a radial mode. We note here
that the phase-difference method presented earlier in this paper allows
both $\ell$ = 0 and 1 for $f_3$, i.e. radial as well as nonradial pulsation. We
will now examine the radial hypothesis.
In the last column of Table~3 the ratio of frequencies of the radial
fundamental mode, $p_1$, and of the third overtone, $p_4$, is given
($f(p_1)/f(p_4)$).
For models 1--10 these values are close to, but not equal to the observed
ratio of $f_6 / f_3 = 0.5193$, independent of which parameter was changed.
This can also be seen in Fig.~6, where the ratio $f(p_1)/f(p_4)$ is plotted
against $f(p_1)$ for a wide range of parameters of $\delta$~Scuti star models.
There are well-defined
monotonic variations of this ratio with changing mass,
effective temperature or chemical composition, but the observed ratio
disagrees with all these results.
\begin{figure*}
\centering
\includegraphics*[width=130mm]{Bppg6.eps}
\caption{Frequency ratio of the radial fundamental mode to the third overtone
for a wide range of parameters of $\delta$~Scuti star models and
of some FG~Vir models from Table~3 (asterisks).
The large filled circle corresponds to the observed frequency
ratio, $f_6 / f_3$, of 0.5193.
Only the models with artificially modified
opacities (such as model 12 of Table~3) can fit the observed ratio}
\end{figure*}
The only exception is model 12 with artificial opacities,
which was constructed in the following way. For FG Vir models, the frequency
ratio is most sensitive to the choice of opacities as it can be seen from
comparison of models 10 and 1 (OP versus OPAL opacities). Physically,
OP data differ from those of OPAL by an underestimation of collective effects
in the stellar plasma; therefore, OP opacities are systematically lower than
OPAL in deep stellar interiors. For FG~Vir models,
this difference in opacity is about 20\% at temperatures above
$10^6$~K. In the envelope, at lower temperatures, OP opacity varies
slightly more monotonically along the radius than does OPAL opacity: some dips
are slightly shallower and some bumps are flatter. The differences
do not exceed 8\%: for example, at a temperature of $14\, 000$~K the OP
opacity is 4\% smaller and at a temperature of $300\, 000$~K it is 7.5\%
higher than the OPAL opacity.
Using the fact that the difference in frequency ratios between model 1 (OPAL)
and model 10 (OP) is comparable with the difference between model 1 and the
observations (but these differences are of opposite sign, see Fig.~7),
we performed a very simple numerical experiment: we artificially scaled
OPAL opacities with a factor, which is the ratio of OPAL to OP data.
More clearly, we used
$\kappa_{\rm modified} = \kappa_{\rm OPAL} \cdot
[\kappa_{\rm OPAL} /\kappa_{\rm OP}] $.
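In terms of the numbers quoted above, this scaling amounts roughly to
\begin{displaymath}
\frac{\kappa_{\rm modified}}{\kappa_{\rm OPAL}} =
\frac{\kappa_{\rm OPAL}}{\kappa_{\rm OP}} \approx
\left\{ \begin{array}{ll}
1/0.96 \approx 1.04 & \mbox{at $T \approx 14\,000$ K,} \\
1/1.075 \approx 0.93 & \mbox{at $T \approx 300\,000$ K,} \\
\sim 1.2 & \mbox{at $T > 10^{6}$ K.}
\end{array} \right.
\end{displaymath}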
Models 11 and 12 were constructed using $\kappa_{\rm modified}$.
For model 12 we additionally set $d_{\rm over} $ to $0.2\, H_p$
and lowered the rotational velocity.
This model fits the observed frequency ratio very nicely as demonstrated in
Fig.~7. However, this agreement should not be construed as an
indicator for a new revision of atomic physics data on opacity, since
the mode identification from Viskum et al. (1998) may not be unique due to
the size of the error bars. Moreover, we cannot exclude additional effects
like nonlinear mode interaction or rotational mode coupling, which may
influence the frequency spectrum. In the last section the problem of
rotational mode coupling will be briefly discussed. The observed variability
of the amplitude of mode $f_3$ is another argument in favour of
possible nonlinear mode interaction.
Viskum et al. (1998) were able to interpret the observed frequency ratio
$f_6$/$f_3 = 0.5193$ as the radial frequency ratio $f(p_1)/f(p_4)$. They did
not construct full evolutionary models but scaled a model
of 2.2$ M_{\odot}$, which was selected to match the observed frequency ratio
with the radial frequency ratio $f(p_1)/f(p_4)$. Using the homology argument,
they estimated the mean density of the true FG~Vir model by
multiplying the mean density
of the 2.2$ M_{\odot}$ model by the square of the ratio
$f_{\rm obs}$/$f_{\rm model}$. In such a way an agreement between observed
frequencies $f_6$, $f_3$ and a pair of radial modes of the scaled model
was achieved by definition. The estimated gravity, luminosity and distance
of the scaled model were found to be in good agreement with the photometric
and the spectroscopic data and with the Hipparcos
parallax. The authors noted that the high precision of their
asteroseismic density estimate
($\bar{\rho}/\bar{\rho}_{\odot} = 0.1645 \pm 0.0005$) is based on a fixed
(solar) metallicity for FG~Vir. Indeed, with our standard choice of chemical
abundances and opacity and assuming the fit ${\textbf F} \equiv f_6$
we estimated the mean density of the FG~Vir model with even five times higher
accuracy (see section 4.2), but possible variations of the global parameters
result in at least one order of magnitude worse precision of this estimate.
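For reference, the scaling used by Viskum et al. (1998) is simply the
homology relation between frequency and mean density,
\begin{displaymath}
f \propto \bar\rho^{1/2}
\quad \Longrightarrow \quad
\bar\rho_{\rm FG\,Vir} = \bar\rho_{\rm model} \left( f_{\rm obs}/f_{\rm model} \right)^2 .
\end{displaymath}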
\subsection{Theoretical frequency spectra versus observations}
Frequency spectra of radial, dipole and quadrupole modes for all 12 models
from Table~3 are shown in Fig.~7. The effects of different choices of input
parameters can be estimated by comparison of the results for different
models. We discuss here both general properties and some peculiarities of these
frequency spectra.
\begin{figure*}
\centering
\includegraphics*[width=180mm]{Bppg7.eps}
\caption{Frequency spectra of radial, dipole and quadrupole
oscillations of various FG~Vir models.
Axisymmetric modes ($m=0$) are marked by enlarged circles.
Model numbers (see Table~3 for parameters) together with some model indicators
are given to the right of the panels.
Vertical solid and dashed lines correspond to observed frequencies --
statistically significant and probable, respectively.
Numbers above some observed frequencies give identifications for the degree
of the modes ($\ell$) based on multicolor photometry data and on the results
by Viskum et al. (1998)}
\end{figure*}
For nonradial oscillations,
evolutionary overlapping of frequency intervals of g- and p-modes
(see Fig.~5) results in avoided crossings, which disturb the approximately
equidistant frequency spacing between acoustic multiplets.
Gravity and mixed modes are very sensitive to the interior structure as can
be seen for models of different chemical composition, different opacities
and for models with and without overshooting. On the contrary, the change
of $\alpha$ (model 6 versus model 1) has a negligible influence on the
frequency spectrum due to ineffective convection in the relatively hot
envelope of FG~Vir.
Rotation splits nonradial multiplets and strongly complicates the frequency
spectra. Except for the models with the slowest rotation, we observe a forest
of quadrupole modes in the low-frequency part of the interval,
with overlapping components of the different multiplets.
The common property of the spectra is a large asymmetry of the rotational
splitting, which is caused by the second-order effects of rotation
(Dziembowski \& Goode 1992).
The asymmetry is larger the higher the order of the p-modes.
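Schematically, to second order in the rotation frequency ${\Omega}$ the
frequencies within a multiplet behave as (cf. Dziembowski \& Goode 1992)
\begin{displaymath}
\nu_{n \ell m} \simeq \nu_{n \ell}
+ m \left( 1 - C_{n \ell} \right) \frac{\Omega}{2\pi}
+ \frac{\Omega^2}{4 \pi^2 \nu_{n \ell}}
\left( D^{(0)}_{n \ell} + m^2 D^{(1)}_{n \ell} \right) ,
\end{displaymath}
where $C_{n \ell}$ is the Ledoux constant; it is the $m^2$ term which
destroys the symmetry of the multiplets.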
It is not trivial to select a model reproducing the observed frequencies
exactly. Simple attempts to minimize frequency differences (O-C)
by a combined variation of input parameters of the reference model
fail due to strong and non-linear sensitivity of gravity modes
to interior structure. At the same time, this strong sensitivity may
help to fit some chosen frequencies without changing the rest of the spectrum
(cf. models 1 and 7, for example). It is obvious from Fig.~7 that
generally it is much easier to fit a low-frequency mode than
a high-frequency mode, because the spectrum is more dense at lower
frequencies.
For some of the observed modes there is no satisfactory $\ell=0-2$ solution:
see, for example, the group $f_4$, $f_{14}$, $f_{18}$ around 21~c/d
or $f_{17}$ at 33~c/d.
Besides geometric cancellation, there are no objections to identifying
frequencies in the gaps with modes of degree $\ell$=3 and $\ell$=4.
Note that even for $\ell=0-2$ the number of unstable modes is a few
times larger than the observed one: in the observational frequency interval
there are 6--7 radial modes, 24 dipole modes and 50--55 quadrupole modes.
Therefore, a presently unknown mode selection mechanism must exist.
Note that most of the models presented in Fig.~7 show a good fit
of the dominant observed mode $f_1$ (12.716 c/d) with an $\ell = 1$ mode of
$m=-1$ or 0. This is in agreement with the mode identification from photometric
phase differences. On the contrary, it is quite difficult to achieve a similar
fit with a dipole mode for the observed mode $f_3$ (23.403 c/d).
In Table~4 we present some results to quantify the fitting of
21~modes (corresponding to the observed frequencies $f_1$ through $f_{22}$,
omitting $f_{16}$) for models 1 (reference model), 3 (low rotation),
8 ($X=0.65$, $Z=0.03$) and 12 (artificially modified opacity + overshooting).
In the cases of close observed frequencies (for example,
$f_2$, $f_{11}$, $f_{21}$ around 24~c/d,
or $f_4$, $f_{14}$, $f_{18}$ around 21~c/d)
we give a few possible identifications for each frequency.
Moreover, if there is no $\ell=0-2$ mode for a given observed frequency, we
show the closest mode of $\ell=3$ or 4: the frequency spectrum of these
modes is so dense that practically everywhere in the observed interval
a fitting within 0.1~c/d is possible.
\begin{table*}
\begin{center}
\caption{Best fits for some FG Vir models}
\begin{tabular}{|ccc|crrc|crrc|crrc|crrc|}
\hline
\multicolumn{3}{|c|}{Observations}& \multicolumn{4}{c|}{Model 1}
& \multicolumn{4}{c|}{Model 3}& \multicolumn{4}{c|}{Model 8}
& \multicolumn{4}{c|}{Model 12}\\
N & $f \,$[c/d] & id. & $\ell$ & $m$ & $f \,$[c/d] & $\mid$O-C$\mid$
& $\ell$ & $m$ & $f \,$[c/d] & $\mid$O-C$\mid$
& $\ell$ & $m$ & $f \,$[c/d] & $\mid$O-C$\mid$
& $\ell$ & $m$ & $f \,$[c/d] & $\mid$O-C$\mid$ \\
\hline
1 & 12.716 & 1 & 1 & -1 & 12.821 & .105 & 1 & -1 & 12.714 & .002 & 1 & -1 & 12.814 & .098
& 1 & -1 & 12.719 & .003\\
& & & 2 & -1 & 12.770 & .054 & & & & & & & &
& & & & \\
\hline
2 & 24.228 & 2/1 & 1 & 0 & 24.436 & .208 & 1 & 1 & 24.083 & .145 & 1 & 0 & 24.453 & .225
& 1 & 1 & 24.250 & .022\\
& & & 3 & -2 & 24.331 & .103 & 3 & -2 & 24.128 & .100 & 3 & -2 & 24.381 & .153
& 3 & -2 & 24.308 & .080\\
\hline
3 & 23.403 & 0 & 0 & 0 & 23.226 & .177 & 0 & 0 & 23.237 & .166 & 0 & 0 & 23.247 & .156
& 0 & 0 & 23.402 & .001\\
& & & 3 & 1 & 23.285 & .118 & & & & & 3 & 1 & 23.317 & .086
& & & & \\
\hline
4 & 21.052 & 2 & 1 & -1 & 20.828 & .224 & 1 & -1 & 20.758 & .294 & 1 & -1 & 20.667 & .385
& 1 & -1 & 20.739 & .313\\
& & & 2 & 2 & 21.767 & .715 & 2 & 2 & 22.005 & .953 & 2 & 2 & 21.723 & .671
& 2 & 2 & 22.106 & 1.054\\
& & & 3 & -2 & 21.079 & .027 & 3 & -3 & 21.092 & .040 & 3 & -2 & 20.961 & .091
& 3 & -3 & 21.024 & .028\\
& & & 4 & -3 & 21.048 & .004 & 4 & 1 & 21.091 & .039 & 4 & -3 & 21.037 & .015
& 4 & -4 & 21.085 & .033\\
\hline
5 & 19.868 & 2 & 2 & -2 & 19.996 & .128 & 2 & -2 & 19.800 & .068 & 1 & 1 & 19.893 & .025
& 2 & -2 & 19.774 & .094\\
& & & & & & & & & & & 2 & -2 & 19.899 & .031
& & & & \\
\hline
6 & 12.154 & 0 & 0 & 0 & 12.161 & .007 & 0 & 0 & 12.156 & .002 & 0 & 0 & 12.152 & .002
& 0 & 0 & 12.157 & .003\\
\hline
7 & 9.656 & 1/2 & 1 & -1 & 9.609 & .047 & 2 & -2 & 9.705 & .049 & 1 & -1 & 9.250 & .406
& 2 & 2 & 9.656 & .000\\
& & & 2 & -1 & 9.556 & .100 & & & & & 2 & 1 & 9.815 & .159
& & & & \\
& & & & & & & & & & & 3 & -3 & 9.661 & .005
& & & & \\
\hline
8 & 9.199 & 2 & 1 & 1 & 9.228 & .029 & 2 & 0 & 9.267 & .068 & 1 & -1 & 9.250 & .051
& 1 & -1 & 9.143 & .056\\
& & & 2 & 0 & 9.242 & .043 & & & & & 2 & -2 & 8.940 & .259
& & & & \\
\hline
9 & 19.228 & (0) & 1 & 0 & 19.204 & .024 & 1 & 0 & 19.217 & .011 & 2 & 0 & 19.265 & .037
& 2 & 0 & 19.326 & .098\\
\hline
10 & 20.288 & (1) & 1 & 1 & 20.162 & .126 & 1 & 1 & 20.293 & .005 & 1 & 0 & 20.385 & .097
& 1 & 1 & 20.206 & .082\\
& & & 3 & 0 & 20.333 & .045 & & & & & & & &
& & & & \\
\hline
11 & 24.200 & - & 1 & 0 & 24.436 & .236 & 1 & 1 & 24.083 & .117 & 1 & 0 & 24.453 & .253
& 1 & 1 & 24.250 & .050\\
& & & 3 & -2 & 24.331 & .131 & 3 & -2 & 24.128 & .072 & 4 & 1 & 24.074 & .126
& 3 & -2 & 24.308 & .108\\
\hline
12 & 16.074 & - & 1 & 0 & 16.191 & .117 & 1 & 0 & 16.170 & .096 & 1 & 0 & 16.124 & .050
& 1 & 0 & 16.211 & .137\\
& & & 2 & 2 & 15.946 & .128 & 3 & -1 & 16.079 & .005 & 3 & -1 & 16.016 & .058
& 2 & 2 & 16.123 & .049\\
& & & 4 & 0 & 16.060 & .014 & 4 & 0 & 16.082 & .008 & 4 & -1 & 16.010 & .064
& 4 & 2 & 16.114 & .040\\
\hline
13 & 34.119 & - & 2 & 0 & 34.231 & .112 & 2 & 0 & 34.221 & .102 & 2 & 1 & 33.916 & .203
& 2 & 2 & 33.970 & .149\\
& & & & & & & & & & & 3 & -2 & 34.172 & .053
& 3 & 0 & 34.019 & .100\\
\hline
14 & 21.232 & - & 1 & -1 & 20.828 & .404 & 1 & -1 & 20.758 & .474 & 1 & -1 & 20.667 & .565
& 1 & -1 & 20.739 & .493\\
& & & 3 & -2 & 21.079 & .153 & 3 & -3 & 21.092 & .140 & 2 & 2 & 21.723 & .491
& 3 & -3 & 21.024 & .208\\
& & & 4 & -4 & 21.368 & .136 & 4 & 0 & 21.367 & .135 & 4 & 0 & 21.167 & .065
& 4 & 0 & 21.147 & .085\\
\hline
15 & 11.110 & - & 2 & -1 & 10.975 & .135 & 2 & -2 & 11.099 & .011 & 2 & 2 & 11.423 & .313
& 2 & 2 & 11.649 & .539\\
& & & 3 & 2 & 11.099 & .011 & 3 & -2 & 11.074 & .036 & 3 & 0 & 10.942 & .168
& 3 & -1 & 11.024 & .086\\
& & & 4 & 0 & 11.092 & .018 & 4 & 0 & 11.085 & .025 & 4 & 4 & 10.983 & .127
& 4 & 2 & 11.059 & .051\\
\hline
17 & 33.056 & - & 2 & 2 & 33.316 & .260 & 1 & -1 & 32.609 & .447 & 1 & -1 & 32.785 & .271
& 1 & -1 & 32.959 & .097\\
& & & 4 & -1 & 33.020 & .036 & 4 & -2 & 33.152 & .096 & 3 & 1 & 33.092 & .036
& 4 & 0 & 33.032 & .024\\
\hline
18 & 21.551 & - & 2 & 2 & 21.767 & .216 & 2 & 2 & 22.005 & .454 & 2 & 2 & 21.723 & .172
& 2 & 2 & 22.106 & .555\\
& & & 3 & -3 & 21.433 & .118 & 4 & -1 & 21.642 & .091 & 4 & -1 & 21.566 & .015
& 4 & -1 & 21.421 & .130\\
\hline
19 & 28.140 & - & 1 & 1 & 27.959 & .181 & 1 & 1 & 28.125 & .015 & 2 & -1 & 28.247 & .107
& 2 & 1 & 28.062 & .078\\
& & & 4 & 1 & 28.205 & .065 & 3 & -2 & 28.133 & .007 & 3 & -1 & 28.101 & .039
& 3 & -1 & 28.163 & .023\\
\hline
20 & 11.195 & - & 2 & -2 & 11.277 & .082 & 2 & -2 & 11.099 & .096 & 2 & 2 & 11.423 & .228
& 2 & 2 & 11.649 & .454\\
& & & 3 & -2 & 11.281 & .086 & 3 & -3 & 11.309 & .114 & 3 & -1 & 11.289 & .094
& 3 & 3 & 11.223 & .028\\
& & & 4 & 2 & 11.231 & .036 & 4 & 3 & 11.181 & .014 & 4 & -2 & 11.285 & .090
& 4 & 4 & 11.175 & .020\\
\hline
21 & 24.354 & - & 1 & 0 & 24.436 & .082 & 1 & 0 & 24.413 & .059 & 1 & 0 & 24.453 & .099
& 1 & 1 & 24.250 & .104\\
& & & 3 & -2 & 24.331 & .023 & 3 & -3 & 24.349 & .005 & 3 & -2 & 24.381 & .027
& 3 & -2 & 24.308 & .046\\
& & & 4 & 0 & 24.398 & .044 & & & & & 4 & 0 & 24.457 & .103
& 4 & 1 & 24.339 & .015\\
\hline
22 & 11.870 & - & 2 & 1 & 12.029 & .159 & 2 & 2 & 11.859 & .011 & 2 & 1 & 11.818 & .052
& 2 & 1 & 11.915 & .045\\
& & & 4 & 4 & 11.846 & .024 & & & & & 3 & -3 & 11.969 & .099
& & & & \\
\hline
\end{tabular}
\end{center}
\end{table*}
Model 12 seems to be the best-fitting model in our series. Note
the excellent agreement in both frequency and $\ell$-identification for
all ten dominant frequencies.
The mean difference (O-C) is about 0.04~c/d for these modes. The fit
for most of the remaining frequencies is also good, once $\ell=3$ and
$\ell=4$ modes are considered. Possible discrepancies at low frequencies
do not appear serious to us due to the strong sensitivity of g-modes
to model parameters. Because of the effects of avoided crossing some
of the quadrupole modes of higher
order are also quite sensitive to small variations of parameters
(cf. $\ell=2$ modes for models 1 and 2 at approximately 32~c/d).
In Fig.~8 we compare the range of observed frequencies and of excited modes
explicitly for the reference model as well as three other models.
FG~Vir is located in the HR-diagram in the middle of the instability strip
(see, for example, Breger \& Pamyatnykh 1998). Consequently, the driving of
oscillations is effective in a wide frequency region which extends over 7
radial overtones.
The independence of the driving efficiency from the mode degree, $\ell$, is
typical for oscillations excited by the opacity mechanism.
We also note that the observed frequency spectrum is divided clearly into two groups:
at 9--13 c/d and at 19--25 c/d. However, the present models do not predict any
instability gap in frequency, as can be seen in Fig.~8.
\begin{figure}
\centering
\includegraphics*[width=88mm]{Bppg8.eps}
\caption{
Normalized growth-rates, $\eta$, of low-degree oscillation modes of some
FG~Vir models from Table 4. Only axisymmetric modes
($m=0$) are shown. Positive values correspond to unstable modes.
Vertical lines mark observed frequencies from Table~1; the lines for probable
frequencies $f_{25}$ to $f_{32}$ are shorter.
The longest line at 12.716~c/d corresponds to the mode $f_1$
with the highest amplitude}
\end{figure}
\subsection{Rotational mode coupling}
An additional factor which influences the frequencies of a rotating star
is the coupling between modes with close frequencies whose azimuthal orders,
$m$, are the same and whose degrees, $\ell$, are also the same or differ by 2.
The effect was described and discussed in detail
by Dziembowski \& Goode (1992) and by Soufi et al. (1998).
The frequency distance between two near-degenerate modes increases when
coupling is taken into account.
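For two such modes with unperturbed frequencies $\nu_1$, $\nu_2$ and a
coupling strength $H_{12}$, standard degenerate perturbation theory gives
the coupled frequencies
\begin{displaymath}
\nu_{\pm} = \frac{\nu_1 + \nu_2}{2}
\pm \sqrt{ \left( \frac{\nu_1 - \nu_2}{2} \right)^2 + |H_{12}|^2 } ,
\end{displaymath}
so that the two modes are shifted by equal amounts in opposite directions
and their separation always exceeds $|\nu_1 - \nu_2|$.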
The significance of this rotational frequency perturbation was demonstrated
by Pamyatnykh et al. (1998) in application to XX~Pyx. It was shown that
at a rotational velocity of about 90 km/s the frequency shifts of coupled
radial and quadrupole (or dipole and octupole) overtones reach
0.1--0.2 c/d.
Therefore, in particular, a significant change
of radial frequency ratios may be expected.
We estimated this effect in some of our FG~Vir models and
found it to be unimportant at rotational velocities of about 45 km/s and
less. For example, for the reference model 1, the frequencies of
the radial fundamental mode, $f(p_1)$, and of the third overtone, $f(p_4)$,
are changed due to coupling with closest axisymmetric quadrupole modes
by -0.0035 c/d and 0.0091 c/d, respectively.
The effect is much stronger
for more rapidly rotating models: for model 5 with $V_{\rm rot}$ = 90~km/s,
the radial fundamental mode and third overtone are shifted by -0.0436 c/d
and 0.1545 c/d, respectively, which results in the change of the
frequency ratio from 0.5266 to 0.5212.
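Explicitly, taking $f(p_1) = 12.154$ c/d and
$f(p_4) = f(p_1)/0.5266 \approx 23.081$ c/d for model 5 from Table 3,
\begin{displaymath}
\frac{12.154 - 0.0436}{23.081 + 0.1545} = \frac{12.110}{23.236} \approx 0.5212 .
\end{displaymath}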
As another example, we were able to reproduce the observed
ratio $f_6 / f_3 = 0.5193$ as the radial frequency ratio for a model with
initial abundances $X=0.65$, $Z=0.03$ and with $V_{\rm rot}$ = 91--92~km/s.
However, such rapid rotation seems rather improbable for FG~Vir with
$v \sin i = 21 \pm 1\, $km/s, as was discussed by Viskum et al. (1998).
Note that the coupling effect is
higher for higher overtones: for model 5, the shift of the radial
sixth overtone frequency (34.306 c/d) is 0.520 c/d due to interaction
with the closest axisymmetric quadrupole mode of 34.219 c/d which is shifted
by the same quantity 0.520 c/d in the opposite direction. Moreover,
$\ell = 2$ modes are affected by coupling with $\ell = 4$ modes, and
so on. We conclude that for rapidly rotating models it is necessary to
take the rotational coupling into account in attempts to fit the observed
frequency spectrum with the theoretical one.
The rotational coupling results also in a mutual contamination of amplitudes
of spherical harmonic components of interacting modes (Soufi et al. 1998,
see examples in Table 4 of Pamyatnykh et al. 1998).
This adversely affects the mode discrimination
by means of multicolor photometry (see the footnote in section 3) and should
influence spectroscopic determinations as well. However,
this effect was found to be important only in models with more rapid
rotation than found for FG~Vir. A more detailed discussion of the rotational
mode coupling problem in connection with the interpretation of the
observed multifrequency spectrum will be given by Dziembowski \& Goupil (1998).
\bigskip
\acknowledgements
We are grateful to M.~Viskum and S.~Frandsen for making the photometry
used in their FG Vir paper available to us, and to W.~Dziembowski for
stimulating discussions. Part of the investigation has been supported by the
Austrian Fonds zur F\"{o}rderung der wissenschaftlichen Forschung,
project number S7304. AAP acknowledges partial financial support
by the Polish Committee for Scientific Research (grant 2-P03D-014-14)
and by the Russian Foundation for Basic Research (grant 98-02-16734).
The optical/ultraviolet spectra of quasars are similar over a wide range of
luminosities and radio properties. The spectra are characterized by strong
continuum emission, broad ($\Delta v >$ 2000 km s$^{-1}$)
emission lines arising from a broad-line region (BLR), and narrow emission
lines ($\Delta v <$ 1000 km s$^{-1}$) arising from a more extended narrow-line
region (NLR). Early photoionization models of the
BLR (e.g., Baldwin \& Netzer 1978; Kwan \& Krolik 1981)
showed that it was possible to reproduce typical BLR line ratios with
a single type of AGN cloud (standard model reviewed and critiqued by, e.g.,
Ferland 1986), but it is clear that the BLR is heterogeneous: (1) high and
low-ionization lines are underpredicted by the standard model
(e.g., Netzer 1985), (2) lines show ionization-dependent velocity shifts
(Gaskell 1982; Espey et al. 1989, 1994), (3) different lines show different
time lags in response to continuum changes (e.g., Korista et al. 1995),
(4) different lines can show dramatic profile differences
(e.g., Netzer et al. 1994).
The fact that single-zone models work as well as they do can be attributed
to the powerful selection effects of ``locally optimally emitting clouds''
(Baldwin et al. 1995; Ferland this volume); many emission lines in the optical
and ultraviolet are preferentially emitted from clouds with a narrow range
of properties.
Even if beset by strong selection effects that dominate its emissions,
the BLR is still an important probe of the spatially unresolvable sub-parsec
environment of quasars. Gas to fuel the presumed supermassive black hole
central engine must pass through the BLR, as must ISM and IGM-enriching
outflows originating in disk winds or jets.
Statistical relationships among broad-line and other properties provide a
means of investigating physical parameters, such as the black hole
mass and accretion rate, that underlie the appearance of quasars.
As the size of carefully selected quasar samples grows, and with it the
quality of the data available for such samples, so grows the need for
more sophisticated statistical techniques.
One such multivariate technique that has become increasingly applied in AGN
studies and other areas of astrophysics is principal component analysis or PCA
(e.g., Bernstein 1988).
Technically, PCA is the eigenanalysis of the correlation matrix
of a set of input variables; the results are the eigenvectors (or principal
components) and their corresponding eigenvalues. The eigenvectors can be
visualized as the directions in parameter space described by the elliptical
axes of the scatterplot of input variables, and the eigenvalues as a
quantification of the amount of variance in the direction of these axes.
Eigenvector 1 is then the direction in $n$-dimensional parameter space that
accounts for the most variation
in the data set, and can include correlations among many variables.
When PCA is effective, the many input variables (typically measured properties
that would a priori seem unrelated) can be transformed into a few eigenvectors
that may be interpreted as the effect of the important underlying physical
processes.
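In formal terms, for $n$ standardized input variables with correlation
matrix $C$ the eigenanalysis reads
\begin{displaymath}
C e_k = \lambda_k e_k , \qquad
\lambda_1 \geq \lambda_2 \geq \ldots \geq \lambda_n , \qquad
\sum_{k=1}^{n} \lambda_k = n ,
\end{displaymath}
where eigenvector 1 is $e_1$ and $\lambda_1 / n$ is the fraction of the
total variance for which it accounts.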
The discussion that follows describes eigenvector 1 correlations in the
ultraviolet and optical spectra of quasars, how they are related to each
other and other quasar properties, what physics underlies eigenvector 1,
and how all of this is related to the Baldwin effect.
The primary source of variance in the ultraviolet spectra of quasars
involves the equivalent widths of the emission lines, which is one of the
components of the Baldwin effect. If eigenvector 1 is luminosity independent
(points in a direction orthogonal to luminosity), then the physical parameter
underlying eigenvector 1 is the source of scatter in the Baldwin effect.
If eigenvector 1 depends on luminosity (points in a direction
parallel to luminosity), then the physical parameter underlying eigenvector 1
helps create the Baldwin effect. As will be discussed, current data sets
provide contradictory evidence for which is the case.
\section{Ultraviolet Eigenvector 1: The Intermediate Line Region}
Investigations of luminous quasars' broad UV lines
identified strong correlations involving emission-line widths,
shifts, equivalent widths, and ratios
(Francis et al. 1992; Wills et al. 1993; Brotherton et al. 1994a, b).
A simple model developed to explain these trends approximates UV broad lines
as emission from two regions, an intermediate-line region (ILR), and a
very-broad-line region (VBLR), together comprising the traditional BLR.
This decomposition is a simple, approximate explanation for the observation
that broader lined quasars have smaller equivalent widths and different line
ratios when compared to narrower lined quasars.
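Schematically, each observed broad-line profile is then the sum of the two
components,
\begin{displaymath}
F_{\rm line}(v) \simeq F_{\rm ILR}(v) + F_{\rm VBLR}(v) ,
\end{displaymath}
with the ILR/VBLR flux ratio varying from object to object.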
Figure 1 illustrates this using
a composite of narrow-lined quasar spectra (2000 km s$^{-1} < $FWHM$_{CIV} <
3500$\ km s$^{-1}$) and a composite of broad-lined quasar spectra
(6000 km s$^{-1} < $FWHM$_{CIV} < 8000$\ km s$^{-1}$).
The difference spectrum, or ILR spectrum, produced this way is essentially
identical to the first principal component (PC1) spectrum produced by
the spectral PCA of the Large Bright Quasar Survey (Brotherton et al. 1994b;
Francis et al. 1992).
\begin{figure}
\plottwo{FIG1A.ps}{FIG1B.ps}
\caption{The narrow ($solid$) and broad-lined ($dotted$)
continuum-normalized (``EW'') composite spectra of the Ly $\alpha$ region
(left) and C IV (right). The difference spectra are displayed below.
From Brotherton et al. 1994b.}
\end{figure}
\subsection{Photoionization Modeling}
The emission-line ratios of the ILR (difference) spectrum and the VBLR
(broad-lined composite) can be modeled using photoionization codes
such as CLOUDY (Ferland 1993). While single-zone models fail to reproduce
the highest and lowest ionization lines, a two-zone model does a
better job reproducing the heterogeneous BLR. Brotherton et al. (1994b)
showed that the ILR spectrum could be well modeled, whereas the VBLR spectrum
is probably still too heterogeneous for a good single-zone model.
The discriminating diagnostic lines are O III] $\lambda$1663, a
semi-forbidden line, and Al III $\lambda$1860, an important coolant in
high density clouds, which suggest that the ILR is more distant from the
nucleus and less dense than the rest of the BLR.
The observed and derived properties of the ILR, VBLR, and the NLR are
tabulated (from Brotherton et al. 1994b).
\begin{table}
\begin{center}
\centerline{Table 1. Comparison of Emission-Line Regions}
\begin{tabular}{llll}
\tableline\tableline
Property & NLR & ILR & VBLR \\
\tableline
Velocity Dispersion (km s$^{-1}$) & $\sim$500& $\sim$2000& $\sim$7000 \\
Radial Distance (pc) & 10$^{2-3}$ & $\sim$1 & $\sim$0.1 \\
Gas Density ($n_H$, cm$^{-3}$) & 10$^{4-6}$ & $\sim$10$^{10}$ & $\sim$10$^{12.5} $ \\
Ionization Parameter (U = $\phi_i$/n$_H$) & $\sim$0.01 & $\sim$0.01 & $\sim$0.01 \\
Redshift cf. Systemic (km s$^{-1}$) & 0 & $\sim$0 & $\sim$ $-$1000 \\
Covering Factor ($\Omega$/4$\pi$) & $\leq$0.02 & $\leq$0.03 & $\sim$0.24 \\
\tableline
\end{tabular}
\end{center}
\normalsize
\end{table}
Keep in mind that these results were obtained by modeling spectra
derived from the most luminous quasars known, and at least the
size scales can be expected to vary with luminosity (e.g., Kaspi et al. 1996).
This two-component BLR breakdown may be generalized to $n$-components as strong
selection effects permit an ensemble of clouds experiencing a very wide range
of physical conditions to reproduce, reasonably well, a typical quasar spectrum
(Baldwin et al. 1995). The designations of ``ILR'' and ``VBLR,'' and more
specifically the ratio of ILR to VBLR emission, may simply represent the limits
of such an ensemble distribution. Differences in the relative emission of these
limits account for much of the diversity of broad-line profiles, as well
as relations among line strength, line width, asymmetry and peak blueshift.
Comparison with other AGN emission-line regions shows
that the ILR spectrum tends to be intermediate between that of the VBLR and
that of gas more distant from the ionizing continuum, such as the NLR and
extended Ly~$\alpha$ nebulosity. This suggests that the ILR may be more
properly identified as an inner extension of the NLR rather than as a
component of the BLR, a hypothesis strengthened below (\S\ 4).
\section{Optical Eigenvector 1: The Fe II -- [O III] Anti-Correlation}
The object-to-object variation in the optical spectra of low-redshift
quasars is dominated by the inverse correlation between narrow
[O III] $\lambda$5007 (FWHM $\sim$ 500 km s$^{-1}$)
and optical Fe II emission (eigenvector 1 of the PCA of Boroson \& Green 1992
of the parameterized spectra of optically selected quasars from the Bright
Quasar Survey, or BQS). Figure 2 illustrates this trend.
It is important to note that it is not just the equivalent width (EW) of
[O III] $\lambda$5007 involved in the correlation, but also the
luminosity of [O III] $\lambda$5007. There are a number of secondary
properties also correlated with eigenvector 1:
quasars with prominent [O III] $\lambda$5007 and weak optical Fe II
preferentially have broad, red-asymmetric H$\beta$ and are radio-loud
and strong in hard (2 keV) X-rays (e.g., Corbin 1993).
Furthermore, Laor et al. (1997), using a complete subset
of the PG quasars, found that the soft X-ray spectral slope, $\alpha_x$,
is strongly correlated with [O III]/Fe II in the sense that strong Fe II
emitting objects have steep soft X-ray spectra (the extreme of these are
identified with the narrow-line Seyfert 1 objects).
\begin{figure}
\plotfiddle{fe2o3.eps}{6.6cm}{0}{39}{39}{-120}{-75}
\caption{Low-redshift quasars, after Fig. 2 of Wills \& Brotherton (1996).
EWs are restframe \AA. Log EW[FeII] = 1 are $\sim$1.5$\sigma$ upper limits.
Log EW[OIII] = 0.5 are $\sim$3$\sigma$ upper limits. Low-ionization BALQSOs
show excessive Fe II and negligible [O III] $\lambda$5007 emission.}
\end{figure}
\section{The ``Unified'' Eigenvector 1}
In order to study simultaneously the statistical behavior of both
optical and ultraviolet spectral properties, it is necessary to observe
a wide range of wavelengths: for low-redshift quasars, the optical and
the ultraviolet; for high-redshift quasars, the optical and
the near-infrared. This has only recently become possible with the
same data quality as in the optical because of the Hubble Space Telescope (HST)
and new generations of near-IR detectors.
Brotherton (1996b) obtained near-IR spectra of the H$\beta$--[O~III]
$\lambda$5007 region for 32 intermediate to high redshift quasars with
a range in ILR strengths. The strength of narrow-line emission, characterized
by [O III] $\lambda$5007 relative to the continuum and H$\beta$,
is indeed correlated with that of the line cores\footnote{The term ``line
core'' refers to the ILR contribution alone, not simply the emission within
some velocity interval of the peak.} of C IV and C III], and
inversely correlated with optical Fe II emission.
Eigenvector 1 in the optical and the ultraviolet is the same.
This result is corroborated by Wills et al. (this volume), who obtained
HST ultraviolet spectra of a subsample of the BQS.
Marziani et al. (1996) and Wang et al. (1996--based on IUE data)
find that the strength of optical Fe II multiplets is inversely related
to the equivalent width of C IV $\lambda$1549. This is consistent
with our result that the ILR emission (which is the main determinant of
EW C IV, Wills et al. 1993), is inversely correlated with optical Fe II
emission. Thus the relationships among ILR, NLR, and the Fe II emission
appear to hold in lower redshift, lower luminosity quasars.
Table 2 summarizes a large but not exhaustive set of correlated
properties that together comprise a ``unified'' eigenvector 1.
If these quantities related by eigenvector 1 can be understood in terms
of the underlying physics, their variance might constitute a
``fundamental plane'' for quasars by analogy with the ``fundamental plane'' for
galaxies. Therefore understanding eigenvector 1 may allow important physical
parameters to be estimated on the basis of a few easy-to-measure observables.
\begin{table}
\begin{center}
\centerline{Table 2. Correlated Eigenvector 1 Properties}
\begin{tabular}{lll}
\tableline\tableline
{Weak ILR} & {Strong ILR} & Ref. \\
\tableline
Broad Ly $\alpha$, C IV, C III] & Narrow Ly $\alpha$, C IV, C III] & 1, 2, 3, 4 \\
Small EW C IV & Large EW C IV & 1, 2\\
Small EW Ly $\alpha$ & Large EW Ly $\alpha$ & 1, 4 \\
Small Ly $\alpha$/C IV & Large Ly $\alpha$/C IV & 1, 4 \\
Large C IV/$\lambda$1400 Feature & Small C IV/$\lambda$1400 Feature & 2 \\
``Flat-topped'' C IV & ``Sharply Peaked'' C IV & 1, 2 \\
C IV and C III] Blueshifted & C IV and C III] at Systemic $z$ & 3 \\
Weak [O III] $\lambda$5007 & Strong [O III] $\lambda$5007 & 5, 6 \\
Strong Optical Fe II & Weak Optical Fe II & 5, 6 \\
Weak Radio-jets (Radio-quiet) & Strong Radio-jets (Radio-loud) & 3, 5, 6, 7 \\
Steep Soft-X-ray Slope & Flat Soft-X-ray-Slope & 8 \\
Small Hard X-ray Luminosity & Large Hard X-ray Luminosity & 6, 8, 9, 10 \\
Small [O III] $\lambda$5007 Luminosity & Large [O III] $\lambda$5007 Luminosity & 6 \\
Narrow H$\beta$ with Blue Wing & Broad H$\beta$ with Red Wings & 6, 8 \\
Mg II BALQSOs & no Mg II BALQSOs & 11, 12 \\
\tableline
\end{tabular}
\end{center}
REF. 1=Francis et al. 1992. 2=Wills et al. 1993. 3=Brotherton et al. 1994a. 4=Brotherton et al. 1994b. 5=Brotherton 1996b. 6=Boroson \& Green 1992. 7=Francis et al. 1993. 8=Laor et al. 1994, 1997. 9=Corbin 1993. 10=Green 1998. 11=Boroson \& Meyers 1992. 12=Wills \& Brotherton 1996.
\normalsize
\end{table}
\subsection{Physical Explanations}
There is as yet no generally accepted cause for the eigenvector 1 variance.
The large number of related properties poses a special problem as well as an
opportunity. Simple explanations are likely to fail because these
properties represent conditions on all size scales associated with the AGN
phenomenon. For instance, while the soft X-ray slope and ionizing continuum
correlate with eigenvector 1 (e.g., Laor et al. 1997), this alone cannot
explain the extreme range in [O III] $\lambda$5007 equivalent width,
although it might explain some of the line ratio differences (e.g., Mushotzky
\& Ferland 1984; Korista this volume).
While radio-loudness correlates with eigenvector 1, the trends appear to hold
for radio-quiet samples alone, so this property is unlikely to be
fundamental. Rather we are faced with developing an explanation for some
aspects of eigenvector 1 directly, and others more indirectly.
A few possibilities include:
\subsubsection{Orientation.}
The framework of orientation can explain the variation of
many of the eigenvector 1 properties, and probably contributes to the observed
correlations. The line intensity ratio of optical Fe II to [O III]
$\lambda$5007 is larger in core-dominant quasars than in lobe-dominant quasars
(e.g., Zheng \& O'Brien 1990; Jackson \& Browne 1991;
Brotherton 1996a), radio-loud classes believed to differ because of their
orientation to our line of sight (e.g., Orr \& Browne 1982;
Wills \& Brotherton 1995). The line width and asymmetry of H$\beta$ also
vary with core dominance (Wills \& Browne 1986; Brotherton 1996a). Often these
trends have been explained in terms of anisotropic line and axisymmetric
continuum emission (Jackson et al. 1989; Jackson \& Browne 1991).
Others have invoked accretion
disks as the source of the strong, possibly anisotropic Fe II emission (e.g.,
Collin-Souffrin, Hameury, \& Joly 1988; Kwan et al. 1995).
Hard X-ray emission may also vary consistently with inclination
(``face-on'' implies smaller 2 keV flux) if the hard X-rays are produced
by Comptonization by nonthermal electrons above an accretion disk
(Ghisellini et al. 1991).
Orientation appears to fall short in accounting for at least one key item:
[O III] $\lambda$5007 luminosity. Boroson \& Green (1992)
argued against orientation because the luminosity of [O III] $\lambda$5007,
which they took to be an isotropic property (Jackson et al. 1989),
correlated with eigenvector 1, and this was inconsistent with
the strong correlation between continuum luminosity (enhanced for ``face-on''
quasars in the beaming model, thus decreasing EW [O III] $\lambda$5007)
and [O III] $\lambda$5007 luminosity.
An extrapolation of the results of Baker (1997) may provide a boost
for orientation, however. Baker finds, for a large sample of
low-radio-frequency-selected quasars, evidence for aspect-dependent
extinction from dust toroidally distributed between
the BLR and NLR. Trends of the Balmer decrement and optical slope
with core dominance are consistent with this interpretation.
At large angles, [O III] $\lambda$5007 emitting clouds may be partially
obscured. But it is not clear if the obscuration is enough to explain the
range in observed luminosity.
Orientation may be involved in driving eigenvector 1, but if so, it requires
several elements including beaming effects, dust reddening, and selection
biases.
\subsubsection{Eddington Fraction.}
The Eddington fraction is the ratio between the luminosity of an
accreting mass and its Eddington luminosity (the point at which radiation
pressure balances gravity for accreting material). Laor et al. (1997),
following the suggestion that steep $\alpha_x$ quasars are analogous to
`high'-state Galactic black hole candidates (e.g., White, Fabian, \& Mushotzky
1984; Pounds, Done, \& Osborne 1995), explained the $\alpha_x$ vs.
FWHM H$\beta$ correlation in terms of range of Eddington fraction: for a
given luminosity ``narrow'' broad lines (i.e., H$\beta$)
imply a higher Eddington fraction if the line width is gravitational.
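The underlying scaling can be made explicit: if the line width is
gravitational, $M \sim v^2 r / G$ for emitting gas at radius $r$, so that
at fixed luminosity
\begin{displaymath}
\frac{L}{L_{\rm Edd}} \propto \frac{L}{M} \propto \frac{L G}{v^2 r} ,
\qquad
L_{\rm Edd} \simeq 1.3 \times 10^{38} \, (M/M_{\odot}) \ {\rm erg\ s^{-1}} ,
\end{displaymath}
i.e. narrower broad lines imply a smaller black hole mass and hence a
higher Eddington fraction.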
An additional point in favor of this interpretation is that a
steep soft $\alpha_x$ is predicted to arise from a weaker hard X-ray component,
and for the $ROSAT$-observed sample of Laor et al. (1997), it does
appear to be changes in the hard X-rays leading to changes in
$\alpha_x$. Also see Wandel \& Boller (1998).
Boroson \& Green (1992) also argued that the Eddington fraction was
the important parameter. They surmised that optical Fe II emission was
dependent on the covering fraction of the BLR, and that more BLR clouds
(and hence higher accretion rate) would obscure the more distant NLR.
Thus the covering fraction increases from the radio-loud strong [O III]
$\lambda$5007, weak Fe II quasars to the radio-quiet weak
[O III] $\lambda$5007, strong Fe II quasars. They also noted that PG
1700+518, a broad absorption line quasar or BALQSO, is found at the high
covering fraction end.
\subsubsection{Age.}
This explanation is related to the above in the details, but is more
fundamental. Sanders et al. (1988) proposed a scenario in which galaxy mergers
produce dust-rich ultraluminous infrared galaxies, which then evolve
into quasars as the dust and gas are blown out by the AGN activity.
This fueling episode might correspond to a high Eddington fraction.
BALQSOs such as PG 1700+518 might then be characterized as young
or recently refueled quasars. Boroson \& Meyers (1992) noted that
low-ionization BALQSOs constitute 10\% of IR-selected quasars
(not the 1\% of optically selected quasars), and that they show very
weak narrow [O III] $\lambda$5007 emission, and, in an HST survey, Turnshek et
al. (1997) find that 1/3 of weak [O III] $\lambda$5007 quasars show BALs.
Voit et al. (1993) argue that low-ionization BALs are a manifestation of a
``quasar's efforts to expel a thick shroud of gas and dust.'' {\em All} of the
low-ionization BALQSOs in Figure 2 lie at the extreme low [O III] $\lambda$5007,
high Fe II corner.
PG 1700+518 shows evidence for a recent interaction: a nuclear starburst ring
(Hines et al. 1998) and a companion galaxy with a 100 million year old
starburst (Stockton et al. 1998).
Such an environment with a high covering factor of high density/column density
clouds can quench or frustrate radio jets (at least for certain models of
jets), which would explain the anti-correlation between the presence of BALs
and radio power (Stocke et al. 1992; Brotherton et al. 1998) and the
association of radio-loud quasars with strong NLR and ILR emission (Fig. 2;
Francis, Hooper, \& Impey 1993).
\section{Application to the Baldwin Effect}
Even without a physical explanation for eigenvector 1, the relationships
can be used empirically.
In the investigations of Wills et al. (1993) and Brotherton et al. (1994a),
C IV $\lambda$1549 did not display a significant Baldwin effect. These samples
covered only a small range in quasar luminosity. The EW$_{CIV}$
strongly correlated with FWHM$_{CIV}$, suggesting eigenvector 1 to be
the source of scatter in the Baldwin effect. Multiple regression using
both EW$_{CIV}$ {\em and} FWHM$_{CIV}$ as predictors of luminosity should
produce a tightened Baldwin effect if this hypothesis is true.
The Large Bright Quasar Survey or LBQS (Hewett et al. 1995) is the largest
complete optically selected sample anyone has yet studied in detail
(although the luminosity range is relatively small).
Francis et al. (1992) measured the Baldwin effect for a
high-redshift subsample of the LBQS, both for the entire C IV $\lambda$1549
line and also the line core and line wings separately. They found marginally
significant differences suggesting that the line cores contributed most
to the effect.
Figure 3 shows data from the LBQS investigation by Francis et al. (1992),
with an addition: a vector showing the magnitude and direction of eigenvector
1 (from their spectral PCA) for each quasar. The distribution does not
appear independent of luminosity: the left part of
the plot is heavy with up vectors, the right side with down vectors.
Eigenvector 1 is a primary cause of the Baldwin effect in this sample.
Correcting for eigenvector 1 would not reduce the Baldwin effect scatter,
but the Baldwin effect itself.
\begin{figure}
\plotfiddle{lbqsbeland.eps}{6.5cm}{-90}{38}{38}{-150}{220}
\caption{The Baldwin effect in the LBQS sample from Francis et al. (1992).
The arrows on each point indicate the value of principal component 1 (PC1) from
their spectral principal component analysis. Large up arrows indicate
narrow peaky C IV lines, and large down arrows indicate broad, flat-topped
profiles. The fact that the distribution of PC1 weights changes with
luminosity suggests that its variation also drives the Baldwin effect.} \label{fig-1}
\end{figure}
Osmer, Porter, \& Green (1994) created composite spectra for samples of
quasars with different luminosities. Difference spectra showed that the
change in emission-line equivalent width was confined to the low-velocity
gas.
Unfortunately the situation appears to be different at low luminosities.
Boroson \& Green (1992) identify luminosity with eigenvector 2 in their
PCA of the BQS. Wills et al. (this volume) also identify luminosity and
the Baldwin effect with an eigenvector 2. The luminosity ranges of these
samples are again not ideal for investigating the Baldwin effect.
\section{Conclusions and Future Directions}
There is conflicting evidence about whether eigenvector 1 correlations
are driving the Baldwin effect itself or the scatter in the Baldwin effect.
The way to settle this is, of course, high-quality data on a large, carefully selected sample
covering a wide range of luminosity, followed by multivariate analysis.
An appropriate data set does not as yet appear to exist.
\acknowledgments
I would like to thank Bev Wills for her contributions,
both tangible and intangible.
This work has been performed under the auspices of the U.S. Department of Energy
by Lawrence Livermore National Laboratory under Contract W-7405-ENG-48.
\paragraph*{}
Earlier, Figueiredo, Dami\~{a}o Soares and Tiomno (FDT)
\cite{1,2,3} investigated the gravitational coupling of
Klein-Gordon and Dirac fields to matter vorticity and
torsion in some cosmological models. Some years earlier,
H. Rumpf \cite{4} considered a particular case of that
problem, namely a Riemann-flat spectrum for Dirac
particles in the case of constant torsion and
electromagnetic fields. More recently, Claus
L\"{a}mmerzahl \cite{5} considered the
coupling of space-time torsion to the Dirac equation, showing the effects on
the energy levels of atoms, which can be tested by
Hughes-Drever experiments. In that paper he was able to
place a limit on the axial torsion by testing the
anisotropy of anomalous spin couplings and mass. Yet
more recently, I. L. Shapiro \cite{6} and Bagrov,
Buchbinder and Shapiro \cite{7} considered the
possibility of testing torsion theories in the low
energy limit by making use of a non-minimal coupling
Lagrangian with torsion and scalar fields. At this point
it is important to note that in references \cite{1,2,3,4,5}
the authors considered non-propagating torsion; in other
words, they were dealing with Einstein-Cartan gravity,
in which torsion does not propagate. Here, on the contrary, we consider the
general case where torsion propagates and, besides, we
assume a non-minimal coupling in which the Klein-Gordon
scalar fields couple to torsion, contrary to the (FDT)
approach, where only Dirac fields couple to torsion,
through the spinorial connection. This is also the
point of view adopted by Carroll and Field \cite{8},
who showed that for a wide class of models the only
modes of the torsion tensor which interact with matter
are either a massive scalar or a massive spin-1 boson.
In fact, the Lagrangian we adopt here is nothing but a
slight modification of their Lagrangian
\begin{equation}
L=a{\partial}_{\mu}T_{\nu}{\partial}^{\mu}T^{\nu}+b({\partial}_{\mu}T^{\mu})^{2}+cT_{\mu}T^{\mu}
\label{1}
\end{equation}
where $ T^{\mu} $ is the torsion pseudo-vector. Our
Lagrangian is given by
\begin{equation}
L_{T}={\partial}_{\mu}{\phi}{\partial}^{\mu}{\phi}+s\left({\partial}_{\mu}T^{\mu}+T_{\mu}T^{\mu}\right){\phi}^{2}-m^{2}{\phi}^{2}
\label{2}
\end{equation}
This Lagrangian can in fact be obtained from the
following Lagrangian
\begin{equation}
L={\partial}_{\mu}{\phi}{\partial}^{\mu}{\phi}+ sR{\phi}^{2}-m^{2}{\phi}^{2}
\label{3}
\end{equation}
where $ s $ represents the torsion coupling and the last term is the potential energy term. The Lagrangian (\ref{2}) follows from (\ref{3}) by
considering a Riemann-flat space-time, in which the scalar
curvature $ R $ reduces to the torsion terms
\begin{equation}
R={\partial}_{\mu}T^{\mu}+T_{\mu}T^{\mu}
\label{4}
\end{equation}
To simplify matters we shall from now on suppress
indices. Thus variation of the Lagrangian (\ref{2}) with
respect to the torsion and the scalar field $ {\phi} $
yields, respectively, the equations
\begin{equation}
T=-\frac{{\partial}{\phi}}{\phi}
\label{5}
\end{equation}
and
\begin{equation}
{{\partial}^{2}}{\phi}+s{\partial}{\phi}-\frac{{m}^{2}}{2}{\phi}=0
\label{6}
\end{equation}
Let us now solve this system for the case of plane
symmetry, similar to the case of domain walls in
Riemann-Cartan space-time \cite{9}. In this case the
above equations may be written in the form
\begin{equation}
{\phi}''+s{\phi}'-\frac{{m}^{2}}{2}{\phi}=0
\label{7}
\end{equation}
where the primes denote derivatives with respect
to the coordinate $z$. Note that the second term in equation
(\ref{7}) represents a damping, like friction in domain
walls; thus one may say that torsion introduces a sort of
friction into the problem. The LHS of equation (\ref{7}) can be
written in operator form as
\begin{equation}
\hat{H}{\phi}=(\hat{H}_{0}+\hat{H}_{torsion}){\phi}
\label{8}
\end{equation}
where $ \hat{H}_{0} $ is the basic Klein-Gordon Hamiltonian
operator and $ \hat{H}_{torsion} $ is the Hamiltonian torsion
operator given by
\begin{equation}
\hat{H}_{torsion}= s{\frac{\partial}{{\partial}z}}
\label{9}
\end{equation}
A similar, but much more involved, operator has been
constructed by L\"{a}mmerzahl \cite{5}. To solve
(\ref{7}) we make use of the simple ansatz
\begin{equation}
{\phi}=e^{cz}
\label{10}
\end{equation}
Substitution of (\ref{10}) into (\ref{7}) yields the
following constraint equation
\begin{equation}
c^{2}+sc+m^{4}=0
\label{11}
\end{equation}
which is a very simple algebraic equation which yields
\begin{equation}
c=\frac{-s \pm (s^{2}-4m^{4})^{\frac{1}{2}}}{2}
\label{12}
\end{equation}
Substitution of (\ref{12}) into (\ref{10}) yields the
scalar field
\begin{equation}
{\phi}=e^{\frac{-s \pm (s^{2}-4m^{4})^{\frac{1}{2}}}{2}z}
\label{13}
\end{equation}
One may calculate from (\ref{13}) the following energy
\begin{equation}
{\epsilon}=|{\phi}^{'}|^{2}+V({\phi})
\label{14}
\end{equation}
Substitution of (\ref{13}) into (\ref{14}) yields
\begin{equation}
{\epsilon}=\left|\frac{-s \pm (s^{2}-4m^{4})^{\frac{1}{2}}}{2}\right|^{2}e^{2cz}
\label{15}
\end{equation}
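At $z = 0$ the separation between the two corresponding energy levels
follows from (\ref{15}) as
\begin{displaymath}
{\Delta}{\epsilon} = |c_{-}|^{2} - |c_{+}|^{2}
= s \left( s^{2} - 4 m^{4} \right)^{\frac{1}{2}} ,
\qquad
c_{\pm} = \frac{-s \pm (s^{2}-4m^{4})^{\frac{1}{2}}}{2} ,
\end{displaymath}
which is real and non-vanishing only for $s^{2} > 4 m^{4}$.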
From this last expression one notes immediately that
there is a splitting of the spectral lines of the
Klein-Gordon fields, provided that the torsion coupling does not
coincide with twice the mass squared ($s \neq 2m^{2}$). The
physical constraint required here unfortunately makes
the task of measuring torsion directly very difficult, since at
least in astrophysical stellar objects the mass is always much
higher than the torsion, let alone the mass squared. Nevertheless,
indirect measurements can be suggested. Although there is
a double splitting in the energy, one must observe that
this spectrum is continuous. In reference \cite{1} a
similar situation appears; nevertheless, in their case
vorticity plays the role that mass plays here. In other words,
it is the simultaneous presence of torsion and mass that
produces the splitting of the energy levels. It is also
important to note that the torsionless case is not
allowed here, since in this case the energy would be a
complex number. A discrete spectrum can also be found in
reference \cite{1} in the case of the G\"{o}del
cosmological model. In reference \cite{7}, Bagrov et al. obtained a double splitting of the spectral lines of the hydrogen atom by considering the low-energy limit of torsion in the Schr\"{o}dinger equation. Also in their case the splitting is a pure torsion effect and does not depend on the magnetic field. A more detailed analysis of these
experiments may appear elsewhere. Besides, a small change in the potential in our Lagrangian would allow us to investigate gravitational torsion kinks or domain walls.
\section*{Acknowledgments}
\paragraph*{}
I am very grateful to Prof. Claus L\"{a}mmerzahl for
providing fundamental ideas for the development of
this work. Thanks are also due to Prof. Ilya Shapiro
and my colleagues Jim Skea and Rudnei Ramos for helpful
discussions on the subject of this paper. Thanks are
due to CNPq and DAAD (Bonn) for financial support.
\section{Introduction}
Rutherford backscattering is one of the most important and most
commonly applied techniques in surface
analysis. Its main advantages are that it is fully quantitative
and that precisions of better
than 1\% can be achieved \cite{Jeynes97}. The interpretation of the data, however, is in many
cases not
straightforward. During the last decade several computer programs for the simulation and
analysis of spectra obtained from RBS were developed, such as RUMP \cite{Doolittle85}
or SIMNRA \cite{Mayer97}.
With these programs the determination of a depth profile is, however, a matter of
trial and error.
The user has to prescribe depth profiles of all
elements and has to compare the simulated
spectrum calculated from the input profiles with the data.
The depth profiles are then adjusted until one obtains a reasonable agreement of
simulated and measured data.
Obviously this evaluation procedure has several shortcomings.
It is a time-consuming, cumbersome task,
the accuracy of the achieved depth profile is unknown and in many cases there is an ambiguity
between different depth profiles which fit the data equally well.
The combination of the adaptive kernel method in the Bayesian framework \cite{Fischer97}
with an RBS-simulation
program allows one to
overcome these disadvantages and extends the
potential of Rutherford backscattering spectroscopy.
\section{Basic Concepts of Rutherford Backscattering}
In RBS-analysis, a sample is exposed to a beam of ions with mass $m_{0}$
(e.g. He-particles) with a well defined energy $E_{0}$
in the order of MeV. Ions undergoing elastic Coulomb
collisions with sample atoms
are recorded in a solid state detector
which views at a fixed deflection angle $\theta$. The Rutherford cross-section for
this coulombic projectile-target interaction is
quantitatively known. The energy $E'$ of the backscattered ions depends
on the energy $E$ before
the collision, the mass of the ions $m_{0}$, the mass of their colliding
partner $M_{i}$ and the
deflection angle $\theta$ :
\begin{equation}
E'=E\left[\frac{\sqrt{1-\left(\frac{m_{0}}{M_{i}}\right)^{2}\sin^{2}\theta}+\frac{m_{0}}{M_{i}}\cos\theta}{1+\frac{m_{0}}{M_{i}}}\right]^{2}.
\end{equation}
From Eq. 1 we see that ions undergoing a collision with a heavy target atom
lose less
energy than ions colliding with a target atom of lower atomic mass.
In addition, both primary ions and scattered ions lose energy on their way through the sample,
depending on the
stopping power. This energy loss is the main mechanism that makes RBS depth sensitive.
The stopping power depends on the energy of the particles and the composition of the sample.\\
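As a small illustration of Eq. 1, the following Python sketch evaluates the kinematic factor $K=E'/E$; the function name and the example values are ours and do not belong to any analysis package:
\begin{verbatim}
import numpy as np

def kinematic_factor(m0, M_i, theta_deg):
    """Kinematic factor K = E'/E of Eq. 1 for elastic scattering.
    m0, M_i in the same mass units; theta_deg is the lab angle."""
    theta = np.radians(theta_deg)
    r = m0 / M_i
    return ((np.sqrt(1.0 - (r * np.sin(theta))**2)
             + r * np.cos(theta)) / (1.0 + r))**2

# 2.0 MeV 4He backscattered at 165 degrees from 12C and 13C:
for M, name in [(12.0, "12C"), (13.0, "13C")]:
    print(name, 2000.0 * kinematic_factor(4.0, M, 165.0), "keV")
\end{verbatim}
For 2.0 MeV $^{4}$He the two carbon isotopes give surface energies separated by roughly 60 keV, well above the apparatus resolution quoted in the experimental section; this separation is the basis of the isotope-tracer analysis described below.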
\begin{figure}[htb]
\centerline{\psfig{file=figure1.ps,width=11cm,height=7cm}}
\caption{\textit{Schematic diagram of an RBS-experiment a) and the corresponding spectrum b).}}
\label{fig.RBS_Schematic}
\end{figure}
Fig. {\ref{fig.RBS_Schematic}a} depicts a typical RBS experiment. A thin overlayer
(A) of atoms with a high atomic mass $M_{A}$ is on top of the bulk substrate (B) with a
lower atomic mass $M_{B}$. In the energy spectrum of backscattered
particles (Fig. {\ref{fig.RBS_Schematic}b}), the
film A leads to a spectral peak at higher energies, broadened by the apparatus
transfer function
and the statistical fluctuations of the energy
loss of the ions.
Scattering from B produces a broadened step at lower energies.
The high energy side of this step
originates from
scattering from the topmost B-Layer. The increase of the spectrum with decreasing energy
results mainly from the $\frac{1}{E^{2}}$
dependence of the Rutherford cross section.
\section{Simulation of RBS-Spectra}
For a spectrum synthesis the sample is divided into sub-layers with thickness $\Delta x$.
The spectrum is calculated from the superimposed contributions of scattering processes
from all elements in all sub-layers of the sample.
For each sub-layer the concentrations on the layer-boundaries must be given. Inside
the sub-layer the concentration profile is assumed to interpolate linearly.
In each sub-layer the energy loss of the ions inside this layer and the
cross-sections are determined.
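To make this scheme concrete, the following self-contained Python sketch synthesizes a toy spectrum for a homogeneous single-element bulk target. The constant stopping powers, the pure $1/E^{2}$ scaling of the cross-section and all numerical values are simplifying assumptions of ours; the real program uses the cross-section, stopping-power and straggling models described in the following paragraphs:
\begin{verbatim}
import numpy as np

E0, K = 2000.0, 0.256     # incident energy (keV); kinematic factor
S_in, S_out = 0.25, 0.25  # assumed constant stopping powers (keV/nm)
dx, n_layers = 5.0, 150   # sub-layer thickness (nm), number of layers

channels = np.arange(2048.0)   # energy axis, 1 keV per channel
spectrum = np.zeros_like(channels)

depth = 0.0
for _ in range(n_layers):
    mid = depth + 0.5 * dx
    E_layer = E0 - S_in * mid           # energy reaching the sub-layer
    # outgoing path for 165 deg scattering at normal incidence:
    E_det = K * E_layer - S_out * mid / np.cos(np.radians(15.0))
    yield_ = dx / E_layer**2            # Rutherford-like 1/E^2 scaling
    width = np.hypot(8.0, 0.05 * mid)   # resolution + straggling (keV)
    spectrum += (yield_ / width) * np.exp(
        -0.5 * ((channels - E_det) / width)**2)
    depth += dx
\end{verbatim}
The superposition of the broadened sub-layer contributions reproduces the step-like shape of Fig. 1b, rising towards lower energies.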
\paragraph{Cross-Section Data:}
The actual cross-section deviates from the well-known
Rutherford cross-section \cite{Tesmer95} at
both high and low energies. The low-energy discrepancy is caused by partial screening of the
nuclear charges by the electronic shells \cite{Tesmer95}. This screening is taken into
account by a
correction factor $C(E,\Theta)$ \cite{Anderson80}.
At high energies the cross sections deviate from the Rutherford cross-section due to the
influence of the nuclear force \cite{Bozoin95}. This is unimportant in the present case.
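For orientation, the unscreened Rutherford cross-section can be evaluated directly from $e^{2}=1.44$ MeV fm; the sketch below works in the centre-of-mass frame and omits both the screening correction $C(E,\Theta)$ and the laboratory-frame conversion:
\begin{verbatim}
import numpy as np

def rutherford_cm(Z1, Z2, E_MeV, theta_deg):
    """Unscreened Rutherford cross-section in the centre-of-mass
    frame, in mb/sr (e^2 = 1.44 MeV fm, 1 fm^2 = 10 mb)."""
    a = Z1 * Z2 * 1.44 / (4.0 * E_MeV)               # fm
    return 10.0 * a**2 / np.sin(np.radians(theta_deg) / 2.0)**4

# He (Z=2) on carbon (Z=6) at 2 MeV, 165 deg: about 48 mb/sr
print(rutherford_cm(2, 6, 2.0, 165.0))
\end{verbatim}
The screening factor of \cite{Anderson80} multiplies this value and approaches unity at high energies.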
\paragraph{Stopping Power Data:}
The two dominant processes of energy loss of a penetrating ion are the interactions of the
moving ion with bound or free electrons in the target, and the interactions of the moving
ion with the screened or unscreened nuclei of the target atoms.
The electronic stopping power data are taken from Ziegler, Biersack and Littmark
\cite{Ziegler85}.
The nuclear stopping power for helium is calculated
from \cite{Ziegler85}.
In compound materials, Bragg's rule is used,
\begin{equation}
{\left(\frac{dE}{dx}\right)}_{total}=\sum_{i}c_{i}{\left(\frac{dE}{dx}\right)}_{i},
\end{equation}
to calculate the effective stopping power ${\left(\frac{dE}{dx}\right)}_{total}$ from the concentrations $c_{i}$
and the stopping power ${\left(\frac{dE}{dx}\right)}_{i}$ of each individual component $i$.
The key assumption of Bragg's rule, that the interaction between the ion and a target atom is
independent of the environment, holds in most cases. In some compounds such as
oxides the deviations from Bragg's rule predictions may, however, be of the order of
10\% to 20\% \cite{Ziegler88}.
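Eq. 2 transcribes directly into code; the elemental stopping powers below are placeholder numbers, not tabulated values:
\begin{verbatim}
def bragg_stopping(concentrations, elemental_S):
    """Effective stopping power of a compound (Eq. 2).
    concentrations: {element: atomic fraction}, summing to one;
    elemental_S:    {element: stopping power of the pure element}."""
    return sum(c * elemental_S[el] for el, c in concentrations.items())

# An isotopically mixed carbon layer; the two isotopes have identical
# stopping powers, as exploited in the experiment described below:
S = bragg_stopping({"12C": 0.8, "13C": 0.2}, {"12C": 0.25, "13C": 0.25})
\end{verbatim}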
\paragraph{Energy Loss Straggling:}
The energy loss of charged particles penetrating material is accompanied by a spread of the
beam energy which is due to statistical fluctuations of the energy transfer
in the loss channels.
As the number of interactions is high, the energy broadening is well described by a
Gaussian. The program uses Bohr's theory of energy-loss straggling \cite{Bohr48}, together with
corrections by Chu \cite{Chu76}, which include the electron binding in the target atoms.
The energy dependence of the stopping power further results in a non-stochastic
broadening (or squeezing)
of the energy distribution of the ion beam. The energy width $\Delta E_{f}$ after passing the
sub-layer is given by \cite{Szilagy95}:
\begin{equation}
\Delta E_{f}=\frac{S(E_{f})}{S(E_{i})}\Delta E_{i}
\end{equation}
with $E_{i}$, $E_{f}$ as the mean energies and $S(E_{i})$, $S(E_{f})$ as the stopping powers at
the entrance and exit of the sub-layer, respectively.
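In the simulation the beam width is thus propagated through each sub-layer by scaling the incoming width according to Eq. 3 and adding the straggling contribution in quadrature; the quadratic addition is our assumption, standard for Gaussian distributions:
\begin{verbatim}
import numpy as np

def propagate_width(dE_in, S_entry, S_exit, straggling):
    """Beam energy width after one sub-layer (all widths in keV):
    the Eq. 3 scaling followed by quadratic addition of the
    Bohr/Chu straggling contribution."""
    return np.hypot((S_exit / S_entry) * dE_in, straggling)
\end{verbatim}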
\section{Experiment}
The interpretation of RBS data is required for the analysis of erosion
measurements of plasma facing materials in fusion experiments.
The solid inner walls surrounding the plasma are
subjected to an intense bombardment by plasma particles because the confinement of
the plasma by the
magnetic field is not perfect. The surfaces of the inner walls are mainly modified
by ion implantation, erosion and
by deposition of material from other wall areas. \\
One major problem in fusion research is to
find a wall material for which the erosion rate and the wall modifications are small and
tolerable \cite{Behrisch88}.
The importance of this problem for planned fusion power plants is emphasized by an erosion
analysis
for ITER \cite{Brooks98}. The modeled gross erosion yield of a carbon-divertor could reach
a maximum of $5$m/burning-year, which is reduced
by redeposition down to about $0.5$m/burning-year. The modeling, however, faces
exceptional difficulties
due to complex hydrocarbon transport phenomena and the lack of
input data (e.g. for low energy sputtering).
Therefore experimental determination of erosion and redeposition yields
is necessary to
validate the modeling and to improve the quantitative knowledge of the fundamental
erosion processes.\\
To determine carbon erosion rates in the divertor of ASDEX Upgrade, graphite probes
which were covered with a $150$nm layer of $^{13}C$ were exposed to single plasma discharges.
$^{13}C$ was used because chemical erosion is unaffected by isotope
substitution and to allow the measurement of redeposited $^{12}C$ eroded at other
plasma facing components.
Furthermore the stopping power in $^{13}C$ and $^{12}C$
is the same and so the
limited accuracy of the stopping power in the simulation cancels.
\begin{figure}[htb]
\centerline{\psfig{file=figure2.ps,width=9cm}}
\caption{\textit{Poloidal cross-section of ASDEX-Upgrade. The circle
indicates the position of the sample on the outer divertor
in ASDEX-Upgrade.
The separatrix is the outermost closed magnetic flux line. The point where
the separatrix touches the divertor is called the strike point.}}
\label{fig.Divertor}
\end{figure}
The sample was introduced in the outer divertor of ASDEX Upgrade
(circle in Fig. {\ref{fig.Divertor}})
covering in particular the strike point, which is the point where the outermost
closed magnetic
flux line touches the plate surface with a corresponding maximum of the power load.
\\
The samples were analyzed before and after plasma exposure with a total
exposure time of 4 seconds using RBS with 2.0 MeV $^{4}$He ions.
The backscattered particles were detected at a scattering angle
of $\Theta=165^{\circ{}}$. The width of the apparatus transfer function
is about 19keV FWHM \cite{Dose97}.
Fig. {\ref{fig.RBS_Daten}} shows typical spectra before and after plasma exposure.
\begin{figure}[htb]
\centerline{\psfig{file=figure3.ps,width=9cm}}
\caption{\textit{RBS-spectra before and after plasma exposure. The shift of the high
energy edge is clearly visible.}}
\label{fig.RBS_Daten}
\end{figure}
Before plasma exposure the signal from the $^{13}C$-layer at higher energy is separated by a
gap from the part of the spectrum corresponding to the underlying $^{12}C$-bulk material.
After plasma exposure the high energy edge of the signal from $^{13}C$ has shifted
towards lower energies. This indicates that there is no
longer $^{13}C$ at
the surface of the sample. The peak at $430$ keV is due to
the $^{12}C$ at the sample surface and to the $^{13}C$ fraction below the surface.
The difference of the RBS-spectra before and after exposure contains the information
about the erosion and redeposition yields.
\section{Results}
To determine the concentration depth profiles from the measured RBS data a simple
$\chi^{2}$-fit is insufficient and results in useless rapidly oscillating depth profiles.
This is due to the ill-conditioned nature of the inversion problem which results from the
energy-straggling broadening, the finite apparatus-induced energy resolution and the
counting statistics.
Furthermore, the optimal grid, given by the thickness of the sub-layers
into which the sample is divided,
is unknown. \\
For this kind of problem the adaptive kernel method is well suited.
The concept of adaptive kernels provides local smoothness which makes the
result robust against noise corruption. The locality of the information content of
the data is taken into consideration by the local varying kernel widths.
Constraints
like positivity or other prior knowledge (like bulk concentrations) are easy to include.
The adaptive kernel method used here is presented in detail in these proceedings \cite{Rainer99}.
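Purely to fix ideas (the Bayesian determination of the kernel parameters is given in \cite{Rainer99}), a depth profile can be parametrized as a positive superposition of Gaussian kernels whose widths vary with depth, so that the imposed smoothness follows the local information content of the data. All names and numbers in this sketch are illustrative:
\begin{verbatim}
import numpy as np

def profile(z, centers, weights, widths):
    """c(z) as a positive superposition of normalized Gaussian
    kernels with locally varying widths."""
    kern = np.exp(-0.5 * ((np.atleast_1d(z)[:, None] - centers)
                          / widths)**2)
    kern /= widths * np.sqrt(2.0 * np.pi)
    return kern @ weights

depth   = np.linspace(0.0, 300.0, 301)   # nm
centers = np.linspace(0.0, 300.0, 16)
widths  = 10.0 + 0.2 * centers           # broader kernels at depth
weights = np.full(16, 300.0 / 16)        # positivity is built in
c13 = profile(depth, centers, weights, widths)
\end{verbatim}
The weights and widths are then adjusted until the forward-simulated spectrum matches the data.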
\\Fig. {\ref{fig.Depth_Profiles}a} shows the reconstructed $^{12}C$
and $^{13}C-$depth
\begin{figure}[htb]
\centerline{\psfig{file=figure4.ps,width=11cm}}
\caption{\textit{Panels a) and b): $^{12}C$- and $^{13}C$-distributions before
and after plasma exposure.
Panel c): RBS-data
(black dots) and the calculated RBS-spectrum
(grey line) from the depth profile in panel b).}}
\label{fig.Depth_Profiles}
\end{figure}
profiles of a sample before plasma exposure. The concentrations in each layer sum up to one.
The surface concentration of $^{13}C$
(on the left side) is above 90\% and decreases only slightly to a depth of about $150$nm.
The remaining 10\% fraction of $^{12}C$ is caused by impurities in the coating process.
The broad transition between the $^{13}C-$layer and the $^{12}C-$bulk can be explained
by the interface roughness of the virgin sample.
After 4 seconds of plasma exposure the depth profiles have changed
dramatically, as shown in Fig. {\ref{fig.Depth_Profiles}b}.
There is a $^{12}C-$layer with a thickness of about
$70$nm on top of the $^{13}C$. The maximum concentration of $^{13}C$ has decreased; however,
the thickness of the $^{13}C$-layer, at about $170$nm, is nearly unchanged.
Furthermore, there is a continuous
level of $^{12}C$ in the whole sample with a minimum concentration of 20\%.
Since diffusion due to thermal effects could be excluded, the impacting $^{12}C$-atoms must
have mixed the material.
Fig. {\ref{fig.Depth_Profiles}c} shows the RBS-data
as black dots and the calculated RBS-spectrum (solid line) based on the depth profile shown
in Fig. {\ref{fig.Depth_Profiles}b}. The
agreement is within the counting statistics.\\
With samples at different distances from the strike point we achieved a
laterally resolved determination of erosion and deposition
as shown in Fig. {\ref{fig.Result}}. The height of the $^{13}C$-tracer
was $153$nm before exposure (dashed line in Fig. {\ref{fig.Result}}).
\begin{figure}[htb]
\centerline{\psfig{file=figure5.ps,width=9.5cm}}
\caption{\textit{Schematic picture of the $^{12}C$ and $^{13}C$ distribution before and
after plasma exposure. The grey dashed line gives the height of the $^{13}C$-tracer
before plasma exposure. The grey shaded area marks the height of $^{13}C$ after plasma
exposure and the difference between the upper black line and the grey shaded area gives the
height of deposited $^{12}C$.}}
\label{fig.Result}
\end{figure}
The grey shaded area marks the thickness of the $^{13}C$-layer after plasma exposure. The
highest erosion of $40$nm was observed at the strike point.
With increasing distance the erosion
decreases slightly to $\simeq{30}$nm at a distance of $5$cm. The solid line represents the
joint height of the $^{13}C$ and deposited $^{12}C$ under the assumption that no $^{12}C$
from the bulk was eroded. The difference between the solid line and the grey shaded
area of $^{13}C$ is the height of deposited $^{12}C$. The amount of $^{12}C$ which covers the
$^{13}C$ is largest at the strike point, with over $100$nm, and decreases to $10$nm
at a distance of $5.5$cm.
Near the strike point the redeposition of carbon is larger than the erosion,
which makes this location a net deposition zone. By contrast, at distances larger
than $1.5$cm from the strike
point there is a net erosion area. Fig. {\ref{fig.Result}} is only a schematic representation
which shows the total amount of $^{12}C$ and $^{13}C$ in a simplified distribution.
It can be seen from the depth profiles in Fig. {\ref{fig.Depth_Profiles}}
that after
plasma exposure there are no longer clearly separated layers of the two different isotopes
and pronounced mixing has occurred. The large spatial variation of erosion and deposition
rates shows that the lifetime of plasma facing components can only be evaluated
for specific local conditions.
\section{Conclusions}
With the combination of the RBS-simulation program and the adaptive kernel method used here,
the capabilities of RBS-data evaluation have been considerably extended.
This makes it possible to study erosion, deposition and mixing of carbon as an inner wall material
in fusion experiments by using different isotopes which have no influence on the
chemical erosion.
The experiment shows a spatially varying net erosion/deposition rate with large mixing.
Further investigations are necessary to answer the question of the long-term behavior
of the erosion of the inner wall materials facing different plasma conditions.