\section{Introduction}\label{sec:intro} Space provides a useful vantage point for monitoring large-scale trends on the surface of the Earth~\cite{manfreda2018use,albert2017using,yeh2020using}. Accordingly, numerous EO satellite missions have been launched or are being planned. Many EO satellites carry multispectral or hyperspectral sensors that measure the electromagnetic radiation emitted or reflected from the surface, which is then processed to form \emph{data cubes}. These data cubes are valuable inputs to EO applications. However, two thirds of the surface of the Earth is under cloud cover at any given point in time~\cite{jeppesen2019cloud}. In many EO applications, the clouds occlude the targets of interest and reduce the value of the data. Indeed, many weather prediction tasks require clear-sky measurements~\cite{liu2020hyperspectral}. Dealing with cloud cover is part and parcel of practical EO processing pipelines~\cite{transon2018survey, li2019deep-ieee, paoletti2019deep, mahajan2020cloud, yuan2021review}. Cloud mitigation strategies include segmenting and masking out the portion of the data that is affected by clouds~\cite{griffin2003cloud,gomez-chova2007cloud}, and restoring the cloud-affected regions~\cite{li2019cloud,meraner2020cloud,zi2021thin} as a form of data enhancement. Increasingly, deep learning forms the basis of cloud mitigation routines~\cite{li2019deep-ieee,castelluccio2015land,sun2020satellite,yang2019cdnet}. 
\begin{figure}[t]\centering \begin{subfigure}[b]{0.47\linewidth} \centering \includegraphics[width=\linewidth]{./figures/intro/rgb_cloudy.pdf} \caption{Cloudy image (in RGB).} \end{subfigure} \hspace{0.5em} \begin{subfigure}[b]{0.47\linewidth} \centering \includegraphics[width=\linewidth]{./figures/intro/rgb_notcloudy.pdf} \caption{Non-cloudy image (in RGB).} \end{subfigure} \begin{subfigure}[b]{0.47\linewidth} \centering \includegraphics[width=\linewidth]{./figures/intro/b128_patch.pdf} \caption{Adversarial cube to bias the detector in the cloud-sensitive bands.} \label{fig:falsecolor} \end{subfigure} \hspace{0.5em} \begin{subfigure}[b]{0.47\linewidth} \centering \includegraphics[width=\linewidth]{./figures/intro/rgb_patch.pdf} \caption{Adversarial cube blended in the environment in the RGB domain.} \end{subfigure} \vspace{-0.5em} \caption{(Row 1) Cloudy and non-cloudy scenes. (Row 2) Our \emph{adversarial cube} fools the multispectral cloud detector~\cite{giuffrida2020cloudscout} to label the non-cloudy scene as cloudy with high confidence.} \label{fig:example} \end{figure} As the onboard compute capabilities of satellites improve, it has become feasible to conduct cloud mitigation directly on the satellites~\cite{li2018onboard,giuffrida2020cloudscout}. A notable example is CloudScout~\cite{giuffrida2020cloudscout}, which was tailored for the PhiSat-1 mission~\cite{esa-phisat-1} of the European Space Agency (ESA). PhiSat-1 carries the HyperScout-2 imager~\cite{esposito2019in-orbit} and the Eyes of Things compute payload~\cite{deniz2017eyes}. Based on the multispectral measurements, a convolutional neural network (CNN) is executed on board to perform cloud detection, which, in the case of~\cite{giuffrida2020cloudscout}, involves making a binary decision on whether the area under a data cube is \emph{cloudy} or \emph{not cloudy}; see Fig.~\ref{fig:example} (Row 1). 
To save bandwidth, only \emph{non-cloudy} data cubes are downlinked, while \emph{cloudy} ones are not transmitted to ground~\cite{giuffrida2020cloudscout}. However, deep neural networks (DNNs) in general and CNNs in particular are vulnerable to adversarial examples, \ie, carefully crafted inputs aimed at fooling the networks into making incorrect predictions~\cite{akhtar2018threat, yuan2019adversarial}. A particular class of adversarial attacks called physical attacks inserts adversarial patterns into the environment that, when imaged together with the targeted scene element, can bias DNN inference~\cite{athalye2018synthesizing, brown2017adversarial, eykholt2018robust, sharif2016accessorize, thys2019fooling}. In previous works, the adversarial patterns were typically colour patches optimised by an algorithm and then fabricated to conduct the attack. It is natural to ask whether DNNs for EO data are susceptible to adversarial attacks. In this paper, we answer the question in the affirmative by developing a physical adversarial attack against a multispectral cloud detector~\cite{giuffrida2020cloudscout}; see Fig.~\ref{fig:example} (Row 2). Our adversarial pattern is optimised in the multispectral domain (hence is an \emph{adversarial cube}) and can bias the cloud detector to assign a \emph{cloudy} label to a \emph{non-cloudy} scene. Under the mission specification of CloudScout~\cite{giuffrida2020cloudscout}, EO data over the area will thus not be transmitted to ground. \vspace{-1em} \paragraph{Our contributions} Our specific contributions are: \begin{enumerate}[leftmargin=1em,itemsep=2pt,parsep=0pt,topsep=2pt] \item We demonstrate that an adversarial cube can be optimised and realised as an array of exterior paints that exhibit the multispectral reflectance needed to bias the cloud detector. 
\item We propose a novel multi-objective adversarial attack concept, where the adversarial cube is optimised to bias the cloud detector in the cloud-sensitive bands, while remaining visually camouflaged in the visible bands. \item We investigate mitigation strategies against our adversarial attack and propose a simple robustification method. \end{enumerate} \vspace{-1em} \paragraph{Potential positive and negative impacts} Research into adversarial attacks can be misused for malicious activities. On the other hand, it is vital to highlight the potential of the attacks so as to motivate the development of mitigation strategies. Our contributions above are aimed towards the latter positive impact, particularly \#3, where a defence method is proposed. We are hopeful that our work will lead to adversarially robust DNNs for cloud detection. \section{Related work}\label{sec:related_work} Here, we review previous works on dealing with clouds in EO data and on adversarial attacks in remote sensing. \subsection{Cloud detection in EO data}\label{sec:related_hyperspectral} EO satellites are normally equipped with multispectral or hyperspectral sensors, the main differences between the two being the spectral and spatial resolutions~\cite{madry2017electrooptical,transon2018survey}. Each ``capture'' by a multi/hyperspectral sensor produces a data cube, which consists of two spatial dimensions and as many channels as spectral bands in the sensor. Since 66--70\% of the surface of the Earth is cloud-covered at any given time~\cite{jeppesen2019cloud,li2018onboard}, dealing with clouds in EO data is essential. Two major goals are: \begin{itemize}[leftmargin=1em,itemsep=2pt,parsep=0pt,topsep=2pt] \item Cloud detection, where the location and extent of cloud coverage in a data cube are typically estimated; \item Cloud removal~\cite{li2019cloud,meraner2020cloud,zi2021thin}, where the values in the spatial locations occluded by clouds are restored. 
\end{itemize} Since our work relates to the former category, the rest of this subsection is devoted to cloud detection. Cloud detection assigns a \emph{cloud probability} or \emph{cloud mask} to each pixel of a data cube. The former indicates the likelihood of cloudiness at each pixel, while the latter indicates discrete levels of cloudiness at each pixel~\cite{sinergise-cloud-masks}. In the extreme case, a single binary label (\emph{cloudy} or \emph{not cloudy}) is assigned to the whole data cube~\cite{giuffrida2020cloudscout}; our work focusses on this special case of cloud detection. Cloud detectors use either \emph{hand-crafted features} or \emph{deep features}. The latter category is of particular interest because such methods have shown state-of-the-art performance~\cite{lopezpuigdollers2021benchmarking,liu2021dcnet}. The deep features are extracted from data via a series of hierarchical layers in a DNN, where the highest-level features serve as optimal inputs (in terms of some loss function) to a classifier, enabling discrimination of subtle inter-class variations despite high intra-class variations~\cite{li2019deep-ieee}. The majority of cloud detectors that use deep features are based on an extension or variation of Berkeley's fully convolutional network architecture~\cite{long2015fully, shelhamer2017fully}, which was designed for pixel-wise semantic segmentation and demands nontrivial computing resources. For example, \cite{li2019deep} is based on SegNet~\cite{badrinarayanan2017segnet}, while \cite{mohajerani2018cloud, jeppesen2019cloud, yang2019cdnet, lopezpuigdollers2021benchmarking, liu2021dcnet, zhang2021cnn} are based on U-Net~\cite{ronneberger2015u-net}, all of which are unsuitable for on-board implementation. 
\subsection{On-board processing for cloud detection} On-board cloud detectors can be traced back to the thresholding-based Hyperion Cloud Cover algorithm~\cite{griffin2003cloud}, which operated on 6 of the hyperspectral bands of the EO-1 satellite. Li \etal's on-board cloud detector~\cite{li2018onboard} is an integrative application of the techniques of decision tree, spectral angle map~\cite{decarvalhojr2000spectral}, adaptive Markov random field~\cite{zhang2011adaptive} and dynamic stochastic resonance~\cite{chouhan2013enhancement}, but no experimental feasibility results were reported. Arguably the first DNN-based on-board cloud detector is CloudScout~\cite{giuffrida2020cloudscout}, which operates on the HyperScout-2 imager~\cite{esposito2019in-orbit} and Eyes of Things compute payload~\cite{deniz2017eyes}. As alluded to above, the DNN assigns a single binary label to the whole input data cube; details of the DNN will be provided in Sec.~\ref{sec:training}. \subsection{Adversarial attacks in remote sensing} Adversarial examples can be \emph{digital} or \emph{physical}. Digital attacks apply pixel-level perturbations to legitimate test images, subject to the constraint that the perturbations look like natural occurrences, \eg, electronic noise. Classic white-box attacks such as the FGSM~\cite{goodfellow2015explaining} have been applied to attacking CNN-based classifiers for RGB images~\cite{xu2021assessing}, multispectral images~\cite{kalin2021automating} and synthetic aperture radar images~\cite{li2021adversarial}. A key observation is the generalisability of attacks from RGB to multispectral images~\cite{ortiz2018integrated, ortiz2018on}. Generative adversarial networks have been used to generate natural-looking hyperspectral adversarial examples~\cite{burnel2021generating}. 
Physical attacks, as defined in Sec.~\ref{sec:intro}, need only access to the environment imaged by the victim, whereas digital attacks need access to the victim's test images (\eg, in a memory buffer); in this sense, physical attacks have weaker operational requirements and the associated impact is more concerning. For \emph{aerial/satellite RGB imagery}, physical attacks on a classifier~\cite{czaja2018adversarial}, aircraft detectors~\cite{den2020adversarial, lu2021scale} and a car detector~\cite{du2022physical} have been investigated, but only \cite{du2022physical} provided real-world physical test results. For \emph{aerial/satellite multi/hyperspectral imagery}, our work is arguably the first to consider physical adversarial attacks. \section{Threat model}\label{sec:threat_model} We first define the threat model that serves as a basis for our proposed adversarial attack. \begin{description}[leftmargin=1em,itemsep=2pt,parsep=0pt,topsep=2pt] \item[Attacker's goals] The attacker aims to generate an adversarial cube that can bias a pretrained multispectral cloud detector to label non-cloudy space-based observations of scenes on the surface as cloudy. In addition, the attacker would like to visually camouflage the cube in a specific \textbf{region of attack (ROA)}; see Fig.~\ref{fig:rgb_scenes} for examples. Finally, the cube should be physically realisable. \begin{figure}[ht]\centering \begin{subfigure}[b]{0.23\textwidth} \centering \includegraphics[width=\textwidth]{./figures/threat_model/hills-roa.pdf} \caption{Hills.} \label{fig:hills} \end{subfigure} \begin{subfigure}[b]{0.23\textwidth} \centering \includegraphics[width=\textwidth]{./figures/threat_model/desert-roa.pdf} \caption{Desert.} \label{fig:desert} \end{subfigure} \vspace{-0.5em} \caption{Sample regions of attack.} \label{fig:rgb_scenes} \end{figure} \item[Attacker's knowledge] The attacker has full information of the targeted DNN, including architecture and parameter values, \ie, a white-box attack. 
This is a realistic assumption due to the publication of detailed information on the model and training data~\cite{giuffrida2020cloudscout}. Moreover, from a threat mitigation viewpoint, assuming the worst case is useful. \item[Attacker's strategy] The attacker will optimise the adversarial cube on training data sampled from the same input domain as the cloud detector; the detailed method will be presented in Sec.~\ref{sec:attacking}. The cube will then be fabricated and placed in the environment, including the ROA, although Sec.~\ref{sec:limitations} will describe limitations on real-world evaluation of the proposed attack in our study. \end{description} \section{Building the cloud detector}\label{sec:training} We followed Giuffrida \etal.~\cite{giuffrida2020cloudscout} to build a multispectral cloud detector suitable for satellite deployment. \subsection{Dataset}\label{sec:cloud_detectors} We employed the Cloud Mask Catalogue~\cite{francis_alistair_2020_4172871}, which contains cloud masks for 513 Sentinel-2A~\cite{2021sentinel-2} data cubes collected from a variety of geographical regions, each with 13 spectral bands and 20 m ground resolution (1024$\times$1024 pixels). Following Giuffrida \etal., who also used Sentinel-2A data, we used the Level-1C processed version of the data, \ie, top-of-atmosphere reflectance data cubes. We further spatially divided the data into 2052 data (sub)cubes of 512$\times$512 pixels each. To train the cloud detector model, the data cubes were assigned a binary label (\textit{cloudy} vs.~\textit{not cloudy}) by thresholding the number of cloud pixels in the cloud masks. Following Giuffrida \etal., two thresholds were used: 30\%, leading to dataset version TH30, and 70\%, leading to dataset version TH70 (the rationale will be described later). Each dataset was further divided into training, validation, and testing sets. Table~\ref{tab:cm_dataset} in the supp.~material summarises the datasets. 
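The labelling rule above reduces to a single threshold on the fraction of cloud pixels in a cube's mask. A minimal sketch (not the authors' code; the function name is ours):

```python
import numpy as np

def binary_label(cloud_mask, threshold):
    """Label a data cube 'cloudy' (1) vs 'not cloudy' (0) by thresholding the
    fraction of cloud pixels in its mask: 0.30 for TH30, 0.70 for TH70."""
    return int(np.asarray(cloud_mask, dtype=float).mean() >= threshold)
```

A cube whose mask is 50\% cloud pixels would thus be \textit{cloudy} in TH30 but \textit{not cloudy} in TH70.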
\subsection{Model} We employed the CNN of Giuffrida \etal., which contains four convolutional layers in the feature extraction layers and two fully connected layers in the decision layers (see Fig.~\ref{fig:cnn_model} in the supp.~material for more details). The model takes as input 3 of the 13 bands of Sentinel-2A: band 1 (coastal aerosol), band 2 (blue), and band 8 (NIR). These bands correspond to the cloud-sensitive wavelengths; see Fig.~\ref{fig:falsecolor} for a false colour image in these bands. Using only 3 bands also leads to a smaller CNN ($\le 5$ MB), which allows it to fit on the compute payload of CloudScout~\cite{giuffrida2020cloudscout}. Calling the detector ``multispectral'' can be inaccurate given that only 3 bands are used. However, in Sec.~\ref{sec:mitigation}, we will investigate adversarial robustness by increasing the input bands and model parameters of Giuffrida \etal.'s model. \subsection{Training} Following~\cite{giuffrida2020cloudscout}, a two-stage training process was applied: \begin{enumerate}[leftmargin=1em,itemsep=2pt,parsep=0pt,topsep=2pt] \item Train on TH30 to allow the feature extraction layers to recognise ``cloud shapes''. \item Then, train on TH70 to fine-tune the decision layers, while freezing the weights in the feature extraction layers. \end{enumerate} The two-stage training also compensates for the unbalanced distribution of training samples. Other specifications (\eg, learning rate and decay schedule, loss function) also follow those of Giuffrida \etal.; see~\cite{giuffrida2020cloudscout} for details. Our trained model has a memory footprint of 4.93 MB (1,292,546 32-bit float weights), and testing accuracy and false positive rate of 95.07\% and 2.46\%, respectively. \section{Attacking the cloud detector}\label{sec:attacking} Here, we describe our approach to optimising adversarial cubes to attack multispectral cloud detectors. 
\subsection{Adversarial cube design}\label{sec:material_selection} Digitally, an adversarial cube $\mathbf{P}$ is the tensor \begin{equation*} \mathbf{P} = \begin{pmatrix} \mathbf{p}_{1,1} & \mathbf{p}_{1,2} & \cdots & \mathbf{p}_{1,N} \\ \mathbf{p}_{2,1} & \mathbf{p}_{2,2} & \cdots & \mathbf{p}_{2,N} \\ \vdots & \vdots & \ddots & \vdots \\ \mathbf{p}_{M,1} & \mathbf{p}_{M,2} & \cdots & \mathbf{p}_{M,N} \end{pmatrix} \in [0,1]^{M \times N \times 13}, \end{equation*} where $M$ and $N$ (in pixels) are the sizes of the spatial dimensions, and $\mathbf{p}_{i,j} \in [0,1]^{13}$ is the intensity at pixel $(i,j)$ corresponding to the 13 multispectral bands of Sentinel-2A. Physically, $\mathbf{P}$ is to be realised as an array of exterior paint mixtures (see Fig.~\ref{fig:colour_swatches}) that exhibit the multispectral responses needed to generate the attack. The real-world size of each pixel of $\mathbf{P}$ depends on the ground resolution of the satellite-borne multispectral imager (more on this in Sec.~\ref{sec:limitations}). \subsubsection{Material selection and measurement} To determine the appropriate paint mixtures for $\mathbf{P}$, we first built a library of multispectral responses of exterior paints. Eighty exterior paint swatches (see Fig.~\ref{fig:colour_swatches_real}) were procured and scanned with a Field Spec Pro 3 spectrometer~\cite{asd2008fieldspec3} to measure their reflectance (Fig.~\ref{fig:paint_reflectance}) under uniform illumination. To account for solar illumination when viewed from orbit, the spectral power distribution of sunlight (specifically, the AM1.5 Global Solar Spectrum~\cite{astm2003specification}; Fig.~\ref{fig:solar_spectrum}) was factored into our paint measurements via element-wise multiplication to produce the apparent reflectance (Fig.~\ref{fig:paint_apparent_reflectance}). 
Lastly, we converted the continuous spectral range of the apparent reflectance of a colour swatch to the 13 Sentinel-2A bands by averaging over the bandwidth of each band; Fig.~\ref{fig:paint_13bands}. The overall result is the matrix \begin{align} \mathbf{C} = \left[ \begin{matrix} \mathbf{c}_1, \mathbf{c}_2, \dots, \mathbf{c}_{80} \end{matrix} \right] \in [0,1]^{13 \times 80} \end{align} called the \emph{spectral index}, where $\mathbf{c}_q \in [0,1]^{13}$ contains the reflectance of the $q$-th colour swatch over the 13 bands. \begin{figure}[ht] \centering \includegraphics[width=1.0\columnwidth]{./figures/methods/colour_swatches_diagram.pdf} \vspace{-2.0em} \caption{The adversarial cube (digital size $4 \times 5$ pixels in the example) is to be physically realised as a mixture of exterior paint colours that generate the optimised multispectral responses.} \label{fig:colour_swatches} \end{figure} \begin{figure}[ht] \centering \includegraphics[width=1.0\columnwidth]{./figures/methods/colour_swatches.pdf} \vspace{-1.5em} \caption{A subset of our colour swatches (paint samples).} \label{fig:colour_swatches_real} \end{figure} \begin{figure*}[ht]\centering \begin{subfigure}[b]{0.24\textwidth} \includegraphics[width=\textwidth]{./figures/methods/ybr_reflectance.pdf} \caption{Reflectance of a colour swatch.} \label{fig:paint_reflectance} \end{subfigure} \begin{subfigure}[b]{0.24\textwidth} \includegraphics[width=\textwidth]{./figures/methods/solar_spectrum.pdf} \caption{AM1.5 Global Solar Spectrum.} \label{fig:solar_spectrum} \end{subfigure} \begin{subfigure}[b]{0.24\textwidth} \includegraphics[width=\textwidth]{./figures/methods/ybr_apparent_reflectance.pdf} \caption{Apparent reflectance of (a).} \label{fig:paint_apparent_reflectance} \end{subfigure} \begin{subfigure}[b]{0.24\textwidth} \includegraphics[width=\textwidth]{./figures/methods/ybr_13bands.pdf} \caption{13 Sentinel-2 bands of (c).} \label{fig:paint_13bands} \end{subfigure} \vspace{-0.5em} \caption{Process of 
obtaining the 13 Sentinel-2 spectral bands of a colour swatch.} \label{fig:spectrometer} \end{figure*} \subsubsection{Adversarial cube parametrisation} We obtain $\mathbf{p}_{i,j}$ as a linear combination of the spectral index \begin{align}\label{eq:convex} \mathbf{p}_{i,j} = \mathbf{C}\cdot \sigma(\mathbf{a}_{i,j}), \end{align} where $\mathbf{a}_{i,j}$ is the real vector \begin{align} \mathbf{a}_{i,j} = \left[ \begin{matrix} a_{i,j,1} & a_{i,j,2} & \dots & a_{i,j,80} \end{matrix} \right]^T \in \mathbb{R}^{80}, \end{align} and $\sigma$ is the softmax function \begin{align} \sigma(\mathbf{a}_{i,j}) = \frac{1}{\sum^{80}_{d=1} e^{a_{i,j,d}}} \left[ \begin{matrix} e^{a_{i,j,1}} & \dots & e^{a_{i,j,80}} \end{matrix} \right]^T. \end{align} Effectively, $\mathbf{p}_{i,j}$~\eqref{eq:convex} is a convex combination of $\mathbf{C}$. Defining each $\mathbf{p}_{i,j}$ as a linear combination of $\mathbf{C}$ supports the physical realisation of each $\mathbf{p}_{i,j}$ through proportional mixing of the existing paints, as in colour printing~\cite{sharma2017digital}. Restricting the combination to be convex, thereby placing each $\mathbf{p}_{i,j}$ in the convex hull of $\mathbf{C}$, contributes to the sparsity of the coefficients~\cite{caratheodory-theorem}. In Sec.~\ref{sec:opimisation}, we will introduce additional constraints to further enhance physical realisability. 
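The convex-combination parametrisation above can be sketched as follows; this is an illustrative numpy version (our own helper names), not the authors' implementation:

```python
import numpy as np

def softmax(a):
    """Softmax over the 80 paint coefficients (max-shifted for stability)."""
    e = np.exp(a - a.max())
    return e / e.sum()

def cube_pixel(C, a):
    """p_{i,j} = C . softmax(a_{i,j}): a convex combination of the paint
    spectra in the spectral index C (13 x 80), so p_{i,j} lies in conv(C)."""
    return C @ softmax(a)
```

Because the softmax weights are non-negative and sum to one, every band value of $\mathbf{p}_{i,j}$ is bounded by the per-band minimum and maximum over the 80 swatches.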
To enable the optimal paint mixtures to be estimated, we collect the coefficients for all $(i,j)$ into the set \begin{align} \mathcal{A} = \{ \mathbf{a}_{i,j} \}^{j = 1,\dots,N}_{i=1,\dots,M}, \end{align} and parametrise the adversarial cube as \begin{equation*} \mathbf{P}(\mathcal{A}) = \begin{pmatrix} \mathbf{C}\sigma(\mathbf{a}_{1,1}) & \mathbf{C}\sigma(\mathbf{a}_{1,2}) & \cdots & \mathbf{C}\sigma(\mathbf{a}_{1,N}) \\ \mathbf{C}\sigma(\mathbf{a}_{2,1}) & \mathbf{C}\sigma(\mathbf{a}_{2,2}) & \cdots & \mathbf{C}\sigma(\mathbf{a}_{2,N}) \\ \vdots & \vdots & \ddots & \vdots \\ \mathbf{C}\sigma(\mathbf{a}_{M,1}) & \mathbf{C}\sigma(\mathbf{a}_{M,2}) & \cdots & \mathbf{C}\sigma(\mathbf{a}_{M,N}) \end{pmatrix}, \end{equation*} where $\mathbf{p}_{i,j}(\mathcal{A})$ is pixel $(i,j)$ of $\mathbf{P}(\mathcal{A})$. Optimising a cube thus reduces to estimating $\mathcal{A}$. \subsection{Data collection for cube optimisation}\label{sec:data_collection} Based on the attacker's goals (Sec.~\ref{sec:threat_model}), we collected Sentinel-2A Level-1C data products~\cite{2021copernicus} over the globe with a distribution of surface types that resembles that of the Hollstein dataset~\cite{hollstein2016ready-to-use}. The downloaded data cubes were preprocessed following~\cite{francis_alistair_2020_4172871}, including spatial resampling to achieve a ground resolution of 20~m and a size of $512 \times 512 \times 13$. Sen2Cor~\cite{main-knorn2017sen2cor} was applied to produce probabilistic cloud masks, and a threshold of 0.35 was applied to the probabilities to decide \textit{cloudy} and \textit{not cloudy} pixels. The binary cloud masks were further thresholded at 70\% cloudiness (Sec.~\ref{sec:cloud_detectors}) to yield a single binary label for each data cube. The data cubes were then evaluated with the cloud detector trained in Sec.~\ref{sec:training}. 
Data cubes labelled \emph{not cloudy} by the detector were separated into training and testing sets \begin{align} \mathcal{D} = \{ \mathbf{D}_k \}^{2000}_{k=1}, \;\;\;\; \mathcal{E} = \{ \mathbf{E}_\ell \}^{400}_{\ell=1}, \end{align} for adversarial cube training. One data cube $\mathbf{T} \in \mathcal{D}$ is chosen as the ROA (Sec.~\ref{sec:threat_model}). \begin{figure*}[ht]\centering \includegraphics[width=0.95\linewidth]{./figures/methods/pipeline.pdf} \vspace{-0.5em} \caption{Optimisation process for generating adversarial cubes.} \label{fig:pipeline} \end{figure*} \subsection{Optimising adversarial cubes}\label{sec:patch} We adapted Brown \etal's~\cite{brown2017adversarial} method, originally developed for optimising adversarial patches (visible domain). Fig.~\ref{fig:pipeline} summarises our pipeline for adversarial cube optimisation, with details provided in the rest of this subsection. \vspace{-1em} \paragraph{Subcubes} First, we introduce the subcube notation. Let $b \subseteq \{1,2,\dots,13\}$ index a subset of the Sentinel-2A bands. Using $b$ in the superscript of a data cube, \eg, $\mathbf{P}^{b}$, implies extracting the subcube of $\mathbf{P}$ with the bands indexed by $b$. Of particular interest are the following two band subsets: \begin{itemize}[leftmargin=1em,itemsep=2pt,parsep=0pt,topsep=2pt] \item $c = \{1, 2, 8\}$, \ie, the cloud-sensitive bands used in~\cite{giuffrida2020cloudscout}. \item $v = \{2, 3, 4\}$, \ie, the visible bands. \end{itemize} \subsubsection{Cube embedding and augmentations}\label{sec:augmentations} Given the current $\mathcal{A}$, the adversarial cube $\mathbf{P}(\mathcal{A})$ is embedded into a training data cube $\mathbf{D}_k$ through several geometric and spectral intensity augmentations that simulate the appearance of the adversarial cube when captured in the field by a satellite. 
The geometric augmentations include random rotations and positioning to simulate variations in the placement of $\mathbf{P}(\mathcal{A})$ in the scene. The spectral intensity augmentations include random additive noise, scaling and corruption to simulate perturbations by ambient lighting. \subsubsection{Loss function and optimisation}\label{sec:opimisation} Define $\mathbf{D}_k(\mathcal{A})$ as the training data cube $\mathbf{D}_k$ embedded with $\mathbf{P}(\mathcal{A})$ (with the augmentations described in Sec.~\ref{sec:augmentations}). The data cube is forward propagated through the cloud detector $f$ to estimate the \emph{confidence} \begin{align} \hat{y}_k = f(\mathbf{D}^c_k(\mathcal{A})) \end{align} of $\mathbf{D}_k(\mathcal{A})$ being in the \emph{cloudy} class. Note that the cloud detector considers only the subcube $\mathbf{D}^c_k(\mathcal{A})$ corresponding to the cloud-sensitive bands. Since we aim to bias the detector to assign high $\hat{y}_k$ to $\mathbf{D}_k(\mathcal{A})$, we construct the loss \begin{align}\label{eq:loss} \Psi(\mathcal{A},\mathcal{D}) = \sum_k -\log(f(\mathbf{D}^c_k(\mathcal{A}))). \end{align} In addition to constraining the spectral intensities in $\mathbf{P}(\mathcal{A})$ to be in the convex hull of $\mathbf{C}$, we also introduce the multispectral non-printability score (NPS) \begin{align}\label{eq:nps_loss} \Phi(\mathcal{A}, \mathbf{C}) = \frac{1}{M N} \sum_{i,j} \left( \min_{\textbf{c} \in \mathbf{C}} \left\| \textbf{p}_{i,j}(\mathcal{A}) - \mathbf{c}\right\|_2 \right). \end{align} Minimising $\Phi$ encourages each $\textbf{p}_{i,j}(\mathcal{A})$ to be close to (one of) the measurements in $\mathbf{C}$, which sparsifies the coefficients $\sigma(\mathbf{a}_{i,j})$ and helps with the physical realisability of $\mathbf{P}(\mathcal{A})$. The multispectral NPS is an extension of the original NPS for optimising (visible domain) adversarial patches~\cite{sharif2016accessorize}. 
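The two loss terms above admit a direct numpy sketch; this is an illustrative re-expression under our own naming, where the detector confidences $f(\mathbf{D}^c_k(\mathcal{A}))$ are assumed to have been computed already:

```python
import numpy as np

def adversarial_loss(cloudy_confidences):
    """Psi: sum of negative log 'cloudy' confidences f(D_k^c(A)) over the
    training cubes; minimising it pushes the detector towards 'cloudy'."""
    return -np.log(np.asarray(cloudy_confidences)).sum()

def non_printability_score(P, C):
    """Phi: mean L2 distance from each cube pixel to its nearest paint
    measurement. P has shape (M, N, 13); C has shape (13, 80)."""
    pixels = P.reshape(-1, P.shape[-1])                                 # (M*N, 13)
    dists = np.linalg.norm(pixels[:, :, None] - C[None, :, :], axis=1)  # (M*N, 80)
    return dists.min(axis=1).mean()
```

Note that a cube whose pixels coincide exactly with columns of $\mathbf{C}$ attains $\Phi = 0$.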
To produce an adversarial cube that is ``cloaked'' in the visible domain in the ROA defined by $\mathbf{T}$, we devise the term \begin{align}\label{eq:cloaking_loss} \Omega(\mathcal{A}, \mathbf{T}) = \left\| \textbf{P}^{v}(\mathcal{A}) - \mathbf{T}^v_{M \times N} \right\|_2, \end{align} where $\mathbf{T}^v_{M \times N}$ is a randomly cropped subcube of spatial height $M$ and width $N$ in the visible bands $\mathbf{T}^v$ of $\mathbf{T}$. The overall loss is thus \begin{equation} L(\mathcal{A}) = \underbrace{\Psi(\mathcal{A},\mathcal{D})}_{\textrm{cloud sensitive}} + \alpha\cdot \underbrace{\Phi(\mathcal{A}, \mathbf{C})}_{\textrm{multispectral}} + \beta \cdot \underbrace{\Omega(\mathcal{A}, \mathbf{T})}_{\textrm{visible domain}}, \label{eq:overall_loss} \end{equation} where weights $\alpha, \beta \ge 0$ control the relative importance of the terms. Notice that the loss incorporates multiple objectives across different parts of the spectrum. \vspace{-1em} \paragraph{Optimisation} Minimising $L$ with respect to $\mathcal{A}$ is achieved using the Adam~\cite{kingma2014adam} stochastic optimisation algorithm. Note that the pre-trained cloud detector $f$ is not updated. \vspace{-1em} \paragraph{Parameter settings} See Sec.~\ref{sec:results}. \subsection{Limitations on real-world testing}\label{sec:limitations} While our adversarial cube is optimised to be physically realisable, two major constraints prevent physical testing: \begin{itemize}[leftmargin=1em,itemsep=2pt,parsep=0pt,topsep=2pt] \item Lack of precise knowledge of and control over the operation of a real satellite makes it difficult to perform coordinated EO data capture with the adversarial cube. \item Cube dimensions of about 100$\times$100 pixels are required for effective attacks, which translates to 2 km$\times$2 km = 4 km$^2$ ground size (based on the ground resolution of the data; see Sec.~\ref{sec:data_collection}). This prevents full scale fabrication on an academic budget. 
However, the size of the cube is well within the realm of possibility, \eg, solar farms and airports can be much larger than $4$ km$^2$~\cite{ong2013land}. \end{itemize} We thus focus on evaluating our attack in the digital domain, with real-world testing left as future work. \section{Measuring effectiveness of attacks}\label{sec:metrics} Let $\mathbf{P}^\ast = \mathbf{P}(\mathcal{A}^\ast)$ be the adversarial cube optimised by our method (Sec.~\ref{sec:attacking}). Recall from Sec.~\ref{sec:data_collection} that both datasets $\mathcal{D}$ and $\mathcal{E}$ contain \emph{non-cloudy} data cubes. We measure the effectiveness of $\mathbf{P}^\ast$ on the training set $\mathcal{D}$ via two metrics: \begin{itemize}[leftmargin=1em,itemsep=2pt,parsep=0pt,topsep=2pt] \item Detection accuracy of the pretrained cloud detector $f$ (Sec.~\ref{sec:training}) on $\mathcal{D}$ embedded with $\mathbf{P}^\ast$, \ie, \begin{equation}\label{eq:accuracy} \text{Accuracy}({\mathcal{D}}) \triangleq \frac{1}{|\mathcal{D}|} \sum^{|\mathcal{D}|}_{k=1} \mathbb{I}(f(\mathbf{D}^c_k(\mathcal{A}^\ast)) \le 0.5), \end{equation} where the lower the accuracy, the less often $f$ predicted the correct class label (\emph{non-cloudy}, based on confidence threshold $0.5$), hence the more effective the $\mathbf{P}^\ast$. \item Average confidence of the pretrained cloud detector $f$ (Sec.~\ref{sec:training}) on $\mathcal{D}$ embedded with $\mathbf{P}^\ast$, \ie, \begin{equation}\label{eq:average_probability} \text{Cloudy}({\mathcal{D}}) \triangleq \frac{1}{|\mathcal{D}|} \sum^{|\mathcal{D}|}_{k=1} f(\mathbf{D}^c_k(\mathcal{A}^\ast)). \end{equation} The higher the average confidence, the more effective the $\mathbf{P}^\ast$. \end{itemize} To obtain the effectiveness measures on the testing set $\mathcal{E}$, simply swap $\mathcal{D}$ in the above with $\mathcal{E}$. 
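Given the detector confidences on the embedded cubes, both metrics reduce to simple averages; a minimal sketch (function name ours):

```python
import numpy as np

def attack_metrics(cloudy_confidences):
    """Accuracy and average 'cloudy' confidence over non-cloudy cubes
    embedded with the optimised cube P*; inputs are f(D_k^c(A*)) in [0, 1]."""
    conf = np.asarray(cloudy_confidences)
    accuracy = (conf <= 0.5).mean()   # detector still outputs 'non-cloudy'
    avg_cloudy = conf.mean()          # average 'cloudy' confidence
    return accuracy, avg_cloudy
```

For example, confidences `[0.9, 0.8, 0.4, 0.95]` give an accuracy of 0.25 (one of four cubes still classified \emph{non-cloudy}) and an average cloud confidence of 0.7625.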
\section{Results}\label{sec:results} We optimised adversarial cubes of size 100$\times$100 pixels on $\mathcal{D}$ (data cubes of 512$\times$512 pixels) under different loss configurations and evaluated them digitally (see Sec.~\ref{sec:limitations} on obstacles to real-world testing). Then, we investigated different cube designs and mitigation strategies for our attack. \subsection{Ablation tests}\label{sec:ablation} Based on the data collected, we optimised adversarial cubes under different combinations of loss terms: \begin{itemize}[leftmargin=1em,itemsep=2pt,parsep=0pt,topsep=2pt] \item $\Psi$: Adversarial biasing in the cloud-sensitive bands~\eqref{eq:loss}. \item $\Phi$: Multispectral NPS~\eqref{eq:nps_loss}. \item $\Omega$-Hills: Cloaking~\eqref{eq:cloaking_loss} with $\mathbf{T}$ as Hills (Fig.~\ref{fig:hills}). \item $\Omega$-Desert: Cloaking~\eqref{eq:cloaking_loss} with $\mathbf{T}$ as Desert (Fig.~\ref{fig:desert}). \end{itemize} The weights in~\eqref{eq:overall_loss} were empirically determined to be $\alpha = 5.0$ and $\beta = 0.05$. \vspace{-1em} \paragraph{Convex hull and NPS} Fig.~\ref{fig:cubes_hull} shows the optimised cubes $\mathbf{P}^\ast$ and their individual spectral intensities $\mathbf{p}^\ast_{i,j}$ in the cloud-sensitive bands (false colour) and the visible domain. Note that without the convex hull constraints, the intensities (green points) are scattered quite uniformly, which complicates the physical realisability of the paint mixtures. The convex hull constraints predictably limit the mixtures to be in the convex hull of $\mathbf{C}$. Carath{\'e}odory's Theorem~\cite{caratheodory-theorem} ensures that each $\mathbf{p}^\ast_{i,j}$ can be obtained by mixing at most 13 exterior paints. In addition, the multispectral NPS term encourages the mixtures to cluster closely around the columns of $\mathbf{C}$ (red points), \ie, close to an existing exterior paint colour. 
\vspace{-1em} \paragraph{Visual camouflage} Fig.~\ref{fig:cubes_loss_images} illustrates optimised cubes $\mathbf{P}^\ast$ embedded in the ROA Hills and Desert, with and without including the cloaking term~\eqref{eq:cloaking_loss} in the loss function. Evidently, the cubes optimised with $\Omega$ are less perceptible. \vspace{-1em} \paragraph{Effectiveness of attacks} Table~\ref{tab:result_loss} shows quantitative results on attack effectiveness (in terms of the metrics in Sec.~\ref{sec:metrics}) on the training $\mathcal{D}$ and testing $\mathcal{E}$ sets---again, recall that these datasets contain only \emph{non-cloudy} data cubes. The results show that the optimised cubes are able to strongly bias the pretrained cloud detector, by lowering the accuracy by at least $63\%$ (1.00 to 0.37) and increasing the cloud confidence by more than $1000\%$ (0.05 to 0.61). The figures also indicate the compromise an attacker would need to make between the effectiveness of the attack, physical realisability and visual imperceptibility of the cube. \begin{table}[ht] \setlength\tabcolsep{1pt} \centering \begin{tabular}{p{4.0cm} | p{1.0cm} p{1.0cm} | p{1.0cm} p{1.0cm}} \rowcolor{black} & \multicolumn{2}{l |}{\textcolor{white}{\textbf{Accuracy}}} & \multicolumn{2}{l}{\textcolor{white}{\textbf{Cloudy}}} \\ \hline \textbf{Loss functions} & \textbf{Train} & \textbf{Test} & \textbf{Train} & \textbf{Test} \\ \hline - (no adv.~cubes) & 1.00 & 1.00 & 0.05 & 0.05 \\ $\Psi$ (no convex hull constr.)
& 0.04 & 0.03 & 0.95 & 0.95 \\ $\Psi$ & 0.13 & 0.12 & 0.81 & 0.83 \\ $\Psi + \alpha\Phi$ & 0.22 & 0.19 & 0.73 & 0.75 \\ $\Psi + \beta\Omega$-Hills & 0.17 & 0.14 & 0.77 & 0.80 \\ $\Psi + \beta\Omega$-Desert & 0.23 & 0.25 & 0.72 & 0.73 \\ $\Psi + \alpha\Phi + \beta\Omega$-Hills & 0.25 & 0.28 & 0.71 & 0.70 \\ $\Psi + \alpha\Phi + \beta\Omega$-Desert & 0.37 & 0.37 & 0.61 & 0.61 \\ \end{tabular} \vspace{-0.5em} \caption{Effectiveness of 100$\times$100 adversarial cubes optimised under different loss configurations (Sec.~\ref{sec:ablation}). Lower accuracy = more effective attack. Higher cloud confidence = more effective attack.} \label{tab:result_loss} \end{table} \begin{figure*}[ht]\centering \begin{subfigure}[b]{0.33\textwidth} \includegraphics[width=\textwidth]{./figures/results/hull/log_nohull.pdf} \caption{$L = \Psi$ (without convex hull constraints).} \end{subfigure} \begin{subfigure}[b]{0.33\textwidth} \includegraphics[width=\textwidth]{./figures/results/hull/log_hull.pdf} \caption{$L = \Psi$.} \end{subfigure} \begin{subfigure}[b]{0.33\textwidth} \includegraphics[width=\textwidth]{./figures/results/hull/log+nps_hull.pdf} \caption{$L = \Psi + \alpha \cdot \Phi$.} \end{subfigure} \vspace{-0.5em} \caption{Effects of convex hull constraints and multispectral NPS on optimised cube $\mathbf{P}^\ast$. The top row shows the cube and individual pixels $\mathbf{p}^\ast_{i,j}$ (green points) in the visible bands $v$, while the bottom row shows the equivalent values in the cloud sensitive bands $c$ (in false colour). 
In the 3-dimensional plots, the red points indicate the columns of the spectral index $\mathbf{C}$ and the black lines its convex hull.} \label{fig:cubes_hull} \end{figure*} \begin{figure}[ht]\centering \begin{subfigure}[b]{0.23\textwidth} \includegraphics[width=\textwidth]{./figures/results/loss/not_camo_hills.pdf} \caption{$L = \Psi + \alpha \Phi$.} \end{subfigure} \begin{subfigure}[b]{0.23\textwidth} \includegraphics[width=\textwidth]{./figures/results/loss/camo_hills.pdf} \caption{$L = \Psi + \alpha \Phi + \beta \Omega$-$\textrm{Hills}$.} \end{subfigure} \begin{subfigure}[b]{0.23\textwidth} \includegraphics[width=\textwidth]{./figures/results/loss/not_camo_desert.pdf} \caption{$L = \Psi + \alpha \Phi$.} \end{subfigure} \begin{subfigure}[b]{0.23\textwidth} \includegraphics[width=\textwidth]{./figures/results/loss/camo_desert.pdf} \caption{$L = \Psi + \alpha \Phi + \beta \Omega$-$\textrm{Desert}$.} \end{subfigure} \vspace{-0.5em} \caption{Optimised cubes $\mathbf{P}^\ast$ shown in the visible domain $v$ with and without the cloaking term~\eqref{eq:cloaking_loss}.} \label{fig:cubes_loss_images} \end{figure} \subsection{Different cube configurations}\label{sec:multcube} Can the physical footprint of the adversarial cube be reduced to facilitate real-world testing? To answer this question, we resize $\mathbf{P}$ to 50$\times$50 pixels and optimise a number of them (4 or 6) instead. We also test random configurations with low and high proximity amongst the cubes. The training pipeline for the multi-cube setting remains largely the same. Fig.~\ref{fig:cubes_config_images} shows (in visible domain) the optimised resized cubes embedded in a testing data cube. Quantitative results on the effectiveness of the attacks are given in Table~\ref{tab:result_cubeconfig}. Unfortunately, the results show a significant drop in attack effectiveness when compared against the 100$\times$100 cube on all loss configurations.
This suggests that the size and spatial continuity of the adversarial cube are important factors in the attack. \begin{table}[ht] \setlength\tabcolsep{1pt} \centering \begin{tabular}{p{0.7cm} | p{1.50cm} | p{1.80cm} | p{1.0cm} p{1.0cm} | p{1.0cm} p{1.0cm}} \rowcolor{black} \multicolumn{3}{l |}{\textcolor{white}{\textbf{Cube configurations}}} & \multicolumn{2}{l |}{\textcolor{white}{\textbf{Accuracy}}} & \multicolumn{2}{l}{\textcolor{white}{\textbf{Cloudy}}} \\ \hline \textbf{\#} & \textbf{Size} & \textbf{Proximity} & \textbf{Train} & \textbf{Test} & \textbf{Train} & \textbf{Test} \\ \hline \multicolumn{3}{l |}{- (no adv.~cubes)} & 1.00 & 1.00 & 0.05 & 0.05 \\ 4 & 50$\times$50 & Low & 0.87 & 0.87 & 0.26 & 0.27 \\ 6 & 50$\times$50 & Low & 0.71 & 0.72 & 0.33 & 0.33 \\ 4 & 50$\times$50 & High & 0.63 & 0.62 & 0.42 & 0.44 \\ 6 & 50$\times$50 & High & 0.63 & 0.63 & 0.40 & 0.41 \\ \end{tabular} \vspace{-0.5em} \caption{Effectiveness of 50$\times$50 adversarial cubes under different cube configurations (Sec.~\ref{sec:multcube}) optimised with loss $L = \Psi + \alpha\Phi$. Lower accuracy = more effective attack. Higher cloud confidence = more effective attack.
Compare with single 100$\times$100 adversarial cube results in Table~\ref{tab:result_loss}.} \label{tab:result_cubeconfig} \end{table} \begin{figure}[ht]\centering \begin{subfigure}[b]{0.23\textwidth} \includegraphics[width=\textwidth]{./figures/results/config/four_random.pdf} \caption{Four 50$\times$50 cubes (low prox).} \end{subfigure} \begin{subfigure}[b]{0.23\textwidth} \includegraphics[width=\textwidth]{./figures/results/config/six_random.pdf} \caption{Six 50$\times$50 cubes (low prox).} \end{subfigure} \begin{subfigure}[b]{0.23\textwidth} \includegraphics[width=\textwidth]{./figures/results/config/four_fixed.pdf} \caption{Four 50$\times$50 cubes (high prox).} \end{subfigure} \begin{subfigure}[b]{0.23\textwidth} \includegraphics[width=\textwidth]{./figures/results/config/six_fixed.pdf} \caption{Six 50$\times$50 cubes (high prox).} \end{subfigure} \vspace{-0.5em} \caption{Optimised cubes $\mathbf{P}^\ast$ shown in the visible domain $v$ of different cube configurations.} \label{fig:cubes_config_images} \end{figure} \subsection{Mitigation strategies}\label{sec:mitigation} We investigated several mitigation strategies against our adversarial attack: \begin{itemize}[leftmargin=1em,itemsep=2pt,parsep=0pt,topsep=2pt] \item 13 bands: Increasing the number of input bands of the cloud detector from 3 to 13 (all Sentinel-2A bands); \item $\sqrt{2}$: Doubling the model size of the cloud detector by increasing the number of filters/kernels in the convolutional layers and activations in the fully connected layers by $\sqrt{2}$; \item $2\times$ CONV: Doubling the model size of the cloud detector by adding two additional convolutional layers. \end{itemize} Table~\ref{tab:result_mitigations} shows that using a ``larger'' detector (in terms of the number of input channels and layers) yielded slightly worse cloud detection accuracy.
However, increasing the number of input bands significantly reduced our attack effectiveness, possibly due to the increased difficulty of biasing all 13 channels simultaneously. This argues for using a greater satellite-borne compute payload than that of~\cite{giuffrida2020cloudscout}. \begin{table}[ht] \setlength\tabcolsep{1pt} \centering \begin{tabular}{p{1.5cm} | p{2.5cm} | p{1.0cm} p{1.0cm} | p{1.0cm} p{1.0cm}} \rowcolor{black} & & \multicolumn{2}{l |} {\textcolor{white}{\textbf{Accuracy}}} & \multicolumn{2}{l} {\textcolor{white}{\textbf{Cloudy}}} \\ \hline \textbf{Detectors} & \textbf{Loss functions} & \textbf{Train} & \textbf{Test} & \textbf{Train} & \textbf{Test} \\ \hline 13 bands & - (no adv.~cubes) & 1.00 & 1.00 & 0.06 & 0.06 \\ & $\Psi + \alpha\Phi$ & 0.94 & 0.96 & 0.15 & 0.14 \\ \hline $\sqrt{2}$ & - (no adv.~cubes) & 1.00 & 1.00 & 0.08 & 0.08 \\ & $\Psi + \alpha\Phi$ & 0.36 & 0.38 & 0.62 & 0.60 \\ \hline $2\times$CONV & - (no adv.~cubes) & 1.00 & 1.00 & 0.08 & 0.08 \\ & $\Psi + \alpha\Phi$ & 0.26 & 0.25 & 0.74 & 0.73 \\ \end{tabular} \vspace{-0.75em} \caption{Effectiveness of 100$\times$100 adversarial cubes optimised for different cloud detector designs (Sec.~\ref{sec:mitigation}). Lower accuracy = more effective attack. Higher cloud confidence = more effective attack. Compare with single 100$\times$100 adversarial cube results in Table~\ref{tab:result_loss}.} \label{tab:result_mitigations} \end{table} \section{Conclusions and limitations}\label{sec:conclusion} We proposed a physical adversarial attack against a satellite-borne multispectral cloud detector. Our attack is based on optimising exterior paint mixtures that exhibit the required spectral signatures to bias the cloud detector. Evaluation in the digital domain illustrates the realistic threat of the attack, though the simple mitigation strategy of using all input multispectral bands seems to offer good protection.
As detailed in Sec.~\ref{sec:limitations}, our work is limited to digital evaluation due to several obstacles. Real-world testing of our attack and defence strategies will be left as future work. \vfill \section{Usage of existing assets and code release} The results in this paper were partly produced from ESA remote sensing data, as accessed through the Copernicus Open Access Hub~\cite{2021copernicus}. Source code and/or data used in our paper will be released subject to securing permission. \vfill \section*{Acknowledgements}\label{sec:acknowledgement} Tat-Jun Chin is SmartSat CRC Professorial Chair of Sentient Satellites. {\small \bibliographystyle{ieee_fullname}
\section{Preface} \label{s_preface} This paper primarily serves as a reference for my Ph.D. dissertation, which I am currently writing. As a consequence, the framework is not under active development. The presented concepts, problems, and solutions may be interesting regardless, even for problems other than Neural Architecture Search (NAS). The framework's name, UniNAS, is a wordplay of University and Unified NAS since the framework was intended to incorporate almost any architecture search approach. \section{Introduction and Related Work} \label{s_introduction} An increasing supply and demand for automated machine learning causes the amount of published code to grow by the day. Although advantageous, the benefit of this growth is often impaired by many technical nitpicks. This section lists common code bases and some of their disadvantages. \subsection{Available NAS frameworks} \label{u_introduction_available} The landscape of NAS codebases is severely fragmented, owing to the vast differences between various NAS methods and the deep-learning libraries used to implement them. Some of the best supported or most widely known ones are: \begin{itemize} \setlength{\itemsep}{0pt} \setlength{\parskip}{0pt} \setlength{\parsep}{0pt} \item {NASLib~\citep{naslib2020}} \item { Microsoft NNI \citep{ms_nni} and Archai \citep{ms_archai} } \item { Huawei Noah Vega \citep{vega} } \item { Google TuNAS \citep{google_tunas} and PyGlove \citep{pyglove} (closed source) } \end{itemize} Counterintuitively, the overwhelming majority of publicly available NAS code is not based on any such framework or service but simple and typical network training code. Such code is generally quick to implement but lacks exact comparability, scalability, and configuration power, which may be a secondary concern for many researchers.
In addition, since the official code is often released late or never, and generally only in either TensorFlow~\citep{tensorflow2015-whitepaper} or PyTorch~\citep{pytorch}, popular methods are sometimes re-implemented in third-party repositories. Further projects include the newly available and closed-source cloud services by, e.g., Google\footnote{\url{https://cloud.google.com/automl/}} and Microsoft\footnote{\url{https://www.microsoft.com/en-us/research/project/automl/}}. Since they require very little user knowledge in addition to the training data, they are excellent for deep learning in industrial environments. \subsection{Common disadvantages of code bases} \label{u_introduction_disadvantages} With so many frameworks available, why start another one? The development of UniNAS started in early 2020, before most of these frameworks arrived at their current feature availability or were even made public. In addition, the frameworks rarely provide current state-of-the-art methods even now and sometimes lack the flexibility to include them easily. Further problems that UniNAS aims to solve are detailed below: \paragraph{Research code is rigid} The majority of published NAS code is very simplistic. While that simplifies extracting important method-related details, the ability to reuse the available code in another context is severely impaired.
Almost all details are hard-coded, such as: \begin{itemize} \setlength{\itemsep}{0pt} \setlength{\parskip}{0pt} \setlength{\parsep}{0pt} \item { the used gradient optimizer and learning rate schedule } \item { the architecture search space, including candidate operations and network topology } \item { the data set and its augmentations } \item { weight initialization and regularization techniques } \item { the used hardware device(s) for training } \item { most hyper-parameters } \end{itemize} This inflexibility is sometimes accompanied by the redundancy of several code pieces that differ slightly for different experiments or phases in NAS methods. Redundancy is a fine way to introduce subtle bugs or inconsistencies and also makes the code confusing to follow. Hard-coded details are also easy to forget, which is especially crucial in research where reproducibility depends strongly on seemingly unimportant details. Finally, if any of the hard-coded components is ever changed, such as the optimizer, configurations of previous experiments become misleading. Their details are generally not part of the documented configuration (since they are hard-coded), so earlier results can no longer be interpreted correctly. \paragraph{A configuration clutter} In contrast to such simplistic single-purpose code, frameworks usually offer a variety of optimizers, schedules, search spaces, and more to choose from. By configuring the related hyper-parameters, an optimizer can be trivially and safely exchanged for another. Since doing so is a conscious and intended choice, it is also documented in the configuration. In contrast, the replacement of hard-coded classes was not intended when the code was initially written. The disadvantage of this approach comes with the wealth of configurable hyper-parameters, in different ways: Firstly, the parametrization is often cluttered.
While implementing more classes (such as optimizers or schedules) adds flexibility, the list of available hyper-parameters becomes increasingly bloated and opaque. The wealth of parametrization is intimidating and impractical since it is often nontrivial to understand exactly which hyper-parameters are used and which are ineffective. As an example, the widely used PyTorch Image Models framework~\citep{rw2019timm} (the example was chosen due to the popularity of the framework; it is no worse than others in this respect) implements an intimidating mix of regularization and data augmentation settings that are partially exclusive.\footnote{\url{https://github.com/rwightman/pytorch-image-models/blob/ba65dfe2c6681404f35a9409f802aba2a226b761/train.py}, checked Dec. 1st 2021; see lines 177 and below.} Secondly, to reduce the clutter, parameters can be shared by multiple mutually exclusive choices. In the case of the aforementioned PyTorch Image Models framework, one example would be the selection of gradient-descent optimizers. Sharing common parameters such as the learning rate and the momentum generally works well, but can be confusing since, once again, finding out which parameters affect which modules necessitates reading the code or documentation. Thirdly, even with an intimidating wealth of configuration choices, not every option is covered. To simplify and reduce the clutter, many settings of lesser importance always use a sensible default value. If changing such a parameter becomes necessary, the framework configurations become more cluttered or changing the hard-coded default value again results in misleading configurations of previous experiments. To summarize, the hyper-parametrization design of a framework can be a delicate decision, aiming to be complete yet not cluttered. While both extremes appear to be mutually exclusive, they can be successfully united with the underlying configuration approach of UniNAS: argument trees.
\paragraph{} Nonetheless, it is great if code is available at all. Many methods are published without any code that enables verifying their training or search results, impairing their reproducibility. Additionally, even if code is overly simplistic or accompanied by cluttered configurations, reading it is often the best way to clarify a method's exact workings and obtain detailed information about omitted hyper-parameter choices. \section{Argument trees} \label{u_argtrees} The core design philosophy of UniNAS is built on so-called \textit{argument trees}. This concept solves the problems of Section~\ref{u_introduction_disadvantages} while also providing immense configuration flexibility. As its basis, we observe that any algorithm or code piece can be represented hierarchically. For example, the task to train a network requires the network itself and a training loop, which may use callbacks and logging functions. Sections~\ref{u_argtrees_modularity} and~\ref{u_argtrees_register} briefly explain two requirements: strict modularity and a global register. As described in Section~\ref{u_argtrees_tree}, this allows each module to define which other types of modules are needed. In the previous example, a training loop may use callbacks and logging functions. Sections~\ref{u_argtrees_config} and~\ref{u_argtrees_build} explain how a configuration file can fully detail these relationships and how the desired code class structure can be generated. Finally, Section~\ref{u_argtrees_gui} shows how a configuration file can be easily manipulated with a graphical user interface, allowing the user to create and change complex experiments without writing a single line of code. \subsection{Modularity} \label{u_argtrees_modularity} As practiced in most non-simplistic codebases, the core of the argument tree structure is strong modularity. The framework code is fragmented into different components with clearly defined purposes, such as training loops and datasets. 
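As a minimal illustration of this modularity, interchangeable modules of the same type can share a common base class that fixes their interface; the concrete optimizer classes below are hypothetical examples, not actual UniNAS modules:

```python
from abc import ABC, abstractmethod

class AbstractOptimizer(ABC):
    """Base class that guarantees the interface every optimizer offers."""

    @abstractmethod
    def step(self, gradients: list) -> list:
        """Return the weight updates for the given gradients."""

class SGDOptimizer(AbstractOptimizer):
    def __init__(self, lr: float = 0.01):
        self.lr = lr

    def step(self, gradients: list) -> list:
        return [-self.lr * g for g in gradients]

class SignSGDOptimizer(AbstractOptimizer):
    def __init__(self, lr: float = 0.01):
        self.lr = lr

    def step(self, gradients: list) -> list:
        # Only the sign of each gradient is used.
        return [-self.lr * ((g > 0) - (g < 0)) for g in gradients]

def train_step(optimizer: AbstractOptimizer, gradients: list) -> list:
    # Polymorphism: the caller treats every optimizer type equally.
    return optimizer.step(gradients)
```

Any class honouring the base interface can be swapped in without touching the training loop.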
Exchanging modules of the same type for one another is a simple matter, for example gradient-descent optimizers. If all implemented code classes of the same type inherit from one base class (e.g., AbstractOptimizer) that guarantees specific class methods for a stable interaction, they can be treated equally. In object-oriented programming, this design is termed polymorphism. UniNAS extends typical PyTorch~\citep{pytorch} classes with additional functionality. An example is image classification data sets, which ordinarily do not contain information about image sizes. Adding this specification makes it possible to use fake data easily and to precompute the tensor shapes in every layer throughout the neural network. \begin{figure*}[ht] \hfill \begin{minipage}[c]{0.97\textwidth} \begin{python} @Register.task(search=True) class SingleSearchTask(SingleTask): @classmethod def args_to_add(cls, index=None) -> [Argument]: return [ Argument('is_test_run', default='False', type=str, is_bool=True), Argument('seed', default=0, type=int), Argument('save_dir', default='{path_tmp}', type=str), ] @classmethod def meta_args_to_add(cls) -> [MetaArgument]: methods = Register.methods.filter_match_all(search=True) return [ MetaArgument('cls_device', Register.devices_managers, num=1), MetaArgument('cls_trainer', Register.trainers, num=1), MetaArgument('cls_method', methods, num=1), ] \end{python} \end{minipage} \vskip-0.3cm \caption{ UniNAS code excerpt for a SingleSearchTask. The decorator function in Line~1 registers the class with type ''task'' and additional information. The method in Line~5 returns all arguments for the task to be set in a config file. The method in Line~13 defines the local tree structure by stating how many modules of which types are needed. It is also possible to specify additional requirements, as done in Line~14.
} \label{u_fig_register} \end{figure*} \subsection{A global register} \label{u_argtrees_register} A second requirement for argument trees is a global register for all modules. Its functions are: \begin{itemize} \setlength{\itemsep}{0pt} \setlength{\parskip}{0pt} \setlength{\parsep}{0pt} \item { Allow any module to register itself with additional information about its purpose. The example code in Figure~\ref{u_fig_register} shows this in Line~1. } \item { List all registered classes, including their type (task, model, optimizer, data set, and more) and their additional information (search, regression, and more). } \item { Filter registered classes by types and matching information. } \item { Given only the name of a registered module, return the class code located anywhere in the framework's files. } \end{itemize} As seen in the following Sections, this functionality is indispensable to UniNAS' design. The only difficulties in building such a register are that the code should remain readable and that every module has to register itself when the framework is used. Both can be achieved by scanning through all code files whenever a new job starts, which takes less than five seconds. Python executes the decorators (see Figure~\ref{u_fig_register}, Line~1) by doing so, which handle registration in an easily readable fashion. \subsection{Tree-based dependency structures} \label{u_argtrees_tree} \begin{figure*} \vskip-0.7cm \begin{minipage}[l]{0.42\linewidth} \centering \includegraphics[trim=0 320 2480 0, clip, width=\textwidth]{./images/uninas/args_tree_s1_col.pdf} \vskip-0.2cm \caption{ Part of a visualized SingleSearchTask configuration, which describes the training of a one-shot super-network with a specified search method (omitted for clarity; the complete tree is visualized in Figure~\ref{app_u_argstree_img}). The white colored tree nodes state the type and number of requested classes, the turquoise boxes the specific classes used.
For example, the \textcolor{red}{SingleSearchTask} requires exactly one type of \textcolor{orange}{hardware device} to be specified, but the \textcolor{cyan}{SimpleTrainer} accepts any number of \textcolor{green}{callbacks} or loggers. \\ \hfill } \label{u_argstree_trimmed_img} \end{minipage} \hfill \begin{minipage}[r]{0.5\textwidth} \begin{small} \begin{lstlisting}[backgroundcolor = \color{white}] "cls_task": <@\textcolor{red}{"SingleSearchTask"}@>, "{cls_task}.save_dir": "{path_tmp}/", "{cls_task}.seed": 0, "{cls_task}.is_test_run": true, "cls_device": <@\textcolor{orange}{"CudaDevicesManager"}@>, "{cls_device}.num_devices": 1, "cls_trainer": <@\textcolor{cyan}{"SimpleTrainer"}@>, "{cls_trainer}.max_epochs": 3, "{cls_trainer}.ema_decay": 0.5, "{cls_trainer}.ema_device": "cpu", "cls_exp_loggers": <@\textcolor{black}{"TensorBoardExpLogger"}@>, "{cls_exp_loggers#0}.log_graph": false, "cls_callbacks": <@\textcolor{green}{"CheckpointCallback"}@>, "{cls_callbacks#0}.top_n": 1, "{cls_callbacks#0}.key": "train/loss", "{cls_callbacks#0}.minimize_key": true, \end{lstlisting} \end{small} \vskip-0.2cm \caption{ Example content of the configuration text-file (JSON format) for the tree in Figure~\ref{u_argstree_trimmed_img}. The first line in each text block specifies the used class(es), the other lines their detailed settings. For example, the \textcolor{cyan}{SimpleTrainer} is set to train for three epochs and track an exponential moving average of the network weights on the CPU. } \label{u_argstree_trimmed_text} \end{minipage} \end{figure*} A SingleSearchTask requires exactly one hardware device and exactly one training loop (named trainer, to train an over-complete super-network), which in turn may use any number of callbacks and logging mechanisms. Their relationship is visualized in Figure~\ref{u_argstree_trimmed_img}. Argument trees are extremely flexible since they allow every hierarchical one-to-any relationship imaginable. 
Multiple optional callbacks can be rearranged in their order and configured in detail. Moreover, module definitions can be reused in other constellations, including their requirements. The ProfilingTask does not need a training loop to measure the runtime of different network topologies on a hardware device, reducing the argument tree in size. While not implemented, a MultiSearchTask could use several trainers in parallel on several devices. The hierarchical requirements are made available using so-called MetaArguments, as seen in Line~16 of Figure~\ref{u_fig_register}. They specify the local structure of argument trees by stating which other modules are required. To do so, writing the required module types and their amounts is sufficient. As seen in Line~14, filtering the modules is also possible to allow only a specific subset. This particular example defines the upper part of the tree visualized in Figure~\ref{u_argstree_trimmed_img}. The names of all MetaArguments start with "cls\_", which improves readability and is reflected in the visualized arguments tree (Figure~\ref{u_argstree_trimmed_img}, white-colored boxes). \subsection{Tree-based argument configurations} \label{u_argtrees_config} While it is possible to define such a dynamic structure, how can it be represented in a configuration file? Figure~\ref{u_argstree_trimmed_text} presents an excerpt of the configuration that matches the tree in Figure~\ref{u_argstree_trimmed_img}. As stated in Lines~6 and~9 of the configuration, CudaDevicesManager and SimpleTrainer fill the roles for the requested modules of types "device" and "trainer". Lines~14 and~17 list one class of the types ''logger'' and ''callback'' each, but could provide any number of comma-separated names. Also including the stated "task" type in Line~1, the mentioned lines state strictly which code classes are used and, given the knowledge about their hierarchy, define the tree structure.
Additionally, every class has some arguments (hyper-parameters) that can be modified. SingleSearchTask defines three such arguments (Lines~7 to~9 in Figure~\ref{u_fig_register}) in the visualized example, which are represented in the configuration (Lines~2 to~4 in Figure~\ref{u_argstree_trimmed_text}). If the configuration is missing an argument, maybe to keep it short, its default value is used. Another noteworthy mechanism in Line~2 is that "\{cls\_task\}.save\_dir" references whichever class is currently set as "cls\_task" (Line~1), without naming it explicitly. Such wildcard references simplify automated changes to configuration files since, independently of the used task class, overwriting "\{cls\_task\}.save\_dir" is always an acceptable way to change the save directory. A less general but perhaps more readable notation is "SingleSearchTask.save\_dir", which is also accepted here. A very interesting property of such dynamic configuration files is that they contain only the hyper-parameters (arguments) of the used code classes. Adding any additional arguments will result in an error since the configuration-parsing mechanism, described in Section~\ref{u_argtrees_build}, is then unable to piece the information together. Even though UniNAS implements several different optimizer classes, any such configuration only contains the hyper-parameters of those used. Generated configuration files are always complete (they contain every available argument of the used classes), sparse (they contain nothing else), and never ambiguous. A debatable design decision of the current configuration files, as seen in Figure~\ref{u_argstree_trimmed_text}, is that they do not explicitly encode any hierarchy levels. Since that information is already known from their class implementations, the flat representation was chosen primarily for readability. It is also beneficial when arguments are manipulated, either automatically or from the terminal when starting a task.
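The wildcard mechanism can be sketched as a simple preprocessing pass over the flat configuration; the sketch below is an illustrative assumption about the parsing, not the framework's actual implementation, and it ignores indexed types such as "cls\_callbacks\#0":

```python
import re

def resolve_wildcards(config: dict) -> dict:
    """Rewrite "{cls_xyz}.arg" keys to "<ClassName>.arg" keys.

    The class name is looked up from the plain "cls_xyz" entry, so the
    wildcard works independently of which class is currently configured.
    """
    resolved = {}
    for key, value in config.items():
        match = re.fullmatch(r"\{(cls_\w+)\}\.(.+)", key)
        if match:
            type_key, argument = match.groups()
            resolved[f"{config[type_key]}.{argument}"] = value
        else:
            resolved[key] = value
    return resolved

config = {
    "cls_task": "SingleSearchTask",
    "{cls_task}.save_dir": "{path_tmp}/",
    "{cls_task}.seed": 0,
}
```

Overwriting "\{cls\_task\}.save\_dir" therefore always changes the save directory, regardless of which task class is configured.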
The disadvantage is that the argument names for class types can only be used once ("cls\_device", "cls\_trainer", and more); an unambiguous assignment is otherwise not possible. For example, since the SingleSearchTask already owns "cls\_device", no other class that could be used in the same argument tree can use that particular name. While this limitation is not too significant, it can be mildly confusing at times. Finally, how is it possible to create configuration files? Since the dynamic tree-based approach offers a wide variety of possibilities, only a tiny subset is valid. For example, providing two hardware devices violates the defined tree structure of a SingleSearchTask and results in a parsing failure. If that happens, the user is provided with details of which particular arguments are missing or unexpected. While the best way to create correct configurations is surely experience and familiarity with the code base, the same could be said about any framework. Since UniNAS knows about all registered classes, which other (possibly specified) classes they use, and all of their arguments (including defaults, types, help string, and more), an exhaustive list can be generated automatically. However, resulting in almost 1600 lines of text, this solution is not optimal either. The most convenient approach is presented in Section~\ref{u_argtrees_gui}: Creating and manipulating argument trees with a graphical user interface. \begin{algorithm} \caption{ Pseudo-code for building the argument tree, best understood with Figures~\ref{u_argstree_trimmed_img} and~\ref{u_argstree_trimmed_text}. For a consistent terminology of code classes and tree nodes: If the $Task$ class uses a $Trainer$, then in that context, $Trainer$ is the child. Lines starting with \# are comments.
} \label{alg_u_argtree} \small \begin{algorithmic} \Require $Configuration$ \Comment{Content of the configuration file} \Require $Register$ \Comment{All modules in the code are registered} \State{} \State{$\#$ recursive parsing function to build a tree} \Function{parse}{$class,~index$} \Comment{E.g. $(SingleSearchTask,~0)$} \State $node = ArgumentTreeNode(class,~index)$ \State{} \State{$\#$ first parse all arguments (hyper-parameters) of this tree node} \ForEach{($idx, argument\_name$) \textbf{in} $class.get\_arguments()$} \Comment{E.g. (0, $''save\_dir''$)} \State $value = get\_used\_value(Configuration,~class,~index,~argument\_name)$ \State $node.add\_argument(argument\_name,~value)$ \EndFor \State{} \State{$\#$ then recursively parse all child classes, for each module type...} \ForEach{$child\_class\_type$ \textbf{in} $class.get\_child\_types()$} \Comment{E.g. $cls\_trainer$} \State $class\_names = get\_used\_classes(Configuration,~child\_class\_type)$ \Assert{ The number of $class\_names$ is in the specified limits} \State{} \State{$\#$ for each module type, check all configured classes} \ForEach{($idx,~class\_name$) \textbf{in} $class\_names$} \Comment{E.g. (0, $''SimpleTrainer''$)} \State $child\_class = Register.get(class\_name)$ \State $child\_node = $\Call{parse}{$child\_class,~idx$} \State $node.add\_child(child\_class\_type,~idx,~child\_node)$ \EndFor \EndFor \Returnx{ $node$} \EndFunction \State{} \State $tree = $\Call{parse}{$Main, 0$} \Comment{Recursively parse the tree, $Main$ is the entry point} \Ensure every argument in the configuration has been parsed \end{algorithmic} \end{algorithm} \subsection{Building the argument tree and code structure} \label{u_argtrees_build} The arguably most important function of a research code base is to run experiments. In order to do so, valid configuration files must be translated into their respective code structure.
This comes with three major requirements: \begin{itemize} \setlength{\itemsep}{0pt} \setlength{\parskip}{0pt} \setlength{\parsep}{0pt} \item{ Classes in the code that implement the desired functionality. As seen in Section~\ref{u_argtrees_tree} and Figure~\ref{u_argstree_trimmed_img}, each class also states the types, argument names and numbers of additionally requested classes for the local tree structure. } \item{ A configuration that describes which code classes are used and which values their parameters take. This is described in Section~\ref{u_argtrees_config} and visualized in Figure~\ref{u_argstree_trimmed_text}. } \item{ To connect the configuration content to classes in the code, it is required to reference code modules by their names. As described in Section~\ref{u_argtrees_register}, this can be achieved with a global register. } \end{itemize} Algorithm~\ref{alg_u_argtree} realizes the first step of this process: parsing the hierarchical class structure and its arguments from the flat configuration file. The result is a tree of \textit{ArgumentTreeNodes}, each of which refers to exactly one class in the code, is connected to all related tree nodes, and knows all relevant hyper-parameter values. While the nodes do not yet hold actual class instances, this final instantiation step is no longer difficult. \begin{figure*}[h] \vskip -0.0in \begin{center} \includegraphics[trim=30 180 180 165, clip, width=\linewidth]{images/uninas/gui/gui1desc.png} \hspace{-0.5cm} \caption{ The graphical user interface (left) that can manipulate the configurations of argument trees (visualized right). Since many nodes are missing classes of some type ("cls\_device", ...), their parts in the GUI are highlighted in red. The eight child nodes of DartsSearchMethod are omitted for visual clarity. 
} \label{fig_u_gui} \end{center} \end{figure*} \subsection{Creating and manipulating argument trees with a GUI} \label{u_argtrees_gui} Manually writing a configuration file can be perplexing since one must keep track of tree specifications, argument names, available classes, and more. The graphical user interface (GUI) visualized in Figures~\ref{fig_u_gui} and~\ref{app_u_gui} solves these problems to a large extent by providing the following functionality: \begin{itemize} \setlength{\itemsep}{0pt} \setlength{\parskip}{0pt} \setlength{\parsep}{0pt} \item{ Interactively adding and removing nodes in the argument tree, and thus also in the configuration and class structure. Violations of the tree specification are highlighted. } \item{ Setting the hyper-parameters of each node, using checkboxes (booleans), dropdown menus (choices from a selection), and text fields (other cases like strings or numbers) where appropriate. } \item{ Saving and loading argument trees. Since it makes sense to separate the configurations of the training procedure and the network design in order to swap between different constellations easily, loading partial trees is also supported. Additional functions enable visualizing, resetting, and running the current argument tree. } \item{ A search function that highlights all matches, since the size of some argument trees can make finding specific arguments tedious. } \end{itemize} To do so, the GUI manipulates \textit{ArgumentTreeNodes} (Section~\ref{u_argtrees_build}), which can be easily converted into configuration files and code. As long as the required classes (for example, the data set) are already implemented, the GUI enables creating and changing experiments without ever touching any code or configuration files. While not among the original intentions, this property may be especially interesting for non-programmers who want to solve their problems quickly. Still, the current version of the GUI is a proof of concept. 
It favors functionality over design: it is written with the plain Python Tkinter GUI framework and based on little prior GUI programming experience. Nonetheless, since the GUI (frontend) and the functions manipulating the argument tree (backend) are separated, continued development with different frontend frameworks is entirely possible. Perhaps the most interesting would be a web service that runs experiments on a server, remotely configurable from any web browser. \subsection{Using external code} \label{u_external} There are various reasons why it makes sense to include external code in a framework. Most importantly, the code either solves a standing problem or provides the users with additional options. Unlike newly written code, many popular libraries are also thoroughly optimized, reviewed, and empirically validated. External code is also a perfect match for a framework based on argument trees. As shown in Figure~\ref{u_fig_external_import}, external classes of interest can be thinly wrapped to ensure compatibility, register the module, and specify all hyper-parameters for the argument tree. The integration is so seamless that finding out whether a module is locally written or external requires an inspection of its code. On the other hand, if importing the AdaBelief~\citep{zhuang2020adabelief} code fails, the module will not be registered and therefore not be available in the graphical user interface. UniNAS fails to parse configurations that require unregistered modules but informs the user which external sources can be installed to extend its functionality. Due to this logistical simplicity, several external frameworks extend the core of UniNAS. Some of the most important ones are: \begin{itemize} \setlength{\itemsep}{0pt} \setlength{\parskip}{0pt} \setlength{\parsep}{0pt} \item{ pymoo~\citep{pymoo}, a library for multi-objective optimization methods. 
} \item{ Scikit-learn~\citep{sklearn}, which implements many classical machine learning algorithms such as Support Vector Machines and Random Forests. } \item{ PyTorch Image Models~\citep{rw2019timm}, which provides the code for several optimizers, network models, and data augmentation methods. } \item{ albumentations~\citep{2018arXiv180906839B}, a library for image augmentations. } \end{itemize} \begin{figure*} \hfill \begin{minipage}[c]{0.95\textwidth} \begin{python} from uninas.register import Register from uninas.training.optimizers.abstract import WrappedOptimizer try: from adabelief_pytorch import AdaBelief # if the import was successful, # register the wrapped optimizer @Register.optimizer() class AdaBeliefOptimizer(WrappedOptimizer): # wrap the original ... except ImportError as e: # if the import failed, # inform the user that optional libraries are not installed Register.missing_import(e) \end{python} \end{minipage} \vskip-0.3cm \caption{ Excerpt of UniNAS wrapping the official AdaBelief optimizer code. The complete text has just 45 lines, half of which specify the optimizer parameters for the argument trees. } \label{u_fig_external_import} \end{figure*} \section{Dynamic network designs} \label{u_networks} As seen in the previous Sections, the unique design of UniNAS enables powerful customization of all components. In most cases, a significant portion of the architecture search configuration belongs to the network design. The FairNAS search example in Figure~\ref{app_u_argstree_img} contains 25 configured classes, of which 11 belong to the search network. While it would be easy to create a single configurable class for each network architecture of interest, that would ignore the advantages of argument trees. On the other hand, there are many technical difficulties with highly dynamic network topologies. Some of them are detailed below. 
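The way such configured classes are assembled into networks can be sketched in a few lines of plain Python. This is a simplified stand-in, not the actual UniNAS API: the \texttt{register} decorator and \texttt{build\_from\_config} function are illustrative names, but the recursive \texttt{"name"}/\texttt{"kwargs"}/\texttt{"submodules"} pattern mirrors the configuration excerpt shown in Figure~\ref{u_fig_conf}.

```python
# Simplified sketch of a class registry plus config-driven instantiation.
# All names here (register, build_from_config) are illustrative stand-ins.

REGISTRY = {}

def register(cls):
    """Class decorator: make a class retrievable by its name."""
    REGISTRY[cls.__name__] = cls
    return cls

def build_from_config(cfg):
    """Recursively instantiate a {"name", "kwargs", "submodules"} tree."""
    kwargs = dict(cfg.get("kwargs", {}))
    for key, sub_cfg in cfg.get("submodules", {}).items():
        kwargs[key] = build_from_config(sub_cfg)  # build children first
    return REGISTRY[cfg["name"]](**kwargs)

@register
class SkipLayer:
    def __init__(self, stride=1):
        self.stride = stride

@register
class SingleLayerCell:
    def __init__(self, name, op):
        self.name, self.op = name, op

cell = build_from_config({
    "name": "SingleLayerCell",
    "kwargs": {"name": "cell_3"},
    "submodules": {"op": {"name": "SkipLayer", "kwargs": {"stride": 1}}},
})
```

Saving a network then amounts to emitting exactly such a dictionary, and loading amounts to rebuilding from it, analogous to the save/load mechanism described in Section~\ref{u_networks_save}.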
\subsection{Decoupling components} In many published research codebases, network and architecture weights jointly exist in the network class. This design decision is disadvantageous for multiple reasons. Most importantly, changing the network or NAS method requires a lot of manual work. The reason is that different NAS methods need different numbers of architecture parameters, use them differently, and optimize them in different ways. For example: \begin{itemize}[noitemsep,parsep=0pt,partopsep=0pt] \item{ DARTS~\citep{liu2018darts} requires one weight vector per architecture choice, which weighs the different paths (candidate operations) in a sum. The weights are updated via gradient descent, using an additional optimizer (ADAM). } \item{ MDENAS~\citep{mdenas} uses a similar vector to sample the single candidate operation that is used in a particular forward pass. Global network performance feedback is used to increase or decrease the local weightings. } \item{ Single-Path One-Shot~\citep{guo2020single} does not use architecture weights at all. Paths are always sampled uniformly at random. The trained network serves as an accuracy prediction model for a hyper-parameter optimization method. } \item{ FairNAS~\citep{FairNAS} extends Single-Path One-Shot to make sure that all candidate operations are used frequently and equally often. It thus needs to track which paths are currently available. } \end{itemize} \begin{figure}[t] \vskip -0.0in \begin{center} \includegraphics[trim=0 0 0 0, clip, width=\linewidth]{images/draw/search_net.pdf} \hspace{-0.5cm} \caption{ The network and architecture weights are decoupled. \textbf{Top}: The structure of a fully sequential super-network. Every layer (cell) uses the same set of candidate operations and weight strategy. \textbf{Bottom left}: One set of candidate operations that is used multiple times in the network. This particular experiment uses the NAS-Bench-201 candidate operations. 
\textbf{Bottom right}: A weight strategy that manages everything related to the used NAS method, such as creating the architecture weights or deciding which candidates are used in each forward pass. } \label{fig_u_decouple} \end{center} \end{figure} The same is also true for the set of candidate operations, which affects the sizes of the architecture weights. Once the definitions of the search space, the candidate operations, and the NAS method (including the architecture weights) are mixed, changing any part is tedious. Therefore, strictly separating them is the best long-term approach. Similar to other frameworks presented in Section~\ref{u_introduction_available}, architectures defined in UniNAS do not use an explicit set of candidate architectures but allow a dynamic configuration. This is supported by a \textit{WeightStrategy} interface, which handles all NAS-related operations such as creating and updating the architecture weights. The interaction between the architecture definition, the candidate operations, and the weight strategy is visualized in Figure~\ref{fig_u_decouple}. The easy exchange of any component is not the only advantage of this design. Some NAS methods, such as DARTS, update network and architecture weights using different gradient descent optimizers. Correctly disentangling the weights is trivial if they are already organized in decoupled structures but hard otherwise. Another advantage is that standardizing the functions to create and manage architecture weights makes it easy to present relevant information to the user, such as how many architecture weights exist, their sizes, and which are shared across different network cells. An example is presented in Figure~\ref{app_text}. \begin{figure}[hb!] 
\begin{minipage}[c]{0.24\textwidth} \centering \includegraphics[height=11.5cm]{./images/draw/mobilenetv2.pdf} \end{minipage} \hfill \begin{minipage}[c]{0.5\textwidth} \small \begin{python} "cell_3": { "name": "SingleLayerCell", "kwargs": { "name": "cell_3", "features_mult": 1, "features_fixed": -1 }, "submodules": { "op": { "name": "MobileInvConvLayer", "kwargs": { "kernel_size": 3, "kernel_size_in": 1, "kernel_size_out": 1, "stride": 1, "expansion": 6.0, "padding": "same", "dilation": 1, "bn_affine": true, "act_fun": "relu6", "act_inplace": true, "att_dict": null, "fused": false } } } }, \end{python} \end{minipage} \caption{ A high-level view on the MobileNet~V2 architecture~\citep{sandler2018mobilenetv2} in the top left, and a schematic of the inverted bottleneck block in the bottom left. This design uses two 1$\times$1 convolutions to change the channel count \textit{n} by an expansion factor of~6, and a spatial 3$\times$3 convolution in their middle. The text on the right-hand side represents the cell structure by referencing the modules by their names ("name") and their keyworded arguments ("kwargs"). } \label{u_fig_conf} \end{figure} \subsection{Saving, loading, and finalizing networks} \label{u_networks_save} As mentioned before, argument trees enable a detailed configuration of every aspect of an experiment, including the network topology itself. As visualized in Figure~\ref{app_u_argstree_img}, such network definitions can become almost arbitrarily complex. This becomes disadvantageous once models have to be saved or loaded or when super-networks are finalized into discrete architectures. Unlike TensorFlow~\citep{tensorflow2015-whitepaper}, the used PyTorch~\citep{pytorch} library saves only the network weights without execution graphs. External projects like ONNX~\citep{onnx} can be used to export limited graph information but not to rebuild networks using the same code classes and context. 
The implemented solution is inspired by the official code\footnote{\url{https://github.com/mit-han-lab/proxylessnas/tree/master/proxyless_nas}} of ProxylessNAS~\citep{proxylessnas}, where every code module defines two functions that enable exporting and importing the entire module state and context. As is typical for hierarchical structures, the state of an outer module contains the states of all modules within. An example is visualized in Figure~\ref{u_fig_conf}, where one cell in the famous MobileNet V2 architecture is represented as readable text. The global register can provide any class definition by name (see Section~\ref{u_argtrees_register}) so that an identical class structure can be created and parameterized accordingly. The same approach that enables saving and loading arbitrary class compositions can also be used to change their structure. More specifically, an over-complete super-network containing all possible candidate operations can export only a specific configuration subset. The network recreated from this reduced configuration is the result of the architecture search. This is made possible since the weight strategy controls the use of all candidate operations, as visualized in Figure~\ref{fig_u_decouple}. Similarly, when the configuration is exported, the weight strategy controls which candidates should be part of the finalized network architecture. In another use case, some modules behave differently in super-networks and finalized architectures. For example, Linear Transformers~\citep{ScarletNAS} supplement skip connections with linear 1$\times$1 convolutions in super-networks to stabilize the training with variable network depths. When the network topology is finalized, it suffices to export the configuration of a plain skip connection instead of their own. Another practical way of rebuilding code structures is available through the argument tree configuration, which defines every detail of an experiment (see Section~\ref{u_argtrees_config}). 
Parsing the network design and loading the trained weights of a previous experiment requires no further user interaction than specifying its save directory. This specific way of recreating experiment environments is used extensively in \textit{Single-Path One-Shot} tasks. In the first step, a super-network is trained to completion. Afterward, when the super-network is used to make predictions for a hyper-parameter optimization method (such as Bayesian optimization or evolutionary algorithms), the entire environment of its training can be recreated. This includes the network design and the dataset, data augmentations, which parts were reserved for validation, regularization techniques, and more. \section{Discussion and Conclusions} \label{u_conclusions} We presented the underlying concepts of UniNAS, a PyTorch-based framework with the ambitious goal of unifying a variety of NAS algorithms in one codebase. Even though the use cases for this framework changed over time, mostly from DARTS-based to SPOS-based experiments, its underlying design approach made reusing old code possible at every step. However, several technical details could be changed or improved in hindsight. Most importantly, configuration files should reflect the hierarchy levels (see Section~\ref{u_argtrees_config}) for code simplicity and to avoid concerns about using module types multiple times. The current design favors readability, which is now a minor concern thanks to the graphical user interface. Other considered changes would improve the code readability but were not implemented due to a lack of necessity and time. In summary, the design of UniNAS fulfills all original requirements. Modules can be arranged and combined in almost arbitrary constellations, giving the user an extremely flexible tool to design experiments. Furthermore, using the graphical user interface does not require writing even a single line of code. 
The resulting configuration files contain only the relevant information and are not cluttered by the many options that the framework offers elsewhere. These features also enable an almost arbitrary network design, combined with any NAS optimization method and any set of candidate operations. Despite that, networks can still be saved, loaded, and changed in various ways. Although not covered here, several unit tests ensure that the essential framework components keep working as intended. Finally, what is the advantage of using argument trees over writing code with the same results? Compared to configuration files, code is more powerful and versatile but will likely suffer from the problems described in Section~\ref{u_introduction_available}. Argument trees make any considerations about which parameters to expose unnecessary and can enforce the use of specific module types and subsets thereof. However, their strongest advantage is the visualization and manipulation of the entire experiment design with a graphical user interface. This aligns well with Automated Machine Learning (AutoML), which is also intended to make machine learning available to a broader audience. {\small \bibliographystyle{iclr2022_conference}
\section{Introduction} To estimate a regression when the errors have a non-identity covariance matrix, we usually turn first to generalized least squares (GLS). Somewhat surprisingly, GLS proves to be computationally challenging in the very simple setting of the unbalanced crossed random effects models that we study here. For that problem, the cost to compute the GLS estimate on $N$ data points grows at best like $O(N^{3/2})$ under the usual algorithms. If we additionally assume Gaussian errors, then \cite{crelin} show that even evaluating the likelihood one time costs at least a multiple of $N^{3/2}$. These costs make the usual algorithms for GLS infeasible for large data sets such as those arising in electronic commerce. In this paper, we present an iterative algorithm based on a backfitting approach from \cite{buja:hast:tibs:1989}. This algorithm is known to converge to the GLS solution. The cost of each iteration is $O(N)$ and so we also study how the number of iterations grows with~$N$. The crossed random effects model we consider has \begin{equation}\label{eq:refmodel} Y_{ij} =x_{ij}^\mathsf{T}\beta+a_i+b_j+e_{ij},\quad 1\le i\le R,\quad 1\le j\le C \end{equation} for random effects $a_i$ and $b_{j}$ and an error $e_{ij}$ with a fixed effects regression parameter $\beta\in\mathbb{R}^p$ for the covariates $x_{ij}\in\mathbb{R}^p$. We assume that $a_i\stackrel{\mathrm{iid}}{\sim} (0,\sigma^2_A)$, $b_j\stackrel{\mathrm{iid}}{\sim}(0,\sigma^2_B)$, and $e_{ij}\stackrel{\mathrm{iid}}{\sim}(0,\sigma^2_E)$ are all independent. It is thus a mixed effects model in which the random portion has a crossed structure. The GLS estimate is also the maximum likelihood estimate (MLE), when $a_i$, $b_{j}$ and $e_{ij}$ are Gaussian. Because we assume that $p$ is fixed as $N$ grows, we often leave $p$ out of our cost estimates, giving instead the complexity in $N$. 
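For concreteness, model~\eqref{eq:refmodel} is easy to simulate. The following sketch (with arbitrary illustrative dimensions and variance components, not values taken from this paper) generates a fully observed $R\times C$ array of responses:

```python
import numpy as np

rng = np.random.default_rng(0)
R, C, p = 30, 40, 2                       # rows, columns, covariate dimension
sigma_A, sigma_B, sigma_E = 1.0, 0.5, 1.0  # illustrative standard deviations
beta = np.array([2.0, -1.0])               # fixed effects

a = rng.normal(0.0, sigma_A, size=R)       # row effects a_i ~ (0, sigma_A^2)
b = rng.normal(0.0, sigma_B, size=C)       # column effects b_j ~ (0, sigma_B^2)
x = rng.normal(size=(R, C, p))             # covariates x_ij

# Y_ij = x_ij^T beta + a_i + b_j + e_ij
Y = x @ beta + a[:, None] + b[None, :] + rng.normal(0.0, sigma_E, size=(R, C))
```

In the unbalanced settings studied below, only the entries of this array with $Z_{ij}=1$ would be observed.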
The GLS estimate $\hat\beta_\mathrm{GLS}$ for crossed random effects can be efficiently computed if all $R\times C$ values are available. Our motivating examples involve ratings data where $R$ people rate $C$ items, and it is usual that the data are very unbalanced with a haphazard observational pattern in which only $N\ll R\times C$ of the $(x_{ij},Y_{ij})$ pairs are observed. The crossed random effects setting is significantly more difficult than a hierarchical model with just $a_i+e_{ij}$ but no $b_{j}$ term. Then the observations for index $j$ are `nested within' those for each level of index $i$. The result is that the covariance matrix of all observed $Y_{ij}$ values has a block diagonal structure allowing GLS to be computed in $O(N)$ time. Hierarchical models are very well suited to Bayesian computation \citep{gelm:hill:2006}. Crossed random effects are a much greater challenge. \cite{GO17} find that the Gibbs sampler can take $O(N^{1/2})$ iterations to converge to stationarity, with each iteration costing $O(N)$, leading once again to $O(N^{3/2})$ cost. For more examples where the costs of solving equations versus sampling from a covariance attain the same rate see \cite{good:soka:1989} and \cite{RS97}. As further evidence of the difficulty of this problem, the Gibbs sampler was one of nine MCMC algorithms that \cite{GO17} found to be unsatisfactory. Furthermore, \cite{lme4} removed the {\tt mcmcsamp} function from the R package lme4 because it was considered unreliable even for the problem of sampling the posterior distribution of the parameters from previously fitted models, and even for those with random effects variances near zero. \cite{papa:robe:zane:2020} present an exception to the high cost of a Bayesian approach for crossed random effects. They propose a collapsed Gibbs sampler that can potentially mix in $O(1)$ iterations. 
To prove this rate, they make an extremely stringent assumption that every index $i=1,\dots,R$ appears in the same number $N/C$ of observed data points and similarly every $j=1,\dots,C$ appears in $N/R$ data points. Such a condition is tantamount to requiring a designed experiment for the data and it is much stronger than what their algorithm seems to need in practice. Under that condition their mixing rate asymptotes to a quantity $\rho_{\mathrm{aux}}$, described in our discussion section, that in favorable circumstances is $O(1)$. They find empirically that their sampler has a cost that scales well in many data sets where their balance condition does not hold. In this paper we study an iterative linear operation, known as backfitting, for GLS. Each iteration costs $O(N)$. The speed of convergence depends on a certain matrix norm of that iteration, which we exhibit below. If the norm remains bounded strictly below $1$ as $N\to\infty$, then the number of iterations to convergence is $O(1)$. We are able to show that the matrix norm is $O(1)$ with probability tending to one, under conditions where the number of observations per row (or per column) is random and even the expected row or column counts may vary, though in a narrow range. While this is a substantial weakening of the conditions in \cite{papa:robe:zane:2020}, it still fails to cover many interesting cases. Like them, we find empirically that our algorithm scales much more broadly than under the conditions for which scaling is proved. We suspect that the computational infeasibility of GLS leads many users to use ordinary least squares (OLS) instead. OLS has two severe problems. First, it is \myemph{inefficient} with $\var(\hat\beta_\mathrm{OLS})$ larger than $\var(\hat\beta_\mathrm{GLS})$. This is equivalent to OLS ignoring some possibly large fraction of the information in the data. Perhaps more seriously, OLS is \myemph{naive}. 
It produces an estimate of $\var(\hat\beta_\mathrm{OLS})$ that can be too small by a large factor. That amounts to overestimating the quantity of information behind $\hat\beta_\mathrm{OLS}$, also by a potentially large factor. The naivete of OLS can be countered by using better variance estimates. One can bootstrap it by resampling the row and column entities as in \cite{pbs}. There is also a version of Huber-White variance estimation for this case in econometrics. See for instance \cite{came:gelb:mill:2011}. While these methods counter the naivete of OLS, the inefficiency of OLS remains. The method of moments algorithm in \cite{crelin} gets consistent asymptotically normal estimates of $\beta$, $\sigma^2_A$, $\sigma^2_B$ and $\sigma^2_E$. It produces a GLS estimate $\hat\beta$ that is more efficient than OLS but still not fully efficient because it accounts for correlations due to only one of the two crossed random effects. While inefficient, it is not naive because its estimate of $\var(\hat\beta)$ properly accounts for variance due to $a_i$, $b_{j}$ and $e_{ij}$. In this paper we get a GLS estimate $\hat\beta$ that takes account of all three variance components, making it efficient. We also provide an estimate of $\var(\hat\beta)$ that accounts for all three components, so our estimate is not naive. Our algorithm requires consistent estimates of the variance components $\sigma^2_A$, $\sigma^2_B$ and $\sigma^2_E$ in computing $\hat\beta$ and $\widehat\var(\hat\beta)$. We use the method of moments estimators from \cite{GO17} that can be computed in $O(N)$ work. By \citet[Theorem 4.2]{GO17}, these estimates of $\sigma^2_A$, $\sigma^2_B$ and $\sigma^2_E$ are asymptotically uncorrelated and each of them has the same asymptotic variance it would have had were the other two variance components equal to zero. It is not known whether they are optimally estimated, much less optimal subject to an $O(N)$ cost constraint. 
The variance component estimates are known to be asymptotically normal \citep{gao:thesis}. The rest of this paper is organized as follows. Section~\ref{sec:missing} introduces our notation and assumptions for missing data. Section~\ref{sec:backfitting} presents the backfitting algorithm from \cite{buja:hast:tibs:1989}. That algorithm was defined for smoothers, but we are able to cast the estimation of random effect parameters as a special kind of smoother. Section~\ref{sec:normconvergence} proves our result about backfitting being convergent with a probability tending to one as the problem size increases. Section~\ref{sec:empiricalnorms} shows numerical measures of the matrix norm of the backfitting operator. It remains bounded strictly below one under more conditions than our theory shows. We find that even one iteration of the lmer function in the lme4 package \cite{lme4} has a cost that grows like $N^{3/2}$ in one setting and like $N^{2.1}$ in another, sparser one. The backfitting algorithm has cost $O(N)$ in both of these cases. Section~\ref{sec:stitch} illustrates our GLS algorithm on some data provided to us by Stitch Fix. These are customer ratings of items of clothing on a ten point scale. Section~\ref{sec:discussion} has a discussion of these results. An appendix contains some regression output for the Stitch Fix data. \section{Missingness}\label{sec:missing} We adopt the notation from \cite{crelin}. We let $Z_{ij}\in\{0,1\}$ take the value $1$ if $(x_{ij},Y_{ij})$ is observed and $0$ otherwise, for $i=1,\dots,R$ and $j=1,\dots,C$. In many of the contexts we consider, the missingness is not at random and is potentially informative. Handling such problems is outside the scope of this paper, apart from a brief discussion in Section~\ref{sec:discussion}. It is already a sufficient challenge to work without informative missingness. 
The matrix $Z\in\{0,1\}^{R\times C}$, with elements $Z_{ij}$ has $N_{i\sumdot} =\sum_{j=1}^CZ_{ij}$ observations in `row $i$' and $N_{\sumdot j}=\sum_{i=1}^RZ_{ij}$ observations in `column $j$'. We often drop the limits of summation so that $i$ is always summed over $1,\dots,R$ and $j$ over $1,\dots,C$. When we need additional symbols for row and column indices we use $r$ for rows and $s$ for columns. The total sample size is $N=\sum_i\sum_jZ_{ij} =\sum_iN_{i\sumdot} = \sum_jN_{\sumdot j}$. There are two co-observation matrices, $Z^\mathsf{T} Z$ and $ZZ^\mathsf{T}$. Here $(Z^\tran Z)_{js}=\sum_iZ_{ij}Z_{is}$ gives the number of rows in which data from both columns $j$ and $s$ were observed, while $(ZZ^\tran)_{ir}=\sum_jZ_{ij}Z_{rj}$ gives the number of columns in which data from both rows $i$ and $r$ were observed. In our regression models, we treat $Z_{ij}$ as nonrandom. We are conditioning on the actual pattern of observations in our data. When we study the rate at which our backfitting algorithm converges, we consider $Z_{ij}$ drawn at random. That is, the analyst is solving a GLS conditionally on the pattern of observations and missingness, while we study the convergence rates that analyst will see for data drawn from a missingness mechanism defined in Section~\ref{sec:modelz}. If we place all of the $Y_{ij}$ into a vector $\mathcal{Y}\in\mathbb{R}^N$ and $x_{ij}$ compatibly into a matrix $\mathcal{X}\in\mathbb{R}^{N\times p}$, then the naive and inefficient OLS estimator is \begin{align}\label{eq:bhatols} \hat\beta_\mathrm{OLS} = (\mathcal{X}^\mathsf{T} \mathcal{X})^{-1}\mathcal{X}^\mathsf{T}\mathcal{Y}. \end{align} This can be computed in $O(Np^2)$ work. 
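The counting quantities above translate directly into array operations. The following sketch uses a random, non-informative missingness pattern chosen purely for illustration (the real observational patterns studied later are neither random nor balanced):

```python
import numpy as np

rng = np.random.default_rng(1)
R, C = 5, 6
Z = (rng.random((R, C)) < 0.4).astype(int)  # Z_ij = 1 iff (x_ij, Y_ij) observed

N_row = Z.sum(axis=1)   # N_{i.}: observations in row i
N_col = Z.sum(axis=0)   # N_{.j}: observations in column j
N = int(Z.sum())        # total sample size

col_coobs = Z.T @ Z     # (Z^T Z)_{js}: number of rows observing columns j and s
row_coobs = Z @ Z.T     # (Z Z^T)_{ir}: number of columns observing rows i and r

assert N == N_row.sum() == N_col.sum()
# with 0/1 entries, the diagonals recover the per-column and per-row counts
assert np.array_equal(np.diag(col_coobs), N_col)
assert np.array_equal(np.diag(row_coobs), N_row)
```

For large sparse $Z$, the same counts would be formed from a sparse representation rather than a dense array.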
We prefer to use the GLS estimator \begin{align}\label{eq:bhatgls}\hat\beta_\mathrm{GLS} = (\mathcal{X}^\mathsf{T} \mathcal{V}^{-1}\mathcal{X})^{-1}\mathcal{X}^\mathsf{T}\mathcal{V}^{-1}\mathcal{Y}, \end{align} where $\mathcal{V}\in\mathbb{R}^{N\times N}$ contains all of the $\cov(Y_{ij},Y_{rs})$ in an ordering compatible with $\mathcal{X}$ and $\mathcal{Y}$. A naive algorithm costs $O(N^3)$ to solve for $\hat\beta_\mathrm{GLS}$. It can actually be solved through a Cholesky decomposition of an $(R+C)\times (R+C)$ matrix \citep{sear:case:mccu:1992}. That has cost $O(R^3+C^3)$. Now $N\le RC$, with equality only for completely observed data. Therefore $\max(R,C)\ge \sqrt{N}$, and so $R^3+C^3\ge N^{3/2}$. When the data are sparsely enough observed it is possible that $\min(R,C)$ grows more rapidly than $N^{1/2}$. In a numerical example in Section~\ref{sec:empiricalnorms} we have $\min(R,C)$ growing like $N^{0.70}$. In a hierarchical model, with $a_i$ but no $b_{j}$ we would find $\mathcal{V}$ to be block diagonal and then $\hat\beta_\mathrm{GLS}$ could be computed in $O(N)$ work. A reviewer reminds us that it has been known since \cite{stra:1969} that systems of equations can be solved more quickly than cubic time. Despite that, current software is still dominated by cubic time algorithms. Also none of the known solutions are quadratic and so in our setting the cost would be at least a multiple of $(R+C)^{2+\gamma}$ for some $\gamma>0$ and hence not $O(N)$. We can write our crossed effects model as \begin{align}\label{eq:cemodelviaz} \mathcal{Y} = \mathcal{X}\beta + \mathcal{Z}_A\boldsymbol{a} + \mathcal{Z}_B\boldsymbol{b} + \boldsymbol{e} \end{align} for matrices $\mathcal{Z}_A\in\{0,1\}^{N\times R}$ and $\mathcal{Z}_B\in\{0,1\}^{N\times C}$. The $i$'th column of $\mathcal{Z}_A$ has ones for all of the $N$ observations that come from row $i$ and zeroes elsewhere. The definition of $\mathcal{Z}_B$ is analogous. 
The observation matrix can be written $Z = \mathcal{Z}_A^\mathsf{T}\mathcal{Z}_B$. The vector $\boldsymbol{e}$ has all $N$ values of $e_{ij}$ in compatible order. Vectors $\boldsymbol{a}$ and $\boldsymbol{b}$ contain the row and column random effects $a_i$ and $b_{j}$. In this notation \begin{equation} \label{eq:Vee} \mathcal{V} = \mathcal{Z}_A\mathcal{Z}_A^\mathsf{T}\sigma^2_A + \mathcal{Z}_B\mathcal{Z}_B^\mathsf{T}\sigma^2_B + I_N\sigma^2_E, \end{equation} where $I_N$ is the $N \times N$ identity matrix. Our main computational problem is to get a value for $\mathcal{U}=\mathcal{V}^{-1}\mathcal{X}\in\mathbb{R}^{N\times p}$. To do that we iterate towards a solution $\boldsymbol{u}\in\mathbb{R}^N$ of $\mathcal{V} \boldsymbol{u}=\boldsymbol{x}$, where $\boldsymbol{x}\in\mathbb{R}^N$ is one of the $p$ columns of $\mathcal{X}$. After that, finding \begin{equation} \label{eq:betahat} \hat\beta_\mathrm{GLS} = (\mathcal{X}^\mathsf{T} \mathcal{U})^{-1}(\mathcal{Y}^\mathsf{T}\mathcal{U})^\mathsf{T} \end{equation} is not expensive, because $\mathcal{X}^\mathsf{T}\mathcal{U}\in\mathbb{R}^{p\times p}$ and we suppose that $p$ is not large. If the data ordering in $\mathcal{Y}$ and elsewhere sorts by index $i$, breaking ties by index $j$, then $\mathcal{Z}_A\mathcal{Z}_A^\mathsf{T}\in\{0,1\}^{N\times N}$ is a block matrix with $R$ blocks of ones of size $N_{i\sumdot}\times N_{i\sumdot}$ along the diagonal and zeroes elsewhere. The matrix $\mathcal{Z}_B\mathcal{Z}_B^\mathsf{T}$ will not be block diagonal in that ordering. Instead $P\mathcal{Z}_B\mathcal{Z}_B^\mathsf{T} P^\mathsf{T}$ will be block diagonal with $N_{\sumdot j}\times N_{\sumdot j}$ blocks of ones on the diagonal, for a suitable $N\times N$ permutation matrix $P$. \section{Backfitting algorithms}\label{sec:backfitting} Our first goal is to develop computationally efficient ways to solve the GLS problem \eqref{eq:betahat} for the linear mixed model~\eqref{eq:cemodelviaz}. 
We use the backfitting algorithm that \cite{hast:tibs:1990} and \cite{buja:hast:tibs:1989} use to fit additive models. We write $\mathcal{V}$ in (\ref{eq:Vee}) as $\sigma^2_E\left(\mathcal{Z}_A\mathcal{Z}_A^\mathsf{T}/\lambda_A+\mathcal{Z}_B\mathcal{Z}_B^\mathsf{T}/\lambda_B +I_N\right)$ with $\lambda_A=\sigma^2_E/\sigma^2_A$ and $\lambda_B=\sigma^2_E/\sigma^2_B$, and define $\mathcal{W}=\sigma^2_E\mathcal{V}^{-1}$. Then the GLS estimate of $\beta$ is \begin{align} \hat\beta_{\mathrm{GLS}}&=\arg\min_\beta (\mathcal{Y}-\mathcal{X}\beta)^\mathsf{T}\mathcal{W}(\mathcal{Y}-\mathcal{X}\beta) = (\mathcal{X}^\mathsf{T}\mathcal{W}\mathcal{X})^{-1}\mathcal{X}^\mathsf{T}\mathcal{W}\mathcal{Y}\label{eq:betahatw} \end{align} and $\cov(\hat\beta_{\mathrm{GLS}})=\sigma^2_E (\mathcal{X}^\mathsf{T}\mathcal{W}\mathcal{X})^{-1}$. It is well known (e.g., \cite{robinson91:_that_blup}) that we can obtain $\hat\beta_{\mathrm{GLS}}$ by solving the following penalized least-squares problem \begin{align}\label{eq:minboth} \min_{\beta,\boldsymbol{a},\boldsymbol{b}}\Vert \mathcal{Y}-\mathcal{X}\beta-\mathcal{Z}_A\boldsymbol{a}-\mathcal{Z}_B\boldsymbol{b}\Vert^2 +\lambda_A\Vert\boldsymbol{a}\Vert^2 +\lambda_B\Vert\boldsymbol{b}\Vert^2. \end{align} Then $\hat\beta=\hat\beta_{\mathrm{GLS}}$ and $\hat \boldsymbol{a}$ and $\hat \boldsymbol{b}$ are the best linear unbiased prediction (BLUP) estimates of the random effects. This derivation works for any number of factors, but it is instructive to carry it through initially for one. \subsection{One factor}\label{sec:one-factor} For a single factor, we simply drop the $\mathcal{Z}_B\boldsymbol{b}$ term from \eqref{eq:cemodelviaz} to get \begin{equation*} \mathcal{Y} = \mathcal{X}\beta + \mathcal{Z}_A\boldsymbol{a} +\boldsymbol{e}. \end{equation*} Then $\mathcal{V}=\cov(\mathcal{Z}_A\boldsymbol{a}+\boldsymbol{e})= \sigma^2_A\mathcal{Z}_A\mathcal{Z}_A^\mathsf{T} +\sigma^2_E I_N$, and $\mathcal{W}=\sigma^2_E\mathcal{V}^{-1}$ as before. 
The penalized least squares problem is to solve \begin{align}\label{eq:equivmina} \min_{\beta,\boldsymbol{a}} \Vert \mathcal{Y} - \mathcal{X}\beta -\mathcal{Z}_A\boldsymbol{a}\Vert^2 + \lambda_A \Vert\boldsymbol{a}\Vert^2. \end{align} We show the details as we need them for a later derivation. The normal equations from~\eqref{eq:equivmina} yield \begin{align} \boldsymbol{0} & = \mathcal{X}^\mathsf{T}(\mathcal{Y}-\mathcal{X}\hat\beta-\mathcal{Z}_A\hat\boldsymbol{a}),\quad\text{and}\label{eq:normbeta}\\ \boldsymbol{0} & = \mathcal{Z}_A^\mathsf{T}(\mathcal{Y}-\mathcal{X}\hat\beta-\mathcal{Z}_A\hat\boldsymbol{a}) -\lambda_A\hat\boldsymbol{a}.\label{eq:normbsa} \end{align} Solving~\eqref{eq:normbsa} for $\hat\boldsymbol{a}$ and multiplying the solution by $\mathcal{Z}_A$ yields $$ \mathcal{Z}_A\hat\boldsymbol{a} = \mathcal{Z}_A(\mathcal{Z}_A^\mathsf{T} \mathcal{Z}_A + \lambda_AI_R)^{-1}\mathcal{Z}_A^\mathsf{T}(\mathcal{Y}-\mathcal{X}\hat\beta) \equiv \mathcal{S}_A(\mathcal{Y}-\mathcal{X}\hat\beta), $$ for an $N\times N$ ridge regression ``smoother matrix'' $\mathcal{S}_A$. As we explain below this smoother matrix implements shrunken within-group means. Then substituting $\mathcal{Z}_A\hat\boldsymbol{a}$ into equation~\eqref{eq:normbeta} yields \begin{equation} \label{eq:onefactor} \hat\beta = (\mathcal{X}^\mathsf{T}(I_N-\mathcal{S}_A)\mathcal{X})^{-1}\mathcal{X}^\mathsf{T}(I_N-\mathcal{S}_A)\mathcal{Y}. \end{equation} Using the Sherman-Morrison-Woodbury (SMW) identity, one can show that $\mathcal{W}=I_N-\mathcal{S}_A$ and hence $\hat\beta$ above equals $\hat\beta_\mathrm{GLS}$ from~\eqref{eq:betahatw}. This is not in itself a new discovery; see for example \cite{robinson91:_that_blup} or \cite{hast:tibs:1990} (Section 5.3.3). To compute the solution in (\ref{eq:onefactor}), we need to compute $\mathcal{S}_A \mathcal{Y}$ and $\mathcal{S}_A\mathcal{X}$. 
The heart of the computation in $\mathcal{S}_A \mathcal{Y}$ is $(\mathcal{Z}_A^\mathsf{T} \mathcal{Z}_A + \lambda_AI_R)^{-1}\mathcal{Z}_A^\mathsf{T}\mathcal{Y}$. But $\mathcal{Z}_A^\mathsf{T} \mathcal{Z}_A=\mathrm{diag}(N_{1\text{\tiny$\bullet$}},N_{2\text{\tiny$\bullet$}},\ldots,N_{R\text{\tiny$\bullet$}})$ and we see that all we are doing is computing an $R$-vector of shrunken means of the elements of $\mathcal{Y}$ at each level of the factor $A$; the $i$th element is $\sum_jZ_{ij} Y_{ij}/(N_{i\text{\tiny$\bullet$}}+\lambda_A)$. This involves a single pass through the $N$ elements of $\mathcal{Y}$, accumulating the sums into $R$ registers, followed by an elementwise scaling of the $R$ components. Then pre-multiplication by $\mathcal{Z}_A$ simply puts these $R$ shrunken means back into an $N$-vector in the appropriate positions. The total cost is $O(N)$. Likewise $\mathcal{S}_A\mathcal{X}$ does the same separately for each of the columns of $\mathcal{X}$. Hence the entire computational cost for \eqref{eq:onefactor} is $O(Np^2)$, the same order as regression on $\mathcal{X}$. What is also clear is that the indicator matrix $\mathcal{Z}_A$ is not actually needed here; instead all we need to carry out these computations is the factor vector $f_A$ that records the level of factor $A$ for each of the $N$ observations. In the R language \citep{R:lang:2015} the following pair of operations does the computation:
\begin{verbatim}
hat_a = tapply(y,fA,sum)/(table(fA)+lambdaA)
hat_y = hat_a[fA]
\end{verbatim}
where {\tt fA} is a categorical variable (factor) $f_A$ of length $N$ containing the row indices $i$ in an order compatible with $\mathcal{Y}\in\mathbb{R}^N$ (represented as {\tt y}) and {\tt lambdaA} is $\lambda_A=\sigma^2_E/\sigma^2_A$. \subsection{Two factors}\label{sec:two-factors} With two factors we face the problem of incompatible block diagonal matrices discussed in Section~\ref{sec:missing}.
Define $\mathcal{Z}_G=(\mathcal{Z}_A\!:\!\mathcal{Z}_B)$ ($R+C$ columns), $\mathcal{D}_\lambda=\mathrm{diag}(\lambda_AI_R,\lambda_BI_C)$, and $\boldsymbol{g}^\mathsf{T}=(\boldsymbol{a}^\mathsf{T},\boldsymbol{b}^\mathsf{T})$. Then solving \eqref{eq:minboth} is equivalent to \begin{align}\label{eq:ming} \min_{\beta,\boldsymbol{g}}\Vert \mathcal{Y}-\mathcal{X}\beta-\mathcal{Z}_G\boldsymbol{g}\Vert^2 +\boldsymbol{g}^\mathsf{T}\mathcal{D}_\lambda\boldsymbol{g}. \end{align} A derivation similar to that used in the one-factor case gives \begin{equation} \label{eq:gfactor} \hat\beta = H_\mathrm{GLS}\mathcal{Y}\quad\text{for}\quad H_\mathrm{GLS} = (\mathcal{X}^\mathsf{T}(I_N-\mathcal{S}_G)\mathcal{X})^{-1}\mathcal{X}^\mathsf{T}(I_N-\mathcal{S}_G), \end{equation} where the hat matrix $H_\mathrm{GLS}$ is written in terms of a smoother matrix \begin{equation} \label{eq:defcsg} \mathcal{S}_G=\mathcal{Z}_G(\mathcal{Z}_G^\mathsf{T} \mathcal{Z}_G + \mathcal{D}_\lambda)^{-1}\mathcal{Z}_G^\mathsf{T}. \end{equation} We can again use SMW to show that $I_N-\mathcal{S}_G=\mathcal{W}$ and hence the solution $\hat\beta=\hat\beta_{\mathrm{GLS}}$ in \eqref{eq:betahatw}. But in applying $\mathcal{S}_G$ we do not enjoy the computational simplifications that occurred in the one factor case, because \begin{equation*} \mathcal{Z}_G^\mathsf{T}\mathcal{Z}_G= \left( \begin{array}{cc} \mathcal{Z}_A^\mathsf{T}\mathcal{Z}_A&\mathcal{Z}_A^\mathsf{T}\mathcal{Z}_B\\[0.25ex] \mathcal{Z}_B^\mathsf{T}\mathcal{Z}_A&\mathcal{Z}_B^\mathsf{T}\mathcal{Z}_B \end{array} \right) =\begin{pmatrix} \mathrm{diag}(N_{i\sumdot}) & Z\\ Z^\mathsf{T} & \mathrm{diag}(N_{\sumdot j}) \end{pmatrix}, \end{equation*} where $Z\in\{0,1\}^{R\times C}$ is the observation matrix which has no special structure. Therefore we need to invert an $(R+C)\times (R+C)$ matrix to apply $\mathcal{S}_G$ and hence to solve \eqref{eq:gfactor}, at a cost of at least $O(N^{3/2})$ (see Section~\ref{sec:missing}). 
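To make this cost concrete, here is a small Python/numpy sketch (with our own illustrative variable names) of applying $\mathcal{S}_G$ by the direct solve; the off-diagonal blocks of $\mathcal{Z}_G^\mathsf{T}\mathcal{Z}_G$ are the unstructured $Z$ and $Z^\mathsf{T}$, so no cheap factorization of this coupled system is available.

```python
import numpy as np

rng = np.random.default_rng(1)
R, C, lamA, lamB = 5, 4, 2.0, 3.0

Z = (rng.random((R, C)) < 0.7).astype(int)   # observation pattern
rows, cols = np.nonzero(Z)
N = rows.size
ZA, ZB = np.eye(R)[rows], np.eye(C)[cols]

# Z_G = (Z_A : Z_B) and D_lambda = diag(lamA * I_R, lamB * I_C).
ZG = np.hstack([ZA, ZB])
D = np.diag(np.concatenate([np.full(R, lamA), np.full(C, lamB)]))

# Applying S_G to a generic response requires an (R+C) x (R+C) solve:
# the diagonal blocks of ZG.T @ ZG are diag(N_i.) and diag(N_.j),
# but the off-diagonal blocks are the unstructured Z and Z^T.
r = rng.standard_normal(N)
g = np.linalg.solve(ZG.T @ ZG + D, ZG.T @ r)
SG_r = ZG @ g                                 # S_G applied to r
```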
Rather than group $\mathcal{Z}_A$ and $\mathcal{Z}_B$, we keep them separate, and develop an algorithm to apply the operator $\mathcal{S}_G$ efficiently. Consider a generic response vector $\mathcal{R}$ (such as $\mathcal{Y}$ or a column of $\mathcal{X}$) and the optimization problem \begin{align}\label{eq:minab} \min_{\boldsymbol{a},\boldsymbol{b}}\Vert \mathcal{R}-\mathcal{Z}_A\boldsymbol{a}-\mathcal{Z}_B\boldsymbol{b}\Vert^2 +\lambda_A\|\boldsymbol{a}\|^2+\lambda_B\|\boldsymbol{b}\|^2. \end{align} Using $\mathcal{S}_G$ defined at~\eqref{eq:defcsg} in terms of the indicator variables $\mathcal{Z}_G\in\{0,1\}^{N\times (R+C)}$ it is clear that the fitted values are given by $\widehat\mathcal{R}=\mathcal{S}_G\mathcal{R}$. Solving (\ref{eq:minab}) would result in two blocks of estimating equations similar to equations \eqref{eq:normbeta} and \eqref{eq:normbsa}. These can be written \begin{align}\label{eq:backfit} \begin{split} \mathcal{Z}_A\hat\boldsymbol{a} & = \mathcal{S}_A(\mathcal{R}-\mathcal{Z}_B\hat\boldsymbol{b}),\quad\text{and}\\ \mathcal{Z}_B\hat\boldsymbol{b} & = \mathcal{S}_B(\mathcal{R}-\mathcal{Z}_A\hat\boldsymbol{a}), \end{split} \end{align} where $\mathcal{S}_A=\mathcal{Z}_A(\mathcal{Z}_A^\mathsf{T}\mathcal{Z}_A + \lambda_AI_R)^{-1}\mathcal{Z}_A^\mathsf{T}$ is again the ridge regression smoothing matrix for row effects and similarly $\mathcal{S}_B=\mathcal{Z}_B(\mathcal{Z}_B^\mathsf{T}\mathcal{Z}_B + \lambda_BI_C)^{-1}\mathcal{Z}_B^\mathsf{T}$ the smoothing matrix for column effects. We solve these equations iteratively by block coordinate descent, also known as backfitting. The iterations converge to the solution of~\eqref{eq:minab} \citep{buja:hast:tibs:1989, hast:tibs:1990}. It is evident that $\mathcal{S}_A,\mathcal{S}_B\in\mathbb{R}^{N\times N}$ are both symmetric matrices. It follows that the limiting smoother $\mathcal{S}_G$ formed by combining them is also symmetric. See \citet[page 120]{hast:tibs:1990}. 
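The iteration~\eqref{eq:backfit} is only a few lines of code. The sketch below (Python/numpy, with our own illustrative variable names) runs the two shrunken-mean updates to convergence and checks the combined fit against a direct solve of~\eqref{eq:minab}; each backfitting pass costs $O(N)$ and never forms $\mathcal{Z}_A$ or $\mathcal{Z}_B$, which appear here only for the check.

```python
import numpy as np

rng = np.random.default_rng(2)
R, C, lamA, lamB = 5, 4, 2.0, 3.0
Z = (rng.random((R, C)) < 0.7).astype(int)
rows, cols = np.nonzero(Z)              # factor vectors f_A, f_B
N = rows.size
r = rng.standard_normal(N)              # a generic response vector

def shrunken_fit(resid, f, nlev, lam):
    # O(N): per-level sums over (count + lambda), broadcast back.
    num = np.bincount(f, weights=resid, minlength=nlev)
    den = np.bincount(f, minlength=nlev) + lam
    return (num / den)[f]

za = np.zeros(N)                        # current Z_A a_hat
zb = np.zeros(N)                        # current Z_B b_hat
for _ in range(200):                    # backfit: block coordinate descent
    za = shrunken_fit(r - zb, rows, R, lamA)
    zb = shrunken_fit(r - za, cols, C, lamB)

# Check against a direct solve of the penalized least squares problem.
ZA, ZB = np.eye(R)[rows], np.eye(C)[cols]
ZG = np.hstack([ZA, ZB])
D = np.diag(np.concatenate([np.full(R, lamA), np.full(C, lamB)]))
g = np.linalg.solve(ZG.T @ ZG + D, ZG.T @ r)
assert np.allclose(za + zb, ZG @ g)
```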
We will need this result later for an important computational shortcut. Here the simplifications we enjoyed in the one-factor case once again apply. Each step applies its operator to a vector (the terms in parentheses on the right hand side in (\ref{eq:backfit})). For both $\mathcal{S}_A$ and $\mathcal{S}_B$ these are simply the shrunken-mean operations described for the one-factor case, separately for factor $A$ and $B$ each time. As before, we do not need to actually construct $\mathcal{Z}_B$, but simply use a factor $f_B$ that records the level of factor $B$ for each of the $N$ observations. The above description holds for a generic response $\mathcal{R}$; we apply that algorithm (in parallel) to $\mathcal{Y}$ and each column of $\mathcal{X}$ to obtain the quantities $\mathcal{S}_G\mathcal{X}$ and $\mathcal{S}_G\mathcal{Y}$ that we need to compute $H_{\mathrm{GLS}}\mathcal{Y}$ in \eqref{eq:gfactor}. Now solving (\ref{eq:gfactor}) is $O(Np^2)$ plus a negligible $O(p^3)$ cost. These computations deliver $\hat\beta_{\mathrm{GLS}}$; if the BLUP estimates $\hat\boldsymbol{a}$ and $\hat{\boldsymbol{b}}$ are also required, the same algorithm can be applied to the response $\mathcal{Y}-\mathcal{X}\hat\beta_{\mathrm{GLS}}$, retaining the $\boldsymbol{a}$ and $\boldsymbol{b}$ at the final iteration. We can also write \begin{equation}\label{eq:covbhat} \cov(\hat\beta_{\mathrm{GLS}})=\sigma^2_E(\mathcal{X}^\mathsf{T}(I_N-\mathcal{S}_G)\mathcal{X})^{-1}. \end{equation} It is also clear that we can trivially extend this approach to accommodate any number of factors. \subsection{Centered operators} \label{sec:centered-operators} The matrices $\mathcal{Z}_A$ and $\mathcal{Z}_B$ both have row sums all ones, since they are factor indicator matrices (``one-hot encoders''). This creates a nontrivial intersection between their column spaces, and that of $\mathcal{X}$ since we always include an intercept, that can cause backfitting to converge more slowly. 
In this section we show how to counter this intersection of column spaces to speed convergence. We work with this two-factor model \begin{align}\label{eq:equivmina1} \min_{\beta,\boldsymbol{a},\boldsymbol{b}} \Vert \mathcal{Y} - \mathcal{X}\beta -\mathcal{Z}_A\boldsymbol{a}-\mathcal{Z}_B\boldsymbol{b}\Vert^2 + \lambda_A \Vert\boldsymbol{a}\Vert^2+\lambda_B\Vert\boldsymbol{b}\Vert^2. \end{align} \begin{lemma} If $\mathcal{X}$ in model~\eqref{eq:equivmina1} includes a column of ones (intercept), and $\lambda_A>0$ and $\lambda_B>0$, then the solutions for $\boldsymbol{a}$ and $\boldsymbol{b}$ satisfy $\sum_{i=1}^R a_i=0$ and $\sum_{j=1}^C b_j=0$. \end{lemma} \begin{proof} It suffices to show this for one factor and with $\mathcal{X}=\mathbf{1}$. The objective is now \begin{align}\label{eq:equivsimp} \min_{\beta,\boldsymbol{a}} \Vert \mathcal{Y} - \mathbf{1}\beta -\mathcal{Z}_A\boldsymbol{a}\Vert^2 + \lambda_A \Vert\boldsymbol{a}\Vert^2. \end{align} Notice that for any candidate solution $(\beta,\{a_i\}_1^R)$, the alternative solution $(\beta+c,\{a_i-c\}_1^R)$ leaves the loss part of \eqref{eq:equivsimp} unchanged, since the row sums of $\mathcal{Z}_A$ are all one. Hence if $\lambda_A>0$, we would always improve $\boldsymbol{a}$ by picking $c$ to minimize the penalty term $\sum_{i=1}^R(a_i-c)^2$, or $c=(1/R)\sum_{i=1}^Ra_i$. \end{proof} It is natural then to solve for $\boldsymbol{a}$ and $\boldsymbol{b}$ with these constraints enforced, instead of waiting for them to simply emerge in the process of iteration. \begin{theorem}\label{thm:smartcenter} Consider the generic optimization problem \begin{align}\label{eq:equivsimp2} \min_{\boldsymbol{a}} \Vert \mathcal{R} -\mathcal{Z}_A\boldsymbol{a}\Vert^2 + \lambda_A \Vert\boldsymbol{a}\Vert^2\quad \mbox{subject to } \sum_{i=1}^Ra_i=0. 
\end{align} Define the partial sum vector $\mathcal{R}^+ = \mathcal{Z}_A^\mathsf{T}\mathcal{R}$ with components $\mathcal{R}^+_{i} = \sum_jZ_{ij}\mathcal{R}_{ij}$, and let $$w_i=\frac{(N_{i\text{\tiny$\bullet$}}+\lambda_A)^{-1}}{\sum_{r}(N_{r\sumdot}+\lambda_A)^{-1}}.$$ Then the solution $\hat \boldsymbol{a}$ is given by \begin{align}\label{eq:ahatsoln} \hat a_i=\frac{\mathcal{R}^+_{i}-\sum_{r}w_r\mathcal{R}^+_{r}}{N_{i\text{\tiny$\bullet$}}+\lambda_A}, \quad i=1,\ldots,R. \end{align} Moreover, the fit is given by $$\mathcal{Z}_A\hat\boldsymbol{a}=\tilde\mathcal{S}_A\mathcal{R},$$ where $\tilde \mathcal{S}_A$ is a symmetric operator. \end{theorem} The computations are a simple modification of the non-centered case. \begin{proof} Let $M$ be an $R\times R$ orthogonal matrix with first column $\mathbf{1}/\sqrt{R}$. Then $\mathcal{Z}_A\boldsymbol{a}=\mathcal{Z}_AMM^\mathsf{T}\boldsymbol{a}=\tilde \mathcal{G}\tilde\boldsymbol{\gamma}$ for $\tilde\mathcal{G}=\mathcal{Z}_AM$ and $\tilde\boldsymbol{\gamma}=M^\mathsf{T}\boldsymbol{a}$. Reparametrizing in this way leads to the equivalent problem \begin{align}\label{eq:equivsimp2repar} \min_{\tilde\boldsymbol{\gamma}} \Vert \mathcal{R} -\tilde\mathcal{G}\tilde\boldsymbol{\gamma}\Vert^2 + \lambda_A \Vert\tilde\boldsymbol{\gamma}\Vert^2,\quad \mbox{subject to } \tilde\gamma_1=0. \end{align} To solve (\ref{eq:equivsimp2repar}), we simply drop the first column of $\tilde \mathcal{G}$. Let $\mathcal{G}=\mathcal{Z}_AQ$ where $Q$ is the matrix $M$ omitting the first column, and $\boldsymbol{\gamma}$ the corresponding subvector of $\tilde\boldsymbol{\gamma}$ having $R-1$ components. We now solve \begin{align}\label{eq:equivsimp3} \min_{\boldsymbol{\gamma}} \Vert \mathcal{R} -\mathcal{G}\boldsymbol{\gamma}\Vert^2 + \lambda_A \Vert\boldsymbol{\gamma}\Vert^2 \end{align} with no constraints, and the solution is $\hat\boldsymbol{\gamma}=(\mathcal{G}^\mathsf{T}\mathcal{G}+\lambda_A I_{R-1})^{-1}\mathcal{G}^\mathsf{T}\mathcal{R}$.
The fit is given by $\mathcal{G}\hat\boldsymbol{\gamma}=\mathcal{G}(\mathcal{G}^\mathsf{T}\mathcal{G}+\lambda_A I_{R-1})^{-1}\mathcal{G}^\mathsf{T}\mathcal{R}=\tilde \mathcal{S}_A\mathcal{R}$, and $\tilde \mathcal{S}_A$ is clearly a symmetric operator. To obtain the simplified expression for $\hat\boldsymbol{a}$, we write \begin{align} \mathcal{G}\hat\boldsymbol{\gamma}&=\mathcal{Z}_AQ(Q^\mathsf{T}\mathcal{Z}_A^\mathsf{T}\mathcal{Z}_A Q+\lambda_A I_{R-1})^{-1}Q^\mathsf{T} \mathcal{Z}_A^\mathsf{T}\mathcal{R}\nonumber\\ &=\mathcal{Z}_AQ(Q^\mathsf{T} D Q+\lambda_A I_{R-1})^{-1}Q^\mathsf{T} \mathcal{R}^+\label{eq:tosimplify}\\ &=\mathcal{Z}_A\hat\boldsymbol{a},\nonumber \end{align} with $D=\mathrm{diag}(N_{i\sumdot})$. We write $H=Q(Q^\mathsf{T} D Q+\lambda_A I_{R-1})^{-1}Q^\mathsf{T}$ and $\tilde Q=(D+\lambda_A I_R)^{\frac12}Q$, and let \begin{align} \tilde H&= (D+\lambda_A I_R)^{\frac12} H (D+\lambda_A I_R)^{\frac12} = \tilde Q(\tilde Q^\mathsf{T}\tilde Q)^{-1}\tilde Q^\mathsf{T}.\label{eq:Qproj} \end{align} Now (\ref{eq:Qproj}) is a projection matrix in $\mathbb{R}^R$ onto an $(R-1)$-dimensional subspace. Let $\tilde q = (D+\lambda_A I_R)^{-\frac12}\mathbf{1}.$ Then $\tilde q^\mathsf{T} \tilde Q={\boldsymbol{0}}$, and so $$\tilde H=I_R-\frac{\tilde q\tilde q^\mathsf{T}}{\Vert \tilde q\Vert^2}.$$ Unraveling this expression we get $$ H=(D+\lambda_AI_R)^{-1} -(D+\lambda_AI_R)^{-1}\frac{\mathbf{1}\bone^\mathsf{T}}{\mathbf{1}^\mathsf{T}(D+\lambda_AI_R)^{-1}\mathbf{1}}(D+\lambda_AI_R)^{-1}.$$ With $\hat\boldsymbol{a}=H\mathcal{R}^+$ in (\ref{eq:tosimplify}), this gives the expressions for each $\hat a_i$ in~\eqref{eq:ahatsoln}. Finally, $\tilde \mathcal{S}_A = \mathcal{Z}_A H\mathcal{Z}_A^\mathsf{T}$ is symmetric. \end{proof} \subsection{Covariance matrix for $\hat\beta_{\mathrm{GLS}}$ with centered operators} \label{sec:covar-matr-hatb} In Section~\ref{sec:two-factors} we saw in (\ref{eq:covbhat}) that we get a simple expression for $\cov(\hat\beta_{\mathrm{GLS}})$.
This simplicity relies on the fact that $I_N-\mathcal{S}_G=\mathcal{W}=\sigma^2_E\mathcal{V}^{-1}$, and the usual cancellation occurs when we use the sandwich formula to compute this covariance. When we backfit with our centered smoothers we get a modified residual operator $I_N-\widetilde \mathcal{S}_G$ such that the analog of (\ref{eq:gfactor}) still gives us the required coefficient estimate: \begin{equation} \label{eq:gfactorc} \hat\beta_{\mathrm{GLS}} = (\mathcal{X}^\mathsf{T}(I_N-\widetilde\mathcal{S}_G)\mathcal{X})^{-1}\mathcal{X}^\mathsf{T}(I_N-\widetilde\mathcal{S}_G)\mathcal{Y}. \end{equation} However, $I_N-\widetilde\mathcal{S}_G\neq \sigma^2_E\mathcal{V}^{-1}$, and so now we need to resort to the sandwich formula $\cov(\hat\beta_{\mathrm{GLS}})=H_\mathrm{GLS} \mathcal{V} H_\mathrm{GLS}^\mathsf{T}$, with $H_\mathrm{GLS}$ as in \eqref{eq:gfactor} but with $\widetilde\mathcal{S}_G$ in place of $\mathcal{S}_G$. Expanding this we find that $\cov(\hat\beta_{\mathrm{GLS}})$ equals \begin{align*} (\mathcal{X}^\mathsf{T}(I_N-\widetilde\mathcal{S}_G)\mathcal{X})^{-1}\mathcal{X}^\mathsf{T}(I_N-\widetilde\mathcal{S}_G) \cdot \mathcal{V}\cdot (I_N-\widetilde\mathcal{S}_G)\mathcal{X}(\mathcal{X}^\mathsf{T}(I_N-\widetilde\mathcal{S}_G)\mathcal{X})^{-1}. \end{align*} While this expression might appear daunting, the computations are simple. Note first that while $\hat\beta_{\mathrm{GLS}}$ can be computed via $\tilde\mathcal{S}_G\mathcal{X}$ and $\tilde\mathcal{S}_G\mathcal{Y}$, this expression for $\cov(\hat\beta_{\mathrm{GLS}})$ also involves $\mathcal{X}^\mathsf{T} \tilde\mathcal{S}_G$. When we use the centered operator from Theorem~\ref{thm:smartcenter} we get a symmetric matrix $\tilde \mathcal{S}_G$. Let $\widetilde \mathcal{X}=(I_N-\widetilde\mathcal{S}_G)\mathcal{X}$, the residual matrix after backfitting each column of $\mathcal{X}$ using these centered operators.
Then because $\widetilde\mathcal{S}_G$ is symmetric, we have \begin{align} \hat\beta_{\mathrm{GLS}}&=(\mathcal{X}^\mathsf{T}\widetilde\mathcal{X})^{-1}\widetilde\mathcal{X}^\mathsf{T}\mathcal{Y},\quad\text{and} \notag\\ \cov(\hat\beta_{\mathrm{GLS}})&=(\mathcal{X}^\mathsf{T}\widetilde\mathcal{X})^{-1}\widetilde\mathcal{X}^\mathsf{T}\cdot\mathcal{V}\cdot\widetilde\mathcal{X}(\mathcal{X}^\mathsf{T}\widetilde\mathcal{X})^{-1}.\label{eq:covbhatgls} \end{align} Since $\mathcal{V}=\sigma^2_E\left(\mathcal{Z}_A\mathcal{Z}_A^\mathsf{T}/\lambda_A+\mathcal{Z}_B\mathcal{Z}_B^\mathsf{T}/\lambda_B +I_N\right)$ (two low-rank matrices plus the identity), we can compute $\mathcal{V}\cdot \widetilde\mathcal{X}$ very efficiently, and hence also the covariance matrix in~\eqref{eq:covbhatgls}. The entire algorithm is summarized in Section~\ref{sec:wholeshebang}. \section{Convergence of the matrix norm}\label{sec:normconvergence} In this section we prove a bound on the norm of the matrix that implements backfitting for our random effects $\boldsymbol{a}$ and $\boldsymbol{b}$ and show how this controls the number of iterations required. In our algorithm, backfitting is applied to $\mathcal{Y}$ as well as to each non-intercept column of $\mathcal{X}$ so we do not need to consider the updates for $\mathcal{X}\hat\beta$. It is useful to take account of intercept adjustments in backfitting, via the centerings described in Section~\ref{sec:backfitting}, because the space spanned by $a_1,\dots,a_R$ intersects the space spanned by $b_1,\dots,b_C$; both include an intercept column of ones. In backfitting we alternate between adjusting $\boldsymbol{a}$ given $\boldsymbol{b}$ and $\boldsymbol{b}$ given $\boldsymbol{a}$. At any iteration, the new $\boldsymbol{a}$ is an affine function of the previous $\boldsymbol{b}$ and then the new $\boldsymbol{b}$ is an affine function of the new $\boldsymbol{a}$. This makes the new $\boldsymbol{b}$ an affine function of the previous $\boldsymbol{b}$.
We will study that affine function to find conditions where the updates converge. If the $\boldsymbol{b}$ updates converge, then so must the $\boldsymbol{a}$ updates. Because the updates are affine they can be written in the form $$ \boldsymbol{b} \gets M\boldsymbol{b} + \eta $$ for $M\in\mathbb{R}^{C\times C}$ and $\eta\in\mathbb{R}^C$. We iterate this update and it is convenient to start with $\boldsymbol{b} = \boldsymbol{0}$. We already know from \cite{buja:hast:tibs:1989} that this backfitting will converge. However, we want more. We want to avoid having the number of iterations required grow with $N$. We can write the solution $\boldsymbol{b}$ as $$ \boldsymbol{b} = \eta +\sum_{k=1}^\infty M^k\eta, $$ and in computations we truncate this sum after $K$ steps producing an error $\sum_{k>K}M^k\eta$. We want $\sup_{\eta\ne0}\Vert \sum_{k>K}M^k\eta\Vert/\Vert\eta\Vert<\epsilon$ to hold with probability tending to one as the sample size increases, for any $\epsilon>0$, given sufficiently large $K$. For this it suffices to have the spectral radius $\lambda_{\max}(M)<1-\delta$ hold with probability tending to one for some $\delta>0$. Now for any $1\le p\le\infty$ we have $$ \lambda_{\max}(M)\le \Vert M\Vert_{p} \equiv \sup_{\boldsymbol{x}\in \mathbb{R}^C\setminus\{\boldsymbol{0}\}} \frac{\Vert M\boldsymbol{x}\Vert_p}{\Vert \boldsymbol{x}\Vert_p}. $$ The explicit formula $$ \Vert M\Vert_{1} \equiv \sup_{\boldsymbol{x}\in \mathbb{R}^C\setminus\{\boldsymbol{0}\}} \frac{\Vert M\boldsymbol{x}\Vert_1}{\Vert \boldsymbol{x}\Vert_1} = \max_{1\le s\le C}\sum_{j=1}^C | M_{js}| $$ makes the $L_1$ matrix norm very tractable theoretically, and so that is the one we study. We look at this and some other measures numerically in Section~\ref{sec:empiricalnorms}. \subsection{Updates} Recall that $Z\in\{0,1\}^{R\times C}$ describes the pattern of observations.
In a model with no intercept, centering the responses and then taking shrunken means as in \eqref{eq:backfit} would yield these updates \begin{align*} a_i &\gets \frac{\sum_s Z_{is}(Y_{is}-b_s)}{N_{i\sumdot}+\lambda_A}\quad\text{and}\quad b_j \gets \frac{\sum_i Z_{ij}(Y_{ij}-a_i)}{N_{\sumdot j}+\lambda_B}. \end{align*} The update from the old $\boldsymbol{b}$ to the new $\boldsymbol{a}$ and then to the new $\boldsymbol{b}$ takes the form $\boldsymbol{b}\gets M\boldsymbol{b}+\eta$ for $M=M^{(0)}$ where $$ M^{(0)}_{js} = \frac1{N_{\sumdot j}+\lambda_B}\sum_i \frac{Z_{is}Z_{ij}}{N_{i\sumdot}+\lambda_A}.$$ This update $M^{(0)}$ alternates shrinkage estimates for $\boldsymbol{a}$ and $\boldsymbol{b}$ but does no centering. We don't exhibit $\eta$ because it does not affect the convergence speed. In the presence of an intercept, we know that $\sum_ia_i=0$ should hold at the solution and we can impose this simply and very directly by centering the $a_i$, taking \begin{align*} a_i &\gets \frac{\sum_s Z_{is}(Y_{is}-b_s)}{N_{i\sumdot}+\lambda_A} -\frac1R\sum_{r=1}^R\frac{\sum_s Z_{rs}(Y_{rs}-b_s)}{N_{r\sumdot}+\lambda_A}, \quad\text{and}\\ b_j &\gets \frac{\sum_i Z_{ij}(Y_{ij}-a_i)}{N_{\sumdot j}+\lambda_B}. \end{align*} The intercept estimate will then be $\hat\beta_0=(1/C)\sum_jb_j$ which we can subtract from $b_j$ upon convergence. This iteration has the update matrix $M^{(1)}$ with \begin{align}\label{eq:monejs} M^{(1)}_{js} &=\frac1{N_{\sumdot j}+\lambda_B}\sum_r \frac{Z_{rs}(Z_{rj}-N_{\sumdot j}/R)}{N_{r\sumdot}+\lambda_A} \end{align} after replacing a sum over $i$ by an equivalent one over $r$. In practice, we prefer to use the weighted centering from Section~\ref{sec:centered-operators} to center the $a_i$ because it provides a symmetric smoother $\tilde\mathcal{S}_G$ that supports computation of $\widehat\cov(\hat\beta_{\mathrm{GLS}})$.
While it is more complicated to analyze, it is easily computable and it satisfies the optimality condition in Theorem~\ref{thm:smartcenter}. The algorithm is for a generic response $\mathcal{R}\in\mathbb{R}^N$ such as $\mathcal{Y}$ or a column of $\mathcal{X}$. Let us illustrate it for the case $\mathcal{R}=\mathcal{Y}$. We begin with the vector of $N$ values $Y_{ij}-b_{j}$ and so $Y^+_i = \sum_sZ_{is}(Y_{is}-b_s).$ Then $w_i = (N_{i\sumdot}+\lambda_A)^{-1}/\sum_r(N_{r\sumdot}+\lambda_A)^{-1}$ and the updated $a_r$ is \begin{align*} \frac{Y^+_r-\sum_iw_i Y^+_i}{N_{r\sumdot}+\lambda_A} &= \frac{\sum_sZ_{rs}(Y_{rs}-b_s)-\sum_iw_i \sum_sZ_{is}(Y_{is}-b_s)}{N_{r\sumdot}+\lambda_A}. \end{align*} Using shrunken averages of $Y_{ij}-a_i$, the new $b_{j}$ are \begin{align*} b_{j} &=\frac1{N_{\sumdot j}+\lambda_B}\sum_rZ_{rj} \biggl(Y_{rj}- \frac{\sum_sZ_{rs}(Y_{rs}-b_s)-\sum_iw_i \sum_sZ_{is}(Y_{is}-b_s)}{N_{r\sumdot}+\lambda_A} \biggr). \end{align*} Now $\boldsymbol{b} \gets M\boldsymbol{b}+\eta$ for $M=M^{(2)}$, where \begin{align}\label{eq:mtwojs} M^{(2)}_{js} &=\frac1{N_{\sumdot j}+\lambda_B}\sum_r \frac{Z_{rj}}{N_{r\sumdot}+\lambda_A} \biggl(Z_{rs} - \frac{\sum_{i}\frac{Z_{is}}{N_{i\sumdot}+\lambda_{A}}}{\sum_i{\frac{1}{N_{i\sumdot}+\lambda_{A}}}}\biggr). \end{align} Our preferred algorithm applies the optimal update from Theorem~\ref{thm:smartcenter} to both the $\boldsymbol{a}$ and $\boldsymbol{b}$ updates. With that choice we do not need to decide beforehand which random effects to center and which to leave uncentered to contain the intercept. We call the corresponding matrix $M^{(3)}$. Our theory below analyzes $\Vert M^{(1)}\Vert_1$ and $\Vert M^{(2)}\Vert_1$, which have simpler expressions than $\Vert M^{(3)}\Vert_1$. Update $M^{(0)}$ uses symmetric smoothers for $A$ and $B$. Both are shrunken averages.
The naive centering update $M^{(1)}$ uses a non-symmetric smoother $\mathcal{Z}_A(I_R-\mathbf{1}_R\mathbf{1}_R^\mathsf{T}/R)(\mathcal{Z}_A^\mathsf{T}\mathcal{Z}_A+\lambda_AI_R)^{-1}\mathcal{Z}_A^\mathsf{T}$ on the $a_i$ with a symmetric smoother on the $b_{j}$, and hence it does not generally produce the symmetric smoother needed for efficient computation of $\widehat\cov(\hat\beta_{\mathrm{GLS}})$. The update $M^{(2)}$ uses two symmetric smoothers, one optimal and one a simple shrunken mean. The update $M^{(3)}$ takes the optimal smoother for both $A$ and $B$. Thus both $M^{(2)}$ and $M^{(3)}$ support efficient computation of $\widehat\cov(\hat\beta_{\mathrm{GLS}})$. A subtle point is that these symmetric smoothers are matrices in $\mathbb{R}^{N\times N}$ while the matrices $M^{(k)}\in\mathbb{R}^{C\times C}$ are not symmetric. \subsection{Model for $Z_{ij}$}\label{sec:modelz} We will state conditions on $Z_{ij}$ under which both $\Vert M^{(1)}\Vert_1$ and $\Vert M^{(2)}\Vert_1$ are bounded strictly below $1$ with probability tending to one, as the problem size grows. We need the following exponential inequalities. \begin{lemma}\label{lem:hoeff} If $X\sim\mathrm{Bin}(n,p)$, then for any $t\ge0$, \begin{align*} \Pr( X\ge np+t ) &\le \exp( -2t^2/n ),\quad\text{and}\\ \Pr( X\le np-t ) &\le \exp( -2t^2/n ). \end{align*} \end{lemma} \begin{proof} This follows from Hoeffding's theorem. \end{proof} \begin{lemma}\label{lem:binounionbound} Let $X_i\sim\mathrm{Bin}(n,p)$ for $i=1,\dots,m$, not necessarily independent. Then for any $t\ge0$, \begin{align*} \Pr\Bigl( \max_{1\le i\le m} X_{i} \ge np+t \Bigr) &\le m\exp( -2t^2/n ),\quad\text{and}\\ \Pr\Bigl( \min_{1\le i\le m} X_{i} \le np-t \Bigr) &\le m\exp( -2t^2/n ). \end{align*} \end{lemma} \begin{proof} This follows from the union bound applied to Lemma~\ref{lem:hoeff}. \end{proof} Here is our sampling model. We index the size of our problem by $S\to\infty$. The sample size $N$ will satisfy $\mathbb{E}(N)\ge S$.
The number of rows and columns in the data set are $$R = S^\rho\quad\text{and}\quad C=S^\kappa$$ respectively, for positive numbers $\rho$ and $\kappa$. Because our application domain has $N\ll RC$, we assume that $\rho+\kappa>1$. We ignore that $R$ and $C$ above are not necessarily integers. In our model, $Z_{ij}\sim\mathrm{Bern}(p_{ij})$ independently with \begin{align}\label{eq:defab} \frac{S}{RC} \le p_{ij} \le \Upsilon\frac{S}{RC} \quad\text{for}\quad 1\le\Upsilon<\infty. \end{align} That is, $1\le p_{ij} S^{\rho+\kappa-1}\le\Upsilon$. Letting $p_{ij}$ depend on $i$ and $j$ allows the probability model to capture stylistic preferences affecting the missingness pattern in the ratings data. \subsection{Bounds for row and column size} Letting $X \preccurlyeq Y$ mean that $X$ is stochastically smaller than $Y$, we know that \begin{align*} \mathrm{Bin}(R, S^{1-\rho-\kappa}) &\preccurlyeq N_{\sumdot j} \preccurlyeq \mathrm{Bin}( R, \Upsilon S^{1-\rho-\kappa}),\quad\text{and}\\ \mathrm{Bin}(C,S^{1-\rho-\kappa}) &\preccurlyeq N_{i\sumdot} \preccurlyeq \mathrm{Bin}( C, \Upsilon S^{1-\rho-\kappa}). \end{align*} By Lemma \ref{lem:hoeff}, if $t\ge0$, then \begin{align*} \Pr( N_{i\sumdot} \ge S^{1-\rho}(\Upsilon+t)) &\le \Pr\bigl( \mathrm{Bin}(C,\Upsilon S^{1-\rho-\kappa}) \ge S^{1-\rho}(\Upsilon+t)\bigr)\\ &\le \exp(-2(S^{1-\rho}t)^2/C)\\ &= \exp(-2S^{2-\kappa-2\rho}t^2). \end{align*} Therefore if $2\rho+\kappa<2$, we find using Lemma~\ref{lem:binounionbound} that \begin{align*} &\Pr\bigl( \max_iN_{i\sumdot} \ge S^{1-\rho}(\Upsilon+\epsilon)\bigr) \le S^\rho\exp(-2S^{2-\kappa-2\rho}\epsilon^2)\to0 \end{align*} for any $\epsilon>0$.
Combining this with an analogous lower bound, we get \begin{align}\label{eq:boundnid} \lim_{S\to\infty}\Pr\bigl( (1-\epsilon) S^{1-\rho}\le \min_i N_{i\sumdot} \le \max_i N_{i\sumdot} \le (\Upsilon+\epsilon) S^{1-\rho}\bigr)=1. \end{align} Likewise, if $\rho+2\kappa<2$, then for any $\epsilon>0$, \begin{align}\label{eq:boundndj} \lim_{S\to\infty}\Pr\bigl( (1-\epsilon)S^{1-\kappa}\le \min_j N_{\sumdot j} \le \max_j N_{\sumdot j} \le (\Upsilon+\epsilon) S^{1-\kappa}\bigr)=1. \end{align} \subsection{Interval arithmetic} We will replace $N_{i\sumdot}$ and other quantities by intervals that contain them with probability tending to one, and then use interval arithmetic in order to streamline some of the steps in our proofs. For instance, $$N_{i\sumdot}\in [(1-\epsilon)S^{1-\rho},(\Upsilon+\epsilon)S^{1-\rho}] = [1-\epsilon,\Upsilon+\epsilon]\times S^{1-\rho} = [1-\epsilon,\Upsilon+\epsilon]\times \frac{S}{R}$$ holds simultaneously for all $1\le i\le R$ with probability tending to one as $S\to\infty$. In interval arithmetic, $$[A,B]+[a,b]=[a+A,b+B]\quad\text{and}\quad [A,B]-[a,b]=[A-b,B-a].$$ If $0<a\le b<\infty$ and $0<A\le B<\infty$, then $$[A,B]\times[a,b] = [Aa,Bb]\quad\text{and}\quad [A,B]/[a,b] = [A/b,B/a].$$ Similarly, if $a<0<b$ and $X\in[a,b]$, then $|X|\in[0,\max(|a|,|b|)]$. Our arithmetic operations on intervals yield new intervals guaranteed to contain the results obtained using any members of the original intervals. We do not necessarily use the smallest such interval. \subsection{Co-observation} Recall that the co-observation matrices are $Z^\mathsf{T} Z\in\{0,1,\dots,R\}^{C\times C}$ and $ZZ^\mathsf{T}\in\{0,1,\dots,C\}^{R\times R}$. If $s\ne j$, then $$ \mathrm{Bin}\Bigl( R,\frac{S^2}{R^2C^2}\Bigr) \preccurlyeq (Z^\tran Z)_{sj}\preccurlyeq \mathrm{Bin}\Bigl( R,\frac{\Upsilon^2S^2}{R^2C^2}\Bigr). $$ That is, $\mathrm{Bin}(S^\rho, S^{2-2\rho-2\kappa}) \preccurlyeq (Z^\tran Z)_{sj} \preccurlyeq \mathrm{Bin}(S^\rho, \Upsilon^2S^{2-2\rho-2\kappa}).
$ For $t\ge0$, \begin{align*} \Pr\Bigl( \max_s\max_{j\ne s}(Z^\tran Z)_{sj}\ge (\Upsilon^2+t)S^{2-\rho-2\kappa}\Bigr) &\le \frac{C^2}2\exp( -(tS^{2-\rho-2\kappa})^2/R)\\ &= \frac{C^2}2\exp( -t^2 S^{4-3\rho-4\kappa}). \end{align*} If $3\rho+4\kappa<4$, then \begin{align*} &\Pr\Bigl( \max_s\max_{j\ne s} \,(Z^\tran Z)_{sj} \ge (\Upsilon^2+\epsilon)S^{2-\rho-2\kappa}\Bigr)\to0, \quad\text{and}\\ &\Pr\Bigl( \min_s\min_{j\ne s} \,(Z^\tran Z)_{sj} \le (1-\epsilon)S^{2-\rho-2\kappa}\Bigr)\to0, \end{align*} for any $\epsilon>0$. \subsection{Asymptotic bounds for $\Vert M\Vert_1$} Here we prove upper bounds for $\Vert M^{(k)}\Vert_1$ for $k=1,2$ of equations~\eqref{eq:monejs} and~\eqref{eq:mtwojs}, respectively. The bounds depend on $\Upsilon$ and there are values of $\Upsilon>1$ for which these norms are bounded strictly below one, with probability tending to one. \begin{theorem}\label{thm:m1norm1} Let $Z_{ij}$ follow the model from Section~\ref{sec:modelz} with $\rho,\kappa\in(0,1)$ that satisfy $\rho+\kappa>1$, $2\rho+\kappa<2$ and $3\rho+4\kappa<4$. Then for any $\epsilon>0$, \begin{align}\label{eq:claim1} & \Pr\bigl( \Vert M^{(1)} \Vert_1\le \Upsilon^2-\Upsilon^{-2}+\epsilon \bigr)\to1 ,\quad\text{and}\\ &\Pr\bigl( \Vert M^{(2)}\Vert_1\le \Upsilon^2-\Upsilon^{-2}+\epsilon\bigr)\to1 \label{eq:claim2} \end{align} as $S\to\infty$. \end{theorem} \begin{figure}[t!] \centering \includegraphics[width=.8\hsize]{figdomain2} \caption{ \label{fig:domainofinterest} The large shaded triangle is the domain of interest $\mathcal{D}$ for Theorem~\ref{thm:m1norm1}. The smaller shaded triangle shows a region where the analogous update to $\boldsymbol{a}$ would have acceptable norm. The points marked are the ones we look at numerically, including $(0.88,0.57)$, which corresponds to the Stitch Fix data in Section~\ref{sec:stitch}. } \end{figure} \begin{proof} Without loss of generality we assume that $\epsilon<1$. We begin with~\eqref{eq:claim2}. Let $M=M^{(2)}$.
When $j\ne s$, \begin{align*} M_{js}&=\frac1{N_{\sumdot j}+\lambda_B}\sum_r \frac{Z_{rj}}{N_{r\sumdot}+\lambda_A} (Z_{rs} -\bar Z_{\text{\tiny$\bullet$} s}),\quad\text{for}\\ \bar Z_{\text{\tiny$\bullet$} s}&= \sum_i \frac{Z_{is}}{N_{i\sumdot}+\lambda_A} \Bigm/ {\sum_{i}\frac{1}{N_{i\sumdot}+\lambda_{A}}}. \end{align*} Although $|Z_{rs}-\bar Z_{\text{\tiny$\bullet$} s}|\le1$, replacing $Z_{rs}-\bar Z_{\text{\tiny$\bullet$} s}$ by one is not sharp enough for our purposes. Every $N_{r\sumdot}+\lambda_A\in S^{1-\rho} [1-\epsilon, \Upsilon+\epsilon]$ with probability tending to one, and so \begin{align*} \frac{\bar Z_{\text{\tiny$\bullet$} s}}{N_{\sumdot j}+\lambda_B}\sum_r \frac{Z_{rj}}{N_{r\sumdot}+\lambda_A} &\in \frac{\bar Z_{\text{\tiny$\bullet$} s}}{N_{\sumdot j}+\lambda_B}\sum_r \frac{Z_{rj}}{[1-\epsilon,\Upsilon+\epsilon]S^{1-\rho}}\\ &\subseteq [1-\epsilon,\Upsilon+\epsilon]^{-1}\bar Z_{\text{\tiny$\bullet$} s} S^{\rho-1}. \end{align*} Similarly \begin{align*} \bar Z_{\text{\tiny$\bullet$} s} &\in \frac{\sum_iZ_{is}[1-\epsilon,\Upsilon+\epsilon]^{-1}} {R[1-\epsilon,\Upsilon+\epsilon]^{-1}} \subseteq\frac{N_{\sumdot s}}{R}[1-\epsilon,\Upsilon+\epsilon][1-\epsilon,\Upsilon+\epsilon]^{-1}\\ &\subseteq S^{1-\rho-\kappa} [1-\epsilon,\Upsilon+\epsilon]^2[1-\epsilon,\Upsilon+\epsilon]^{-1} \end{align*} and so \begin{align}\label{eq:zrsbarpart} \frac{\bar Z_{\text{\tiny$\bullet$} s}}{N_{\sumdot j}+\lambda_B}\sum_r \frac{Z_{rj}}{N_{r\sumdot}+\lambda_A} \in S^{-\kappa} \frac{[1-\epsilon,\Upsilon+\epsilon]^2}{[1-\epsilon,\Upsilon+\epsilon]^2} \subseteq \frac1C \Bigl[ \Bigl(\frac{1-\epsilon}{\Upsilon+\epsilon}\Bigr)^2 , \Bigl(\frac{\Upsilon+\epsilon}{1-\epsilon}\Bigr)^2 \Bigr].
\end{align} Next, using bounds on the co-observation counts, \begin{align}\label{eq:zrspart} \frac1{N_{\sumdot j}+\lambda_B}\sum_r\frac{Z_{rj}Z_{rs}}{N_{r\sumdot}+\lambda_A} \in \frac{S^{\rho+\kappa-2}(Z^\tran Z)_{sj}}{[1-\epsilon,\Upsilon+\epsilon]^2} \subseteq \frac1C \frac{[1-\epsilon,\Upsilon^2+\epsilon]}{[1-\epsilon,\Upsilon+\epsilon]^2}. \end{align} Combining~\eqref{eq:zrsbarpart} and~\eqref{eq:zrspart}, \begin{align*} M_{js} \in & \frac1C \Bigl[ \frac{1-\epsilon}{(\Upsilon+\epsilon)^2}- \Bigl(\frac{\Upsilon+\epsilon}{1-\epsilon}\Bigr)^2 , \frac{\Upsilon^2+\epsilon}{1-\epsilon} -\Bigl(\frac{1-\epsilon}{\Upsilon+\epsilon}\Bigr)^2 \Bigr]. \end{align*} For any $\epsilon'>0$ we can choose $\epsilon$ small enough that $$M_{js} \in C^{-1}[\Upsilon^{-2}-\Upsilon^2-\epsilon', \Upsilon^2-\Upsilon^{-2}+{\epsilon'}] $$ and then $|M_{js}|\le (\Upsilon^2-\Upsilon^{-2}+\epsilon')/C$. Next, arguments like the preceding give $|M_{jj}|\le (1-\epsilon')^{-2}(\Upsilon+\epsilon')S^{\rho-1}\to0$. Then with probability tending to one, $$ \sum_j|M_{js}| \le\Upsilon^2-\Upsilon^{-2} +2\epsilon'. $$ This bound holds for all $s\in\{1,2,\dots,C\}$, establishing~\eqref{eq:claim2}. The proof of~\eqref{eq:claim1} is similar. The quantity $\bar Z_{\text{\tiny$\bullet$} s}$ is replaced by $(1/R)\sum_iZ_{is}/(N_{i\sumdot}+\lambda_A)$. \end{proof} It is interesting to find the largest $\Upsilon$ with $\Upsilon^2-\Upsilon^{-2}\le1$. It is $((1+5^{1/2})/2)^{1/2}\doteq 1.27$. \section{Convergence and computation}\label{sec:empiricalnorms} In this section we make some computations on synthetic data following the probability model from Section~\ref{sec:normconvergence}. First we study the norms of our update matrix $M^{(2)}$, which affect the number of iterations to convergence. In addition to $\Vert\cdot\Vert_1$ covered in Theorem~\ref{thm:m1norm1} we also consider $\Vert\cdot\Vert_2$, $\Vert\cdot\Vert_\infty$ and $\lambda_{\max}(\cdot)$.
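These norm computations can be sketched concretely. The snippet below (our own illustration, with variable names that are ours) simulates a $Z$ matrix from the probability model, forms $M^{(2)}$ entrywise from the formula $M_{js}=(N_{\sumdot j}+\lambda_B)^{-1}\sum_r Z_{rj}(N_{r\sumdot}+\lambda_A)^{-1}(Z_{rs}-\bar Z_{\text{\tiny$\bullet$} s})$ displayed in the proof above, and computes the quantities just listed, taking $\lambda_A=\lambda_B=0$ as in our matrix norm computations.

```python
import numpy as np

# Sketch (ours): simulate Z from the model, then build the centered update
# matrix M2 with M2[j, s] = (1/(N_.j + lamB)) sum_r Z[r, j]/(N_r. + lamA)
# * (Z[r, s] - Zbar[s]) and compute its norms and spectral radius.
rng = np.random.default_rng(0)
S, rho, kappa, Ups = 2000, 4/7, 4/7, 1.27
R, C = int(np.ceil(S**rho)), int(np.ceil(S**kappa))
p = np.minimum(1.0, rng.uniform(1.0, Ups, (R, C)) * S ** (1 - rho - kappa))
Z = (rng.random((R, C)) < p).astype(float)

lamA = lamB = 0.0
Ni, Nj = Z.sum(axis=1), Z.sum(axis=0)            # assumes no empty rows/columns
w = 1.0 / (Ni + lamA)                            # row weights 1/(N_i. + lamA)
Zbar = (w @ Z) / w.sum()                         # weighted column means
M2 = (Z * w[:, None]).T @ (Z - Zbar[None, :]) / (Nj + lamB)[:, None]

norm1 = np.abs(M2).sum(axis=0).max()             # ||M||_1: max absolute column sum
norm2 = np.linalg.svd(M2, compute_uv=False)[0]   # spectral norm ||M||_2
rad = np.abs(np.linalg.eigvals(M2)).max()        # spectral radius
```

The spectral radius is necessarily no larger than either induced norm, which makes a useful internal check on the computation.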
Then we compare the cost to compute $\hat\beta_\mathrm{GLS}$ by our backfitting method with that of lmer \citep{lme4}. The problem size is indexed by $S$. Indices $i$ go from $1$ to $R=\lceil S^\rho\rceil$ and indices $j$ go from $1$ to $C=\lceil S^\kappa\rceil$. Reasonable parameter values have $\rho,\kappa\in(0,1)$ with $\rho+\kappa>1$. Theorem~\ref{thm:m1norm1} applies when $2\rho+\kappa<2$ and $3\rho+4\kappa<4$. Figure~\ref{fig:domainofinterest} depicts this triangular domain of interest $\mathcal{D}$. There is another triangle $\mathcal{D}'$ where a corresponding update for $\boldsymbol{a}$ would satisfy the conditions of Theorem~\ref{thm:m1norm1}. Then $\mathcal{D}\cup\mathcal{D}'$ is a non-convex polygon of five sides. Figure~\ref{fig:domainofinterest} also shows $\mathcal{D}'\setminus\mathcal{D}$ as a second triangular region. For points $(\rho,\kappa)$ near the line $\rho+\kappa=1$, the matrix $Z$ will be mostly ones unless $S$ is very large. For points $(\rho,\kappa)$ near the upper corner $(1,1)$, the matrix $Z$ will be extremely sparse with each $N_{i\sumdot}$ and $N_{\sumdot j}$ having nearly a Poisson distribution with mean between $1$ and $\Upsilon$. The fraction of potential values that have been observed is $O(S^{1-\rho-\kappa})$. Given $p_{ij}$, we generate our observation matrix via $Z_{ij} \stackrel{\mathrm{ind}}{\sim}\mathrm{Bern}(p_{ij})$. These probabilities are first generated via $p_{ij}= U_{ij}S^{1-\rho-\kappa}$ where $U_{ij}\stackrel{\mathrm{iid}}{\sim}\mathbb{U}[1,\Upsilon]$ and $\Upsilon$ is the largest value for which $\Upsilon^2-\Upsilon^{-2}\le1$. For small $S$ and $\rho+\kappa$ near $1$ we can get some values $p_{ij}>1$ and in that case we take $p_{ij}=1$. The following $(\rho,\kappa)$ combinations are of interest. First, $(4/5,2/5)$ is the closest vertex of the domain of interest to the point $(1,1)$.
Second, $(2/5,4/5)$ is outside the domain of interest for the $\boldsymbol{b}$ update but within the domain for the analogous $\boldsymbol{a}$ update. Third, among points with $\rho=\kappa$, the value $(4/7,4/7)$ is the farthest one from the origin that is in the domain of interest. We also look at some points on the $45$ degree line that are outside the domain of interest because the sufficient conditions in Theorem~\ref{thm:m1norm1} might not be necessary. In our matrix norm computations we took $\lambda_A=\lambda_B=0$. This completely removes shrinkage and will make it harder for the algorithm to converge than would be the case for the positive $\lambda_A$ and $\lambda_B$ that hold in real data. The values of $\lambda_A$ and $\lambda_B$ appear in expressions $N_{i\sumdot}+\lambda_A$ and $N_{\sumdot j}+\lambda_B$ where their contribution is asymptotically negligible, so conservatively setting them to zero will nonetheless be realistic for large data sets. \begin{figure} \centering \includegraphics[width=.8\hsize]{norm_n_log_xy_with_lines_revised} \caption{\label{fig:1normvsn} Norm $\Vert M^{(2)}\Vert_1$ of centered update matrix versus problem size $S$ for different $(\rho, \kappa)$. } \end{figure} \noindent We sample from the model multiple times at various values of $S$ and plot $\Vert M^{(2)}\Vert_1$ versus $S$ on a logarithmic scale. Figure~\ref{fig:1normvsn} shows the results. We observe that $\Vert M^{(2)}\Vert_1$ is below $1$ and decreasing with $S$ for all the examples $(\rho,\kappa)\in\mathcal{D}$. This holds also for $(\rho,\kappa)=(0.60,0.60)\not\in\mathcal{D}$. We chose that point because it is on the convex hull of $\mathcal{D}\cup\mathcal{D}'$. The point $(\rho,\kappa)=(0.40,0.80)$ is not in $\mathcal{D}$. Figure~\ref{fig:1normvsn} shows large values of $\Vert M^{(2)}\Vert_1$ for this case. Those values increase with $S$, but remain below $1$ in the range considered.
This is a case where the update from $\boldsymbol{a}$ to $\boldsymbol{a}$ would have norm well below $1$ and decreasing with $S$, so backfitting would converge. We do not know whether $\Vert M^{(2)}\Vert_1>1$ will occur for larger $S$. The point $(\rho,\kappa)=(0.70,0.70)$ is not in the domain $\mathcal{D}$ covered by Theorem~\ref{thm:m1norm1} and we see that $\Vert M^{(2)}\Vert_1>1$, generally increasing with $S$, as shown in Figure~\ref{fig:7070norms}. This does not mean that backfitting must fail to converge. Here we find that $\Vert M^{(2)}\Vert_2<1$ and generally decreases as $S$ increases. This is a strong indication that the number of backfitting iterations required will not grow with $S$ for this $(\rho,\kappa)$ combination. We cannot tell whether $\Vert M^{(2)}\Vert_2$ will decrease to zero but that is what appears to happen. We consistently find in our computations that $\lambda_{\max}(M^{(2)})\le \Vert M^{(2)}\Vert_2\le\Vert M^{(2)}\Vert_1$. The first of these inequalities must necessarily hold. For a symmetric matrix $M$ we know that $\lambda_{\max}(M)=\Vert M\Vert_2$ which is then necessarily no larger than $\Vert M\Vert_1$. Our update matrices are nearly symmetric but not perfectly so. We believe that explains why their $L_2$ norms are close to their spectral radii and also smaller than their $L_1$ norms. While the $L_2$ norms are empirically more favorable than the $L_1$ norms, they are not amenable to our theoretical treatment. \begin{figure} \centering \begin{subfigure}{.48\textwidth} \centering \includegraphics[scale=.4]{norm_vs_S_with_lines_70_L1_written_norm_logxy} \end{subfigure} \begin{subfigure}{.48\textwidth} \centering \includegraphics[scale=.4]{norm_vs_S_with_lines_70_L2_written_norm_logxy_main_correct} \end{subfigure} \caption{\label{fig:7070norms} The left panel shows $\Vert M^{(2)}\Vert_1$ versus $S$. The right panel shows $\Vert M^{(2)}\Vert_2$ versus $S$ with a logarithmic vertical scale. Both have $(\rho,\kappa)=(0.7,0.7)$.
} \end{figure} We believe that backfitting will have a spectral radius well below $1$ for more cases than we can as yet prove. In addition to the previous figures showing matrix norms as $S$ increases for certain special values of $(\rho,\kappa)$ we have computed contour maps of those norms over $(\rho,\kappa)\in[0,1]^2$ for $S=10{,}000$. See Figure~\ref{fig:contours}. To compare the computation times for algorithms we generated $Z_{ij}$ as above and also took $x_{ij}\stackrel{\mathrm{iid}}{\sim}\mathcal{N}(0,I_7)$ plus an intercept, making $p=8$ fixed effect parameters. Although backfitting can run with $\lambda_A=\lambda_B=0$, lmer cannot do so for numerical reasons. So we took $\sigma^2_A=\sigma^2_B=1$ and $\sigma^2_E=1$ corresponding to $\lambda_A=\lambda_B=1$. The cost per iteration does not depend on $Y_{ij}$ and hence not on $\beta$ either. We used $\beta=0$. Figure~\ref{fig:comptimes} shows computation times for a single iteration when $(\rho,\kappa)=(0.52,0.52)$ and when $(\rho,\kappa)=(0.70,0.70)$. The time to do one iteration in lmer grows roughly like $N^{3/2}$ in the first case. For the second case, it appears to grow at the even faster rate of $N^{2.1}$. Solving a system of $S^\kappa\times S^\kappa$ equations would cost $S^{3\kappa} = S^{2.1} = O(N^{2.1})$, which explains the observed rate. This analysis would predict $O(N^{1.56})$ for $\rho=\kappa=0.52$ but that is only minimally different from $O(N^{3/2})$. These experiments were carried out in R on a computer with the macOS operating system, 16 GB of memory and an Intel i7 processor. Each backfitting iteration entails solving \eqref{eq:backfit} along with the fixed effects. The cost per iteration for backfitting closely follows the $O(N)$ rate predicted by the theory. OLS only takes one iteration and it is also of $O(N)$ cost. In both of these cases $\Vert M^{(2)}\Vert_2$ is bounded away from one so the number of backfitting iterations does not grow with $S$.
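The $O(N)$ per-iteration cost is visible in a sketch of the basic alternating updates. The update rule below is the plain uncentered block solve (shrunken residual means) inferred from the formulas above; the centered variants that we prefer modify these steps, and all names here are ours.

```python
import numpy as np

def backfit(rows, cols, resid, R, C, lamA=1.0, lamB=1.0, tol=1e-8, maxit=500):
    """Sketch (ours) of uncentered alternating updates for a_i and b_j.

    rows, cols are length-N index arrays of the observed cells and resid holds
    Y - X beta.  Each sweep is two bincount passes over the N observations,
    so the cost per iteration is O(N)."""
    Ni = np.bincount(rows, minlength=R)            # N_{i.}
    Nj = np.bincount(cols, minlength=C)            # N_{.j}
    a, b = np.zeros(R), np.zeros(C)
    for _ in range(maxit):
        a_new = np.bincount(rows, resid - b[cols], minlength=R) / (Ni + lamA)
        b_new = np.bincount(cols, resid - a_new[rows], minlength=C) / (Nj + lamB)
        delta = max(np.abs(a_new - a).max(), np.abs(b_new - b).max())
        a, b = a_new, b_new
        if delta < tol:
            break
    return a, b
```

At convergence the returned vectors satisfy the two penalized block equations to within the tolerance.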
For $\rho=\kappa=0.52$, backfitting took $4$ iterations to converge for the smaller values of $S$ and $3$ iterations for the larger ones. For $\rho=\kappa=0.70$, backfitting took $6$ iterations for smaller $S$ and $4$ or $5$ iterations for larger $S$. In each case our convergence criterion was a relative change of $10^{-8}$ as described in Section~\ref{sec:wholeshebang}. Further backfitting to compute BLUPs $\hat\boldsymbol{a}$ and $\hat\boldsymbol{b}$ given $\hat\beta_{\mathrm{GLS}}$ took at most $5$ iterations for $\rho=\kappa=0.52$ and at most $10$ iterations for $\rho=\kappa=0.7$. In the second example, lme4 did not reach convergence in our time window, so we ran it for just $4$ iterations to measure its cost per iteration. \begin{figure}[!t] \centering \begin{subfigure}{.48\textwidth} \centering \includegraphics[scale=.28]{one_norm_reshaped.png} \end{subfigure} \begin{subfigure}{.48\textwidth} \centering \includegraphics[scale=.28]{infinity_norm_reshaped.png} \end{subfigure} \centering \begin{subfigure}{.48\textwidth} \centering \includegraphics[height = 5.2cm, width = 5.5cm]{two_norm_reshaped.png} \end{subfigure} \begin{subfigure}{.48\textwidth} \centering \includegraphics[height = 5.2cm, width = 5.44cm]{spectral_radius_reshaped.png} \end{subfigure} \caption{\label{fig:contours} Numerically computed matrix norms for $M^{(2)}$ using $S=10{,}000$. The color code varies with the subfigures. } \end{figure} \begin{figure} \centering \begin{subfigure}{.48\textwidth} \centering \includegraphics[width=1\linewidth]{time_per_iter_vs_n_last_point_1_point_2716_reference_slope_at_end_52_52_review.pdf} \caption{$(\rho, \kappa) = (0.52,0.52)$} \end{subfigure} \begin{subfigure}{.48\textwidth} \centering \includegraphics[width=1\linewidth]{backfitting_lmer_time_total} \caption{$(\rho, \kappa) = (0.70,0.70)$} \end{subfigure} \caption{\label{fig:comptimes} Time for one iteration versus the number of observations $N$, at two points $(\rho,\kappa)$.
The cost for lmer is roughly $O(N^{3/2})$ in the top panel and $O(N^{2.1})$ in the bottom panel. The costs for OLS and backfitting are $O(N)$. } \end{figure} \section{Example: ratings from Stitch Fix}\label{sec:stitch} We illustrate backfitting for GLS on some data from Stitch Fix. Stitch Fix sells clothing. They mail their customers a sample of items. The customers may keep and purchase any of those items that they want, while returning the others. It is valuable to predict the extent to which a customer will like an item, not just whether they will purchase it. Stitch Fix has provided us with some of their client ratings data. It was anonymized, void of personally identifying information, and as a sample it does not reflect their total numbers of clients or items at the time they provided it. It is also from 2015. While it does not describe their current business, it is a valuable data set for illustrative purposes. The sample sizes for this data are as follows. We received $N=5{,}000{,}000$ ratings by $R=762{,}752$ customers on $C=6{,}318$ items. These values of $R$ and $C$ correspond to the point $(0.88,0.57)$ in Figure~\ref{fig:domainofinterest}. Thus $C/N\doteq 0.00126$ and $R/N\doteq 0.153$. The data are not dominated by a single row or column because $\max_iN_{i\sumdot}/N\doteq 9\times 10^{-6}$ and $\max_jN_{\sumdot j}/N\doteq 0.0143$. The data are sparse because $N/(RC)\doteq 0.001$. \subsection{An illustrative linear model} The response $Y_{ij}$ is a rating on a ten point scale of the satisfaction of customer $i$ with item $j$. The data come with features about the clients and items. In a business setting one would fit and compare possibly dozens of different regression models to understand the data. Our purpose here is to study large scale GLS and compare it to ordinary least squares (OLS) and so we use just one model, not necessarily one that we would have settled on. For that purpose we use the same model that was used in \cite{crelin}.
It is not chosen to make OLS look as bad as possible. Instead it is potentially the first model one might look at in a data analysis. For client $i$ and item $j$, \begin{align} Y_{ij}& = \beta_0+\beta_1\mathrm{match}_{ij}+\beta_2\mathbb{I}\{\mathrm{client\ edgy}\}_i+\beta_3\mathbb{I}\{\mathrm{item\ edgy}\}_j \notag \\ &\phe + \beta_4\mathbb{I}\{\mathrm{client\ edgy}\}_i*\mathbb{I}\{\mathrm{item\ edgy}\}_j+\beta_5\mathbb{I}\{\mathrm{client\ boho}\}_i \notag \\ &\phe + \beta_6\mathbb{I}\{\mathrm{item\ boho}\}_j+\beta_7\mathbb{I}\{\mathrm{client\ boho}\}_i*\mathbb{I}\{\mathrm{item\ boho}\}_j \notag \\ &\phe + \beta_8\mathrm{material}_{ij}+a_i+b_j+e_{ij}. \notag \end{align} Here $\mathrm{material}_{ij}$ is a categorical variable that is implemented via indicator variables for each type of material other than the baseline. Following \cite{crelin}, we chose `Polyester', the most common material, as the baseline. Some customers and some items were given the adjective `edgy' in the data set. Another adjective was `boho', short for `Bohemian'. The variable match$_{ij}\in[0,1]$ is an estimate of the probability that the customer keeps the item, made before the item was sent. The match score is a prediction from a baseline model and is not representative of all algorithms used at Stitch Fix. All told, the model has $p=30$ parameters. \subsection{Estimating the variance parameters}\label{sec:estim-vari-param} We use the method of moments approach from \cite{crelin} to estimate $\theta^\mathsf{T}=(\sigma^2_A, \sigma^2_B, \sigma^2_E)$ in $O(N)$ computation. That is in turn based on the method that \cite{GO17} use in the intercept-only model where $Y_{ij} = \mu+a_i+b_{j}+e_{ij}$.
For that model they set \begin{align*} U_{A} &= \sum_{i} \sum_{j} Z_{ij} \Bigl( Y_{ij}-\frac{1}{N_{i\sumdot}}\sum_{j^{\prime}}Z_{ij^{\prime}} Y_{ij^{\prime}}\Bigr)^{2}, \\ U_{B} &= \sum_{j}\sum_{i} Z_{ij} \Bigl(Y_{ij}-\frac{1}{N_{\sumdot j}}\sum_{i^{\prime}}Z_{i^{\prime}j} Y_{i^{\prime}j}\Bigr)^{2}, \quad\text{and}\\ U_{E} &= N\sum_{i j} Z_{i j} \Bigl(Y_{i j}-\frac{1}{N}\sum_{i^{\prime} j^{\prime}}Z_{i^{\prime}j^{\prime}} Y_{i^{\prime} j^{\prime}}\Bigr)^{2}. \end{align*} These are, respectively, sums of within row sums of squares, sums of within column sums of squares and a scaled overall sum of squares. Straightforward calculations show that \begin{align*} \mathbb{E}(U_{A})&=\bigl(\sigma^2_B+\sigma^2_E\bigr)(N-R), \\ \mathbb{E}(U_{B})&=\bigl(\sigma^2_A+\sigma^2_E \bigr)(N-C), \quad\text{and}\\ \mathbb{E}(U_{E})&=\sigma^2_A\Bigl(N^{2}-\sum_{i} N_{i\sumdot}^{2}\Bigr)+\sigma^2_B\Bigl(N^{2}-\sum_{j} N_{\sumdot j}^{2}\Bigr)+\sigma^2_E(N^{2}-N). \end{align*} By matching moments, we can estimate $\theta$ by solving the $3 \times 3$ linear system $$\begin{pmatrix} 0& N-R & N-R \\[.25ex] N-C & 0 & N-C \\[.25ex] N^{2}-\sum_iN_{i\sumdot}^{2} & N^{2}-\sum_jN_{\sumdot j}^{2} & N^{2}-N \end{pmatrix} \begin{pmatrix} \sigma^2_A \\[.25ex] \sigma^2_B \\[.25ex] \sigma^2_E\end{pmatrix} =\begin{pmatrix} U_{A}\\[.25ex] U_{B} \\[.25ex] U_{E}\end{pmatrix} $$ for $\theta$. Following \cite{GO17} we note that $\eta_{ij} =Y_{ij}-x_{ij}^\mathsf{T}\beta = a_i+b_{j}+e_{ij}$ has the same parameter $\theta$ as $Y_{ij}$ has. We then take a consistent estimate of $\beta$, in this case $\hat\beta_{\mathrm{OLS}}$ that \cite{GO17} show is consistent for $\beta$, and define $\hat\eta_{ij} =Y_{ij}-x_{ij}^\mathsf{T}\hat\beta_\mathrm{OLS}$. We then estimate $\theta$ by the above method after replacing $Y_{ij}$ by $\hat\eta_{ij}$. For the Stitch Fix data we obtained $\hat{\sigma}_{A}^{2} = 1.14$ (customers), $\hat{\sigma}^{2}_{B} = 0.11$ (items) and $\hat{\sigma}^{2}_{E} = 4.47$.
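The whole moment calculation above fits in a few lines. The sketch below (our own illustration; the function and variable names are ours) computes $U_A$, $U_B$ and $U_E$ from residuals on the observed cells in $O(N)$ work and solves the $3\times3$ system for $\theta$.

```python
import numpy as np

# Sketch (ours) of the method of moments estimator of (sigma2_A, sigma2_B,
# sigma2_E) described above, applied to residuals eta on N observed (i, j) cells.
def mom_variances(rows, cols, eta, R, C):
    N = len(eta)
    Ni = np.bincount(rows, minlength=R)                    # N_{i.}
    Nj = np.bincount(cols, minlength=C)                    # N_{.j}
    row_mean = np.bincount(rows, eta, minlength=R) / Ni    # assumes no empty rows
    col_mean = np.bincount(cols, eta, minlength=C) / Nj    # assumes no empty cols
    UA = np.sum((eta - row_mean[rows]) ** 2)               # within-row SS
    UB = np.sum((eta - col_mean[cols]) ** 2)               # within-column SS
    UE = N * np.sum((eta - eta.mean()) ** 2)               # scaled overall SS
    A = np.array([[0.0, N - R, N - R],
                  [N - C, 0.0, N - C],
                  [N**2 - np.sum(Ni.astype(float) ** 2),
                   N**2 - np.sum(Nj.astype(float) ** 2),
                   float(N**2 - N)]])
    return np.linalg.solve(A, np.array([UA, UB, UE]))      # (s2A, s2B, s2E)
```

On simulated data with known variance components, the estimates concentrate around the true values as $R$ and $C$ grow.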
\subsection{Computing $\hat\beta_\mathrm{GLS}$}\label{sec:wholeshebang} The estimated coefficients $\hat\beta_\mathrm{GLS}$ and their standard errors are presented in a table in the appendix. Open-source R code at \url{https://github.com/G28Sw/backfit_code} does these computations. Here is a concise description of the algorithm we used: \begin{compactenum}[\quad 1)] \item Compute $\hat\beta_\mathrm{OLS}$ via \eqref{eq:bhatols}. \item Get residuals $\hat\eta_{ij} =Y_{ij} -x_{ij}^\mathsf{T}\hat\beta_{\mathrm{OLS}}$. \item Compute $\hat\sigma^2_A$, $\hat\sigma^2_B$ and $\hat\sigma^2_E$ by the method of moments on $\hat\eta_{ij}$. \item Compute $\widetilde\mathcal{X}=(I_N-\widetilde\mathcal{S}_G)\mathcal{X}$ using doubly centered backfitting $M^{(3)}$. \item Compute $\hat\beta_{\mathrm{GLS}}$ by~\eqref{eq:covbhatgls}. \item If we want BLUPs $\hat\boldsymbol{a}$ and $\hat\boldsymbol{b}$, backfit $\mathcal{Y} -\mathcal{X}\hat\beta_{\mathrm{GLS}}$ to get them. \item Compute $\widehat\cov(\hat\beta_{\mathrm{GLS}})$ by plugging $\hat\sigma^2_A$, $\hat\sigma^2_B$ and $\hat\sigma^2_E$ into $\mathcal{V}$ at~\eqref{eq:covbhatgls}. \end{compactenum} \smallskip Stage $k$ of backfitting provides $(\widetilde\mathcal{S}_G\mathcal{X})^{(k)}$. We iterate until $$ \frac{\Vert (\widetilde\mathcal{S}_G\mathcal{X})^{(k+1)}-(\widetilde\mathcal{S}_G\mathcal{X})^{(k)}\Vert^2_F}{\Vert (\widetilde\mathcal{S}_G\mathcal{X})^{(k)}\Vert^2_F} < \epsilon $$ where $\Vert \cdot \Vert_F$ is the Frobenius norm (the square root of the sum of squared elements). Our numerical results use $\epsilon =10^{-8}$. { When we want $\widehat\cov(\hat\beta_{\mathrm{GLS}})$, we need to use a backfitting strategy with a symmetric smoother $\widetilde\mathcal{S}_G$. This holds for $M^{(0)}$, $M^{(2)}$ and $M^{(3)}$ but not $M^{(1)}$. After computing $\hat\beta_{\mathrm{GLS}}$ one can return to step 2, form new residuals $\hat\eta_{ij} =Y_{ij} -x_{ij}^\mathsf{T}\hat\beta_{\mathrm{GLS}}$ and continue through steps 3--7.
We have seen small differences from doing this. } \subsection{Quantifying inefficiency and naivete of OLS} In the introduction we mentioned two serious problems with the use of OLS on crossed random effects data. The first is that OLS is naive about correlations in the data, and this can lead it to severely underestimate the variance of $\hat\beta$. The second is that OLS is inefficient compared to GLS by the Gauss-Markov theorem. Let $\hat\beta_\mathrm{OLS}$ and $\hat\beta_\mathrm{GLS}$ be the OLS and GLS estimates of $\beta$, respectively. We can compute their corresponding variance estimates $\widehat\cov_\mathrm{OLS}(\hat\beta_\mathrm{OLS})$ and $\widehat\cov_\mathrm{GLS}(\hat\beta_\mathrm{GLS})$. We can also find $\widehat\cov_\mathrm{GLS}(\hat\beta_\mathrm{OLS})$, the variance under our GLS model of the linear combination of $Y_{ij}$ values that OLS uses. This section explores them graphically. We can quantify the naivete of OLS via the ratios $\widehat\cov_{\mathrm{GLS}}(\hat\beta_{\mathrm{OLS},j})/\widehat\cov_{\mathrm{OLS}}(\hat\beta_{\mathrm{OLS},j})$ for $j=1,\dots,p$. Figure~\ref{fig:OLSisnaive} plots these values. They range from $1.75$ to $345.28$ and can be interpreted as factors by which OLS naively overestimates its sample size. The largest and second largest ratios are for material indicators corresponding to `Modal' and `Tencel', respectively. These appear to be two names for the same product, with Tencel being a trademarked name for Modal fibers (made from wood). We can also identify the linear combination of $\hat\beta_\mathrm{OLS}$ for which $\mathrm{OLS}$ is most naive. We maximize the ratio $x^\mathsf{T}\widehat\cov_{\mathrm{GLS}}(\hat\beta_{\mathrm{OLS}})x/x^\mathsf{T}\widehat\cov_{\mathrm{OLS}}(\hat\beta_{\mathrm{OLS}})x$ over $x\ne0$.
The resulting maximal ratio is the largest eigenvalue of $$\widehat\cov_{\mathrm{OLS}}(\hat\beta_{\mathrm{OLS}}) ^{-1} \widehat\cov_{\mathrm{GLS}}(\hat\beta_{\mathrm{OLS}})$$ and it is about $361$ for the Stitch Fix data. \begin{figure} \centering \includegraphics[width=.9\hsize]{figOLSisnaive_katelyn_interaction_polyester_reference} \caption{\label{fig:OLSisnaive} OLS naivete $\widehat\cov_{\mathrm{GLS}}(\hat\beta_{\mathrm{OLS},j})/\widehat\cov_{\mathrm{OLS}}(\hat\beta_{\mathrm{OLS},j})$ for coefficients $\beta_j$ in the Stitch Fix data. } \end{figure} We can quantify the inefficiency of OLS via the ratio $\widehat\cov_{\mathrm{GLS}}(\hat\beta_{\mathrm{OLS},j})/\widehat\cov_{\mathrm{GLS}}(\hat\beta_{\mathrm{GLS},j})$ for $j=1,\dots,p$. Figure~\ref{fig:OLSisinefficient} plots these values. They range from just over $1$ to $50.6$ and can be interpreted as factors by which using OLS reduces the effective sample size. There is a clear outlier: the coefficient of the match variable is very inefficiently estimated by OLS. The second largest inefficiency factor is for the intercept term. The most inefficient linear combination of $\hat\beta$ reaches a variance ratio of $52.6$, only slightly more inefficient than the match coefficient alone. \begin{figure} \centering \includegraphics[width=.9\hsize]{figOLSisinefficient_katelyn_interaction_polyester_reference} \caption{\label{fig:OLSisinefficient} OLS inefficiency $\widehat\cov_{\mathrm{GLS}}(\hat\beta_{\mathrm{OLS},j})/\widehat\cov_{\mathrm{GLS}}(\hat\beta_{\mathrm{GLS},j})$ for coefficients $\beta_j$ in the Stitch Fix data. } \end{figure} The variables for which OLS is more naive tend to also be the variables for which it is most inefficient. Figure~\ref{fig:naivevsinefficient} plots these quantities against each other for the $30$ coefficients in our model. 
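The worst-case ratio above is a generalized eigenvalue computation, and the sandwich form of $\widehat\cov_{\mathrm{GLS}}(\hat\beta_{\mathrm{OLS}})$ follows from standard least squares algebra. A small self-contained sketch (ours, on synthetic matrices rather than the Stitch Fix data):

```python
import numpy as np

# Sketch (ours): the maximal ratio max_x x'Ax / x'Bx for covariance matrices
# A = cov_GLS(beta_OLS) and B = cov_OLS(beta_OLS) is the largest eigenvalue
# of B^{-1} A; per-coefficient ratios come from the diagonals.
def max_ratio(A, B):
    return np.max(np.linalg.eigvals(np.linalg.solve(B, A)).real)

# Sandwich covariance of beta_OLS under a general error covariance V:
# (X'X)^{-1} X' V X (X'X)^{-1}.
def sandwich(X, V):
    XtX_inv = np.linalg.inv(X.T @ X)
    return XtX_inv @ X.T @ V @ X @ XtX_inv
```

With $V=I$ the sandwich collapses to the usual $(X^\mathsf{T} X)^{-1}$ and every ratio is one, a useful check of the computation.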
\begin{figure}[t] \centering \includegraphics[width=.8\hsize]{fignaivevsinefficient_katelyn_interaction_polyester_reference} \caption{\label{fig:naivevsinefficient} Inefficiency vs naivete for OLS coefficients in the Stitch Fix data. } \end{figure} \subsection{Convergence speed of backfitting} The Stitch Fix data have row and column sample sizes that are much more uneven than our sampling model for $Z$ allows. Accordingly we cannot rely on Theorem~\ref{thm:m1norm1} to show that backfitting must converge rapidly for it. The sufficient conditions in that theorem may not be necessary, and we can compute our norms and the spectral radius of the update matrices for the Stitch Fix data using some sparse matrix computations. Here $Z\in\{0,1\}^{762{,}752\times 6318}$, so $M^{(k)}\in\mathbb{R}^{6318\times 6318}$ for $k \in \{0,1,2,3\}$. The results are $$ \begin{pmatrix} \Vert M^{(0)}\Vert_1 \ & \ \Vert M^{(0)}\Vert_2 \ & \ |\lambda_{\max}(M^{(0)})|\\[.25ex] \Vert M^{(1)}\Vert_1 \ & \ \Vert M^{(1)}\Vert_2 \ & \ |\lambda_{\max}(M^{(1)})|\\[.25ex] \Vert M^{(2)}\Vert_1 \ & \ \Vert M^{(2)}\Vert_2 \ & \ |\lambda_{\max}(M^{(2)})|\\[.25ex] \Vert M^{(3)}\Vert_1 \ & \ \Vert M^{(3)}\Vert_2 \ & \ |\lambda_{\max}(M^{(3)})| \end{pmatrix} =\begin{pmatrix} 31.9525 \ & \ 1.4051 \ & \ 0.64027 \\[.75ex] 11.2191 \ & \ 0.4512 \ & \ 0.33386\\[.75ex] \phz8.9178 \ & \ 0.4541 \ & \ 0.33407\\[.75ex] \phz9.2143\ & \ 0.4546 & \ 0.33377\\ \end{pmatrix}. $$ All the updates have spectral radius comfortably below one. The centered updates have $L_2$ norm below one but the uncentered update does not. Their $L_2$ norms are somewhat larger than their spectral radii because those matrices are not quite symmetric. The two largest eigenvalue moduli for $M^{(0)}$ are $0.6403$ and $0.3337$ and the centered updates have spectral radii close to the second largest eigenvalue of $M^{(0)}$.
This is consistent with an intuitive explanation that the space spanned by a column of $N$ ones that is common to the column spaces of $\mathcal{Z}_A$ and $\mathcal{Z}_B$ is the biggest impediment to $M^{(0)}$ and that all three centering strategies essentially remove it. The best spectral radius is for $M^{(3)}$, which employs two principled centerings, although in this data set it made little difference. Our backfitting algorithm took $8$ iterations when applied to $\mathcal{X}$ and $12$ more to compute the BLUPs. We used a convergence threshold of $10^{-8}$. \section{Discussion}\label{sec:discussion} We have shown that the cost of our backfitting algorithm is $O(N)$ under strict conditions that are nonetheless much more general than having $N_{i\sumdot} = N/R$ for all $i=1,\dots,R$ and $N_{\sumdot j} = N/C$ for all $j=1,\dots,C$ as in \cite{papa:robe:zane:2020}. As in their setting, the backfitting algorithm scales empirically to much more general problems than those for which rapid convergence can be proved. Our contour map of the spectral radius of the update matrix $M$ shows that it is well below $1$ over many more $(\rho,\kappa)$ pairs than our theorem covers. The difficulty in extending our approach to those settings is that the spectral radius is a much more complicated function of the observation matrix $Z$ than the $L_1$ norm is. Theorem 4 of \cite{papa:robe:zane:2020} has the rate of convergence for their collapsed Gibbs sampler for balanced data. It involves an auxiliary convergence rate $\rho_{\mathrm{aux}}$ defined as follows. Consider the Gibbs sampler on $(i,j)$ pairs where given $i$ a random $j$ is chosen with probability $Z_{ij}/N_{i\sumdot}$ and given $j$ a random $i$ is chosen with probability $Z_{ij}/N_{\sumdot j}$. That Markov chain has invariant distribution $Z_{ij}/N$ on $(i,j)$ pairs and $\rho_{\mathrm{aux}}$ is the rate at which the chain converges.
In our notation $$ \rho_{\mathrm{PRZ}} = \frac{N\sigma^2_A}{N\sigma^2_A+R\sigma^2_E}\times\frac{N\sigma^2_B}{N\sigma^2_B+C\sigma^2_E}\times\rho_{\mathrm{aux}}. $$ In sparse data $\rho_{\mathrm{PRZ}}\approx\rho_{\mathrm{aux}}$ and under our asymptotic setting $|\rho_{\mathrm{aux}}-\rho_{\mathrm{PRZ}}|\to0$. \cite{papa:robe:zane:2020} remark that $\rho_{\mathrm{aux}}$ tends to decrease as the amount of data increases. When it does, then their algorithm takes $O(1)$ iterations and costs $O(N)$. They explain that $\rho_{\mathrm{aux}}$ should decrease as the data set grows because the auxiliary process then gets greater connectivity. That connectivity increases for bounded $R$ and $C$ with increasing $N$, and from their notation, which allows multiple observations per $(i,j)$ pair, it seems that they have this sort of infill asymptote in mind. For sparse data from electronic commerce we think that an asymptote like the one we study, where $R$, $C$ and $N$ all grow, is a better description. It would be interesting to see how $\rho_{\mathrm{aux}}$ develops under such a model. In Section 5.3 \cite{papa:robe:zane:2020} state that the convergence rate of the collapsed Gibbs sampler is $O(1)$ regardless of the asymptotic regime. That section is about a more stringent `balanced cells' condition where every $(i,j)$ combination is observed the same number of times, so it does not describe the `balanced levels' setting where $N_{i\sumdot}=N/R$ and $N_{\sumdot j}=N/C$. Indeed, they provide a counterexample in which there are two disjoint communities of users and two disjoint sets of items and each user in the first community has rated every item in the first item set (and no others) while each user in the second community has rated every item in the second item set (and no others). That configuration leads to an unbounded mixing time for collapsed Gibbs. It is also one where backfitting takes an increasing number of iterations as the sample size grows.
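On our reading, the auxiliary rate can be probed numerically through the item-to-item chain that the two Gibbs moves induce: $P_{jj'}=\sum_i (Z_{ij}/N_{\sumdot j})(Z_{ij'}/N_{i\sumdot})$, whose second-largest eigenvalue modulus governs mixing. A sketch (ours; this is our interpretation of the auxiliary process, not the authors' code):

```python
import numpy as np

# Sketch (ours): second-largest eigenvalue modulus of the item-to-item chain
# P[j, j'] = sum_i (Z[i, j]/N_.j) * (Z[i, j']/N_i.) induced by the auxiliary
# Gibbs moves: pick a user of item j, then one of that user's items.
def second_eig(Z):
    Ni = Z.sum(axis=1)                       # N_{i.}
    Nj = Z.sum(axis=0)                       # N_{.j}
    P = (Z / Nj).T @ (Z / Ni[:, None])       # row-stochastic, C x C
    mods = np.sort(np.abs(np.linalg.eigvals(P)))[::-1]
    return mods[1]
```

A fully crossed $Z$ gives a second eigenvalue of zero (the chain equilibrates in one step), while the two-community counterexample gives one (a reducible chain), matching the unbounded mixing time noted above.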
There are interesting parallels between methods to sample a high-dimensional Gaussian distribution with covariance matrix $\Sigma$ and iterative solvers for the system $\Sigma \boldsymbol{x} = \boldsymbol{b}$. See \cite{good:soka:1989} and \cite{RS97} for more on how the convergence rates for these two problems coincide. We found that backfitting with one or both updates centered worked much better than uncentered backfitting. \cite{papa:robe:zane:2020} used a collapsed sampler that analytically integrated out the global mean of their model in each update of a block of random effects.

Our approach treats $\sigma^2_A$, $\sigma^2_B$ and $\sigma^2_E$ as nuisance parameters, plugging in a consistent method-of-moments estimator of them in order to focus on the backfitting iterations. In Bayesian computations, maximum a posteriori estimators of variance components under non-informative priors can be problematic for hierarchical models \cite{gelm:2006}, so perhaps maximum likelihood estimation of these variance components would also have been challenging.

Whether one prefers a GLS estimate or a Bayesian one depends on context and goals. We believe that there is a strong computational advantage to GLS for large data sets. The cost of one backfitting iteration is comparable to the cost of generating one more sample in the MCMC, and we may well find that only a dozen or so iterations are required for convergence of the GLS. A Bayesian analysis requires a much larger number of draws from the posterior distribution than that. For instance, \cite{gelm:shir:2011} recommend an effective sample size of about $100$ posterior draws, with autocorrelation in the chain requiring a still larger actual sample size. \cite{vats:fleg:jone:2019} advocate even greater effective sample sizes.

It is usually reasonable to assume that there is a selection bias underlying which data points are observed.
Accounting for any such selection bias necessarily involves using information or assumptions from outside the data set at hand. We expect that any approach taking proper account of informative missingness must also make use of solutions to GLS, perhaps after reweighting the observations. Before one develops any such methods, it is necessary first to be able to solve GLS without regard to missingness.

Many of the problems in electronic commerce involve categorical outcomes, especially binary ones, such as whether an item was purchased or not. Generalized linear mixed models are then the appropriate way to handle crossed random effects, and we expect that the progress made here will be useful for those problems.

\section*{Acknowledgements}

This work was supported by the U.S.\ National Science Foundation under grant IIS-1837931. We are grateful to Brad Klingenberg and Stitch Fix for sharing some test data with us. We thank the reviewers for remarks that have helped us improve the paper.

\bibliographystyle{imsart-nameyear}