\section{Introduction}
\label{sec:intro}
Few-shot object detection (FSOD) \cite{yan2019meta,wang2020frustratingly,xiao2020few,wu2020multi,zhu2021semantic} aims at detecting objects out of base training set with few support examples per class.
It has received increasing attention from the robotics community due to its vital role in autonomous exploration, since robots are often expected to detect novel objects in an unknown environment but only a few examples can be provided online.
For example, in a rescue mission as shown in \fref{fig:1} (a), the robots are required to detect uncommon objects such as drills, ropes, helmets, and vents.
\begin{figure}[!t]
\centering
\includegraphics[width=1\columnwidth]{fig1.pdf}
\caption{Representative images from robotic exploration and a performance comparison of state-of-the-art methods \cite{fan2020few,xiao2020few,wu2020multi,wang2020frustratingly,faster} and the proposed AirDet. Solid lines denote results with no fine-tuning and dashed lines indicate results fine-tuned on few-shot data. Without further updating, AirDet can outperform prior work. Moreover, unlike the fine-tuned models, which hit a bottleneck on small objects, AirDet sets a new state of the art.}
\label{fig:1}
\end{figure}
Despite its recent promising developments, most existing methods \cite{kang2019few,sun2021fsce,zhang2021accurate,wang2019meta,wang2020frustratingly,wu2020multi,qiao2021defrcn,cao2021nips,li2021few,fan2021generalized} require a careful \textit{offline} fine-tuning stage on novel images before inference. However, such a fine-tuning process is infeasible for robotic \textit{online} applications, since
(\textbf{1}) new object categories can be dynamically added during exploration, so re-fine-tuning the model for novel classes with limited onboard computational resources is extremely inefficient for time-starved tasks such as search and rescue \cite{tariq2018dronaid,farooq2018ground,wang2020visual,chen_tro};
(\textbf{2}) to save human effort, only very few samples can be provided online\footnote{Since online annotation is needed during mission execution, only 1-5 samples can be provided in most robotic applications, which is the main focus of this paper.}, so the fine-tuning stage \cite{kang2019few,sun2021fsce,zhang2021accurate,wang2019meta,wang2020frustratingly,wu2020multi,qiao2021defrcn,fan2021generalized,li2021few} needs careful \textit{offline} hyper-parameter tuning to avoid over-fitting, which is infeasible for \textit{online} exploration; and (\textbf{3}) fine-tuned models usually perform well in in-domain tests \cite{wang2020frustratingly,wu2020multi,xiao2020few,faster,qiao2021defrcn,li2021few}, but suffer in cross-domain tests, which is unfavourable for robotic applications.
Therefore, a few-shot detector that can perform inference without fine-tuning, such as \cite{fan2020few}, is often desired. However, the performance of \cite{fan2020few} is still severely hampered in the challenging robotics domain due to (\textbf{1}) ineffective multi-scale detection;
(\textbf{2}) ineffective feature aggregation from multiple support images; and (\textbf{3}) inaccurate bounding box location prediction. Surprisingly, in this paper, we find that all three problems can be effectively solved by learning \textit{class-agnostic relations} with support images. We name the new architecture AirDet, which, as shown in \fref{fig:1}, can produce promising results even without fine-tuning and which, to the best of our knowledge, is the first feasible few-shot detection model for autonomous robotic exploration. Specifically, the following three modules are proposed based on \textit{class-agnostic relation}.
\myparagraph{Support-guided Cross-Scale Fusion (SCS) for Object Proposal} One reason for performance degradation in multi-scale detection is that the region proposals are not effective for small-scale objects, even though some existing works adopt multi-scale features from query images \cite{zhang2021accurate,zhu2021semantic}. We argue that the proposal network should also include cross-scale information from the support images. To this end, we present a novel SCS module, which explicitly extracts multi-scale features from cross-scale relations between support and query images.
\myparagraph{Global-Local Relation (GLR) for Shots Aggregation} Most prior works \cite{fan2020few,yan2019meta,wu2020multi} simply average the multi-shot support features to obtain a class prototype for the detection head. However, this cannot fully exploit the little but valuable information from every support image. Instead, we construct a shots aggregation module by learning the relationship between the multiple support examples, which achieves significant improvements with more shots.
\myparagraph{Prototype Relation Embedding (PRE) for Location Regression} Some existing works \cite{fan2020few,zhang2021accurate} introduced a relation network \cite{sung2018learning} into the classification branch; however, the location regression branch is often neglected. To address this, we introduce cross-correlation between the proposals from SCS and the support features from GLR into the regression branch.
This results in the PRE module, which explicitly utilizes support images for precise object localization.
In summary, AirDet is a fully relation-based few-shot object detector, which can be applied directly to the novel classes without fine-tuning. It surprisingly produces comparable or even better results than exhaustively fine-tuned SOTA methods \cite{wang2020frustratingly,faster,xiao2020few,wu2020multi,fan2020few}, as shown in \fref{fig:1} (b).
Besides, as shown in \fref{fig:1} (c), AirDet maintains high robustness on small objects due to the SCS module, which fully takes advantage of the multi-scale support features.
Note that in this paper, fine-tuning is undesirable because it cannot satisfy the online responsiveness requirement of robots, although it can still improve the performance of AirDet.
\section{Related Works}
\subsection{General Object Detection}
The task of object detection \cite{faster, yolo, liu2016ssd, RCNN,fast,mask}, one of the core problems in computer vision, is to find all pre-defined objects in an image, predicting their categories and locations.
Object detection algorithms are mainly divided into two-stage approaches \cite{RCNN,fast,faster,mask} and one-stage approaches \cite{liu2016ssd,yolo,yolo2,yolo3}. R-CNN \cite{RCNN} and its variants \cite{RCNN,fast,faster,mask} serve as the foundation of the former branch; among them, Faster R-CNN \cite{faster} uses a region proposal network (RPN) to generate class-agnostic proposals from dense anchors, which greatly improves the speed of object detection compared with R-CNN \cite{fast}. On the other hand, the YOLO series \cite{yolo,yolo2,yolo3} falls into the second branch, tackling object detection as an end-to-end regression problem. Besides, the well-known SSD series \cite{liu2016ssd,li2017fssd}, inspired by \cite{faster}, utilizes pre-defined bounding boxes to adapt to various object scales.
One shortcoming of the above methods is that they require abundant labeled data for training. Moreover, the types and number of object categories are fixed after training (80 classes in COCO, for instance), which is not applicable to a robot's autonomous exploration, where unseen, novel objects often appear online.
\subsection{Few-shot Object Detection}
Trained with abundant data for base classes, few-shot object detectors can learn to generalize only using a few labeled novel image shots. Two main branches leading in FSOD are meta-learning-based approaches \cite{yan2019meta,xiao2020few,wu2020multi,fan2020few,han2021query} and transfer-learning-based approaches \cite{wang2020frustratingly,zhu2021semantic,sun2021fsce,wu2021universal,qiao2021defrcn}.
Transfer-learning approaches seek the best strategy for fine-tuning general object detectors \cite{faster} on a few novel images. Wang \textit{et al.}~ \cite{wang2020frustratingly} proposed to fine-tune only the last layer with a cosine similarity-based classifier. Using a manually defined positive refinement branch, MPSR \cite{wu2020multi} mitigated the scale scarcity issue. Recent works have introduced semantic relations between novel and base classes \cite{zhu2021semantic} and contrastive proposal encoding \cite{sun2021fsce}.
Aiming at training meta-models on episodes of individual tasks, meta-learning approaches \cite{yan2019meta,xiao2020few,fan2020few,Hu2021CVPR,Zhang2021CVPR,han2021query} generally contain two branches, one for extracting support information and the other for detection on the query image. Among them, Meta R-CNN \cite{yan2019meta} and FSDet \cite{xiao2020few} target support-guided query channel attention. With a novel attention RPN and a multi-relation classifier, A-RPN \cite{fan2020few} set the current SOTA. Very recent works also cover support-query mutual guidance \cite{Zhang2021CVPR}, aggregating context information \cite{Hu2021CVPR}, and constructing heterogeneous graph convolutional networks on proposals \cite{han2021query}.
\subsection{Relation Network for Few-shot Learning}
In few-shot image classification, relation network \cite{sung2018learning}, also known as learning to compare, has been introduced to train a classifier by modeling the class-agnostic relation between a query image and the support images. Once trained and provided with a few novel support images, inference on novel query images can be implemented without further updating.
For few-shot object detection, such relations have so far been utilized only for the classification branch in very few works. For example, Fan \textit{et al.}~ proposed a multi-relation classification network, which consists of global, local, and patch relation branches \cite{fan2020few}. Zhang \textit{et al.}~ leveraged the general relation network \cite{sung2018learning} architecture to build multi-level proposal scoring and support weighting modules \cite{Zhang2021CVPR}. In this work, we thoroughly explore such relations in few-shot detection and propose a fully relation-based architecture.
\subsection{Multi-Scale Feature Extraction}
Multi-scale features have been exhaustively exploited for multi-scale objects in general object detection \cite{liu2016ssd,Shen2017dsod,Kong2016HyperNet,yolo2,lin2017fpn,li2017fssd}. For example, FSSD \cite{li2017fssd} proposed to fuse multi-scale features and perform detection on the fused feature map. Lin \textit{et al.}~ constructed the feature pyramid network (FPN) \cite{lin2017fpn}, which builds a top-down architecture and employs multi-scale feature maps for detection.
For few-shot detection, the standard FPN \cite{lin2017fpn} has been widely adopted in prior transfer-learning-based methods \cite{wang2020frustratingly,zhu2021semantic,sun2021fsce,wu2020multi}. In meta-learning, an existing meta-learner \cite{Zhang2021CVPR} employs all FPN scales and performs detection on each scale in parallel, which is computationally inefficient.
\section{Preliminary}
In few-shot object detection \cite{yan2019meta,xiao2020few,wu2020multi,Deng2009imagenet}, the classes are divided into $B$ base classes $\mathcal{C}_{\rm{b}}$ and $N$ novel ones $\mathcal{C}_{\rm{n}}$, satisfying that $\mathcal{C}_{\rm{b}}\cap\mathcal{C}_{\rm{n}}=\varnothing$.
The objective is to train a model that can detect novel classes in $\mathcal{C}_{\rm{n}}$ by only providing $k$-shot labeled samples for $\mathcal{C}_{\rm{n}}$ and abundant images from base classes $\mathcal{C}_{\rm{b}}$.
During training, we adopt the episodic paradigm \cite{yan2019meta}.
Basically, images from the base classes $\mathcal{C}_{\rm{b}}$ are split into query images $\mathbf{Q}_{{\rm{b}}}$ and support images $\mathbf{S}_{{\rm{b}}}$.
Given all support images $\mathbf{S}_{{\rm{b}}}$, the model learns to detect objects in query images $\mathbf{Q}_{{\rm{b}}}$. During testing, the model detects objects in novel query images $\mathbf{Q}_{{\rm{n}}}$ given only a few (1-5) labeled novel support images $\mathbf{S}_{{\rm{n}}}$.
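For illustration, a minimal sketch of how a single training episode could be sampled under this paradigm is given below; the dataset interface (\texttt{images\_with}, \texttt{crop\_instance}) and drawing one class per episode are hypothetical choices, not prescribed by the formulation above.
\begin{verbatim}
import random

def sample_episode(dataset, base_classes, k_shot=3):
    # Hypothetical sketch: pick one base class, one query image that
    # contains it, and k support crops of that class from other images.
    cls = random.choice(list(base_classes))
    candidates = dataset.images_with(cls)        # hypothetical interface
    query_img = random.choice(candidates)
    pool = [im for im in candidates if im != query_img]
    supports = [dataset.crop_instance(im, cls)   # hypothetical interface
                for im in random.sample(pool, k_shot)]
    return query_img, supports, cls
\end{verbatim}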
\noindent\textbf{Remark}~\refstepcounter{RNum}\textbf{\theRNum}: Most existing methods \cite{wang2020frustratingly,zhu2021semantic,sun2021fsce,yan2019meta,xiao2020few,wu2020multi,wu2021universal,cao2021nips,qiao2021defrcn} have to be fine-tuned on $\mathbf{S}_{{\rm{n}}}$ due to the class-specific model design, while AirDet can be applied directly to $\mathbf{Q}_{{\rm{n}}}$ by providing $\mathbf{S}_{{\rm{n}}}$ without fine-tuning.
\section{Methodology}
\begin{figure*}[!t]
\centering
\includegraphics[width=1\textwidth]{main_fig.pdf}
\caption{The pipeline of the autonomous exploration task and the framework of AirDet. During exploration, a few prior raw images that potentially contain novel objects (helmet) are sent to a human user first. Provided with online annotated few-shot data, the robot explorer is able to detect those objects by observing its surrounding environment. AirDet includes 4 modules, \textit{i.e.}, the shared backbone, support-guided cross-scale (SCS) feature fusion module for region proposal, global-local relation (GLR) module for shots aggregation, and relation-based detection head, which are visualized by different colors. }
\label{fig:main}
\end{figure*}
Since only a few shots are given during model test, information from the support images is little but valuable.
We believe that the major limitation of the existing algorithms is that such information from support images is not fully exploited.
Therefore, we propose to learn \textit{class-agnostic relation} with the support images in all the modules of AirDet.
As exhibited in \fref{fig:main}, the structure of AirDet is simple: except for the shared backbones, it only consists of three modules, \textit{i.e.}, a support-guided cross-scale fusion (SCS) module for region proposal, a global-local relation (GLR) module for shots aggregation, and a relation-based detection head, containing a prototype relation embedding (PRE) module for location regression and a multi-relation classifier \cite{fan2020few}. We next introduce two kinds of \textit{class-agnostic relation}, which will be used by the three modules.
\subsection{Class-Agnostic Relation}
To exploit the relation between two features from different aspects, we define two relation modules, \textit{i.e.}, spatial relation $\mathcal{R}_{\rm{s}}(\cdot, \cdot)$ and channel relation $\mathcal{R}_{\rm{c}}(\cdot, \cdot)$.
\myparagraph{1. Spatial Relation:}
Object features from the same category are often correlated along the spatial dimension, thus we define the spatial relation feature $\mathcal{R}_{\rm{s}}$ in \eqref{eqn:inner}, leveraging regular and depth-wise convolutions.
\begin{equation}\label{eqn:inner}
\mathcal{R}_{\rm{s}}(\mathbf{A}, \mathbf{B}) = \mathbf{A} \odot \mathrm{MLP}\Big(\mathrm{Flatten}\big(\mathrm{Conv}(\mathbf{B})\big)\Big),
\end{equation}
where the inputs $\mathbf{A}, \mathbf{B}\in\mathbb{R}^{C\times W\times H}$ denote two general tensors. $\mathrm{Flatten}$ flattens the features in the spatial (image) domain and $\mathrm{MLP}$ denotes a multilayer perceptron (MLP), so that $\mathrm{MLP}\Big(\mathrm{Flatten}\big(\mathrm{Conv}(\mathbf{B})\big)\Big) \in \mathbb{R}^{C\times 1\times 1}$. $\odot$ indicates depth-wise convolution \cite{fan2020few}. Note that we use convolution to calculate correlation since both operators are composed of inner products.
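For concreteness, a minimal PyTorch sketch of this spatial relation is given below; the $3\times3$ kernel of $\mathrm{Conv}$, the adaptive pooling used to obtain a fixed-length vector before $\mathrm{Flatten}$, and the hidden width of the MLP are our own assumptions and are not fixed by \eqref{eqn:inner}.
\begin{verbatim}
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpatialRelation(nn.Module):
    """Sketch of R_s(A, B) = A (.) MLP(Flatten(Conv(B)))."""
    def __init__(self, channels, pooled_size=7, hidden=256):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        # adaptive pooling gives Flatten a fixed-length input (assumption)
        self.pool = nn.AdaptiveAvgPool2d(pooled_size)
        self.mlp = nn.Sequential(
            nn.Linear(channels * pooled_size ** 2, hidden),
            nn.ReLU(inplace=True),
            nn.Linear(hidden, channels))

    def forward(self, A, B):
        # A, B: (N, C, H, W); the MLP output acts as a C x 1 x 1 kernel
        n, c = A.shape[:2]
        kernel = self.mlp(self.pool(self.conv(B)).flatten(1))  # (N, C)
        kernel = kernel.view(n * c, 1, 1, 1)
        out = F.conv2d(A.reshape(1, n * c, *A.shape[2:]),      # depth-wise conv
                       kernel, groups=n * c)
        return out.view_as(A)
\end{verbatim}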
\myparagraph{2. Channel Relation:}
Inspired by the phenomenon that features of different classes are often stored in different channels \cite{li2019siamrpn}, we propose a simple but effective channel relation $\mathcal{R}_{\rm{c}}(\cdot, \cdot)$ in \eqref{eqn:channel} to extract the cross-class relation features.
\begin{equation}\label{eqn:channel}
\mathcal{R}_{\rm{c}}(\mathbf{A}, \mathbf{B}) = \mathrm{{Conv}}\big(\mathrm{Cat}(\mathbf{A}, \mathbf{B})\big) + \mathrm{Cat}\big(\mathrm{{Conv}}(\mathbf{A}), \mathrm{{Conv}}(\mathbf{B})\big),
\end{equation}
where $\mathrm{Cat}(\cdot, \cdot)$ concatenates features along the channel dimension.
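A corresponding sketch of the channel relation follows (reusing the imports of the previous sketch); it assumes that $\mathbf{A}$ and $\mathbf{B}$ share the same channel count and that every $\mathrm{Conv}$ is a width-preserving $1\times1$ convolution, so that both summands have matching shapes. These widths are not fixed by \eqref{eqn:channel} and are therefore assumptions.
\begin{verbatim}
class ChannelRelation(nn.Module):
    """Sketch of R_c(A, B) = Conv(Cat(A, B)) + Cat(Conv(A), Conv(B))."""
    def __init__(self, channels):
        super().__init__()
        self.conv_joint = nn.Conv2d(2 * channels, 2 * channels, kernel_size=1)
        self.conv_a = nn.Conv2d(channels, channels, kernel_size=1)
        self.conv_b = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, A, B):
        cat = torch.cat([A, B], dim=1)       # Cat along the channel dimension
        return self.conv_joint(cat) + torch.cat(
            [self.conv_a(A), self.conv_b(B)], dim=1)
\end{verbatim}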
\noindent\textbf{Remark}~\refstepcounter{RNum}\textbf{\theRNum}: The two simple but effective \textit{class-agnostic relation} learners are fundamental building blocks of AirDet, which, to the best of our knowledge, is the first attempt towards a fully relation-based structure in few-shot detection.
\subsection{Support-guided Cross-Scale Fusion (SCS) for Object Proposal}
As mentioned earlier, existing works generate object proposals only using single-scale information from query images \cite{kang2019few,xiao2020few,wang2019meta,wu2020multi}, while such a strategy may not be effective for small-scale novel objects.
Differently, we propose support-guided cross-scale fusion (SCS) in AirDet to introduce multi-scale features and exploit the relation between query and support images for region proposal.
As shown in \fref{fig:scs} (a), SCS takes support and query features from different backbone blocks (ResNet blocks 2, 3, and 4) as input.
We first apply \textit{spatial relation}, where the query and support features from the same backbone block are $\mathbf{A}, \mathbf{B}$ in \eqref{eqn:inner}, respectively.
Then we use \textit{channel relation} to fuse the ResNet2 and ResNet3 block features, which are $\mathbf{A}, \mathbf{B}$ in \eqref{eqn:channel}, respectively.
The fused channel relation feature is later merged with the spatial relation feature from ResNet4 block.
The final merged feature is sent to the region proposal network (RPN) \cite{faster} to generate region proposals.
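A sketch of how SCS could be assembled from the two relation sketches above is shown next; the $1\times1$ projections to a common width, the bilinear resizing to the block-4 resolution, and the use of addition for the final merge are our assumptions, since only the assignment of relations to backbone blocks is specified above.
\begin{verbatim}
class SCS(nn.Module):
    """Sketch of Support-guided Cross-Scale fusion feeding the RPN."""
    def __init__(self, in_channels=(512, 1024, 2048), width=256):
        super().__init__()
        self.proj = nn.ModuleList(
            [nn.Conv2d(c, width, 1) for c in in_channels])
        self.spatial = nn.ModuleList(
            [SpatialRelation(width) for _ in in_channels])
        self.fuse23 = ChannelRelation(width)          # fuses block-2/3 relations
        self.reduce = nn.Conv2d(2 * width, width, 1)  # back to a single width

    def forward(self, query_feats, support_feats):
        # query_feats / support_feats: [block2, block3, block4] feature maps
        rel = [s(p(q), p(sup)) for p, s, q, sup in
               zip(self.proj, self.spatial, query_feats, support_feats)]
        size = rel[2].shape[-2:]                      # block-4 resolution
        r2 = F.interpolate(rel[0], size=size, mode="bilinear",
                           align_corners=False)
        r3 = F.interpolate(rel[1], size=size, mode="bilinear",
                           align_corners=False)
        fused = self.reduce(self.fuse23(r2, r3))      # channel relation of 2, 3
        return fused + rel[2]                         # merged feature for RPN
\end{verbatim}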
\begin{figure}[ht]
\centering
\includegraphics[width=1\columnwidth]{SCS_GLR.pdf}
\caption{Network architecture of SCS for region proposal and GLR for shots aggregation.}
\label{fig:scs}
\end{figure}
\subsection{Global-Local Relation (GLR) for Shots Aggregation}
In prior attempts \cite{yan2019meta,kang2019few,xiao2020few,wu2020multi}, the support object features from multiple shots are usually averaged to represent the class prototype, which is then used for regression and classification.
Although it can be effective with fine-tuning, we argue that a simple average cannot fully utilize the information from the few-shot data. To this end, we build a global-local relation (GLR) module in \fref{fig:scs} (b), which leverages the features from every shot to construct the final prototype.
Suppose the $k$-shot deepest support features are $\phi({\mathbf{s}^i})$,~$i=1,\cdots,k$, our final class prototype $\mathbf{e}$ can be expressed as a weighted average of the features:
\begin{equation}
\mathbf{e}=\sum_{i=1}^{k}(\phi({\mathbf{s}^i})\otimes\mathbf{M}^i),
\end{equation}
where $\otimes$ is the element-wise multiplication, and $\mathbf{M}^i$ is a confidence map:
\begin{equation}
\mathbf{M}^{i}=\mathrm{SoftMax}\Big(\mathrm{MLP}\big(\mathbf{f}^{i}\big)\Big),
\end{equation}
where $\mathbf{f}^{i}$ is the output from the channel relation extractor:
\begin{equation}\label{eq:shot-relation}
\mathbf{f}^{i} = \mathcal{R}_{\rm{c}}\left(\mathrm{Conv}(\phi(\mathbf{s}^i)), \frac{1}{k}\sum_{i=1}^{k}\mathrm{Conv}(\phi(\mathbf{s}^i))\right).
\end{equation}
Note that to include both ``global" (all shots) and ``local" (single shot) features, the inputs of the channel relation extractor in \eqref{eq:shot-relation} include both the feature from that shot and the averaged features over all shots.
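A minimal sketch of GLR, reusing the channel relation sketch above, is given below; realizing the MLP with $1\times1$ convolutions and taking the SoftMax across the $k$ shots, so that the confidence maps act as weights of a weighted average, are our assumptions.
\begin{verbatim}
class GLR(nn.Module):
    """Sketch of Global-Local Relation shots aggregation."""
    def __init__(self, channels, hidden=256):
        super().__init__()
        self.pre_conv = nn.Conv2d(channels, channels, kernel_size=1)
        self.relation = ChannelRelation(channels)
        self.mlp = nn.Sequential(                 # MLP applied point-wise
            nn.Conv2d(2 * channels, hidden, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(hidden, channels, 1))

    def forward(self, support_feats):
        # support_feats: (k, C, a, a) deepest features of the k shots
        conv_s = self.pre_conv(support_feats)
        mean_s = conv_s.mean(dim=0, keepdim=True).expand_as(conv_s)  # global
        f = self.relation(conv_s, mean_s)         # per-shot relation features
        M = torch.softmax(self.mlp(f), dim=0)     # weights across the k shots
        return (support_feats * M).sum(dim=0, keepdim=True)  # prototype e
\end{verbatim}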
\noindent\textbf{Remark}~\refstepcounter{RNum}\textbf{\theRNum}: Unlike prior work \cite{yan2019meta,kang2019few,xiao2020few,wu2020multi} relying on fine-tuning with more support data for performance gain, AirDet can extract a stronger prototype to achieve improvements with more shots without fine-tuning.
\subsection{Prototype Relation Embedding (PRE) for Location Regression}
It has been demonstrated that a multi-relation network \cite{fan2020few} is effective for the classification branch.
Inspired by its success, we further build a prototype relation embedding (PRE) network for the location regression branch.
Given a prototype exemplar $\mathbf{e}\in\mathbb{R}^{C\times a\times a}$, we utilize the spatial relation \eqref{eqn:inner} to embed information from the exemplar to the proposal features $\mathbf{p}^j$,~$j= 1,2,\cdots,p$ as:
\begin{equation}
\mathbf{l}^j = \mathbf{p}^j + \mathcal{R}_{\rm{s}}(\mathbf{p}^j, \mathbf{e}),
\end{equation}
where we take a $3\times 3$ convolution layer in \eqref{eqn:inner} for spatial feature extraction.
The proposal features $\mathbf{l}^j$ are then used for bounding box regression through an MLP module following Faster R-CNN \cite{faster}.
\noindent\textbf{Remark}~\refstepcounter{RNum}\textbf{\theRNum}: The class-related feature $\mathbf{l}^j$ contains information from the support objects, which turns out to be more effective for location regression even if the objects have never been seen in the training set.
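A sketch of PRE built on the spatial relation sketch above is given below; the regression MLP width and predicting a single class-agnostic set of four box deltas per proposal are assumptions.
\begin{verbatim}
class PRE(nn.Module):
    """Sketch of Prototype Relation Embedding for box regression."""
    def __init__(self, channels, roi_size=7, hidden=1024):
        super().__init__()
        # the spatial relation uses a 3x3 convolution, as stated above
        self.spatial = SpatialRelation(channels, pooled_size=roi_size)
        self.regressor = nn.Sequential(
            nn.Flatten(),
            nn.Linear(channels * roi_size ** 2, hidden),
            nn.ReLU(inplace=True),
            nn.Linear(hidden, 4))                 # (dx, dy, dw, dh)

    def forward(self, proposals, prototype):
        # proposals: (P, C, a, a) RoI-aligned features, a == roi_size
        # prototype: (1, C, a, a) exemplar e from GLR
        e = prototype.expand_as(proposals)
        l = proposals + self.spatial(proposals, e)  # l^j = p^j + R_s(p^j, e)
        return self.regressor(l)
\end{verbatim}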
\section{Experiments}
\subsection{Implementation}\label{sec:Imple}
We adopt the training pipeline from \cite{fan2020few}.
To maintain a fair comparison with other methods \cite{wang2020frustratingly,xiao2020few,wu2020multi,faster}, we mainly adopt ResNet101 \cite{He2016res} pre-trained on ImageNet \cite{Deng2009imagenet} as backbone.
The performance of other backbones is presented in \appref{sec:backbone}.
For a fair comparison \cite{wang2020frustratingly,xiao2020few,wu2020multi,fan2020few,faster}, we utilized their official implementations, support examples, and models (if provided) in all the experiments. AirDet and the baseline \cite{fan2020few} take the \textbf{same} supports in all the settings.
We use 4 NVIDIA GeForce Titan-X Pascal GPUs for experiments.
Detailed configuration of AirDet can be found in \appref{sec:config} and the source code.
\noindent\textbf{Remark}~\refstepcounter{RNum}\textbf{\theRNum}: To save human effort, only very few support examples (1-5 samples per class) can be provided during online exploration. Therefore, we mainly focus on the $k=1, 2, 3, 5$-shot evaluation. Since the objects encountered during exploration are usually unseen, we only test on novel classes throughout the experiments.
\subsection{In-domain Evaluation}\label{sec:indomain}
We first present the in-domain evaluation on COCO benchmark \cite{lin2014microsoft}, where the models are trained and tested on the same dataset.
Following prior works \cite{yan2019meta,kang2019few,wu2020multi,fan2020few,wang2020frustratingly,xiao2020few,wu2021universal,cao2021nips,fan2021generalized,sun2021fsce,zhu2021semantic}, the 80 classes are split into 60 non-VOC base classes and 20 novel classes. During training, the base class images from COCO trainval2014 are considered available. With few-shot samples per novel class, the models are evaluated on 5,000 images from COCO val2014 dataset.
\begin{table}[!t]
\centering
\setlength{\tabcolsep}{0.2mm}
\fontsize{5.5}{6.5}\selectfont
\caption{Performance comparison on COCO validation dataset. In each setting, \red{red} and \green{green} fonts denote the best and second-best performance, respectively. AirDet achieves significant performance gain on baseline without fine-tuning. With fine-tuning, AirDet sets a new SOTA performance. $^\dag$We randomly sampled 3-5 different groups of support examples and reported the average performance and their standard deviation.}
\begin{threeparttable}
\begin{tabular}{cc|ccc|ccc|ccc|ccc}
\toprule
\multicolumn{2}{c|}{Shots} & \multicolumn{3}{c|}{1} & \multicolumn{3}{c|}{2} & \multicolumn{3}{c|}{3} & \multicolumn{3}{c}{5} \\
\midrule
Method & Fine-tune & AP & AP$_{50}$ & AP$_{75}$ & AP & AP$_{50}$ & AP$_{75}$ & AP & AP$_{50}$ & AP$_{75}$ & AP & AP$_{50}$ & AP$_{75}$ \\
\multirow{2}{*}{A-RPN \cite{fan2020few}}$\dag$ & \multirow{2}{*}{\text{\ding{55}}} & 4.32 & 7.62 & 4.3 & 4.67 & 8.83 & 4.49 & 5.28 & 9.95 & 5.05 & 6.08 & 11.17 & 5.88 \\
& & $\pm$0.7 &$\pm$1.3 & $\pm$0.7 & $\pm$0.3 & $\pm$0.5 & $\pm$0.3 & $\pm$0.6 & $\pm$0.8 & $\pm$0.6 & $\pm$0.3 & $\pm$0.4 & $\pm$0.3 \\
\cmidrule{1-14}
\multirow{2}{*}{\textbf{AirDet (Ours)}}$\dag$ & \multirow{2}{*}{\text{\ding{55}}} & \red{\textbf{5.97}} & \red{\textbf{10.52}} & \red{\textbf{5.98}} & \red{\textbf{6.58}} & \red{\textbf{12.02}} & \red{\textbf{6.33}} & \red{\textbf{7.00}} & \red{\textbf{12.95}} & \red{\textbf{6.71}} & \red{\textbf{7.76}} & \red{\textbf{14.28}} & \red{\textbf{7.31}} \\
& & \textbf{$\pm$0.4} &\textbf{$\pm$0.9} &\textbf{ $\pm$0.2} & \textbf{$\pm$0.2} & \textbf{$\pm$0.4} & \textbf{$\pm$0.2} & \textbf{$\pm$0.5} & \textbf{$\pm$0.8} & $\pm$0.7& \textbf{$\pm$0.3} & \textbf{$\pm$0.4} & $\pm$0.4 \\
\midrule
FRCN \cite{faster} & \checkmark & 3.26 & 6.66 & 3.04 & 3.73 & 7.79 & 3.22 & 4.59 & 9.52 & 4.07 & 5.32 & 11.20 & 4.54 \\
TFA$_{\mathrm{fc}}$ \cite{wang2020frustratingly} & \checkmark & 2.78 & 5.39 & 2.36 & 4.14 & 7.98 & 4.01 & 6.33 & 12.10 & 5.94 & 7.92 & 15.58 & 7.29 \\
TFA$_{\mathrm{cos}}$ \cite{wang2020frustratingly} & \checkmark & 3.09 & 5.24 & 3.21 & 4.21 & 7.70 & 4.35 & 6.05 & 11.48 & 5.93 & 7.61 & 14.56 & 7.17 \\
FSDetView \cite{xiao2020few} & \checkmark & 2.20 & 6.20 & 0.90 & 3.40 & 10.00 & 1.50 & 5.20 & 14.70 & 2.10 & 8.20 & \red{21.60} & 4.70 \\
MPSR \cite{wu2020multi} & \checkmark & 3.34 & 6.11 & 3.25 & 5.41 & 9.68 & 5.52 & 5.70 & 10.54 & 5.50 & 7.20 & 13.55 & 6.89 \\
A-RPN \cite{fan2020few} & \checkmark & {4.59} & {8.85} & {4.37} & {6.15} & {12.05} & {5.76} & {8.24} & {15.52} & {7.92} & {9.02} & 17.29 & {8.53} \\
W. Zhang \textit{et al.}~ \cite{zhang2021hallucination} & \checkmark & {4.40} & {7.50} & {4.90} & {5.60} & {9.90} & {5.90} & {7.20} & {13.30} & 7.40 & - & - & - \\
FADI \cite{cao2021nips} & \checkmark & \green{5.70} & \green{10.40} & \green{6.00} & \green{7.00} & \green{13.01} & \green{7.00} & \green{8.60} & \green{15.80} & \green{8.30} & \green{10.10} & 18.60 & \red{11.90} \\
\textbf{AirDet (Ours)} & \checkmark & \textbf{\red{6.10}} & \textbf{\red{11.40}} & \textbf{\red{6.04}} & \textbf{\red{8.73}} & \textbf{\red{16.24}} & \textbf{\red{8.35}} & \textbf{\red{9.95}} & \textbf{\red{19.39}} & \textbf{\red{9.09}} & \textbf{\red{10.81}} &\green{\textbf{20.75}} & \textbf{\green{10.27}} \\
\bottomrule
\end{tabular}\label{tab:coco}%
\end{threeparttable}
\end{table}%
\myparagraph{Overall Performance}
As shown in \tref{tab:coco}, AirDet achieves significant performance gains over the baseline \cite{fan2020few}.
Without fine-tuning, AirDet also achieves comparable or even better results than many fine-tuned methods.
With fine-tuning, AirDet outperforms the existing SOTAs \cite{fan2020few,wang2020frustratingly,xiao2020few,wu2020multi,faster,cao2021nips,zhang2021hallucination}.
Since the results without fine-tuning may be sensitive to support images, we report the averaged performance, and the standard deviation on 3-5 randomly sampled support images, where we surprisingly find AirDet more robust to the variance of support images compared with the baseline \cite{fan2020few}.
\myparagraph{Multi-scale Objects}
\begin{table*}[!t]
\centering
\setlength{\tabcolsep}{0.2mm}
\fontsize{5.5}{6.5}\selectfont
\caption{Performance evaluation on multi-scale objects from COCO. Highest-ranking and second-best scores are marked out with \red{red} and \green{green}, respectively. Without fine-tuning, AirDet can avoid over-fitting and shows robustness on small-scale objects. By virtue of the SCS module, AirDet can achieve higher results than those with FPN.}
\begin{threeparttable}
\begin{tabular}{ccc|ccc|ccc|ccc|ccc}
\toprule
\multicolumn{3}{c|}{Shots} & \multicolumn{3}{c|}{1} & \multicolumn{3}{c|}{2} & \multicolumn{3}{c|}{3} & \multicolumn{3}{c}{5} \\
\midrule
Method & FPN & Fine-tune & AP$_s$ & AP$_m$ & AP$_l$ & AP$_s$ & AP$_m$ & AP$_l$ & AP$_s$ & AP$_m$ & AP$_l$ & AP$_s$ & AP$_m$ & AP$_l$ \\
\multirow{2}{*}{A-RPN \cite{fan2020few}}$\dag$ & \multirow{2}{*}{\text{\ding{55}}} & \multirow{2}{*}{\text{\ding{55}}} & 2.43 & 5.00 & 6.74 & 2.67 & 5.01 & 7.18 & 3.42 & 6.15 & 8.77 & 3.54 & 6.73 & 9.97 \\
& & & $\pm$0.4 &$\pm$1.0 & $\pm$1.1 & $\pm$0.3 & $\pm$0.3 & $\pm$0.4 & $\pm$0.2 & $\pm$0.5 & $\pm$0.8 & $\pm$0.3 & $\pm$0.03 & $\pm$0.2 \\
\midrule
\multirow{2}{*}{\textbf{AirDet (Ours)}}$\dag$ & \multirow{2}{*}{\text{\ding{55}}} & \multirow{2}{*}{\text{\ding{55}}} & \red{\textbf{2.85}} & \red{\textbf{6.33}} & \red{\textbf{9.00}} & \red{\textbf{4.00}} & \red{\textbf{6.84}} & \red{\textbf{9.94}} & \red{\textbf{4.13}} & \red{\textbf{7.95}} & \red{\textbf{11.30}} & \red{\textbf{4.22}} & \red{\textbf{8.24}} & \red{\textbf{12.90}} \\
& & & \textbf{$\pm$0.3} &\textbf{$\pm$0.7} &\textbf{ $\pm$0.8} & \textbf{$\pm$0.3} & \textbf{$\pm$0.1} & \textbf{$\pm$0.3} & \textbf{$\pm$0.1} & \textbf{$\pm$0.5} & $\pm$0.9 & \textbf{$\pm$0.2} & $\pm$0.04 & $\pm$0.5 \\
\midrule
FRCN \cite{faster} & \checkmark & \checkmark & 1.05 & 3.68 & 5.41 & 0.94 & 4.39 & 6.42 & 1.12 & 5.11 & 7.83 & 1.99 & 5.30 & 8.84 \\
TFA$_{\mathrm{fc}}$ \cite{wang2020frustratingly} & \checkmark & \checkmark & 1.06 & 2.71 & 4.38 & 1.17 & 4.02 & 7.05 & 1.97 & 5.48 & 11.09 & 2.40 & 6.86 & 12.86 \\
TFA$_{\mathrm{cos}}$ \cite{wang2020frustratingly} & \checkmark & \checkmark & 1.07 & 2.78 & 5.12 & 1.64 & 4.12 & 7.27 & 2.34 & 5.48 & 10.43 & 2.82 & 6.70 & 12.21 \\
FSDetView \cite{xiao2020few} & \text{\ding{55}} & \checkmark & 0.70 & 2.70 & 3.70 & 0.60 & 4.00 & 4.20 & 1.80 & 5.10 & 8.00 & \green{3.00} & 9.70 & 12.30 \\
MPSR \cite{wu2020multi} & \checkmark & \checkmark & 1.23 & 3.82 & 5.58 & 1.89 & 5.69 & 8.73 & 0.86 & 4.60 & 9.96 & 1.62 & 6.78 & 11.66 \\
A-RPN \cite{fan2020few} & \text{\ding{55}} & \checkmark & \green{1.74} & \green{5.17} & \green{6.96} & \green{2.20} & \green{7.55} & \green{10.49} & \green{2.72} & \green{9.51} & \green{14.74} & 2.92 & \green{10.67} & \green{16.08} \\
\textbf{AirDet (Ours)} & \text{\ding{55}} & \checkmark & \red{\textbf{3.05}} & \red{\textbf{6.40}} & \red{\textbf{10.03}} & \red{\textbf{4.00}} & \red{\textbf{9.65}} & \red{\textbf{13.91}} & \red{\textbf{3.46}} & \red{\textbf{11.44}} & \red{\textbf{16.04}} & \red{\textbf{3.27}} & \red{\textbf{11.20}} & \red{\textbf{18.64}} \\
\bottomrule
\end{tabular}\label{tab:coco_scale}%
\end{threeparttable}
\end{table*}%
We next report the performance of methods \cite{wang2020frustratingly,xiao2020few,wu2020multi,fan2020few,faster} and AirDet on multi-scale objects in \tref{tab:coco_scale}.
Thanks to SCS, AirDet achieves the highest performance for multi-scale objects among all the SOTAs.
Especially for small objects, given 5-shots, AirDet can achieve a surprising \textbf{4.22} AP$_s$, nearly doubling the fine-tuned methods with multi-scale FPN features \cite{wang2020frustratingly,wu2020multi}.
\myparagraph{Comparison of 10-Shot}
\begin{table*}[!t]
\centering
\setlength{\tabcolsep}{0.1mm}
\fontsize{5}{6.5}\selectfont
\caption{Performance comparison with 10-shot on COCO validation dataset. \red{Red} and \green{green} fonts indicate best and second-best scores, respectively. AirDet achieves comparable results without fine-tuning and outperforms most methods with fine-tuning, which strongly demonstrates its effectiveness.}
\begin{threeparttable}
\begin{tabular}{ccccccccccccccc}
\toprule[1pt]
Method & Venue & Fine-tune & AP & AP$_{50}$ & AP$_{75}$ & AP$_s$ & AP$_m$ & AP$_l$ & AR$_{1}$ & AR$_{10}$ & AR$_{100}$ & AR$_s$ & AR$_m$ & AR$_l$ \\
\midrule
LSTD \cite{chen2018lstd} & AAAI 2018 & \checkmark & 3.2 & 8.1 & 2.1 & 0.9 & 2.0 & 6.5 & 7.8 & 10.4 & 10.4 & 1.1 & 5.6 & 19.6 \\
MetaDet \cite{wang2019meta} & ICCV 2019 & \checkmark & 7.1 & 14.6 & 6.1 & 1.0 & 4.1 & 12.2 & 11.9 & 15.1 & 15.5 & 1.7 & 9.7 & 30.1 \\
FSRW \cite{kang2019few} & ICCV 2019 & \checkmark & 5.6 & 12.3 & 4.6 & 0.9 & 3.5 & 10.5 & 10.1 & 14.3 & 14.4 & 1.5 & 8.4 & 28.2 \\
Meta RCNN \cite{yan2019meta}& ICCV 2019 & \checkmark & 8.7 & 19.1 & 6.6 & 2.3 & 7.7 & 14.0 & 12.6 & 17.8 & 17.9 & 7.8 & 15.6 & 27.2 \\
TFA$_{\mathrm{fc}}$ \cite{wang2020frustratingly} & ICML 2020 & \checkmark & 9.1 & 17.3 & 8.5 & - & - & - & - & - & - & - & - & - \\
TFA$_{\mathrm{cos}}$ \cite{wang2020frustratingly} & ICML 2020 & \checkmark & 9.1 & 17.1 & 8.8 & - & - & - & - & - & - & - & - & - \\
FSDetView \cite{xiao2020few}& ECCV 2020 & \checkmark & 12.5 & \red{27.3} & 9.8 & 2.5 & 13.8 & 19.9 & 20.0 & 25.5 & 25.7 & 7.5 & 27.6 & 38.9 \\
MPSR \cite{wu2020multi} & ECCV 2020 & \checkmark & 9.8 & 17.9 & 9.7 & 3.3 & 9.2 & 16.1 & 15.7 & 21.2 & 21.2 & 4.6 & 19.6 & 34.3 \\
A-RPN \cite{fan2020few} & CVPR 2020 & \checkmark & 11.1 & 20.4 & 10.6 & - & - & - & - & - & - & - & - & - \\
SRR-FSD \cite{zhu2021semantic}& CVPR 2021 & \checkmark & 11.3 & 23.0 & 9.8 & - & - & - & - & - & - & - & - & - \\
FSCE \cite{sun2021fsce} & CVPR 2021 & \checkmark & 11.9 & - & 10.5 & - & - & - & - & - & - & - & - & - \\
DCNet \cite{Hu2021CVPR} & CVPR 2021 & \checkmark & \green{12.8} & 23.4 & 11.2 & 4.3 & \green{13.8} & \green{21.0} & 18.1 & 26.7 & 25.6 & 7.9 & 24.5 & 36.7 \\
Y. Li \textit{et al.}~ \cite{li2021few} & CVPR 2021 & \checkmark & 11.3 & 20.3 & - & - & - & - & - & - & - & - & - & - \\
FADI \cite{cao2021nips} & NIPS 2021 & \checkmark & 12.2 & 22.7 & \green{11.9} & - & - & - & - & - & - & - & - & - \\
QA-FewDet \cite{han2021query} & ICCV 2021 & \checkmark & 11.6 & 23.9 & 9.8 & - & - & - & - & - & - & - & - & - \\
FSOD$^{up}$ \cite{wu2021universal} & ICCV 2021 & \checkmark & 11.0 & - & 10.7 & \red{4.5} & 11.2 & 17.3 & - & - & - & - & - & - \\
\midrule
\textbf{AirDet} & \textbf{Ours} & \text{\ding{55}} & 8.7 & 15.3 & 8.8 & 4.3 & 9.7 & 14.8 & \green{\textbf{19.1}} & \red{\textbf{33.8}} & \red{\textbf{34.8}} & \red{\textbf{13.0}} & \red{\textbf{37.4}} & \green{\textbf{52.9}} \\
\textbf{AirDet} & \textbf{Ours} & \checkmark & \red{\textbf{13.0}} & \green{\textbf{23.9}} & \red{\textbf{12.4}} & \red{\textbf{4.5}} & \red{\textbf{15.2}} & \red{\textbf{22.8}} & \red{\textbf{20.5}} & \green{\textbf{33.7}} & \green{\textbf{34.4}} & \green{\textbf{9.6}} & \green{\textbf{36.4}} & \red{\textbf{55.0}} \\
\bottomrule[1pt]
\end{tabular}\label{tab:10shot}%
\end{threeparttable}
\end{table*}%
For a more thorough comparison, we present the 10-shot evaluation on the COCO validation dataset in \tref{tab:10shot}. Without fine-tuning, AirDet surprisingly achieves comparable performance against recent work \cite{wu2020multi,wang2020frustratingly,yan2019meta}, while all of them require a careful fine-tuning stage. Moreover, our fine-tuned model outperforms most prior methods \cite{li2021few,cao2021nips,han2021query,wu2021universal,Hu2021CVPR,sun2021fsce,zhu2021semantic,fan2020few,wu2020multi,xiao2020few,wang2020frustratingly,yan2019meta,kang2019few,wang2019meta,chen2018lstd} on most metrics, especially the average recall rate (AR). Besides, the performance superiority on small objects (AP$_s$ and AR$_s$) further demonstrates the effectiveness of AirDet on multi-scale objects, especially small-scale ones.
\myparagraph{Efficiency Comparison}
We report the fine-tuning and inference time of AirDet and the SOTA methods \cite{fan2020few,wang2020frustratingly,xiao2020few,wu2020multi} in a 3-shot, one-class setting in \tref{tab:time}, adopting the official code and implementations with ResNet101 as the backbone.
Without fine-tuning, AirDet can make direct inferences on novel objects at a comparable speed, while the other methods \cite{fan2020few,xiao2020few,wu2020multi,wang2020frustratingly} require a fine-tuning time of about 3-30 minutes, which cannot meet the requirements of online exploration.
Note that the fine-tuning time is measured on TITAN X GPU, while such computational power is often unavailable on robots.
\noindent\textbf{Remark}~\refstepcounter{RNum}\textbf{\theRNum}: Many methods \cite{li2021few,cao2021nips,han2021query,wu2021universal,Hu2021CVPR,sun2021fsce,zhu2021semantic,fan2020few,wu2020multi,xiao2020few,wang2020frustratingly,yan2019meta,kang2019few,wang2019meta,chen2018lstd} also require an offline process to fine-tune hyper-parameters for different shots, while such \textit{offline} tuning is infeasible for robotic \textit{online} exploration.
Instead, AirDet adopts \textbf{the same} base-trained model in all settings without fine-tuning.
\begin{table}[!t]
\centering
\setlength{\tabcolsep}{0.1mm}
\caption{Efficiency comparison with official source code. We adopt the pre-trained models provided by \cite{wang2020frustratingly}, so their fine-tuning time is unavailable.}
\fontsize{5}{7.5}\selectfont
\begin{tabular}{cccccccc}
\toprule
Method & \textbf{AirDet} & \multicolumn{1}{l}{A-RPN} \cite{fan2020few} & \multicolumn{1}{l}{FSDet} \cite{xiao2020few}& \multicolumn{1}{l}{MPSR} \cite{wu2020multi} & \multicolumn{1}{l}{TFA$_{\mathrm{fc}}$} \cite{wang2020frustratingly} & \multicolumn{1}{l}{TFA$_{\mathrm{cos}}$} \cite{wang2020frustratingly} & \multicolumn{1}{l}{FRCN$_{\mathrm{ft}}$} \cite{wang2020frustratingly} \\
\midrule
Fine-tuning (min) & \textbf{0} & 21 & 11 & 3 & - & - & - \\
Inference (s/img) & \textbf{0.081} & 0.076 & 0.202 & 0.109 & 0.085 & 0.094 & 0.091 \\
\bottomrule
\end{tabular}%
\label{tab:time}%
\end{table}%
\begin{table}[!t]
\centering
\setlength{\tabcolsep}{0.2mm}
\fontsize{5.5}{6.5}\selectfont
\caption{Cross-domain performance on the VOC-2012 validation dataset. \red{Red} and \green{green} fonts denote the first and second place, respectively. AirDet demonstrates strong generalization capability, maintaining clear superiority over the others.}
\begin{threeparttable}
\begin{tabular}{cc|ccc|ccc|ccc|ccc}
\toprule
\multicolumn{2}{c|}{Shots} & \multicolumn{3}{c|}{1} & \multicolumn{3}{c|}{2} & \multicolumn{3}{c|}{3} & \multicolumn{3}{c}{5} \\
\midrule
Method & Fine-tune & AP & AP$_{50}$ & AP$_{75}$ & AP & AP$_{50}$ & AP$_{75}$ & AP & AP$_{50}$ & AP$_{75}$ & AP & AP$_{50}$ & AP$_{75}$ \\
\multirow{2}{*}{A-RPN \cite{fan2020few}}$\dag$ & \multirow{2}{*}{\text{\ding{55}}} & 10.45 & 18.10 & 10.32 & 13.10 & 22.60 & 13.17 & 14.05 & 24.08 & 14.24 & 14.87 & 25.03 & 15.26 \\
& & $\pm$0.1 &$\pm$0.1 & $\pm$0.1 & $\pm$0.2 & $\pm$0.4 & $\pm$0.2 & $\pm$0.2 & $\pm$0.2 & $\pm$0.2 & $\pm$0.08 & $\pm$0.07 & $\pm$0.1 \\
\midrule
\multirow{2}{*}{\textbf{AirDet (Ours)}}$\dag$ & \multirow{2}{*}{\text{\ding{55}}} & \red{\textbf{11.92}} & \red{\textbf{21.33}} & \red{\textbf{11.56}} & \red{\textbf{15.80}} & \red{\textbf{26.80}} & \red{\textbf{16.08}} & \red{\textbf{16.89}} & \red{\textbf{28.61}} & \red{\textbf{17.36}} & \red{\textbf{17.83}} & \red{\textbf{29.78}} & \red{\textbf{18.38}} \\
& & \textbf{$\pm$0.06} &\textbf{$\pm$0.08} &\textbf{ $\pm$0.08} & \textbf{$\pm$0.08} & \textbf{$\pm$0.3} & \textbf{$\pm$0.05} & \textbf{$\pm$0.1} & \textbf{$\pm$0.1} & \textbf{$\pm$0.1} & \textbf{$\pm$0.03} & \textbf{$\pm$0.03} & \textbf{$\pm$0.1} \\
\midrule
FRCN \cite{faster} & \checkmark & \multicolumn{1}{c}{4.49} & \multicolumn{1}{c}{9.44} & \multicolumn{1}{c|}{3.85} & \multicolumn{1}{c}{5.20} & \multicolumn{1}{c}{11.92} & \multicolumn{1}{c|}{3.84} & \multicolumn{1}{c}{6.50} & \multicolumn{1}{c}{14.39} & \multicolumn{1}{c|}{5.11} & \multicolumn{1}{c}{6.55} & \multicolumn{1}{c}{14.48} & \multicolumn{1}{c}{5.09} \\
TFA$_{\mathrm{cos}}$ \cite{wang2020frustratingly} & \checkmark & \multicolumn{1}{c}{4.66} & \multicolumn{1}{c}{7.97} & \multicolumn{1}{c|}{5.14} & \multicolumn{1}{c}{6.59} & \multicolumn{1}{c}{11.91} & \multicolumn{1}{c|}{6.49} & \multicolumn{1}{c}{8.78} & \multicolumn{1}{c}{17.09} & \multicolumn{1}{c|}{8.15} & \multicolumn{1}{c}{10.46} & \multicolumn{1}{c}{20.93} & \multicolumn{1}{c}{9.53} \\
TFA$_{\mathrm{fc}}$ \cite{wang2020frustratingly} & \checkmark & \multicolumn{1}{c}{4.40} & \multicolumn{1}{c}{8.60} & \multicolumn{1}{c|}{4.21} & \multicolumn{1}{c}{7.02} & \multicolumn{1}{c}{13.80} & \multicolumn{1}{c|}{6.21} & \multicolumn{1}{c}{9.24} & \multicolumn{1}{c}{18.48} & \multicolumn{1}{c|}{8.03} & \multicolumn{1}{c}{11.11} & \multicolumn{1}{c}{22.83} & \multicolumn{1}{c}{9.78} \\
FSDetView \cite{xiao2020few} & \checkmark & 4.80 & 14.10 & 1.40 & 3.70 & 11.60 & 0.60 & 6.60 & 22.00 & 1.20 & 10.80 & 26.50 & 5.50 \\
MPSR \cite{wu2020multi} & \checkmark & \multicolumn{1}{c}{6.01} & \multicolumn{1}{c}{11.23} & \multicolumn{1}{c|}{5.74} & \multicolumn{1}{c}{8.20} & \multicolumn{1}{c}{15.08} & \multicolumn{1}{c|}{8.22} & \multicolumn{1}{c}{10.08} & \multicolumn{1}{c}{18.29} & \multicolumn{1}{c|}{9.99} & \multicolumn{1}{c}{11.49} & \multicolumn{1}{c}{21.33} & \multicolumn{1}{c}{11.06} \\
A-RPN \cite{fan2020few} & \checkmark & \green{9.49} & \green{17.41} & \green{9.42} & \green{12.71} & \green{23.66} & \green{12.44} & \green{14.89} & \green{26.30} & \green{14.76} & \green{15.09} & \green{28.08} & \green{14.17} \\
\textbf{AirDet (Ours)} & \checkmark & \textbf{\red{13.33}} & \textbf{\red{24.64}} & \textbf{\red{12.68}} & \textbf{\red{17.51}} & \textbf{\red{30.35}} & \textbf{\red{17.61}} & \textbf{\red{17.68}} & \textbf{\red{32.05}} & \textbf{\red{17.34}} & \textbf{\red{18.27}} & \textbf{\red{33.02}} & \textbf{\red{17.69}} \\
\bottomrule[1.2pt]
\end{tabular}\label{tab:voc}%
\end{threeparttable}
\end{table}%
\subsection{Cross-domain Evaluation}\label{sec:cross}
Robots are often deployed to novel environments that have never been seen during training, thus cross-domain test is crucial for robotic applications.
In this section, we adopt the same model trained on COCO and test it on PASCAL VOC \cite{everingham2010pascal} and LVIS \cite{gupta2019lvis} to evaluate its generalization capability.
\myparagraph{PASCAL VOC}
We report the overall performance on PASCAL VOC-2012 \cite{everingham2010pascal} for all methods in \tref{tab:voc}.
In the cross-domain setting, even without fine-tuning, AirDet achieves better performance than methods \cite{wu2020multi,fan2020few,faster,xiao2020few,wang2020frustratingly} that perform relatively well in in-domain test.
This means AirDet has a much stronger generalization capability than most fine-tuned prior methods.
\myparagraph{LVIS}
We randomly sample LVIS \cite{gupta2019lvis} to form 4 splits of classes, each of which contains 16 different classes.
To provide a valid evaluation, the classes that have 20 to 200 images are taken for testing.
More details can be found in \appref{sec:lvis}.
The averaged performance with 5-shot without fine-tuning is presented in \tref{tab:lvis-cross}, where AirDet outperforms the baseline \cite{fan2020few} in every split under all metrics. Since the novel categories in the 4 LVIS splits are more numerous (64 classes in total) and rarer (many of them are uncommon) than the 20 VOC classes, the superiority of AirDet in \tref{tab:lvis-cross} strongly demonstrates its robustness to class variance.
\begin{table}[!t]
\setlength{\tabcolsep}{.6mm}
\centering
\fontsize{5.5}{6.5}\selectfont
\caption{Cross-domain performance of A-RPN \cite{fan2020few} and AirDet on LVIS dataset. We report the results for 5-shot without fine-tuning on 4 random splits.}
\begin{tabular}{c|cccc|cccc|cccc|cccc}
\toprule
\multicolumn{1}{c|}{Split} & \multicolumn{4}{c|}{1} & \multicolumn{4}{c|}{2} & \multicolumn{4}{c|}{3} & \multicolumn{4}{c}{4} \\
\midrule
Metric & AP & AP$_{50}$ & AP$_{75}$ & AR$_{10}$ & AP & AP$_{50}$ & AP$_{75}$ & AR$_{10}$ & AP & AP$_{50}$ & AP$_{75}$ & AR$_{10}$ & AP & AP$_{50}$ & AP$_{75}$ & AR$_{10}$ \\
\textbf{AirDet} & \textbf{6.71} & \textbf{12.31} & \textbf{6.51} & \textbf{27.57} & \textbf{9.35} & \textbf{14.23} & \textbf{9.98} & \textbf{25.42} & \textbf{9.09} & \textbf{15.64} & \textbf{8.82} & \textbf{34.64} & \textbf{11.07} & \textbf{16.90} & \textbf{12.30} & \textbf{25.76} \\
A-RPN & 5.49 & 10.04 & 5.27 & 26.59 & 8.85 & 13.41 & 9.46 & 24.45 & 7.49 & 12.34 & 8.13 & 33.85 & 10.80 & 15.46 & 12.24 & 25.05 \\
\bottomrule
\end{tabular}%
\label{tab:lvis-cross}%
\end{table}%
\subsection{Ablation Study and Deep Visualization}\label{sec:abla}
In this section, we demonstrate the effectiveness of the three proposed modules via quantitative results and qualitative visualization using Grad-CAM \cite{gradcam}.
\myparagraph{Quantitative Evaluation}
We report the overall performance on 3-shot and 5-shot for the baseline \cite{fan2020few} and AirDet by enabling the three modules, respectively.
It can be seen in \tref{tab:ABLA} that AirDet outperforms the baseline in all cases. With the modules enabled one by one, the results get gradually higher, which strongly demonstrates the necessity and effectiveness of SCS, GLR, and PRE.
\begin{table}[!t]
\centering
\setlength{\tabcolsep}{0.2mm}
\fontsize{5.5}{6.5}\selectfont
\caption{Ablation study of the three modules, \textit{i.e.}, PRE, GLR, and SCS in AirDet. With each module enabled, the performance is improved step by step on our baseline. With the full modules, AirDet can amazingly achieve up to \textbf{35\%} higher results.}
\begin{tabular}{ccc|cccccc|cccccc}
\toprule
\multicolumn{3}{c|}{Module} & \multicolumn{6}{c|}{3} & \multicolumn{6}{c}{5} \\
\midrule
PRE & GLR & SCS & \multicolumn{1}{c}{AP} & $\Delta\%$ & \multicolumn{1}{c}{AP$_{50}$} & $\Delta\%$ & \multicolumn{1}{c}{AP$_{75}$} & $\Delta\%$ & \multicolumn{1}{c}{AP} & $\Delta\%$ & \multicolumn{1}{c}{AP$_{50}$} & $\Delta\%$ & \multicolumn{1}{c}{AP$_{75}$} & $\Delta\%$ \\
\multicolumn{3}{c|}{Baseline \cite{fan2020few} } & 4.80 & 0.00 & 9.24 & 0.00 & 4.49 & 0.00 & 5.73 & 0.00 & 10.68 & 0.00 & 5.53 & 0.00 \\
\checkmark & \multicolumn{1}{c}{} & \multicolumn{1}{c|}{} & 5.15 & +7.29 & 10.11 & +9.41 & 4.71 & +4.90 & 5.94 & +3.66 & 11.54 & +8.05 & 5.34 & -3.43 \\
\checkmark & \checkmark & \multicolumn{1}{c|}{} & 5.59 & +16.46 & 10.61 & +14.83 & 5.12 & +14.03 & 6.44 & +12.39 & 12.08 & +13.11 & 6.06 & +9.58 \\
\midrule
\checkmark & \checkmark & \checkmark & \textbf{6.50} & \textbf{+35.41} & \textbf{12.30} & \textbf{+33.12} & \textbf{6.11} & \textbf{+36.08} & \textbf{7.27} & \textbf{+26.78} & \textbf{13.63} & \textbf{+27.62} & \textbf{6.71} & \textbf{+21.34} \\
\bottomrule
\end{tabular}%
\label{tab:ABLA}%
\end{table}%
\begin{figure}[!t]
\centering
\includegraphics[width=1\columnwidth]{deep.pdf}
\caption{Deep visualization comparison between AirDet and the baseline \cite{fan2020few}. In (a), by virtue of SCS, AirDet is capable of finding the given support objects effectively. In (b), with similar proposals (\textcolor[rgb]{1,0,0}{red} boxes), AirDet can focus on the entire object (aeroplane) and notice the most representative parts (dog), resulting in more precise regression boxes and correct classification results. More examples are presented in \appref{sec:more_deep}.}
\label{fig:deep_rpn}
\end{figure}
\myparagraph{How effective is SCS?}
Given 2 shots per class, we first take the highest-ranking proposal from the RPN \cite{faster}, back-propagate its objectness score, and resize the gradient map to the original image.
\fref{fig:deep_rpn} (a) exhibits the heat map from both AirDet and the baseline. We observe that AirDet generally concentrates on objects more precisely than the baseline.
Moreover, AirDet can focus better on objects belonging to the support class and is not distracted by other objects (2nd and 3rd row).
This means that AirDet can generate novel object proposals more effectively.
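A hedged sketch of this visualization step is given below; the tensor shapes and the use of the channel-summed absolute gradient as the heat map are our assumptions.
\begin{verbatim}
import torch.nn.functional as F

def rpn_objectness_heatmap(objectness, fused_feature, image_size):
    """Back-propagate the top RPN objectness score to the fused feature
    map and resize the gradient magnitude to the input image (sketch)."""
    fused_feature.retain_grad()
    objectness.max().backward(retain_graph=True)
    grad = fused_feature.grad.abs().sum(dim=1, keepdim=True)  # (1,1,H,W)
    heat = F.interpolate(grad, size=image_size, mode="bilinear",
                         align_corners=False)
    return (heat - heat.min()) / (heat.max() - heat.min() + 1e-8)
\end{verbatim}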
\myparagraph{How effective are GLR and the detection head?}
In \fref{fig:deep_rpn} (b), we observe that with similar proposal boxes, the AirDet head can better focus on the entire object, \textit{e.g.}, the aeroplane is detected with a precise regression box and the dog is correctly classified with a high score. This again demonstrates the effectiveness of our GLR and detection head.
\subsection{Real-World Test}\label{sec:real}
\begin{table*}[!t]
\centering
\setlength{\tabcolsep}{0.6mm}
\fontsize{5.5}{6.5}\selectfont
\caption{3-shot real-world exploration test of AirDet and the baseline \cite{fan2020few}. AirDet can be directly applied without fine-tuning and performs considerably more robustly than the baseline by virtue of the newly proposed SCS, GLR, and PRE modules.}
\begin{tabular}{ccc|cc|cc|cc|cc|cc}
\toprule
\multicolumn{13}{c}{Real-world Exploration Test} \\
\midrule
\multicolumn{1}{l}{Test/\#Frames} & \multicolumn{2}{c|}{1/\#248} & \multicolumn{2}{c|}{2/\#146} & \multicolumn{2}{c|}{3/\#127} & \multicolumn{2}{c|}{4/\#41} & \multicolumn{2}{c|}{5/\#248} & \multicolumn{2}{c}{6/\#46} \\
\midrule
Metric & AP & AP$_{50}$ & AP & AP$_{50}$ & AP & AP$_{50}$ & AP & AP$_{50}$ & AP & AP$_{50}$ & AP & AP$_{50}$ \\
\textbf{AirDet (Ours)} & \textbf{17.10} & \textbf{54.10} & \textbf{17.90} & \textbf{47.40} & \textbf{24.00} & \textbf{57.50} & \textbf{26.94} & \textbf{48.20} & \textbf{11.28} & \textbf{38.17} & \textbf{20.40} & \textbf{70.63} \\
A-RPN \cite{fan2020few} & 13.56 & 40.40 & 14.30 & 38.80 & 20.20 & 47.20 & 22.41 & 40.14 & 6.75 & 24.10 & 14.70 & 59.38 \\
\midrule
Test/\#Frames & \multicolumn{2}{c|}{7/\#212} & \multicolumn{2}{c|}{8/\#259} & \multicolumn{2}{c|}{9/\#683} & \multicolumn{2}{c|}{10/\#827} & \multicolumn{2}{c|}{11/\#732} & \multicolumn{2}{c}{12/\#50} \\
\midrule
Metric & AP & AP$_{50}$ & AP & AP$_{50}$ & AP & AP$_{50}$ & AP & AP$_{50}$ & AP & AP$_{50}$ & AP & AP$_{50}$ \\
\textbf{AirDet (Ours)} & \textbf{5.90} & \textbf{16.00} & \textbf{15.26} & \textbf{43.31} & \textbf{7.63} &\textbf{27.88} & \textbf{13.55} & \textbf{23.92} & \textbf{15.74} & \textbf{34.43} & \textbf{21.45} & \textbf{45.83} \\
A-RPN \cite{fan2020few} & 2.39 & 7.60 & 11.27 & 25.24 & 6.16 & 23.40 & 8.10 & 14.85 & 11.54 & 27.28 & 18.20 & 33.98 \\
\bottomrule
\end{tabular}%
\label{tab:subt}%
\end{table*}%
Real-world tests are conducted for AirDet and our baseline \cite{fan2020few} with 12 sequences that were collected from the DARPA Subterranean (SubT) challenge \cite{subtchallenge}.
Due to the requirements of \textit{online} response during the mission, the models can only be evaluated \textbf{without fine-tuning}, which makes existing methods \cite{li2021few,cao2021nips,han2021query,wu2021universal,Hu2021CVPR,sun2021fsce,zhu2021semantic,fan2020few,wu2020multi,xiao2020few,wang2020frustratingly,yan2019meta,kang2019few,wang2019meta,chen2018lstd} impractical.
The environments of the SubT challenge also pose extra difficulties, \textit{e.g.}, a lack of lighting, thick smoke, dripping water, and cluttered or irregularly shaped surroundings.
To test the generalization capabilities, we adopt the same models of AirDet and the baseline as those evaluated in \sref{sec:indomain} and \sref{sec:cross}. The 3-shot performance is exhibited in \tref{tab:subt}, where AirDet proves better.
The robot is equipped with an NVIDIA Jetson AGX Xavier, where our method runs at 1-2 FPS without TensorRT acceleration or other optimizations.
\begin{table}[t]
\centering
\setlength{\tabcolsep}{2mm}
\fontsize{6}{6}\selectfont
\caption{Per class results of the real-world tests. We report the instance number of each novel class along with the 3-shot AP results from AirDet and A-RPN \cite{fan2020few}. Compared with the baseline, AirDet achieves higher results for all classes.}
\begin{tabular}{cccccccc}
\toprule[1.2pt]
Class & Backpack & Helmet & Rope & Drill & Vent & Extinguisher & Survivor \\
\midrule
Instances & 626 & 674 & 723 & 587 & 498 & 1386 & 205 \\
AirDet & \textbf{32.3} & \textbf{9.7} & \textbf{13.9} &\textbf{ 10.8} & \textbf{16.2} & \textbf{10.5} & \textbf{10.7} \\
Baseline \cite{fan2020few} & 26.6 & 9.7 & 6 & 9 & 14.4 & 5.6 & 9.1 \\
\bottomrule[1.2pt]
\end{tabular}%
\label{tab:subt_cls}%
\end{table}%
In \tref{tab:subt_cls}, we present the number of instances and the performance on each novel class. To our excitement, AirDet shows smaller variance and higher precision across different classes.
We also present the support images and representative detected objects in \fref{fig:subt}.
Note that AirDet can detect the novel objects in the query images accurately, even if their scales and illumination conditions differ considerably from the supports. We attribute this capability to the carefully designed SCS module in AirDet.
More visualizations are presented in \appref{sec:quali}.
The robustness and strong generalization capability of AirDet in the real-world tests demonstrate its promising prospects and feasibility for autonomous exploration.
\begin{figure}[t]
\centering
\includegraphics[width=1\columnwidth]{SUBT.pdf}
\caption{The provided support images and examples of detection results in the real-world tests. AirDet is robust to distinct object scales and different illumination conditions.}
\label{fig:subt}
\end{figure}
\section{Limitation and Future Work}
Despite the promising prospect and outstanding performance, AirDet still has several limitations.
(1) Since abundant base classes are needed for generalization, AirDet requires a relatively large base dataset for training before inference on novel classes.
(2) AirDet relies on the quality of the support images to work well without fine-tuning, since the few provided support images are the only source of information about the unseen classes.
(3) We observe that the failure cases of AirDet are mainly due to false classification, resulting in a high variance of results among different classes in COCO and VOC.
(4) Since SCS and the detection head run in a loop over the novel classes, the efficiency of AirDet degrades when the number of novel classes is large.
We provide quantitative results for limitations (1), (2), and (3) in \appref{sec:de_limi}.
\section{Conclusion}
This paper presents a novel few-shot detector, AirDet, which consists of three newly proposed \textit{class-agnostic relation}-based modules and is free of fine-tuning.
Specifically, with the proposed spatial and channel relations, we construct support-guided cross-scale feature fusion for region proposals, a global-local relation network for shots aggregation, and prototype relation embedding for precise localization. With its strong capability to extract \textit{class-agnostic relations}, AirDet works comparably or even better than exhaustively fine-tuned methods in both in-domain and cross-domain evaluations.
AirDet is also tested on real-world data with a robotic platform, where its feasibility for autonomous exploration is demonstrated.
\\
\par\noindent
\myparagraph{Acknowledgement}
This work was sponsored by ONR grant \#N0014-19-1-2266 and ARL DCIST CRA award W911NF-17-2-0181. The work was done when Bowen Li and Pranay Reddy were interns at The Robotics Institute, Carnegie Mellon University. The authors would like to thank all members of the Team Explorer for providing data collected from the DARPA Subterranean Challenge.
\bibliographystyle{splncs04}
\section{Introduction}
The numerical analysis of elastic shells is a
vast field with important applications in physics and engineering. In most
cases, it is carried out via the finite element method. In the physics and computer
graphics literature, there have been suggestions to use simpler methods based
on discrete differential geometry \cite{meyer2003discrete,bobenko2008discrete}. Discrete differential geometry of surfaces is the study of triangulated polyhedral surfaces. (The epithet ``simpler'' has to be understood as ``easier to implement''.) We mention in passing that models based on triangulated polyhedral surfaces have applications in materials science beyond the elasticity of thin shells. E.g., recently these models have been used to describe defects in nematic liquids on thin shells \cite{canevari2018defects}. This amounts to a generalization to arbitrary surfaces of the discrete-to-continuum analysis for the XY model in two dimensions that leads to Ginzburg-Landau type models in the continuum limit \cite{MR2505362,alicandro2014metastability}.
\medskip
Let us describe some of the methods mentioned above in more detail.
Firstly, there are the so-called
\emph{polyhedral membrane models} which in fact can be used for a whole
array of physical and engineering problems (see e.g.~the review
\cite{davini1998relaxed}). In the context of plates and shells, the
so-called Seung-Nelson model \cite{PhysRevA.38.1005} is widely used.
It associates membrane and bending energies with a piecewise affine map $y:\R^2\supset U\to \R^3$, where the pieces are determined by a triangulation $\mathcal T$ of the polyhedral domain $U$. The bending energy is given by
\begin{equation}
E^{\mathrm{SN}}(y)= \sum_{K,L} |n(K)-n(L)|^2\,,\label{eq:1}
\end{equation}
where the sum runs over those unordered pairs of triangles $K,L$ in $\mathcal T$ that share an edge, and $n(K)$ is the surface normal on the triangle $K$. In \cite{PhysRevA.38.1005}, it has been argued that for a fixed
limit deformation $y$, the energy \eqref{eq:1} should approximate the Willmore energy
\begin{equation}
E^{\mathrm{W}}(y)=\int_{y(U)} |Dn|^2\; \mathrm{d}{\mathscr H}^2\label{eq:2}
\end{equation}
when the grid size of the triangulation $\mathcal T$ is sent to 0, and the argument of the discrete energy \eqref{eq:1} approximates the (smooth) map $y$. In \eqref{eq:2} above, $n$ denotes the surface normal and $\H^2$ the two-dimensional Hausdorff measure.
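For illustration, a direct NumPy evaluation of \eqref{eq:1} on a triangulated surface could look as follows; a consistently oriented triangle list is assumed so that the normals $n(K)$ are like-oriented.
\begin{verbatim}
import numpy as np

def seung_nelson_energy(vertices, triangles):
    """Sum of |n(K) - n(L)|^2 over pairs of triangles sharing an edge.
    vertices: (n, 3) float array, triangles: (m, 3) integer array."""
    v = vertices[triangles]                              # (m, 3, 3)
    n = np.cross(v[:, 1] - v[:, 0], v[:, 2] - v[:, 0])   # triangle normals
    n /= np.linalg.norm(n, axis=1, keepdims=True)
    # collect the triangles incident to each edge
    edge_to_tris = {}
    for t, tri in enumerate(triangles):
        for a, b in ((0, 1), (1, 2), (2, 0)):
            e = tuple(sorted((tri[a], tri[b])))
            edge_to_tris.setdefault(e, []).append(t)
    energy = 0.0
    for tris in edge_to_tris.values():
        if len(tris) == 2:                               # interior edge
            K, L = tris
            energy += np.sum((n[K] - n[L]) ** 2)
    return energy
\end{verbatim}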
These statements have been made more precise in \cite{schmidt2012universal}, where it has been shown that the result of the limiting process depends on the triangulations used. In particular, the following has been shown in this reference: For $j\in\N$, let $\mathcal T_j$ be a triangulation of $U$ consisting of equilateral triangles such that one of the sides of each triangle is parallel to the $x_1$-direction, and such that the triangle size tends to 0 as $j\to\infty$. Then the limit energy reads
\[
\begin{split}
E^{\mathrm{FS}}(y)=\frac{2}{\sqrt{3}}\int_U &\big(g_{11}(h_{11}^2+2h_{12}^2-2h_{11}h_{22}+3h_{22}^2)\\
&-8g_{12}h_{11}h_{12}+2 g_{22}(h_{11}^2+3h_{12}^2)\big)(\det g_{ij})^{-1}\; \mathrm{d} x\,,
\end{split}
\]
where
\[
\begin{split}
g_{ij}&=\partial_i y\cdot\partial_j y\\
h_{ij}&=n\cdot \partial_{ij} y \,.
\end{split}
\]
More precisely, if $y\in C^2(U)$ is given, then the sequence of maps $y_j$ obtained by piecewise affine interpolation of the values of $y$ on the vertices of the triangulations $\mathcal T_j$ satisfies
\[
\lim_{j\to \infty}E^{\mathrm{SN}}(y_j)=E^{\mathrm{FS}}(y)\,.
\]
Secondly, there is the more recent approach to using discrete differential
geometry for shells pioneered by Grinspun et al.~\cite{grinspun2003discrete}.
Their energy does not depend on an immersion $y$ as above, but is defined directly on triangulated surfaces. Given such a surface $\mathcal T$, the energy is given by
\begin{equation}
E^{\mathrm{GHDS}}(\mathcal T)=\sum_{K,L} \frac{l_{KL}}{d_{KL}} \alpha_{KL}^2\label{eq:3}
\end{equation}
where the sum runs over unordered pairs of neighboring triangles $K,L\in\mathcal T$, $l_{KL}$ is the length of the interface between $K,L$, $d_{KL}$ is the distance between the centers of the circumcircles of $K,L$, and $\alpha_{KL}$ is the difference of the angle between $K,L$ and $\pi$, or alternatively the angle between the like-oriented normals $n(K)$ and $n(L)$, i.e. the \emph{dihedral angle}.
In \cite{bobenko2005conformal}, Bobenko has defined an energy for piecewise affine surfaces $\mathcal T$ that is invariant under conformal transformations. It is defined via the circumcircles of triangles in $\mathcal T$, and the external intersection angles of circumcircles of neighboring triangles. Denoting this intersection angle for neighboring triangles $K,L$ by $\beta_{KL}$, the energy reads
\begin{equation}\label{eq:4}
E^\mathrm{B} (\mathcal T) = \sum_{K,L}\beta_{KL}-\pi\, \#\,\text{Vertices}(\mathcal T)\,.
\end{equation}
Here $\text{Vertices}(\mathcal T)$ denotes the vertices of the triangulation $\mathcal T$, the sum is again over nearest neighbors.
It has been shown in \cite{bobenko2008surfaces} that this energy is the same as
\eqref{eq:3} up to terms that vanish as the size of triangles is sent to zero
(assuming sufficient smoothness of the limiting surface). The reference
\cite{bobenko2008surfaces} also contains an analysis of the energy for this
limit. If the limit surface is smooth, and it is approximated by triangulated
surfaces $\mathcal T_\varepsilon$ with maximal triangle size $\varepsilon$ that satisfy
a number of technical assumptions, then the Willmore energy of the limit surface
is smaller than or equal to the limit of the energies \eqref{eq:3} for the approximating surfaces, see Theorem 2.12 in \cite{bobenko2008surfaces}. The technical assumptions are
\begin{itemize}
\item each vertex in the triangulation $\mathcal T_\varepsilon$ is connected to six other vertices by edges,
\item the lengths of the sides of the hexagon formed by six triangles that share one vertex differ by at most $O(\varepsilon^4)$,
\item neighboring triangles are congruent up to $O(\varepsilon^3)$.
\end{itemize}
Furthermore, it is stated that the limit is achieved if additionally the triangulation approximates a ``curvature line net''.
\medskip
The purpose of the present paper is to generalize this convergence result and to put it into the framework of $\Gamma$-convergence \cite{MR1968440,MR1201152}. Instead of fixing the vertices of the polyhedral surfaces to lie on the limiting surface, we are going to assume that the surfaces converge weakly * in $W^{1,\infty}$ when viewed as graphs. This approach allows us to completely drop the assumptions on the connectivity of vertices in the triangulations and the assumption of approximate congruence -- we only need to require a certain type of regularity of the triangulations that prevents the formation of small angles. %
\medskip
We are going to work with the energy
\begin{equation}\label{eq:5}
E(\mathcal T)=\sum_{K,L} \frac{l_{KL}}{d_{KL}} |n(K)-n(L)|^2\,,
\end{equation}
which in a certain sense is equivalent to \eqref{eq:3} and \eqref{eq:4} in the limit of vanishing triangle size, see the arguments from \cite{bobenko2008surfaces} and Remark \ref{rem:main} (ii) below.
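For later orientation, we record two elementary values of the limiting functional that appears in Theorem \ref{thm:main} below. For a smooth surface $S$, we understand $|Dn_S|^2$ as the squared Frobenius norm of the differential of the Gauss map, which equals $\kappa_1^2+\kappa_2^2$, the sum of the squared principal curvatures; hence, for instance,
\[
\begin{split}
\int_{S}|Dn_S|^2\; \mathrm{d}\H^2 &= \frac{2}{R^2}\cdot 4\pi R^2=8\pi\qquad\text{for a round sphere of radius $R$},\\
\int_{S}|Dn_S|^2\; \mathrm{d}\H^2 &= \frac{1}{R^2}\cdot 2\pi R L=\frac{2\pi L}{R}\qquad\text{for a cylinder of radius $R$ and height $L$}.
\end{split}
\]
The cylinder is precisely the example for which the necessity of the Delaunay property is discussed in Remark \ref{rem:main} (iii) and Section \ref{sec:necess-dela-prop} below.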
\medskip
To put this approach into its context in the mathematical literature, we point out that it is another instance of a discrete-to-continuum limit, which has been a popular topic in mathematical analysis over the last few decades. We mention the seminal papers \cite{MR1933632,alicandro2004general} and the fact that a variety of physical settings have been approached in this vein, such as spin and lattice systems \cite{MR1900933,MR2505362}, bulk elasticity \cite{MR2796134,MR3180690}, thin films \cite{MR2429532,MR2434899}, magnetism \cite{MR2186037,MR2505364}, and many more.
\medskip
The topology that we are going to use in our $\Gamma$-convergence statement is much coarser than the one that corresponds to Bobenko's convergence result; however it is not the ``natural'' one that would yield compactness from finiteness of the energy \eqref{eq:5} alone. For a discussion of why we do not choose the latter see Remark \ref{rem:main} (i) below. Our topology is instead defined as follows:
Let $M$ be some fixed compact oriented two-dimensional $C^\infty$ submanifold of
$\R^3$ with normal $n_M:M\to S^2$. Let $h_j\in W^{1,\infty}(M)$, $j=1,2,\dots$,
be such that $\|h_j\|_{W^{1,\infty}}<C$ and $\|h_j\|_{\infty}<\delta(M)/2$ (where $\delta(M)$ is the \emph{radius of injectivity} of $M$, see Definition \ref{def:radius_injectivity} below), and such that $\mathcal T_j:= \{x+h_j(x)n_M(x):x\in M\}$ are triangulated surfaces (see Definition \ref{def:triangular_surface} below). We say that
$\mathcal T_j\to S:=\{x+h(x)n_M(x):x\in M\}$ if $h_j\to h$ in
$W^{1,p}(M)$ for all $1\leq p<\infty$. Our main theorem, Theorem \ref{thm:main}
below, is a $\Gamma$-convergence result in this topology.
The regularity assumptions that we impose on the triangulated surfaces under
considerations are ``$\zeta$-regularity'' and the ``Delaunay property''. The
definition of these concepts can be found in Definition
\ref{def:triangular_surface} below.
\begin{thm}
\label{thm:main}
\begin{itemize}
\item[(o)] Compactness: Let $\zeta>0$, and let $h_j$ be a bounded sequence in
$W^{1,\infty}(M)$ such that $\mathcal T_j=\{x+h_j(x)n_M(x):x\in M\}$ is a
$\zeta$-regular triangulated surface and $\|h_j\|_\infty\leq\delta(M)/2$ for $j\in \N$ with $\limsup_{j\to\infty}E(\mathcal T_j)<\infty$. Then there exists a subsequence $h_{j_k}$ and $h\in W^{2,2}(M)$ such that $h_{j_k}\to h $ in $W^{1,p}(M)$ for every $1\leq p < \infty$.
\item[(i)] Lower bound: Let $\zeta>0$. Assume that for $j\in\N$, $h_j\in W^{1,\infty}(M)$ with $\|h_j\|_\infty\leq \delta(M)/2$, $\mathcal T_j:=\{x+h_j(x)n_M(x):x\in M\}$ is a
$\zeta$-regular triangulated surface fulfilling the Delaunay
property, and that $\mathcal T_j\to S=\{x+h(x)n_M(x):x\in M\}$ for $j\to\infty$. Then
\[
\liminf_{j\to\infty} E(\mathcal T_j)\geq \int_{S} |Dn_S|^2\; \mathrm{d}\H^2\,.
\]
\item[(ii)] Upper bound: Let $h\in W^{1,\infty}(M)$ with $\|h\|_\infty\leq \delta(M)/2$ and
$S=\{x+h(x)n_M(x):x\in M\}$. Then there exists $\zeta>0$ and a sequence $(h_j)_{j\in\N}\subset W^{1,\infty}(M)$ such that $\mathcal T_j:=\{x+h_j(x)n_M(x):x\in M\}$ is a
$\zeta$-regular triangulated surface satisfying the
Delaunay property for each $j\in \N$, and we have $\mathcal T_j\to S$ for $j\to \infty$ and
\[
\lim_{j\to\infty} E(\mathcal T_j)= \int_{S} |Dn_S|^2\; \mathrm{d}\H^2\,.
\]
\end{itemize}
\end{thm}
\begin{rem}\label{rem:main}
\begin{itemize}
\item[(i)] We are not able to derive a convergence result in a topology that
yields convergence from boundedness of the energy \eqref{eq:5} alone. Such an
approach would necessitate the interpretation of the surfaces as varifolds or
currents. To the best of our knowledge, the theory of
integral functionals on varifolds (see e.g.~\cite{menne2014weakly,hutchinson1986second,MR1412686}) is not
developed to the point to allow for a treatment of this question. In particular, there does not exist a sufficiently
general theory of lower semicontinuity of integral functionals for varifold-function pairs.
\item[(ii)] We can state
analogous results based on the energy functionals \eqref{eq:3},
\eqref{eq:4}. To do so, our proofs only need to be modified slightly: As soon as we have reduced the situation to the graph case
(which we do by assumption), the upper bound construction can be carried out
as here; the smallness of the involved dihedral angles assures that the
arguments from \cite{bobenko2005conformal} suffice to carry through the proof.
Concerning the lower bound, we also reduce to the case of small dihedral angles by a blow-up procedure around Lebesgue points of the derivative of the surface normal of the limit surface. (Additionally, one can show smallness of the contribution of a few pairs of triangles whose dihedral angle is not small.) Again, the considerations from \cite{bobenko2005conformal} allow for a translation of our proof to the case of the energy functionals \eqref{eq:3},
\eqref{eq:4}.
\item[(iii)] As we will show in Section \ref{sec:necess-dela-prop}, we need to require the Delaunay property in order to obtain the lower bound statement. Without this requirement, we will show that a hollow cylinder can be approximated by triangulated surfaces with arbitrarily low energy, see Proposition~\ref{prop: optimal grid}.
\item[(iv)] Much more general approximations of surfaces by discrete geometrical objects have recently been proposed in \cite{buet2017varifold,buet2018discretization,buet2019weak}, based on tools from the theory of varifolds.
\end{itemize}
\end{rem}
\subsection*{Plan of the paper}
In Section \ref{sec:defin-prel}, we will fix definitions and make some preliminary observations on triangulated surfaces. The proofs of the compactness and lower bound part will be developed in parallel in Section \ref{sec:proof-comp-lower}. The upper bound construction is carried out in Section \ref{sec:surf-triang-upper}, and in Section \ref{sec:necess-dela-prop} we demonstrate that the requirement of the Delaunay property is necessary in order to obtain the lower bound statement.
\section{Definitions and preliminaries}
\label{sec:defin-prel}
\subsection{Some general notation}
\begin{notation}
For a two-dimensional submanifold $M\subset\R^3$, the tangent space of $M$ in $x\in M$ is
denoted by $T_{x}M$. For functions $f:M\to\R$, we denote their gradient by $\nabla f\in T_xM$; the norm $|\cdot|$ on $T_xM\subset\R^3$ is the Euclidean norm inherited from $\R^3$. For $1\leq p\leq \infty$, we denote by $W^{1,p}(M)$ the space of functions $f\in L^p(M)$ such that $\nabla f\in L^p(M;\R^3)$, with norm
\[
\|f\|_{W^{1,p}(M)}=\|f\|_{L^p(M)}+\|\nabla f\|_{L^p(M)}\,.
\]
For $U\subset\R^n$ and a function
$f:U\to\R$, we denote the graph of $f$ by
\[
\mathrm{Gr}\, f=\{(x,f(x)):x\in U\}\subset\R^{n+1}\,.
\]
For $x_1,\dots,x_m\in \R^k$, the convex hull of $\{x_1,\dots,x_m\}$ is
denoted by
\[
[x_1,\dots,x_m]=\left\{\sum_{i=1}^m \lambda_ix_i:\lambda_i\in [0,1] \text{
for } i=1,\dots,m, \, \sum_{i=1}^m\lambda_i=1\right\}\,.
\]
We will identify $\R^2$ with the subspace $\R^2\times\{0\}$ of $\R^3$. The $d$-dimensional Hausdorff measure is denoted by $\H^d$, the $k$-dimensional Lebesgue measure by $\L^k$. The
symbol ``$C$'' will be used as follows: A statement such as
``$f\leq C(\alpha)g$'' is shorthand for ``there exists a constant $C>0$ that
only depends on $\alpha$ such that $f\leq Cg$''. The value of $C$ may change
within the same line. For $f\leq C g$, we also write
$f\lesssim g$.
\end{notation}
\subsection{Triangulated surfaces: Definitions}
\begin{defi}
\label{def:triangular_surface}
\begin{itemize}
\item [(i)] A \textbf{triangle} is the convex hull $[x,y,z]\subset \R^3$ of three points $x,y,z \in \R^3$. A \textbf{regular} triangle is one where $x,y,z$ are not colinear, or equivalently ${\mathscr H}^2([x,y,z])>0$.
\item[(ii)]
A \textbf{triangulated surface} is a finite collection
${\mathcal T} = \{K_i\,:\,i = 1,\ldots, N\}$ of regular triangles
$K_i = [x_i,y_i,z_i] \subset \R^3$ so that
$\bigcup_{i=1}^N K_i \subset \R^3$ is a topological two-dimensional manifold
with boundary; and the intersection of two different triangles $K,L\in {\mathcal T}$ is either empty, a common vertex, or a common edge.
We identify ${\mathcal T}$ with its induced topological manifold
$\bigcup_{i=1}^N K_i \subset \R^3$ whenever convenient. We say that ${\mathcal T}$ is \textbf{flat} if
there exists an affine subplane of $\R^3$ that contains ${\mathcal T}$.
\item[(iii)] The \textbf{size} of the triangulated surface, denoted $\size({\mathcal T})$, is the
maximum diameter of all its triangles.
\item[(iv)] The triangulated surface ${\mathcal T}$ is called $\zeta$\textbf{-regular}, with
$\zeta > 0$, if the minimum angle in all triangles is at least $\zeta$ and
$\min_{K\in {\mathcal T}} \diam(K) \geq \zeta \size({\mathcal T})$.
\item[(v)] The triangulated surface satisfies the \textbf{Delaunay}
property if for every triangle
$K = [x,y,z] \in {\mathcal T}$ the following property holds: Let $B(q,r)\subset \R^3$ be the
smallest ball such that $\{x,y,z\}\subset \partial{B(q,r)}$. Then $B(q,r)$ contains
no vertex of any triangle in ${\mathcal T}$. The point $q = q(K)\in \R^3$ is called
the \textbf{circumcenter} of $K$, $\overline{B(q,r)}$ its
\textbf{circumball} with circumradius $r(K)$, and $\partial B(q,r)$ its \textbf{circumsphere}.
\end{itemize}
\end{defi}
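For instance, the boundary of a regular tetrahedron with unit edge length is a triangulated surface consisting of four triangles. It is $\zeta$-regular for every $\zeta\leq 1$, since all angles equal $\pi/3$ and all triangles have diameter $1=\size({\mathcal T})$, and it satisfies the Delaunay property: the smallest ball whose boundary contains the three vertices of a face has radius $1/\sqrt 3$, while the fourth vertex lies at distance
\[
\sqrt{1-\tfrac13}=\sqrt{\tfrac23}>\frac{1}{\sqrt 3}
\]
from the circumcenter of that face.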
Note that triangulated surfaces have normals defined on all triangles and are compact
and rectifiable. For the argument of the circumcenter map
$q$, we do not distinguish between triples of points $(a,b,c)\in \R^{3\times
3}$ and the triangle $[a,b,c]$ (presuming $[a,b,c]$ is a regular triangle).
\begin{notation}
If ${\mathcal T}=\{K_i:i=1,\dots,N\}$ is a triangulated surface, and
$g:{\mathcal T}\to \R$,
then we identify $g$ with the function
$\cup_{i=1}^N K_i\to \R$ that is
constant on the (relative) interior of each triangle $K$, and equal to
$0$ on $K\cap L$ for $K\neq L\in {\mathcal T}$. In particular we may write in this case
$g(x)=g(K)$ for $x\in \mathrm{int}\, K$.
\end{notation}
\begin{defi}
Let ${\mathcal T}$ be a triangulated surface and $K,L \in {\mathcal T}$. We set
\[
\begin{split}
\l{K}{L} &:= \H^1(K\cap L)\\
d_{KL} &:= |q(K) - q(L)|
\end{split}
\]
If $K,L$ are \textbf{adjacent}, i.e. if $\l{K}{L} > 0$, we may define $|n(K) - n(L)|\in \R$ as the norm of the difference of the normals $n(K),n(L)\in S^2$ which share an orientation, i.e. $2\sin \frac{\alpha_{KL}}{2}$, where $\alpha_{KL}$ is the dihedral angle between the triangles, see Figure \ref{fig:dihedral}. The discrete bending energy is then defined as
\[
E({\mathcal T}) = \sum_{K,L\in {\mathcal T}} \frac{\l{K}{L}}{d_{KL}} |n(K) - n(L)|^2.
\]
Here, the sum runs over all unordered pairs of triangles. If $|n(K) - n(L)| = 0$ or $\l{K}{L} = 0$, the energy density is defined to be $0$ even if $d_{KL}=0$. If $|n(K) - n(L)| > 0$, $\l{K}{L} > 0$ and $d_{KL} = 0$, the energy is defined to be infinite.
\end{defi}
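As a concrete illustration of these definitions, consider the triangulated surface consisting of the two triangles $K=[(0,0,0),(0,2,0),(2,1,0)]$ and $L_\varphi=[(0,0,0),(0,2,0),(-2\cos\varphi,1,2\sin\varphi)]$, obtained from a flat pair by folding about the common edge by the dihedral angle $\varphi\in[0,\pi)$. A direct computation gives $q(K)=(\tfrac34,1,0)$, $q(L_\varphi)=(-\tfrac34\cos\varphi,1,\tfrac34\sin\varphi)$, $l_{KL_\varphi}=2$, $d_{KL_\varphi}=\tfrac32\cos\tfrac\varphi2$ and $|n(K)-n(L_\varphi)|^2=4\sin^2\tfrac\varphi2$, so that
\[
E(\{K,L_\varphi\})=\frac{l_{KL_\varphi}}{d_{KL_\varphi}}|n(K)-n(L_\varphi)|^2=\frac{16}{3}\,\frac{\sin^2\frac\varphi2}{\cos\frac\varphi2}=\frac43\varphi^2+O(\varphi^4)\quad\text{as }\varphi\to 0\,,
\]
while the unfolded configuration $\varphi=0$ has vanishing energy.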
\begin{figure}[h]
\begin{subfigure}{.45\textwidth}
\begin{center}
\includegraphics[height=5cm]{dihedral_triangles_v2.pdf}
\end{center}
\caption{ \label{fig:dihedral}}
\end{subfigure}
\hspace{5mm}
\begin{subfigure}{.45\textwidth}
\includegraphics[height=5cm]{d_KL_l_KL.pdf}
\caption{\label{fig:dkllkl}}
\end{subfigure}
\caption{($\mathrm{A}$) The dihedral angle $\alpha_{KL}$ for triangles $K,L$. It is related to the norm of the difference between the normals via $|n(K)-n(L)|=2\sin\frac{\alpha_{KL}}{2}$. ($\mathrm{B}$) Definitions of $d_{KL}$, $l_{KL}$.}
\end{figure}
\begin{notation}
\label{not:thetaKL}
Let $H$ be an affine subplane of $\R^3$.
For triangles $K,L\subset H$ that share an edge and $v\in\R^3$ parallel to
$H$, we define the function
$\mathds{1}^v_{KL}:H \to \{0,1\}$ as $\mathds{1}_{KL}^v(x) = 1$ if and
only if
$[x,x+v]\cap (K\cap L) \neq \emptyset$. If the intersection $K\cap L$ does
not consist of a single edge, then $\mathds{1}_{KL}^v\equiv
0$. Furthermore, we let $\nu_{KL}\in \R^3$ denote the unit vector parallel to $H$
orthogonal to the shared edge of $K,L$ pointing from $K$ to $L$ and
\[
\theta_{KL}^v=\frac{|\nu_{KL}\cdot v|}{|v|}\,.
\]
\end{notation}
See Figure \ref{fig:parallelogram} for an illustration of Notation \ref{not:thetaKL}.
\begin{figure}
\includegraphics[height=5cm]{char_fun_1.pdf}
\caption{Definition of $\theta_{KL}^v$: The parallelogram spanned by $v$ and the shared side $K\cap L$ has area $\theta^v_{KL}l_{KL}|v|$. This parallelogram translated by $-v$ is the support of $\mathds{1}_{KL}^v$. \label{fig:parallelogram}}
\end{figure}
\medskip
We collect the notation that we have introduced for triangles and triangulated
surfaces for the reader's convenience in abbreviated form: Assume that $K=[a,b,c]$ and $L=[b,c,d]$
are two regular triangles in $\R^3$. Then we have the following notation:
\begin{equation*}
\boxed{
\begin{split}
q(K)&: \text{ center of the smallest circumball for $K$}\\
r(K)& :\text{ radius of the smallest circumball for $K$}\\
d_{KL}&=|q(K)-q(L)|\\
l_{KL}&:\text{ length of the shared edge of $K,L$}\\
n(K)&: \text{ unit vector
normal to $K$ }
\end{split}
}
\end{equation*}
The following are defined if $K,L$
are contained in an affine subspace $H$ of $\R^3$, and $v$ is a vector
parallel to $H$:
\begin{equation*}
\boxed{
\begin{split}
\nu_{KL}&:\text{ unit vector parallel to $H$
orthogonal to}\\&\quad\text{ the shared edge of $K,L$ pointing from $K$ to $L$}\\
\theta_{KL}^v&=\frac{|\nu_{KL}\cdot v|}{|v|}\\
\mathds{1}_{KL}^v&: \text{ function defined on $H$, with value one if}\\
&\quad\text{ $[x,x+v]\cap (K\cap L)\neq \emptyset$, zero otherwise}
\end{split}
}
\end{equation*}
\subsection{Triangulated surfaces: Some preliminary observations}
For two adjacent triangles $K,L\in {\mathcal T}$, we have $d_{KL} = 0$ if and only if the vertices of $K$ and $L$ have the same circumsphere. The following lemma states that for noncospherical configurations, $d_{KL}$ grows linearly with the distance between the circumsphere of $K$ and the opposite vertex in $L$.
\begin{lma}\label{lma: circumcenter regularity}
The circumcenter map $q:\R^{3\times 3} \to \R^3$ is $C^1$ and Lipschitz when
restricted to $\zeta$-regular triangles. For two adjacent triangles $K =
[x,y,z]$, $L = [x,y,p]$, we have that
\[d_{KL} \geq \frac12 \big| |q(K)-p| -r(K) \big|\,.
\]
\end{lma}
\begin{proof}
The circumcenter $q = q(K)\in \R^3$ of the triangle $K = [x,y,z]$ is the solution to the linear system
\begin{equation}
\begin{cases}
(q - x)\cdot (y-x) = \frac12 |y-x|^2\\
(q - x)\cdot (z-x) = \frac12 |z-x|^2\\
(q - x)\cdot ((z-x)\times (y-x)) = 0.
\end{cases}
\end{equation}
Thus, the circumcenter map $(x,y,z)\mapsto q$ is $C^1$ when restricted to $\zeta$-regular $K$. To see that the map is globally Lipschitz, it suffices to note that it is $1$-homogeneous in $(x,y,z)$.
For the second point, let $s=q(L)\in \R^3$ be the circumcenter of $L$. Then by the triangle inequality, we have
\begin{equation}
\begin{aligned}
|p-q|\leq |p-s| + |s-q| = |x-s| + |s-q| \leq |x-q| + 2|s-q| = r + 2d_{KL},\\
|p-q| \geq |p-s| - |s-q| = |x-s| - |s-q| \geq |x-q| - 2 |s-q| = r - 2d_{KL}.
\end{aligned}
\end{equation}
This completes the proof.
\end{proof}
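As a quick illustration of this linear system, consider the right triangle $K=[(0,0,0),(2,0,0),(0,2,0)]$. The three equations read
\[
2q_1=2\,,\qquad 2q_2=2\,,\qquad -4q_3=0\,,
\]
so that $q(K)=(1,1,0)$ and $r(K)=\sqrt 2$, i.e.~the circumcenter is the midpoint of the hypotenuse, as expected for a right triangle.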
\begin{lma}
\label{lem:char_func}
Let $\zeta>0$, and $a,b,c,d\in \R^2$ such that $K=[a,b,c]$ and $L=[b,c,d]$ are $\zeta$-regular.
\begin{itemize}\item[(i)]
We have that
\begin{equation*}
\int_{\R^2} \mathds{1}_{KL}^v(x)\d x = |v|l_{KL}\theta_{KL}^v\,.
\end{equation*}
\item[(ii)] Let $v,w\in\R^2$, $\bar v=(v,v\cdot w)\in \R^3$,
$\bar a=(a,a\cdot w)\in \R^3$ and $\bar b, \bar c,\bar d\in \R^3$ defined
analogously. Let
$\bar K=[\bar a,\bar b,\bar c]$, $\bar L=[\bar b,\bar c,\bar d]$.
Then
\[
\int_{\R^2} \mathds{1}_{KL}^v(x)\, \d x = \frac{|\bar v|}{\sqrt{1+|w|^2}}
\theta_{\bar K\bar L}^{\bar v}l_{\bar K\bar L}\,.
\]
\end{itemize}
\end{lma}
\begin{proof}
The equation (i) follows from the fact that $\mathds{1}_{KL}^v$ is the
characteristic function of a parallelogram, see Figure \ref{fig:parallelogram}.
To prove (ii) it suffices to observe that
$\int_{\R^2} \mathds{1}_{KL}^v(x)\sqrt{1+|w|^2}\d x$ is the area of the
parallelogram from (i) pushed forward by the map $\tilde h(x)= (x,x\cdot
w)$, see Figure \ref{fig:char_fun_2}.
\end{proof}
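As a simple consistency check of (i), suppose the shared edge is $K\cap L=\{0\}\times[0,1]\subset\R^2$, so that $l_{KL}=1$ and $\nu_{KL}=\pm e_1$, and let $v=te_1$ with $t>0$. Then $\theta_{KL}^v=1$, and $\mathds{1}_{KL}^v$ is (up to a null set) the characteristic function of the rectangle $[-t,0]\times[0,1]$, so that
\[
\int_{\R^2}\mathds{1}_{KL}^v(x)\,\d x = t = |v|\,l_{KL}\,\theta_{KL}^v\,,
\]
in accordance with (i).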
\begin{figure}[h]
\includegraphics[height=5cm]{char_fun_2.pdf}
\caption{The parallelogram pushed forward by an affine map $x\mapsto (x,x\cdot w)$. \label{fig:char_fun_2}}
\end{figure}
\subsection{Graphs over manifolds}
\begin{assump}
\label{ass:Mprop}
We assume $M\subset\R^3$
is
an oriented compact two-dimensional $C^\infty$-submanifold of $\R^3$.
\end{assump}
This manifold will be fixed in the following. We denote the normal of $M$ by $n_M:M\to S^2$, and the second fundamental form at $x_0\in M$ is denoted by $S_M(x_0):T_{x_0}M\to T_{x_0}M$.
\medskip
\begin{defi}
\label{def:radius_injectivity}
The \emph{radius of injectivity} $\delta(M)>0$ of $M$ is the largest number $\delta>0$ such that the map $\phi:M\times (-\delta,\delta)\to \R^3$, $(x,h) \mapsto x + h n_M(x)$ is injective and the operator norm of $\delta S_M(x)\in\mathcal{L}(T_xM)$ is at most $1$ at every $x\in M$.
\end{defi}
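For instance, if $M$ is the round sphere of radius $R$ centered at the origin, with outward normal $n_M(x)=x/R$, then
\[
\phi(x,h)=x+h\,\frac{x}{R}=\Big(1+\frac hR\Big)x\,,
\]
which is injective on $M\times(-R,R)$, and the operator norm of $\delta S_M(x)$ equals $\delta/R$; hence $\delta(M)=R$.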
We define a graph over $M$ as follows:
\begin{defi}
\label{def:Mgraph}
\begin{itemize}
\item[(i)] A set $M_h = \{x+ h(x)n_M(x)\,:\,x\in M\}$ is called a \emph{graph} over $M$ whenever $h:M\to \R$ is a continuous function with $\|h\|_\infty \leq \delta(M)/2$.
\item[(ii)] The graph $M_h$ is called a ($Z$-)Lipschitz graph (for $Z > 0$) whenever $h$ is ($Z$-)Lipschitz, and a smooth graph whenever $h$ is smooth.
\item[(iii)] A set $N\subset B(M,\delta(M)/2)$ is said to be locally a tangent Lipschitz
graph over $M$ if for every $x_0\in M$ there exists $r>0$ and a Lipschitz
function $h:(x_0 +T_{x_0}M)\cap B(x_0,r)\to \R$ such that the intersection of $N$
with the cylinder $C(x_0,r,\frac{\delta(M)}{2})$ over $(x_0 +T_{x_0}M)\cap B(x_0,r)$ of height $\frac{\delta(M)}{2}$ in each of the two
directions of $n_M(x_0)$, where
\[
C(x_0,r,s) := \left\{x + tn_M(x_0)\,:\,x\in (x_0 + T_{x_0}M) \cap B(x_0,r), t\in [-s,s] \right\},
\]
is equal to the graph of $h$ over $T_{x_0}M\cap B(x_0,r)$,
\[
N \cap C\left(x_0,r,\frac{\delta(M)}{2}\right) =\{x+h(x)n_M(x_0):x\in (x_0+T_{x_0}M)\cap B(x_0,r)\}\,.
\]
\end{itemize}
\end{defi}
\begin{lma}\label{lma: graph property}
Let $N\subset B(M,\delta(M)/2)$ be locally a tangent Lipschitz graph over $M$. Then $N$ is a Lipschitz graph over $M$.
\end{lma}
\begin{proof}
By Definition \ref{def:Mgraph} (iii), we have that for every $x\in M$, there
exists exactly one element
\[
x'\in
N\cap \left( x+n_M(x)[-\delta(M),\delta(M)]\right)\,.
\]
We write $h(x):=(x'-x)\cdot n_M(x)$, which obviously
implies $N=M_h$. For every $x_0\in
M$ there exists a neighborhood of $x_0$ such that $h$ is Lipschitz continuous in
this neighborhood by
the locally tangent Lipschitz property and the regularity of $M$. The global
Lipschitz property for $h$ follows from the local one by a standard covering argument.
\end{proof}
\begin{lma}
\label{lem:graph_rep}
Let $h_j\in W^{1,\infty}(M)$ with $\|h_j\|_{\infty}\leq\delta(M)/2$ and $h_j\weakstar h\in W^{1,\infty}(M)$ for $j\to \infty$. Then
for every point $x\in M$, there exists a neighborhood $V\subset x+T_xM$,
a Euclidean motion $R$ with $U:=R(x+T_xM)\subset \R^2$, functions $\tilde
h_j:U\to\R$ and $\tilde h:U\to \R$ such that $\tilde h_j\weakstar \tilde h$
in $W^{1,\infty}(U)$ and
\[
\begin{split}
R^{-1}\mathrm{Gr}\, \tilde h_j&\subset M_{h_j} \\
R^{-1}\mathrm{Gr}\, \tilde h&\subset M_{h} \,.
\end{split}
\]
\end{lma}
\begin{proof}
This follows immediately from the smoothness of $M$ (Assumption \ref{ass:Mprop}) and the
boundedness of $\|\nabla h_j\|_{L^\infty}$.
\end{proof}
\section{Proof of compactness and lower bound}
\label{sec:proof-comp-lower}
\begin{notation}
\label{not:push_gen}
If $U\subset\R^2$, ${\mathcal T}$ is a flat triangulated surface ${\mathcal T}\subset U$, $h:U\to\R$ is Lipschitz, and
$K=[a,b,c]\in{\mathcal T}$, then we write
\[
h_*K=[(a,h(a)),(b,h(b)),(c,h(c))]\,.
\]
We write $h_*{\mathcal T}$ for the triangulated surface defined by
\[
K\in{\mathcal T}\quad\Leftrightarrow \quad h_*K\in
h_*{\mathcal T}\,.
\]
\end{notation}
For an illustration of Notation \ref{not:push_gen}, see Figure \ref{fig:push_gen}.
\begin{figure}[h]
\includegraphics[height=5cm]{pushforward_general.pdf}
\caption{Definition of the push forward of a triangulation $\mathcal T\subset \R^2$ by a map $h:\R^2 \to \R$. \label{fig:push_gen}}
\end{figure}
\begin{lma}
\label{lem:CS_trick}
Let $U\subset\R^2$, let ${\mathcal T}$ be a flat triangulated surface with $U\subset
{\mathcal T}\subset\R^2$, let $h$ be a Lipschitz function $U\to \R$ that is
affine on each triangle of ${\mathcal T}$, ${\mathcal T}^*=h_*{\mathcal T}$, let $g$ be a function that is constant on each
triangle of ${\mathcal T}$, $v\in \R^2$, $U^v=\{x\in\R^2:[x,x+v]\subset U\}$, and $W\subset U^v$.
\begin{itemize}\item[(i)]
Then
\[
\begin{split}
\int_{W}& |g(x+v)-g(x)|^2\d x\\
&\leq | v| \left(\sum_{K,L\in{\mathcal T}}
\frac{l_{K^*L^*}}{d_{K^*L^*}} |g(K)-g(L)|^2\right) \max_{x\in W}
\sum_{K,L\in{\mathcal T}} \mathds{1}^v_{KL}(x) \frac{\theta_{KL}^vl_{KL}d_{K^*L^*}}{l_{K^*L^*}}
\,,
\end{split}
\]
where we have written $K^*=h_*K$, $L^*=h_*L$ for $K,L\in {\mathcal T}$.
\item[(ii)]
Let $w\in\R^2$, and denote by
$\bar K$, $\bar L$ the triangles $K,L$ pushed forward by the map
$x\mapsto (x,x\cdot w)$.
Then
\[
\begin{split}
\int_{W}& |g(x+v)-g(x)|^2\d x\\
&\leq \frac{|\bar v|}{\sqrt{1+|w|^2}} \left(\sum_{K,L\in{\mathcal T}}
\frac{l_{K^*L^*}}{d_{K^*L^*}} |g(K)-g(L)|^2\right) \max_{x\in W}
\sum_{K,L\in{\mathcal T}} \mathds{1}^v_{KL}(x) \frac{\theta_{\bar K\bar
L}^{\bar v}l_{\bar K\bar L}d_{K^*L^*}}{l_{K^*L^*}}\,.
\end{split}
\]
\end{itemize}
\end{lma}
\begin{proof}
By the Cauchy-Schwarz inequality, for $x\in W$, we have that
\[
\begin{split}
| g(x+v)- g(x)|^2&\leq \left(\sum_{K,L\in {\mathcal T}} \mathds{1}_{K
L}^v(x)| g(K)- g(L)|\right)^2\\
&\leq \left(\sum_{K,L\in {\mathcal T}}
\frac{l_{K^*L^*}}{\theta_{KL}^vl_{KL}d_{K^*L^*}}\mathds{1}_{K
L}^v(x)| g(K)- g(L)|^2\right)\\
&\qquad \times\left(\sum_{K,L\in {\mathcal T}}
\mathds{1}^v_{KL}(x)\frac{\theta^v_{KL}l_{KL}d_{K^*L^*}}{l_{K^*L^*}}\right)\,.
\end{split}
\]
Using these estimates and Lemma \ref{lem:char_func} (i), we obtain
\begin{equation}
\begin{aligned}
    &\int_{W} | g(x+v) - g(x)|^2\,\d x\\
    \leq & \int_{W} \left( \sum_{K,L\in {\mathcal T}}
           \mathds{1}^v_{KL}(x)\frac{l_{K^*L^*}}{\theta^v_{KL}l_{KL}d_{K^*L^*}} | g(K) - g(L)|^2 \right)\\
         &\quad \times
           \left( \sum_{K,L\in {\mathcal T}} \mathds{1}^v_{KL}(x) \frac{\theta^v_{KL}l_{KL}d_{K^*L^*}}{l_{K^*L^*}} \right) \,\d x\\
    \leq & |v|\left( \sum_{K,L\in {\mathcal T}}
           \frac{l_{K^*L^*}}{d_{K^*L^*}} | g(K) - g(L)|^2 \right) \max_{x\in W} \sum_{K,L\in {\mathcal T}} \mathds{1}^v_{KL}(x) \frac{\theta^v_{KL}l_{KL}d_{K^*L^*}}{l_{K^*L^*}}\,.
\end{aligned}
\end{equation}
This proves (i). The claim (ii) is proved analogously, using $
\frac{\theta_{\bar K\bar L}^{\bar v}l_{\bar K\bar L}d_{K^*L^*}}{l_{K^*L^*}}$ instead of $
\frac{\theta_{ K L}^{v}l_{ K L}d_{K^*L^*}}{l_{K^*L^*}}$ in the
Cauchy-Schwarz inequality, and then Lemma \ref{lem:char_func} (ii).
\end{proof}
In the following proposition, we will consider sequences of flat triangulated surfaces
${\mathcal T}_j$ with $U\subset{\mathcal T}_j\subset\R^2$ and sequences of Lipschitz functions
$h_j:U\to \R$. We write ${\mathcal T}_j^*=(h_j)_*{\mathcal T}_j$, and for $K\in {\mathcal T}_j$, we write
\[
K^*=(h_j)_*K\,.
\]
\begin{prop}\label{prop:lower_blowup}
Let $U,U'\subset\R^2$ be open, $\zeta>0$, $({\mathcal T}_j)_{j\in \N}$
a sequence of flat $\zeta$-regular triangulated surfaces with $U\subset{\mathcal T}_j\subset
U'$ and $\mathrm{size} ({\mathcal T}_j) \to
0$. Let $(h_j)_{j\in\N}$ be a sequence of Lipschitz functions $U'\to \R$ with
uniformly bounded gradients such that $h_j$ is affine
on each triangle of ${\mathcal T}_j$ and the triangulated surfaces
${\mathcal T}_j^*=(h_j)_*{\mathcal T}_j$ satisfy the Delaunay property.
\begin{itemize}
\item[(i)] Assume that
\[
\begin{split}
h_j&\weakstar h\quad \text{ in } W^{1,\infty}(U')\,,
\end{split}
\]
and $\liminf_{j\to \infty} \sum_{K,L\in{\mathcal T}_j} \frac{l_{K^*L^*}}{d_{K^*L^*}} |n(K^*) - n(L^*)|^2<\infty$. Then $h\in W^{2,2}(U)$.
\item[(ii)] Let $U=Q=(0,1)^2$, and let $(g_j)_{j\in\N}$ be a sequence of functions $U'\to\R$ such that $g_j$ is constant on
each triangle in ${\mathcal T}_j$. Assume that
\[
\begin{split}
h_j&\to h\quad \text{ in } W^{1,2}(U')\,,\\
g_j&\to g \quad\text{ in }
L^2(U')\,,
\end{split}
\]
where $h(x)=w\cdot x$ and $g(x)=u\cdot x$ for some $u,w\in \R^2$.
Then we have
\[
u^T\left(\mathds{1}_{2\times 2}+w\otimes w\right)^{-1}u \sqrt{1+|w|^2}\leq \liminf_{j\to \infty} \sum_{K,L\in{\mathcal T}_j} \frac{l_{K^*L^*}}{d_{K^*L^*}} |g_j(K) - g_j(L)|^2\,.
\]
\end{itemize}
\end{prop}
\begin{proof}[Proof of (i)]
We write
\[
E_j:= \sum_{K,L\in {\mathcal T}_j} \frac{l_{K^*L^*}}{d_{K^*L^*}} |n(K^*)
- n(L^*)|^2 \,.
\]
Fix $v\in B(0,1)\subset\R^2$, write $U^v=\{x\in\R^2:[x,x+v]\subset U\}$,
and
fix $k\in \{1,2,3\}$.
Define the function $N_j^k:U\to \R$ by requiring $N_j^k(x)=n(K^*)\cdot e_k$ for $x\in K\in{\mathcal T}_j$.
By Lemma \ref{lem:CS_trick} with $g=N_j^k$, we
have that
\begin{equation}
\label{eq:11}
\int_{U^v} |N_{j}^k(x+v) - N_j^k(x)|^2\,\d x
\leq |v|
\left(\max_{x\in U^v} \sum_{K,L\in{\mathcal T}_j}\mathds{1}^v_{KL}(x) \frac{\theta^v_{KL}l_{KL}d_{K^*L^*}}{l_{K^*L^*}}
\right) E_j\,.
\end{equation}
Since
$h_j$ is uniformly Lipschitz, there exists a constant $C>0$ such that
\[
\frac{l_{KL}}{l_{K^*L^*}}
d_{K^*L^*}<C d_{KL}\,.
\]
We claim that
\begin{equation}\label{eq:15}
\begin{split}
\max_{x\in U^v} \sum_{K,L\in {\mathcal T}_j} \mathds{1}_{KL}^v(x) \theta_{KL}^vd_{KL}
&\lesssim |v|+C\size({\mathcal T}_{j})\,.
\end{split}
\end{equation}
Indeed, let $K_0,\ldots,K_N\in {\mathcal T}_{j}$ be the sequence of triangles so that there is $i:[0,1]\to \{0,\ldots,N\}$ non-decreasing with $x+tv\in K_{i(t)}$.
We have that for all pairs $K_i,K_{i+1}\in {\mathcal T}_{j}$,
\begin{equation}
\label{eq:12}
\theta_{K_iK_{i+1}}^v d_{K_iK_{i+1}} = \left|(q(K_{i+1})-q(K_i)) \cdot \frac{v}{|v|}\right| \,,
\end{equation}
which yields the last estimate in \eqref{eq:15}.
Inserting in \eqref{eq:11} yields
\begin{equation}
\int_{U^v} |N_{j}^k(x+v) - N_j^k(x)|^2\,\d x
\leq
C|v|(|v|+C\size({\mathcal T}_{j})) E_j\,.
\end{equation}
By passing to the limit $j\to\infty$ and standard difference quotient arguments,
it then follows that the limit $N^k=\lim_{j\to\infty} N_j^k$ is in
$W^{1,2}(U)$. Since $h$ is also in $W^{1,\infty}(U)$ and $(N^k)_{k=1,2,3}=(\nabla
h,-1)/\sqrt{1+|\nabla h|^2}$ is the normal to the graph of $h$, it follows that $h\in W^{2,2}(U)$.
\end{proof}
\bigskip
\begin{proof}[Proof of (ii)]
We write
\[
E_j:= \sum_{K,L\in {\mathcal T}_j} \frac{l_{K^*L^*}}{d_{K^*L^*}} |g_j(K)
- g_j(L)|^2
\]
and may assume without loss of generality that $\liminf_{j\to \infty}
E_j<\infty$.
Fix $\delta > 0$. Define the set of bad triangles as
\[
{\mathcal B}_j^\delta := \{K \in {\mathcal T}_{j}\,:\,\left|\nabla h_j(K)- w\right| > \delta\}.
\]
Fix $v\in B(0,1)$, and write $Q^v=\{x\in \R^2:[x,x+v]\subset Q\}$. Define the set of good points as
\[
A_j^{\delta,v} := \left\{x\in Q^v: \#\{K\in {\mathcal B}_j^\delta\,:
\,K \cap [x,x+v] \neq \emptyset\} \leq
\frac{\delta|v|}{\size({\mathcal T}_{j})}\right\}.
\]
We claim that
\begin{equation}\label{eq:17}
\L^2(Q^v \setminus A_j^{\delta,v}) \to 0\qquad\text{ for } j\to\infty\,.
\end{equation}
Indeed, for every $x\in Q^v\setminus A_j^{\delta,v}$ the segment $[x,x+v]$ meets more than $\delta|v|/\size({\mathcal T}_{j})$ triangles of ${\mathcal B}_j^\delta$, while for a fixed triangle the set of such $x$ has measure at most $C|v|\size({\mathcal T}_{j})$ for $j$ large enough. Together with the $\zeta$-regularity of ${\mathcal T}_j$, we may thus estimate
  \[
\begin{split}
  \int_{U'}|\nabla h_j-w|^2\d x \gtrsim &  \# \mathcal B_j^{\delta} \left(\size {\mathcal T}_j \right)^2 \delta^2\\
  \gtrsim  & \frac{\L^2(Q^v\setminus A_j^{\delta,v})}{|v|\size{\mathcal T}_j}\frac{\delta|v|}{\size {\mathcal T}_j} \left(\size {\mathcal T}_j \right)^2 \delta^2\\
\gtrsim  &\L^2(Q^v \setminus A_j^{\delta,v})\,\delta^3\,,
\end{split}
\]
and hence \eqref{eq:17} follows from
$h_j\to h$ in $W^{1,2}(U')$.
For the push-forward of $v$ under the affine map $x\mapsto (x,h(x))$,
we write
\[
\bar v= (v,v\cdot w)\in\R^3\,.
\]
Also, for $K=[a,b,c]\in {\mathcal T}_j$, we write
\[
\bar K=[(a,a\cdot w),(b,b\cdot w),(c,c\cdot w)]=h_*K\,.
\]
By Lemma \ref{lem:CS_trick}, we have that
\begin{equation}
\label{eq: difference quotient estimate}
\begin{split}
\int_{A_j^{\delta, v}} &| g_{j}(x+v) - g_j(x)|^2\d x \\
& \leq \frac{|\bar v|}{\sqrt{1+|w|^2}} \left(\max_{x\in A_j^{\delta, v}}
\sum_{K,L\in {\mathcal T}_j} \mathds{1}^v_{KL}(x) \frac{\theta_{\bar K\bar L}^{\bar
v}l_{\bar K\bar L}d_{K^*L^*}}{l_{K^*L^*}}\right) E_j\,.
\end{split}
\end{equation}
We claim that
\begin{equation}
\max_{x\in A_j^{\delta, v}} \sum_{K,L\in {\mathcal T}_j} \mathds{1}_{KL}^v(x)
\frac{\theta_{\bar K \bar L}^{\bar v}l_{\bar K\bar L}d_{K^*L^*}}{l_{K^*L^*}} \leq
(1+C\delta)\left(|\bar v|+C\size({\mathcal T}_j)\right)\,.\label{eq:16}
\end{equation}
Indeed, let $K_0,\ldots,K_N\in {\mathcal T}_{j}$ be the sequence of triangles so that there is $i:[0,1]\to \{0,\ldots,N\}$ non-decreasing with $x+tv\in K_{i(t)}$.
For all pairs $K_i,K_{i+1}\in {\mathcal T}_{j} $ we have
\begin{equation}
\theta_{\bar K_{i}\bar K_{i+1}}^{\bar v}d_{\bar K_i\bar K_{i+1}} = (q(\bar K_{i+1})-q(\bar K_i)) \cdot \frac{\bar v}{|\bar v|} \,.
\end{equation}
Also, we have that for $K_i,K_{i+1}\in {\mathcal T}_{j} \setminus
{\mathcal B}_j^\delta$,
\begin{equation*}
\begin{split}
\frac{l_{K_i^*K_{i+1}^*}d_{\bar K_i\bar K_{i+1}}}{l_{\bar K_i\bar K_{i+1}}d_{K_i^*K_{i+1}^*}}&\leq 1+C\delta\,.
\end{split}
\end{equation*}
Hence
\begin{equation}\label{eq: good triangles}
\begin{split}
\sum_{i\,:\,\{K_i,K_{i+1}\}\cap {\mathcal B}_k^\delta = \emptyset}&
\frac{\theta_{\bar K_{i}\bar K_{i+1}}^{\bar v}l_{\bar K_i\bar K_{i+1}}d_{K_i^*K_{i+1}^*}}{l_{K^*_iK_{i+1}^*}}\\
& \leq (1+C\delta)\sum_{i\,:\,\{K_i,K_{i+1}\}\cap {\mathcal B}_k^\delta = \emptyset}
\left(\left(q(\bar K_{i+1})-q(\bar K_i)\right)\cdot
\frac{\bar v}{|\bar v|}\right)\,
\end{split}
\end{equation}
If one of the triangles $K_i,K_{i+1}$ is in ${\mathcal B}_j^\delta$, then we may estimate
\[
\left|\left(q(\bar K_{i+1})-q(\bar K_i)\right) \cdot \frac{\bar v}{|\bar v|}\right|\leq C\size{\mathcal T}_j\,.
\]
Since there are few bad triangles along $[x,x+v]$, we have, using $x\in A_j^{\delta,v}$,
\begin{equation}\label{eq: bad triangles}
\begin{split}
\sum_{i\,:\,\{K_i,K_{i+1}\}\cap {\mathcal B}_k^\delta \neq \emptyset}&
\frac{\theta_{\bar K_{i}\bar K_{i+1}}^{\bar v}l_{\bar K_i\bar K_{i+1}}d_{K_i^*K_{i+1}^*}}{l_{K^*_iK_{i+1}^*}}-(q(\bar K_{i+1})-q(\bar K_i)) \cdot \frac{\bar v}{|\bar v|}\\
&\leq C\#\{K\in
{\mathcal B}_j^\delta\,:\,K \cap [x,x+v] \neq \emptyset\} \size({\mathcal T}_j)
\\
&\leq C\delta|\bar v|\,.
\end{split}
\end{equation}
Combining \eqref{eq: good triangles} and \eqref{eq: bad triangles} yields
\begin{equation*}
\begin{split}
\sum_{i = 0}^{N-1}\frac{\theta_{\bar K_{i}\bar K_{i+1}}^{\bar v}l_{\bar K_i\bar K_{i+1}}d_{K_i^*K_{i+1}^*}}{l_{K^*_iK_{i+1}^*}}&
\leq (1+C\delta)\sum_{i = 0}^{N-1}(q(\bar K_{i+1})-q(\bar K_i)) \cdot \frac{\bar v}{|\bar v|}+C\delta|\bar v|\\
&= (1+C\delta)(q(\bar K_N) -
q(\bar K_0)) \cdot \frac{\bar v}{|\bar v|} + C \delta |\bar v| \\
&\leq (1+C\delta)\left(|\bar v|
+ C\size({\mathcal T}_{j})\right).
\end{split}
\end{equation*}
This proves \eqref{eq:16}.
\medskip
Inserting \eqref{eq:16} in \eqref{eq: difference quotient estimate} and passing to the limits $j\to\infty$ and $\delta\to 0$, using $g_j\to g$ in $L^2(U')$ and \eqref{eq:17}, we obtain
\[|v\cdot u |^2\,\L^2(Q^v)
  \leq \frac{|\bar v|^2}{\sqrt{1+|w|^2}}\liminf_{j\to \infty}E_j\,.
\]
Since both sides are $2$-homogeneous in $v$ and $\L^2(Q^v)\to 1$ as $|v|\to 0$, this yields
\[|v\cdot u |^2
  \leq \frac{|\bar v|^2}{\sqrt{1+|w|^2}}\liminf_{j\to \infty}E_j\qquad\text{for all } v\in\R^2\,.
\]
Now let
\[
\underline{u}:=\left(\mathds{1}_{2\times 2},w\right)^T \left(\mathds{1}_{2\times 2}+w\otimes w\right)^{-1}u\,.
\]
Then $\underline{u}$ lies in the plane $\{\bar v:v\in\R^2\}$ and satisfies $|\underline{u}\cdot \bar v|=|u\cdot v|$ for all $v\in\R^2$, and hence
\[
\begin{split}
  |\underline{u}|^2&=\sup_{v\in \R^2\setminus \{0\}}\frac{|\underline{u}\cdot \bar v|^2}{|\bar v|^2}\\
  &\leq \frac{1}{\sqrt{1+|w|^2}}\liminf_{j\to \infty}E_j\,.
\end{split}
\]
Since $|\underline{u}|^2=u^T\left(\mathds{1}_{2\times 2}+w\otimes w\right)^{-1}u$, this proves the proposition.
\end{proof}
\subsection{Proof of compactness and lower bound in Theorem \ref{thm:main}}
\begin{proof}[Proof of Theorem \ref{thm:main} (o)]
For a subsequence (no relabeling), we have that $h_j\weakstar h$ in
$W^{1,\infty}(M)$. By Lemma \ref{lem:graph_rep}, ${\mathcal T}_j$ may be locally
represented as the graph of a Lipschitz function $\tilde h_j:U\to \R$, and $M_h$
as the graph of a Lipschitz function $\tilde h:U\to \R$, where $U\subset\R^2$
and $\tilde h_j\weakstar \tilde h$ in $W^{1,\infty}(U)$.
\medskip
It remains to prove that $\tilde h\in W^{2,2}(U)$. Since the norms of the
gradients are uniformly bounded,
$\|\nabla \tilde h_j\|_{L^\infty(U)}<C$, we have that the projections of ${\mathcal T}_j$ to
$U$ are (uniformly) regular flat triangulated surfaces. Hence by Proposition
\ref{prop:lower_blowup} (i), we have that $\tilde h\in W^{2,2}(U)$.
\end{proof}
\begin{proof}[Proof of Theorem \ref{thm:main} (i)]
Let $\mu_j = \sum_{K,L\in {\mathcal T}_j} \frac{1}{d_{KL}} |n(K) - n(L)|^2
\H^1|_{K\cap L}\in {\mathcal M}_+(\R^3)$. Note that either a subsequence of $\mu_j$ converges
narrowly to some $\mu \in {\mathcal M}_+(M_h)$ or there is nothing to show. We will show in
the first case that
\begin{equation}
\frac{d\mu}{d\H^2}(z) \geq |Dn_{M_h}|^2(z)\label{eq:7}
\end{equation}
at $\H^2$-almost every point $z\in M_h$ which implies in particular the lower bound.
By Lemma \ref{lem:graph_rep}, we may reduce the proof to the situation that $M_{h_j}$, $M_h$ are given as
graphs of Lipschitz functions $\tilde h_j:U\to \R$, $\tilde h:U\to \R$
respectively, where $U\subset \R^2$ is some open bounded set.
We have that $\tilde h_j$ is piecewise
affine on some (uniformly in $j$) regular triangulated surface $\tilde {\mathcal T}_j$ that satisfies
\[
(\tilde h_j)_*\tilde {\mathcal T}_j={\mathcal T}_j\,.
\]
Writing down the surface normal to $M_h$ in the coordinates of $U$,
\[N(x)=\frac{(-\nabla \tilde h, 1)}{\sqrt{1+|\nabla \tilde h|^2}}\,,
\]
we have that almost every $x\in U$ is a Lebesgue point of $\nabla N$.
We write $N^k=N\cdot e_k$ and note that \eqref{eq:7} is equivalent to
\begin{equation}
\label{eq:8}
\frac{\d\mu}{\d\H^2}(z)\geq \sum_{k=1}^3\nabla N^k(x)\cdot \left(\mathds{1}_{2\times
2}+\nabla \tilde h(x)\otimes\nabla \tilde h(x)\right)^{-1}\nabla
N^k(x)\,,
\end{equation}
where $z=(x,\tilde h(x))$.
Also, we define $N_j^k:U\to\R$ by letting $N_j^k(x)=n((\tilde h_j)_*K)\cdot
e_k$ for $x\in K\in
\tilde {\mathcal T_j}$. (We recall that $n((\tilde h_j)_*K)$ denotes the
normal of the triangle $(\tilde h_j)_*K$.)
\medskip
Let now $x_0\in U$ be a Lebesgue point of $\nabla \tilde h$ and $\nabla N$.
We write $z_0=(x_0,\tilde h(x_0))$.
Combining the narrow convergence $\mu_j\to\mu$ with the Radon-Nikodym differentiation Theorem, we may choose a sequence $r_j\downarrow 0$ such that
\[
\begin{split}
r_j^{-1}\size{{\mathcal T}_j}&\to 0\\
\liminf_{j\to\infty}\frac{\mu_j(Q^{(3)}(x_0,r_j))}{r_j^2}&= \frac{\d\mu}{\d\H^2}(z_0)\sqrt{1+|\nabla \tilde h(x_0)|^2}\,,
\end{split}
\]
where $Q^{(3)}(x_0,r_j)=x_0+[-r_j/2,r_j/2]^2\times \R$ is the cylinder over $Q(x_0,r_j)$.
Furthermore, let $\bar N_j^k,\bar h_j,\bar N^k,\bar h: Q\to \R$ be defined by
\[
\begin{split}
\bar N_j^k(x)&=\frac{N_j^k(x_0+r_j x)-N_j^k(x_0)}{r_j}\\
\bar N^k(x)&=\nabla N^k(x_0)\cdot (x-x_0)\\
\bar h_j(x)&=\frac{\tilde h_j(x_0+r_j x)-\tilde h_j(x_0)}{r_j}\\
\bar h(x)&=\nabla \tilde h(x_0)\cdot (x-x_0)\,.
\end{split}
\]
We recall that by assumption we have that $N^k\in W^{1,2}(U)$. This
implies in particular that (unless $x_0$ is contained in a certain set of
measure zero, which we discard), we have that
\begin{equation}\label{eq:9}
\bar N_j^k\to \bar N^k\quad\text{ in } L^2(Q)\,.
\end{equation}
Also, let $T_j$ be the blowup map
\[
T_j(x)=\frac{x-x_0}{r_j}
\]
and let ${\mathcal T}_j'$ be the triangulated surface one obtains by blowing up $\tilde{\mathcal T}_j$,
defined by
\[
\tilde K\in \tilde {\mathcal T}_j\quad \Leftrightarrow \quad T_j\tilde K \in {\mathcal T}_j'\,.
\]
Now let $\mathcal S_j$ be the smallest subset of ${\mathcal T}_j'$ (as sets of
triangles) such that
$Q\subset\mathcal S_j$ (as subsets of $\R^2$).
Note that $\size\mathcal S_j\to 0$, $\bar N_j^k$ is constant and $\bar h_j$ is
affine on each $K\in \mathcal S_j$. Furthermore, for $x\in K\in \tilde {\mathcal T}_j$, we have that
\[
\nabla \tilde h_j(x)=\nabla \bar h_j(T_jx)
\]
This implies in particular
\begin{equation}
\bar h_j\to \bar h\quad \text{ in } W^{1,2}(Q)\,.\label{eq:6}
\end{equation}
Concerning the discrete energy functionals, we have for the rescaled
triangulated surfaces $({\mathcal T}_j')^*=(\bar h_j)_* {\mathcal T}_j'$, with $K^*=(\bar h_j)_*K$
for $K\in {\mathcal T}_j'$,
\begin{equation}\label{eq:10}
\liminf_{j\to\infty} \sum_{K,L\in
{\mathcal T}_j'}\frac{l_{K^*L^*}}{d_{K^*L^*}} |\bar N_j(K)-\bar N_j(L)|^2\leq \liminf_{j\to\infty}r_j^{-2}\mu_j(Q^{(3)}(x_0,r_j)) \,.
\end{equation}
Thanks to \eqref{eq:9}, \eqref{eq:6}, we may apply Proposition
\ref{prop:lower_blowup} (ii) to the sequences of functions $(\bar h_j)_{j\in\N}$,
$(\bar N_j^k)_{j\in\N}$. This yields (after summing over $k\in\{1,2,3\}$)
\[
\begin{aligned}
|Dn_{M_h}|^2(z_0)&\sqrt{1+|\nabla \tilde
h(x_0)|^2}\\
& = \nabla N(x_0)\cdot \left(\mathds{1}_{2\times 2} +\nabla \tilde h(x_0)\otimes
\nabla \tilde h(x_0)\right)^{-1}\nabla N(x_0)\sqrt{1+|\nabla \tilde
h(x_0)|^2} \\
& \leq \liminf_{j\to\infty} \sum_{K, L\in
{\mathcal T}_j'}\frac{l_{K^*L^*}}{d_{K^*L^*}} |\bar N_j(K)-\bar N_j(L)|^2\,,
\end{aligned}
\]
which in combination with \eqref{eq:10} yields \eqref{eq:8} for $x=x_0$, $z=z_0$
and completes the proof
of the lower bound.
\end{proof}
\section{Surface triangulations and upper bound}
\label{sec:surf-triang-upper}
Our plan for the construction of a recovery sequence is as follows: We shall construct optimal sequences of triangulated surfaces first locally around a point $x\in M_h$. It turns out the optimal triangulation must be aligned with the principal curvature directions at $x$. By a suitable covering of $M_h$, this allows for an approximation of the latter in these charts (Proposition \ref{prop: local triangulation}). We will then formulate sufficient conditions for a vertex set to supply a global approximation (Proposition \ref{prop: Delaunay existence}). The main work that remains to be done at that point to obtain a proof of Theorem \ref{thm:main} (ii) is to add vertices to the local approximations obtained from Proposition \ref{prop: local triangulation} such that the conditions of Proposition \ref{prop: Delaunay existence} are fulfilled.
\subsection{Local optimal triangulations}
\begin{prop}\label{prop: local triangulation}
There are constants $\delta_0, C>0$ such that for all $U \subset \R^2$ open, convex, and bounded; and $h\in C^3(U)$ with $\|\nabla h\|_\infty \eqqcolon \delta \leq \delta_0$, the following holds:
Let $\varepsilon > 0$, $ C\delta^2 < |\theta| \leq \frac12$, and define $X \coloneqq \{(\varepsilon k + \theta \varepsilon l , \varepsilon l, h(\varepsilon k + \theta \varepsilon l, \varepsilon l))\in U\times \R\,:\,k,l\in \Z\}$. Then any Delaunay triangulated surface ${\mathcal T}$ with vertex set $X$ and maximum circumradius $\max_{K\in {\mathcal T}} r(K) \leq \varepsilon$ has
\begin{equation}\label{eq: local error}
\begin{aligned}
\sum_{K,L\in {\mathcal T}}& \frac{\l{K}{L}}{d_{KL}}|n(K) - n(L)|^2\\
\leq &\left(1+ C(|\theta|+\delta+\varepsilon)\right) \L^2(U) \times\\
&\times\left(\max_{x\in U} |\partial_{11} h(x)|^2 + \max_{x\in U} |\partial_{22} h(x)|^2 + \frac{1}{|\theta|} \max_{x\in U} |\partial_{12} h(x)|^2 \right)+C\varepsilon\,.
\end{aligned}
\end{equation}
\end{prop}
\begin{proof}
We assume without loss of generality that $\theta > 0$.
We consider the projection of $X$ to the plane,
\[
\bar X:=\{(\varepsilon k + \theta \varepsilon l , \varepsilon l)\in U:k,l\in\Z\}\,.
\]
Let $\bar{\mathcal T}$ be the flat triangulated surface that consists of the triangles of the form
\[
\begin{split}
\varepsilon[ ke_1+l(\theta e_1+e_2),(k+1)e_1+l(\theta e_1+e_2),ke_1+(l+1)(\theta e_1+e_2)]\\
\text{ or } \quad \varepsilon[ ke_1+l(\theta e_1+e_2),(k+1)e_1+l(\theta e_1+e_2),ke_1+(l-1)(\theta e_1+e_2)]\,,
\end{split}
\]
with $k,l\in \Z$ such that the triangles are contained in $U$, see Figure \ref{fig:upper2d_barT}.
\begin{figure}[h]
\centering
\includegraphics[height=5cm]{upper2d_barT.pdf}
\caption{The flat triangulated surface $\bar {\mathcal T}$. \label{fig:upper2d_barT}}
\end{figure}
Obviously the flat triangulated surface $\bar{\mathcal T}$
has vertex set $\bar X$. Also, we have that
\begin{equation}\label{eq:19}
|x-y|\leq |(x,h(x))-(y,h(y))|\leq (1+C\delta)|x-y|
\end{equation}
for all $x,y \in \bar X$. We claim that for $\delta$ chosen small enough, we have the implication
\begin{equation}\label{eq:18}
h_*K=[(x,h(x)),(y,h(y)),(z,h(z))]\in {\mathcal T}\quad \Rightarrow \quad K= [x,y,z]\in \bar{\mathcal T} \,.
\end{equation}
Indeed, if $K\not\in \bar {\mathcal T}$, then either $r(K)>\frac32\varepsilon$ or there exists $w\in X$ with $|w-q(K)|<(1 -C\theta)r(K)$. In the first case, $r(h_*K)>(1-C\delta)\frac32\varepsilon$ by \eqref{eq:19} and hence $h_*K\not\in {\mathcal T}$ for $\delta$ small enough. In the second case, we have by \eqref{eq:19} and Lemma \ref{lma: circumcenter regularity} that
\[
|(w,h(w))-q(h_*K)|<(1+C\delta)(1 -C\theta)r(h_*K)\,,
\]
and hence $h_*K$ does not satisfy the Delaunay property for $\delta$ small enough. This proves \eqref{eq:18}.
Let $[x,y]$ be an edge with either $x,y \in X$ or $x,y \in \bar X$. We call this edge \emph{horizontal} if $(y-x) \cdot e_2 = 0$, \emph{vertical} if $(y-x) \cdot (e_1 - \theta e_2)= 0$, and \emph{diagonal} if $(y-x) \cdot (e_1 + (1-\theta) e_2) = 0$.
By its definition, $\bar {\mathcal T}$ consists only of triangles with exactly one horizontal, vertical, and diagonal edge each. By what we have just proved,
the same is true for ${\mathcal T}$.
\medskip
To calculate the differences between normals of adjacent triangles, let us consider one fixed triangle $K\in {\mathcal T}$ and its neighbors $K_1,K_2,K_3$, with which $K$ shares a horizontal, diagonal and vertical edge respectively, see Figure \ref{fig:upper2d}.
\begin{figure}[h]
\includegraphics[height=5cm]{upper2d.pdf}
\caption{Top view of a triangle $K\in{\mathcal T}$ with its horizontal, diagonal and vertical neighbors $K_1,K_2,K_3$. \label{fig:upper2d}}
\end{figure}
We assume without loss of generality that one of the vertices of $K$ is the origin of $\R^3$. More precisely, we write
$x_0=(0,0)$, $x_1=\varepsilon(1-\theta,-1)$, $x_2=\varepsilon(1,0)$, $x_3=\varepsilon(1+\theta,1)$, $x_4=\varepsilon(\theta,1)$, $x_5=\varepsilon(\theta-1,1)$, and $y_i=(x_i, h(x_i))$ for $i=0,\dots,5$, and we assume $h(x_0)=0$, so that $y_0=0$. With this notation we have $K=[y_0,y_2,y_4]$, $K_1=[y_0,y_1,y_2]$, $K_2=[y_2,y_3,y_4]$ and $K_3=[y_4,y_5,y_0]$. See Figure \ref{fig:upper2d}.
\[
\begin{split}
v(K)&=\varepsilon^{-2}y_2\wedge y_4\,\\
v(K_1)&=\varepsilon^{-2} y_1\wedge y_2\\
v(K_2)&= \varepsilon^{-2}(y_3-y_2)\wedge(y_4-y_2)\\
v(K_3)&= \varepsilon^{-2} y_4\wedge y_5\,.
\end{split}
\]
Note that $v(L)$ is parallel to $n(L)$ and $|v(L)|\geq 1$ for $L\in \{K,K_1,K_2,K_3\}$.
Hence for $i=1,2,3$, we have that
\[
|n(K)-n(K_i)|^2\leq |v(K)-v(K_i)|^2\,.
\]
For each $x_i$, we write
\[
h(x_i)= x_i \cdot \nabla h(0) + \frac12 x_i \nabla^2 h(0) x_i^T+O(\varepsilon^3)\,,
\]
where $O(\varepsilon^3)$ denotes terms $f(\varepsilon)$ that satisfy
$\limsup_{\varepsilon\to 0}\varepsilon^{-3}|f(\varepsilon)|<\infty$.
By an explicit computation we obtain that
\[
\begin{split}
\left|v(K)-v(K_1)\right|^2&= \varepsilon^2\left|(\theta-1)\theta \partial_{11} h+2(\theta-1)\partial_{12}h+\partial_{22}h\right|^2+O(\varepsilon^3)\\
\left|v(K)-v(K_2)\right|^2&= \varepsilon^2\left(\left|\theta\partial_{11} h+\partial_{12}h\right|^2+\left|(\theta-1)\theta\partial_{11} h+(\theta-1)\partial_{12}h\right|^2\right)+O(\varepsilon^3)\\
\left|v(K)-v(K_3)\right|^2&=\varepsilon^2\left( \theta^2\left|(\theta-1)\partial_{11} h+\partial_{12}h\right|^2+\left|(\theta-1)\partial_{11} h+\partial_{12}h\right|^2\right)+O(\varepsilon^3)\,,
\end{split}
\]
where all derivatives of $h$ are taken at $0$.
Using the Cauchy-Schwarz inequality and $|1-\theta|\leq 1$, we may estimate the term on the right hand side in the first line above,
\[
\left|(\theta-1)\theta\partial_{11} h+2(\theta-1)\partial_{12}h+\partial_{22}h\right|^2
\leq (1+\theta) |\partial_{22}h|^2+ \left(1+\frac{C}{\theta}\right)\left(\theta^2 |\partial_{11} h|^2+|\partial_{12}h|^2\right)\,.
\]
In a similar way, we have
\[
\begin{split}
\left|\theta\partial_{11} h+\partial_{12}h\right|^2+\left|(\theta-1)\theta\partial_{11} h+(\theta-1)\partial_{12}h\right|^2&\leq C(|\partial_{12}h|^2
+\theta^2|\partial_{11} h|^2)\\
\theta^2\left|(\theta-1)\partial_{11} h+\partial_{12}h\right|^2+\left|(\theta-1)\partial_{11} h+\partial_{12}h\right|^2&\leq (1+\theta)|\partial_{11} h|^2+\frac{C}{\theta}|\partial_{12}h|^2\,,
\end{split}
\]
so that
\[
\begin{split}
\left|n(K)-n(K_1)\right|^2&\leq \varepsilon^2(1+\theta) |\partial_{22}h|^2+ C\varepsilon^2 \left(\theta |\partial_{11} h|^2+ \frac1\theta |\partial_{12}h|^2\right)+O(\varepsilon^3)\\
\left|n(K)-n(K_2)\right|^2&\leq C\varepsilon^2 (|\partial_{12}h|^2
+\theta^2|\partial_{11} h|^2)+O(\varepsilon^3)\\
\left|n(K)-n(K_3)\right|^2&\leq \varepsilon^2(1+\theta)|\partial_{11} h|^2+\frac{C}{\theta}\varepsilon^2|\partial_{12}h|^2+O(\varepsilon^3)\,.
\end{split}
\]
Also, we have by Lemma \ref{lma: circumcenter regularity} that
\[
\begin{split}
\frac{l_{KK_1}}{d_{KK_1}}&\leq 1+C(\delta+\varepsilon+\theta)\\
\frac{l_{KK_2}}{d_{KK_2}}&\leq \left(1+C(\delta+\varepsilon+\theta)\right) \frac{C}{\theta}\\
\frac{l_{KK_3}}{d_{KK_3}}&\leq 1+C(\delta+\varepsilon+\theta)\,.
\end{split}
\]
Combining all of the above, and summing up over all triangles in ${\mathcal T}$, we obtain the statement of the proposition.
\end{proof}
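\begin{rem}
To indicate how Proposition \ref{prop: local triangulation} will be used, consider the model case $h(x)=\frac12\left(\kappa_1x_1^2+\kappa_2x_2^2\right)$ on a small neighborhood $U$ of the origin, i.e.~coordinates aligned with the principal curvature directions, so that $\partial_{12}h\equiv 0$. Then the bracket on the right-hand side of \eqref{eq: local error} equals $\kappa_1^2+\kappa_2^2=|Dn_{M_h}|^2(0)$, and hence, letting first $\varepsilon\to 0$ and then $\theta\to 0$ (which is admissible as long as $C\delta^2<|\theta|$), the right-hand side of \eqref{eq: local error} approaches $\L^2(U)\left(\kappa_1^2+\kappa_2^2\right)$, which agrees with
\[
\int_{\{(x,h(x)):x\in U\}}|Dn|^2\; \mathrm{d}\H^2
\]
up to a relative error of order $\delta=\|\nabla h\|_\infty$. This is the sense in which triangulations aligned with the principal curvature directions are asymptotically optimal; the construction below reduces to this situation by a suitable localization.
\end{rem}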
\subsection{Global triangulations}
We are going to use a known fact about triangulations of point sets in $\R^2$ and transfer it to $\R^3$. We first cite a result for planar Delaunay triangulations, Theorem \ref{thm: planar Delaunay} below, which can be found in e.g. \cite[Chapter 9.2]{berg2008computational}. This theorem states the existence of a Delaunay triangulated surface associated to a \emph{protected} set of points.
\begin{defi}
Let $N\subset\R^3$ be compact, $X\subset N$ a finite set of points and
\[
D(X,N)=\max_{x\in N}\min_{y\in X}|x-y|\,.
\]
We say that $X$ is $\bar \delta$-protected if whenever $x,y,z \in X$ form a regular triangle $[x,y,z]$ with circumball $\overline{B(q,r)}$ satisfying $r \leq D(X,N)$, then $\left| |p-q| - r \right| \geq \bar\delta$ for any $p\in X \setminus \{x,y,z\}$.
\end{defi}
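For instance, the four vertices of a unit square (with $N$ the closed square) are not $\bar\delta$-protected for any $\bar\delta>0$: any three of the vertices span a right triangle whose smallest circumball is the circumdisc of the square, with
\[
r=\frac{\sqrt 2}{2}=D(X,N)\,,
\]
and the fourth vertex $p$ lies exactly on its boundary, so that $\big||p-q|-r\big|=0$. Protection thus excludes (nearly) cocircular configurations, for which the circumcenters of adjacent triangles may collide, cf.~Lemma \ref{lma: circumcenter regularity}.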
\begin{thm}[\cite{berg2008computational}]\label{thm: planar Delaunay}
Let $\alpha > 0$. Let $X\subset \R^2$ be finite and not collinear. Define $\Omega := \conv(X)$. Assume that
\[\min_{x\neq y \in X} |x-y| \geq \alpha D(X,\Omega)\,,
\]
and that $X$ is $\delta D(X,\Omega)$-protected for some $\delta>0$. Then there exists a unique maximal Delaunay triangulated surface ${\mathcal T}$ with vertex set $X$, given by all regular triangles $[x,y,z]$, $x,y,z\in X$, with circumdisc $\overline{B(q,r)}$ such that $B(q,r) \cap X = \emptyset$.
The triangulated surface ${\mathcal T}$ forms a partition of $\Omega$, in the sense that
\[
\sum_{K\in {\mathcal T}} \mathds{1}_K = \mathds{1}_\Omega\quad {\mathscr H}^2\text{-almost everywhere}\,,
\]
where $\mathds{1}_A$ denotes the characteristic function of $A\subset \R^3$.
Further, any triangle $K\in {\mathcal T}$ with $\dist(K,\partial \Omega) \geq 4D(X,\Omega)$ is $c(\alpha)$-regular, and $d_{KL} \geq \frac{\delta}{2} D(X,\Omega)$ for all pairs of triangles $K \neq L \in {\mathcal T}$.
\end{thm}
We are now in position to formulate sufficient conditions for a vertex set to yield a triangulated surface that serves our purpose.
\begin{prop}\label{prop: Delaunay existence}
Let $N\subset\R^3$ be a 2-dimensional compact smooth manifold, and let $\alpha, \delta > 0$. Then there is $\varepsilon = \varepsilon(N,\alpha,\delta)>0$ such that whenever $X\subset N$ satisfies
\begin{itemize}
\item [(a)]$D(X,N) \leq \varepsilon$,
\item [(b)] $\min_{x\neq y\in X} |x-y| \geq \alpha D(X,N)$,
\item [(c)] $X$ is $\delta D(X,N)$-protected
\end{itemize}
then there exists a triangulated surface ${\mathcal T}(X,N)$ with the following properties:
\begin{itemize}
\item [(i)] $\size({\mathcal T}(X,N)) \leq 2D(X,N)$.
\item [(ii)] ${\mathcal T}(X,N)$ is $c(\alpha)$-regular.
\item [(iii)] ${\mathcal T}(X,N)$ is Delaunay.
\item [(iv)] Whenever $K\neq L \in {\mathcal T}(X,N)$, we have $d_{KL} \geq \frac{\delta}{2} D(X,N)$.
\item [(v)] The vertex set of ${\mathcal T}(X,N)$ is $X$.
\item [(vi)] ${\mathcal T}(X,N)$ is a $C(\alpha, N)D(X,N)$-Lipschitz graph over $N$. In particular, ${\mathcal T}(X,N)$ is homeomorphic to $N$.
\end{itemize}
\end{prop}
The surface case we treat here can be viewed as a perturbation of Theorem \ref{thm: planar Delaunay}. We note that the protection property (c) is vital to the argument. A very similar result to Proposition \ref{prop: Delaunay existence} was proved in \cite{boissonnat2013constructing}, but we present a self-contained proof here.
\begin{proof}[Proof of Proposition \ref{prop: Delaunay existence}]
We construct the triangulated surface ${\mathcal T}(X,N)$ as follows: it consists of all regular triangles $K=[x,y,z]$ with $x,y,z\in X$ such that the Euclidean Voronoi cells $V_x,V_y,V_z$ intersect in $N$, i.e. such that there is $\tilde q \in N$ with $|\tilde q - x| = |\tilde q - y| = |\tilde q - z| \leq |\tilde q - p|$ for any $p\in X\setminus \{x,y,z\}$.
\emph{Proof of (i):} Let $[x,y,z]\in {\mathcal T}(X,N)$. Let $\tilde q \in V_x \cap V_y \cap V_z \cap N$, set $\tilde r := |\tilde q - x|$. Then $\tilde r = \min_{p\in X} |\tilde q - p| \leq D(X,N)$, and because $[x,y,z]\subset \overline{B(\tilde q, \tilde r)}$ we have $\diam([x,y,z])\leq 2 \tilde r \leq 2D(X,N)$.
\emph{Proof of (ii):} Let $\overline{B(q,r)}$ denote the Euclidean circumball of $[x,y,z]$. Then $r\leq \tilde r$ by the definition of the circumball. Thus $\min(|x-y|,|x-z|,|y-z|) \geq \alpha r$, and $[x,y,z]$ is $c(\alpha)$-regular by the following argument: Rescaling such that $r = 1$, consider the class of all triangles $[x,y,z]$ with $x,y,z \in S^1$, $\min(|x-y|,|x-z|,|y-z|) \geq \alpha$. All these triangles are $\zeta$-regular for some $\zeta>0$, and by compactness there is a least regular triangle in this class. That triangle's regularity is $c(\alpha)$.
\emph{Proof of (iii):} Because of (ii), $N\cap \overline{B(q,r)}$ is a $C(\alpha, N)D(X,N)$-Lipschitz graph over a convex subset $U$ of the plane $ x + \R(y-x) + \R(z-x)$, say $N\cap \overline{B(q,r)} = U_h$. It follows that $\tilde q - q = h(q)\, n_U$. Because $h(x)= 0$, it follows that $|\tilde q - q| = |h(q)| \leq C(\alpha, N) D(X,N)^2$.
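To spell this out: the orthogonal projection of $\tilde q$ onto the plane through $x,y,z$ is equidistant from $x,y,z$ and hence coincides with the circumcenter $q$, so $\tilde q$ lies on the graph above $q$; since $h(x)=0$, $\mathrm{Lip}(h)\leq C(\alpha,N)D(X,N)$ and $|q-x|=r\leq D(X,N)$, we obtain
\[
|\tilde q - q| = |h(q)| = |h(q)-h(x)| \leq \mathrm{Lip}(h)\,|q-x| \leq C(\alpha,N)\,D(X,N)^2\,.
\]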
Thus, for $D(X,N) \leq \delta(2C(\alpha,N))^{-1}$, we have that $|\tilde q - q| \leq \frac{\delta}{2}D(X,N)$. This together with (c) suffices to show the Delaunay property of ${\mathcal T}(X,N)$: Assume there exists $p\in (X \setminus \{x,y,z\}) \cap B(q,r)$. Then by (c) we have $|p-q| \leq r - \delta D(X,N)$, and by the triangle inequality $|p-\tilde q| \leq |p- q| + |q-\tilde q| \leq r - \frac{\delta}{2}D(X,N) < \tilde r$, a contradiction to the choice of $\tilde q$.
\emph{Proof of (iv):}
It follows also from (c) and Lemma \ref{lma: circumcenter regularity} that for all adjacent $K,L\in {\mathcal T}(X,N)$ we have $d_{KL} \geq \frac{\delta}{2} D(X,N)$.
\emph{Proof of (v) and (vi):} Let $\eta>0$, to be fixed later. There is $s>0$ such that for every $x_0\in N$, the orthogonal projection $\pi:\R^3 \to x_0 + T_{x_0}N$ is an $\eta$-isometry when restricted to $N\cap B(x_0,s)$, in the sense that $|D\pi - \mathrm{id}_{TN}|\leq \eta$.
Let us write $X_\pi=\pi(X\cap B(x_0,s))$. This point set fulfills all the requirements of Theorem \ref{thm: planar Delaunay} (identifying $x_0+T_{x_0}N$ with $\R^2$), except for possibly protection.
We will prove below that
\begin{equation}\label{eq:23}
X_\pi\text{ is } \frac{\delta}{4}D(X,N) \text{-protected}.
\end{equation}
We will then consider the planar Delaunay triangulated surface ${\mathcal T}' \coloneqq {\mathcal T}(X_\pi, x_0 + T_{x_0}N)$, and show that for $x,y,z\in B(x_0,s/2)$ we have
\begin{equation}\label{eq:22}
K:=[x,y,z]\in {\mathcal T}(X,N)\quad \Leftrightarrow \quad K_\pi:=[\pi(x),\pi(y),\pi(z)]\in {\mathcal T}'\,.
\end{equation}
If we prove these claims, then (v) follows from Theorem \ref{thm: planar Delaunay}, while (vi) follows from Theorem \ref{thm: planar Delaunay} and Lemma \ref{lma: graph property}.
\medskip
We first prove \eqref{eq:23}: Let $\pi(x),\pi(y),\pi(z)\in X_\pi$, write $K_\pi= [\pi(x),\pi(y),\pi(z)]$, and assume $r(K_\pi)\leq D(X_\pi,\mathrm{conv}(X_{\pi}))$. For a contradiction, assume that there is $\pi(p)\in X_\pi\setminus \{\pi(x),\pi(y),\pi(z)\}$ such that
\[
\left||q(K_\pi)-\pi(p)|-r(K_\pi)\right|<\frac{\delta}{4}D(X,N)\,.
\]
Using again $|D\pi-\mathrm{id}_{TN}|<\eta$ and Lemma \ref{lma: circumcenter regularity},
we obtain, with $K=[x,y,z]$,
\[
\left||q(K)-p|-r(K)\right|<(1+C\eta)\frac{\delta}{4}D(X,N)\,.
\]
Choosing $\eta$ small enough, we obtain a contradiction to (c). This completes the proof of \eqref{eq:23}.
\medskip
Next we show the implication $K\in {\mathcal T}(X,N)\Rightarrow K_\pi\in {\mathcal T}'$ for $x,y,z\in X\cap B(x_0,s/2)$: Let $p\in (X\cap B(x_0,s)) \setminus \{x,y,z\}$.
Assume for a contradiction that $\pi(p)$ is contained in the circumball of $K_\pi$,
\[
|\pi(p) - q(K_\pi)|\leq r(K_\pi)\,.
\] Then by $|D\pi-\mathrm{id}_{TN}|<\eta$ and Lemma \ref{lma: circumcenter regularity},
\[
|p-q(K)|\leq r(K) + C(\alpha)\eta D(X,N)\,.
\]
Choosing $\eta<\delta/(2C(\alpha))$, we have by (c) that
\[|p-q(K)| \leq r(K) - \delta D(X,N)\,,
\]
which in turn implies $|p-\tilde q| < \tilde r$.
This is a contradiction to $\tilde q \in V_x \cap V_y \cap V_z$, since $p$ is closer to $\tilde q$ than any of $x,y,z$. This shows $K_\pi\in {\mathcal T}'$.
Now we show the implication $K_\pi\in {\mathcal T}'\Rightarrow K\in {\mathcal T}(X,N)$: Let $x,y,z\in X\cap B(x_0,s/2)$ with $[\pi(x),\pi(y),\pi(z)]\in {\mathcal T}'$, and let $\tilde q\in N$ be a point with $|\tilde q-x|=|\tilde q-y|=|\tilde q-z|=:\tilde r$, which exists for $\varepsilon$ small by the graph property of $N$ over the plane through $x,y,z$. Let $p\in (X\cap B(x_0,s)) \setminus \{x,y,z\}$. Assume for a contradiction that $|p-\tilde q| < \tilde r$. Writing $q'\coloneqq q(K_\pi)$ and $r'\coloneqq r(K_\pi)$, we have, again by Lemma \ref{lma: circumcenter regularity},
\[
|p - \tilde q| < \tilde r \Rightarrow |p-q| < r + \delta D(X,N) \Rightarrow |p-q| \leq r - \delta D(X,N) \Rightarrow |\pi(p) - q'| < r'.
\]
Here again we used (c) and the fact that $D(X,N)$ is small enough. The last inequality contradicts the Delaunay property of ${\mathcal T}'$, completing the proof of \eqref{eq:22}, and hence the proof of the present proposition.
\end{proof}
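In the proof of Theorem \ref{thm:main} (ii) below, Proposition \ref{prop: Delaunay existence} will be applied with $N=M_h$ and $\alpha=\tfrac12$, the protection hypothesis (c) being supplied by the construction of the vertex set there; by (ii), the resulting regularity constant then depends only on $\alpha$ and is therefore universal.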
\begin{rem}
A much shorter proof exists for the case of the two-sphere, $N = \mathcal{S}^2$. Here, any finite set $X\subset \mathcal{S}^2$ such that no four points of $X$ are coplanar and every open hemisphere contains a point of $X$ admits a Delaunay triangulation homeomorphic to $\mathcal{S}^2$, namely $\partial \conv(X)$.
Because no four points are coplanar, every face of $\partial \conv(X)$ is a regular triangle $K = [x,y,z]$. The circumcircle of $K$ then lies on $\mathcal{S}^2$ and $q(K) = n(K)|q(K)|$, where $n(K)\in \mathcal{S}^2$ is the outer normal. (The case $q(K)=-|q(K)|n(K)$ is forbidden because the hemisphere $\{x\in \mathcal{S}^2\,:\,x \cdot n(K)>0\}$ contains a point in $X$.) To see that the circumball contains no other point $p\in X\setminus \{x,y,z\}$, we note that since $K\subset \partial \conv(X)$ we have $(p-x)\cdot n(K)< 0$, and thus $|p-q(K)|^2 = 1 + |q(K)|^2 - 2\,p \cdot q(K) > 1 + |q(K)|^2 - 2\,x \cdot q(K) = |x-q(K)|^2$.
Finally, $\partial \conv(X)$ is homeomorphic to $\mathcal{S}^2$ since $\conv(X)$ contains a regular tetrahedron.
\end{rem}
We are now in a position to prove the upper bound of our main theorem, Theorem \ref{thm:main} (ii).
\begin{figure}
\includegraphics[height=5cm]{upper_global.pdf}
\caption{The global triangulation of a smooth surface is achieved by first covering a significant portion of the surface with the locally optimal triangulation, then adding additional points in between the regions, and finally finding a global Delaunay triangulation. \label{fig:upper bound}}
\end{figure}
\begin{proof}[Proof of Theorem \ref{thm:main} (ii)]
We first note that it suffices to show the result for $h\in C^3(M)$ with $\|h\|_\infty < \frac{\delta(M)}{2}$. To see this, we approximate a general $h\in W^{2,2}(M)\cap W^{1,\infty}(M)$ with $\|h\|_\infty \leq \frac{\delta(M)}{2}$ by the smooth functions $h_{\beta} := H_\beta h$, where $(H_\beta)_{\beta >0 }$ is the heat semigroup. Clearly $H_\beta h \in C^\infty(M)$, and $\nabla H_\beta h \to \nabla h$ uniformly, so that $\|h_\beta\|_{\infty}\leq \frac{\delta(M)}{2}$ and $\|\nabla h_{\beta}\|_\infty <\|\nabla h\|_{\infty}+1$ for $\beta$ small enough.
Then
\[
\int_M f(x,h_\beta(x),\nabla h_\beta(x), \nabla^2 h_\beta) \,\d{\mathscr H}^2 \to \int_M f(x,h(x),\nabla h(x), \nabla^2 h) \,\d{\mathscr H}^2
\]
for $\beta\to 0$ whenever
\[f:M \times [-\delta(M)/2, \delta(M)/2] \times B(0,\|\nabla h\|_{\infty}+1) \times (TM \times TM) \to \R
\]
is continuous with quadratic growth in $\nabla^2 h$. The Willmore functional
\[
h\mapsto \int_{M_h} |Dn_{M_h}|^2\d{\mathscr H}^2\,,
\]
which is our limit functional, may be written in this way. This proves our claim that we may reduce our argument to the case $h\in C^3(M)$, since the above approximation allows for the construction of suitable diagonal sequences in the strong $W^{1,p}$ topology, for every $p<\infty$.
\medskip
For the rest of the proof we fix $h\in C^3(M)$. We choose a parameter $\delta>0$. By compactness of $M_h$, there is a finite family of pairwise disjoint closed sets $(Z_i)_{i\in I}$ such that
\[
{\mathscr H}^2\left(M_h \setminus \bigcup_{i\in I} Z_i\right) \leq \delta
\]
and such that, after applying a rigid motion $R_i:\R^3\to \R^3$, the surface $R_i(M_h \cap Z_i)$ is the graph of a function $h_i\in C^2(U_i)$ for some open sets $(U_i)_{i\in I}$ with $\|\nabla h_i\|_\infty \leq \delta$ and $\|\nabla^2 h_i - \diag(\alpha_i,\beta_i)\|_\infty \leq \delta$ for suitable constants $\alpha_i,\beta_i\in\R$.
\medskip
We can apply Proposition \ref{prop: local triangulation} to $R_i(M_h \cap Z_i)$ with global parameters $\theta := \delta$ and $\varepsilon>0$ such that $\dist(Z_i,Z_j)>2\varepsilon$ for $i\neq j$, yielding point sets $X_{i,\varepsilon}\subset M_h \cap Z_i$. The associated triangulated surfaces ${\mathcal T}_{i,\varepsilon}$ (see Figure \ref{fig:upper bound}) have the Delaunay property, have vertices $X_{i,\varepsilon}$ and maximum circumball radius at most $\varepsilon$. Furthermore, we have that
\begin{equation}\label{eq: sum local interactions}
\begin{aligned}
\sum_{i\in I} &\sum_{K,L\in {\mathcal T}_{i,\varepsilon}} \frac{l_{KL}}{d_{KL}} |n(K) - n(L)|^2\\
& \leq (1+C(\delta+\varepsilon)) \sum_{i\in I} \L^2(U_i)\times\\
&\quad \times \left(\max_{x\in U_i}|\partial_{11}h_i(x)|^2+\max_{x\in U_i}|\partial_{22}h_i(x)|^2+\delta^{-1}\max_{x\in U_i}|\partial_{12}h_i(x)|^2\right)+C\varepsilon\\
&\leq (1+C(\delta+\varepsilon)) \sum_{i\in I} \int_{M_h \cap Z_i} |Dn_{M_h}|^2 \,\d{\mathscr H}^2+C(\varepsilon+\delta)\,,
\end{aligned}
\end{equation}
where in the last line we have used $\|\nabla h_i\|_{\infty}\leq \delta$, $\|\nabla^2h_i-\diag(\alpha_i,\beta_i)\|_{\infty}\leq \delta$, and the identity
\[
\begin{split}
\int_{M_h \cap Z_i} |Dn_{M_h}|^2 \,\d{\mathscr H}^2&=
\int_{(U_i)_{h_i}}|Dn_{(U_i)_{h_i}}|^2\d\H^2\\
&=\int_{U_i}\left|(\mathbf{1}_{2\times 2}+\nabla h_i\otimes \nabla h_i)^{-1}\nabla^2 h_i\right|^2(1+|\nabla h_i|^2)^{-1/2}\d x\,.
\end{split}
\]
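To indicate how the last inequality in \eqref{eq: sum local interactions} follows from these bounds (a sketch): since $|\partial_{12}h_i|\leq\delta$ on $U_i$, we have $\delta^{-1}\max_{U_i}|\partial_{12}h_i|^2\leq\delta$, while for the diagonal entries
\[
\max_{U_i}|\partial_{11}h_i|^2 \leq (|\alpha_i|+\delta)^2 \leq \frac{1}{\L^2(U_i)}\int_{U_i}|\partial_{11}h_i|^2\,\d x + C\delta(1+|\alpha_i|),
\]
and analogously for $\partial_{22}h_i$. Moreover, since $|\nabla h_i|\leq\delta$, the integrand in the identity above is at least $(1-C\delta)|\nabla^2 h_i|^2\geq (1-C\delta)\left(|\partial_{11}h_i|^2+|\partial_{22}h_i|^2\right)$. As $|\alpha_i|,|\beta_i|$ are bounded in terms of the curvature of $M_h$, collecting these estimates yields the last line of \eqref{eq: sum local interactions}.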
We shall use the point set $Y_{0,\varepsilon} := \bigcup_{i\in I} X_{i,\varepsilon}$ as a basis for a global triangulated surface. We shall successively augment the set by a single point $Y_{n+1,\varepsilon} := Y_{n,\varepsilon} \cup \{p_{n,\varepsilon}\}$ until the construction below terminates after finitely many steps. We claim that we can choose the points $p_{n,\varepsilon}$ in such a way that for every $n\in\N$ we have
\begin{itemize}
\item [(a)] $\min_{x,y\in Y_{n,\varepsilon}, x\neq y} |x-y| \geq \frac{\varepsilon}{2}$.
\item [(b)] Whenever $x,y,z,p\in Y_{n,\varepsilon}$ are four distinct points such that the circumball $\overline{B(q,r)}$ of $[x,y,z]$ exists and has $r\leq \varepsilon$, then
\[
\left| |p-q| - r \right| \geq \frac{\delta}{2} \varepsilon.
\]
If at least one of the four points $x,y,z,p$ is not in $Y_{0,\varepsilon}$, then
\begin{equation}\label{eq:21}
\left| |p-q| - r \right| \geq c \varepsilon,
\end{equation}
where $c>0$ is a universal constant.
\end{itemize}
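Property (b) is a quantitative protection property for the augmented point sets: it will allow us to apply Proposition \ref{prop: Delaunay existence} to the final point set below, and the universal bound \eqref{eq:21} will later give a uniform bound on the weights $\frac{l_{KL}}{d_{KL}}$ for all pairs of triangles involving newly added vertices.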
First, we note that both (a) and (b) are true for $Y_{0,\varepsilon}$.
Now, assume we have constructed $Y_{n,\varepsilon}$. If there exists a point $\bar x\in M_h$ such that $B(\bar x,\varepsilon) \cap Y_{n,\varepsilon} = \emptyset$, we consider the set $A_{n,\varepsilon}\subset M_h \cap B(\bar x,\frac{\varepsilon}{2})$ consisting of all points $p\in M_h \cap B(\bar x,\frac{\varepsilon}{2})$ such that for all regular triangles $[x,y,z]$ with $x,y,z\in Y_{n,\varepsilon}$ and circumball $\overline{B(q,r)}$ satisfying $r\leq 2\varepsilon$, we have $\left||p-q| - r\right| \geq c \varepsilon$.
Since $Y_{n,\varepsilon}$ satisfies (a), the set $A_{n, \varepsilon}$ is nonempty if $c>0$ is chosen small enough: for all triangles $[x,y,z]$ as above we have
\[
{\mathscr H}^2\left(\left\{ p\in B(\bar x,\frac{\varepsilon}{2})\cap M_h\,:\,\left||p-q| - r\right| < c \varepsilon \right\}\right) \leq 4c \varepsilon^2,
\]
and the total number of regular triangles $[x,y,z]$ with $x,y,z\in Y_{n,\varepsilon}$, $r\leq 2\varepsilon$ and $\overline{B(q,r)}\cap B(\bar x,\varepsilon)\neq \emptyset$ is universally bounded as long as $Y_{n,\varepsilon}$ satisfies (a).
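To make the last counting step explicit (a sketch): any triangle as above has all its vertices within distance $5\varepsilon$ of $\bar x$, since its circumball has radius $r\leq 2\varepsilon$ and meets $B(\bar x,\varepsilon)$. By (a), the points of $Y_{n,\varepsilon}\cap B(\bar x,5\varepsilon)$ are $\frac{\varepsilon}{2}$-separated, so a volume comparison (disjoint balls of radius $\frac{\varepsilon}{4}$ inside $B(\bar x,6\varepsilon)$) bounds their number by a universal constant, and hence also the number of triangles that can be formed from them.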
We simply pick $p_{n,\varepsilon}\in A_{n,\varepsilon}$; then $Y_{n+1,\varepsilon} \coloneqq Y_{n,\varepsilon} \cup \{p_{n,\varepsilon}\}$ clearly satisfies (a) by the triangle inequality, since $p_{n,\varepsilon}\in B(\bar x,\frac{\varepsilon}{2})$ while $B(\bar x,\varepsilon)\cap Y_{n,\varepsilon}=\emptyset$. We now have to show that $Y_{n+1,\varepsilon}$ still satisfies (b).
This is obvious whenever $p = p_{n,\varepsilon}$ by the definition of $A_{n,\varepsilon}$. If $p_{n,\varepsilon}$ is none of the points $x,y,z,p$, then (b) is inherited from $Y_{n,\varepsilon}$. It remains to consider the case $p_{n,\varepsilon} = x$. Then $x$ has distance at least $c\varepsilon$ to all circumspheres of nearby triples in $Y_{n,\varepsilon}$ with radius at most $2\varepsilon$. We now assume that the circumball $\overline{B(q,r)}$ of $[x,y,z]$ has radius $r \leq \varepsilon$, and we estimate how close a point $p\in Y_{n,\varepsilon}\setminus\{y,z\}$ can lie to $\partial B(q,r)$. To this end, define
\[
\eta \coloneqq \frac{\left||p-q| - r \right|}{\varepsilon}\,.
\]
We show that $\eta$ is bounded from below by a universal positive constant. To this end, we set
\[
p_t \coloneqq (1-t)p + t\left(q+r\frac{p-q}{|p-q|}\right)
\]
(see Figure \ref{fig:pt}) and note that if $\eta\leq \eta_0$ for a suitable universal constant $\eta_0>0$, then all triangles $[y,z,p_t]$ are uniformly regular (here we use that (a) implies $r\geq \frac{\varepsilon}{4}$); if $\eta>\eta_0$ there is nothing to show, so we may assume $\eta\leq\eta_0$ in what follows.
\begin{figure}[h]
\centering
\includegraphics[height=5cm]{pt.pdf}
\caption{The definition of $p_t$ as linear interpolation between $p_0$ and $p_1$. \label{fig:pt}}
\end{figure}
Define the circumcenters $q_t \coloneqq q(y,z,p_t)$, and note that $q_1 = q$. By Lemma \ref{lma: circumcenter regularity}, we have $|q_1 - q_0| \leq C|p_1 - p_0| = C\eta \varepsilon$ if $\eta\leq \eta_0$. Thus the circumradius of $[y,z,p_0]$ is bounded by
\[
|y-q_0| \leq |y-q| + |q-q_0| \leq (1+C\eta)\varepsilon \leq 2\varepsilon
\]
if $\eta\leq \eta_0$. Because $x\in Y_{n+1,\varepsilon} \setminus Y_{n,\varepsilon} \subset A_{n,\varepsilon}$, and $[y,z,p_0]=[y,z,p]$ is a regular triangle with vertices in $Y_{n,\varepsilon}$ and circumradius at most $2\varepsilon$, the definition of $A_{n,\varepsilon}$ yields
\[
c\varepsilon \leq \left| |x-q_0| - |p-q_0|\right| \leq \left| |x-q| - |p-q| \right| + 2 |q - q_0| \leq (1+2C)\eta\varepsilon,
\]
i.e. that $\eta \geq \frac{c}{1+2C}$. In either case $\eta$ is bounded from below by a universal positive constant, which shows (b) (after possibly decreasing the universal constant in \eqref{eq:21}, and using that $\delta$ may be assumed small).
\begin{comment}
We note that by (a) we have $r\geq \frac{\varepsilon}{4}$. We set $p_t \coloneqq (1-t) p + t\left((q + r\frac{p-q}{|p-q|}\right)$ for $t\in[0,1]$. If $\eta<\eta_0$, then the triangles $[y,z,p_t]$ are all $\zeta_0$-regular triangles for some universal constants $\zeta_0,\eta_0>0$. By Lemma \ref{lma: circumcenter regularity}, then $|q(y,z,p_0) - q(y,z,p)|\leq C \eta \varepsilon$. However, $q(y,z,p_0) = q$, and $|p-q(y,z,p)| \leq 2\varepsilon$ for $\eta<\eta_0$. By the choice $x\in A_{n,\varepsilon}$ then
\[
\left| |x-q(y,z,p)| - |p-q(y,z,p)| \right| \geq c\varepsilon,
\]
which implies that
\[
\left||x-q| - |p-q| \right| \geq c \varepsilon - C \eta \varepsilon,
\]
i.e. that $\eta \geq \min\left( \frac{1}{1+C}, \eta_0, \frac14 \right)$, which is a universal constant. This shows (b).
\end{comment}
Since $M_h$ is compact and, by (a), the points of each $Y_{n,\varepsilon}$ are $\frac{\varepsilon}{2}$-separated, this construction terminates after finitely many steps, resulting in a set $X_\varepsilon := Y_{N(\varepsilon),\varepsilon} \subset M_h$ with the properties (a), (b), and $D(X_\varepsilon,M_h) \leq \varepsilon$.
\medskip
Consider a Lipschitz function $g:M_h\to \R$. Since $M_h$ is a $C^2$ surface, we have that for $\|g\|_{W^{1,\infty}}$ small enough, $(M_h)_g$ is locally a tangent Lipschitz graph over $M$, see Definition \ref{def:Mgraph} (iii). By Lemma \ref{lma: graph property}, this implies that $(M_h)_g$ is a graph over $M$.
Invoking Proposition \ref{prop: Delaunay existence} yields a Delaunay triangulated surface ${\mathcal T}_\varepsilon \coloneqq {\mathcal T}(X_\varepsilon, M_h)$ with vertex set $X_\varepsilon$ that is $\zeta_0$-regular for some $\zeta_0>0$, and $\bigcup_{K\in {\mathcal T}_\varepsilon} K = (M_h)_{g_\varepsilon}$ with $\|g_\varepsilon\|_{W^{1,\infty}}\leq C(\delta)\varepsilon$.
By the above, there exist Lipschitz functions $h_\varepsilon:M\to \R$ such that
$(M_h)_{g_\varepsilon} = M_{h_\varepsilon}$, with $h_\varepsilon \to h$ in $W^{1,\infty}$, $\|h_\varepsilon\|_\infty \leq \frac{\delta(M)}{2}$ and $\|\nabla h_\varepsilon\|_\infty\leq \|\nabla h\|_{\infty}+1$.
\medskip
It remains to estimate the energy. To do so, we look at the two types of interfaces appearing in the sum
\[
\sum_{K,L\in {\mathcal T}_\varepsilon} \frac{l_{KL}}{d_{KL}} |n(K) - n(L)|^2.
\]
First, we look at pairwise interactions where $K,L\in {\mathcal T}_{i,\varepsilon}$ for some $i$. These are bounded by \eqref{eq: sum local interactions}.
Next, we note that since $\dist(Z_i,Z_j) > 2\varepsilon$ for $i\neq j$ and every triangle of ${\mathcal T}_\varepsilon$ has diameter at most $2D(X_\varepsilon,M_h)\leq 2\varepsilon$ (see the proof of Proposition \ref{prop: Delaunay existence} (i)), no triangle of ${\mathcal T}_\varepsilon$ has vertices in both $X_{i,\varepsilon}$ and $X_{j,\varepsilon}$ for $i\neq j$; in particular, the point sets $X_{i,\varepsilon}$ and $X_{j,\varepsilon}$ do not interact.
Finally, we consider all interactions of neighboring triangles $K,L\in {\mathcal T}_\varepsilon$ where at least one vertex is not in $Y_{0,\varepsilon}$. By \eqref{eq:21}, these pairs all satisfy $\frac{l_{KL}}{d_{KL}} \leq C$ for some universal constant $C$ independent of $\varepsilon,\delta$, and $|n(K) - n(L)|\leq C\varepsilon$ because ${\mathcal T}_\varepsilon$ is $\zeta_0$-regular and $M_h$ is $C^2$. Further, no points were added inside any $Z_i$. Thus
\[
\begin{split}
\sum_{\substack{K,L\in {\mathcal T}_\varepsilon\,:\,\text{at least}\\\text{ one vertex is not in }Y_{0,\varepsilon}}}& \frac{l_{KL}}{d_{KL}} |n(K) - n(L)|^2 \\
&\leq C{\mathscr H}^2\left(M_h \setminus \bigcup_{i\in I}\left\{x\in Z_i\,:\,\dist(x,\partial Z_i)> 2\varepsilon\right\}\right)\\
&\leq C \delta + C(\delta)\varepsilon.
\end{split}
\]
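A sketch of the counting behind the last display: each such pair contributes at most $C\cdot(C\varepsilon)^2\leq C\varepsilon^2$ by the bounds just stated. Every triangle in such a pair has a vertex outside $\bigcup_{i\in I}Z_i$ and diameter at most $2\varepsilon$, so all of its vertices lie in $R_\varepsilon\coloneqq M_h\setminus\bigcup_{i\in I}\{x\in Z_i\,:\,\dist(x,\partial Z_i)>2\varepsilon\}$. By the $\frac{\varepsilon}{2}$-separation (a), the number of points of $X_\varepsilon$ in $R_\varepsilon$ is at most $C\varepsilon^{-2}{\mathscr H}^2(R_\varepsilon)$ for $\varepsilon$ small, and by $\zeta_0$-regularity each vertex belongs to a universally bounded number of triangles; this gives the first inequality. The second inequality uses ${\mathscr H}^2(M_h\setminus\bigcup_{i\in I}Z_i)\leq\delta$ together with the fact that the $2\varepsilon$-collars of the finitely many boundaries $\partial Z_i$ have area at most $C(\delta)\varepsilon$.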
Choosing an appropriate diagonal sequence $\delta(\varepsilon) \to 0$ yields a sequence of triangulated surfaces ${\mathcal T}_\varepsilon$ with $\bigcup_{K\in{\mathcal T}_\varepsilon}K = M_{h_\varepsilon}$ and $h_\varepsilon\to h$ in $W^{1,\infty}(M)$ such that
\[
\limsup_{\varepsilon \to 0} \sum_{K,L\in {\mathcal T}_\varepsilon} \frac{l_{KL}}{d_{KL}} |n(K) -n(L)|^2 \leq \int_{M_h} |Dn_{M_h}|^2\,d{\mathscr H}^2.
\]
\end{proof}
\section{Necessity of the Delaunay property}
\label{sec:necess-dela-prop}
We now show that without the Delaunay condition, it is possible to achieve a lower energy. In contrast to the preceding sections, we are going to choose an underlying manifold $M$ with boundary (the ``hollow cylinder'' $S^1\times[-1,1]$). By ``capping off'' the hollow cylinder one can construct a counterexample to the lower bound in Theorem \ref{thm:main}, where it is assumed that $M$ is compact without boundary.
\begin{prop}\label{prop: optimal grid}
Let $M =S^1\times[-1,1] \subset \R^3$ be a
hollow cylinder and $\zeta>0$. Then there are $\zeta$-regular triangulated
surfaces ${\mathcal T}_j\subset \R^3$ with $\size({\mathcal T}_j) \to 0$ and ${\mathcal T}_j \to M$ for
$j\to\infty$ with
\[
\limsup_{j\to\infty} \sum_{K,L\in {\mathcal T}_j} \frac{\l{K}{L}}{d_{KL}}
|n(K)-n(L)|^2 < c(\zeta) \int_M |Dn_M|^2\,d\H^2\,,
\]
where the positive constant $c(\zeta)$ satisfies
\[
c(\zeta)\to 0 \quad \text{ for } \quad\zeta\to 0\,.
\]
\end{prop}
\begin{figure}
\includegraphics[height=5cm]{cylinder2.pdf}
\caption{A non-Delaunay triangulated cylinder achieving a low energy. \label{fig:cylinder}}
\end{figure}
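The idea of the construction below is to use thin triangles aligned with the flat direction of the cylinder: the normal then only jumps across pairs of triangles whose weight $\frac{\l{K}{L}}{d_{KL}}$ is small (of order $s$), at the price of losing the Delaunay property.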
\begin{proof}
For every $\varepsilon = 2^{-j}$, $j\in\N$, and $s\in\{2\pi k^{-1}:k=3,4,5,\dots\}$, we define a flat triangulated surface ${\mathcal T}_j\subset \R^2$ (suppressing the dependence on $s$ in the notation) with $\size({\mathcal T}_j) \leq \varepsilon$ as follows: as manifolds with boundary, ${\mathcal T}_j=[0,2\pi]\times[-1,1]$ for all $j$; all triangles are isosceles, with one side a translation of $[0,\varepsilon]e_2$ and height $s\varepsilon$ in the $e_1$-direction. We neglect the triangles close to the boundary $[0,2\pi]\times\{\pm 1\}$, and leave it to the reader to verify that their contribution will be negligible in the end.
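Note that each triangle of ${\mathcal T}_j$ has area $\frac12 s\varepsilon^2$ and smallest angle of order $s$, so the regularity of the triangulation is of order $s$, uniformly in $\varepsilon$; this is how the parameter $s$ is matched with $\zeta$ at the end of the proof.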
\medskip
We then wrap this triangulated surface around the cylinder, mapping the
corners of triangles onto the surface of the cylinder via $(\theta,t) \mapsto
(\cos\theta, \sin\theta, t)$, to obtain a triangulated surface $\tilde {\mathcal T}_j$.
Obviously, the topology of $\tilde {\mathcal T}_j$ is $S^1\times[-1,1]$.
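For later use we record that the number of triangles of ${\mathcal T}_j$ (and hence of $\tilde{\mathcal T}_j$) is of order $\frac{{\mathscr H}^2([0,2\pi]\times[-1,1])}{\tfrac12 s\varepsilon^2}=\frac{8\pi}{s\varepsilon^2}=O(s^{-1}\varepsilon^{-2})$; this is the count used in the summation below.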
Then we may estimate all terms $\frac{\l{K}{L}}{d_{KL}} |n(K) - n(L)|^2$. We
first find the normal of the reference triangle $K\in \tilde {\mathcal T}_j$ spanned by the points $x = (1,0,0)$, $y = (1,0,\varepsilon)$, and $z = (\cos(s\varepsilon),\sin(s\varepsilon),\varepsilon/2)$. We note that
\[
n(K) = \frac{(z-x) \times (y-x)}{|(z-x) \times (y-x)|} = \frac{(\varepsilon\sin(s\varepsilon),\ \varepsilon(1-\cos(s\varepsilon)),\ 0)}{\varepsilon\sqrt{2-2\cos(s\varepsilon)}} = (1,0,0) + O(s\varepsilon).
\]
We note that the normal is the same for all translations $K+te_3$ and for all triangles bordering $K$ diagonally. The horizontal neighbor $L$ also has $n(L) = (1,0,0) + O(s\varepsilon)$, so that $|n(K)-n(L)| = O(s\varepsilon)$. Moreover, the dimensionless prefactor satisfies $\frac{\l{K}{L}}{d_{KL}} \leq Cs$, since $\l{K}{L}\leq \varepsilon$ while $d_{KL}\geq c\,\varepsilon/s$ for universal constants. Summing up the $O(s^{-1}\varepsilon^{-2})$ contributions yields
\[
\sum_{K,L\in \tilde{\mathcal T}_j} \frac{\l{K}{L}}{d_{KL}} |n(K) - n(L)|^2 \leq C \frac{s^3\varepsilon^2}{s\varepsilon^2} = Cs^2.
\]
This holds provided that $\varepsilon$ is small enough. Since the regularity of the triangles is of order $s$, letting $s\to 0$ makes the energy arbitrarily small at the expense of the regularity; choosing $s$ comparable to $\zeta$ yields the claim, with $c(\zeta)$ of order $\zeta^2$.
\end{proof}
\bibliographystyle{alpha}